HA + Federation Cluster Implementation (Part 7)

Cluster Plan

[Figure 1: cluster plan]

Configuration Steps
1. core-site.xml
1) Merge the Federation and HA settings
2. hdfs-site.xml
1) Add settings for the new nodes
3. Start the services
1) zookeeper
2) journalnode
3) datanode
4) namenode
5) zkfc

Implementation Steps

Everything below builds on the existing HA setup, i.e. on the configuration from "ZooKeeper-Based Automatic Failover Implementation (Part 6)".


1) Stop all services

[hadoop@master ~]$ hadoop-daemons.sh --hostnames 'master hadoop04' stop zkfc
[hadoop@master ~]$ hadoop-daemons.sh stop datanode
[hadoop@master ~]$ hadoop-daemons.sh --hostnames 'master hadoop04' stop namenode
[hadoop@master ~]$ hadoop-daemons.sh --hostnames 'slave1 slave2 hadoop04' stop journalnode
[hadoop@slave1 ~]$ zkServer.sh stop    <== note which host each of these commands runs on
[hadoop@slave2 ~]$ zkServer.sh stop
[hadoop@hadoop04 ~]$ zkServer.sh stop


2)vi core-site.xml


 
 
<configuration>

 <property>
  <name>fs.defaultFS</name>
  <value>viewfs://nsX</value>
 </property>

 <property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/tmp</value>
 </property>

 <property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/hadoop/journalnode/data</value>
 </property>

 <property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop04:2181,slave1:2181,slave2:2181</value>
 </property>

</configuration>


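With fs.defaultFS pointing at the viewfs namespace nsX, clients also need a mount table that maps client-side paths onto the two nameservices. The original post does not show that part, so the snippet below is only a sketch: the mount-point paths /ns1 and /ns2 are illustrative assumptions, not values from this series.

```xml
<!-- Sketch of a viewfs mount table for the "nsX" client namespace (core-site.xml).
     The mount-point paths /ns1 and /ns2 are assumptions, not from the original post. -->
<property>
  <name>fs.viewfs.mounttable.nsX.link./ns1</name>
  <value>hdfs://ns1</value>
</property>
<property>
  <name>fs.viewfs.mounttable.nsX.link./ns2</name>
  <value>hdfs://ns2</value>
</property>
```

With this in place, a client path such as viewfs://nsX/ns1/... resolves to the ns1 nameservice, and viewfs://nsX/ns2/... to ns2.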

3)vi hdfs-site.xml



 
<configuration>

 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/opt/hadoop/dfs/name</value>
 </property>

 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/hadoop/dfs/data</value>
 </property>

 <property>
  <name>dfs.replication</name>
  <value>2</value>
 </property>

 <property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
 </property>

 <property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
 </property>

 <property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
 </property>

 <property>
  <name>dfs.ha.namenodes.ns2</name>
  <value>nn3,nn4</value>
 </property>

 <property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>master:9000</value>
 </property>

 <property>
  <name>dfs.namenode.rpc-address.ns2.nn3</name>
  <value>slave1:9000</value>
 </property>

 <property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>hadoop04:9000</value>
 </property>

 <property>
  <name>dfs.namenode.rpc-address.ns2.nn4</name>
  <value>slave2:9000</value>
 </property>

 <property>
  <name>dfs.namenode.http-address.ns1.nn1</name>
  <value>master:50070</value>
 </property>

 <property>
  <name>dfs.namenode.http-address.ns1.nn2</name>
  <value>hadoop04:50070</value>
 </property>

 <property>
  <name>dfs.namenode.http-address.ns2.nn3</name>
  <value>slave1:50070</value>
 </property>

 <property>
  <name>dfs.namenode.http-address.ns2.nn4</name>
  <value>slave2:50070</value>
 </property>

 <!-- on slave1 and slave2, change the suffix /ns1 to /ns2 (see step 5) -->
 <property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop04:8485;slave1:8485;slave2:8485/ns1</value>
 </property>

 <property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
 </property>

 <property>
  <name>dfs.client.failover.proxy.provider.ns2</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
 </property>

 <property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
 </property>

 <property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
 </property>

 <property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
 </property>

 <property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
 </property>

</configuration>




4) Sync the configuration

[hadoop@master ~]$ cd /opt/hadoop/etc/hadoop
[hadoop@master hadoop]$ scp -r *.xml slave1:/opt/hadoop/etc/hadoop
[hadoop@master hadoop]$ scp -r *.xml slave2:/opt/hadoop/etc/hadoop
[hadoop@master hadoop]$ scp -r *.xml hadoop04:/opt/hadoop/etc/hadoop


5) Edit hdfs-site.xml on slave1 and slave2

 
 <property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop04:8485;slave1:8485;slave2:8485/ns2</value>  <!-- /ns2 on slave1 and slave2; master and hadoop04 keep /ns1 -->
 </property>

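The only per-node difference in this file is the journal-directory suffix. As a quick sketch of the mapping (quorum hosts taken from the configuration above):

```shell
# Print the shared-edits URI each nameservice should use.
# master/hadoop04 serve ns1; slave1/slave2 serve ns2.
QJ="qjournal://hadoop04:8485;slave1:8485;slave2:8485"
for ns in ns1 ns2; do
  echo "$ns -> $QJ/$ns"
done
```

Each nameservice gets its own journal directory on the same JournalNode quorum, so the two sets of NameNodes never mix their edit logs.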
6) Start the ZooKeeper ensemble
[hadoop@slave1 hadoop]$ zkServer.sh start
[hadoop@slave2 hadoop]$ zkServer.sh start
[hadoop@hadoop04  hadoop]$ zkServer.sh start


7) Re-format zkfc
On master    <== in theory this can be skipped; only the format on slave1 is required
[hadoop@master hadoop]$ hdfs zkfc -formatZK


On slave1
[hadoop@slave1 ~]$ hdfs zkfc -formatZK
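Each formatZK run creates the election znodes for one nameservice under /hadoop-ha in ZooKeeper, which is why ns2 needs its own format on slave1. A sketch of the resulting paths (this is the standard ZKFC znode layout, listed here for orientation; it is not output of the commands above):

```shell
# Expected election znodes in ZooKeeper after formatting both nameservices.
for ns in ns1 ns2; do
  echo "/hadoop-ha/$ns/ActiveStandbyElectorLock"
done
```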


8) Start the journalnode services
[hadoop@master hadoop]$  hadoop-daemons.sh --hostnames 'slave1 slave2 hadoop04' start journalnode

9) Initialize the shared journalnode edits on slave1
[hadoop@slave1 ~]$  hdfs namenode -initializeSharedEdits
(The master node already ran this step during the earlier HA setup.)


10) Format the namenode on slave1
[hadoop@slave1 ~]$ hdfs namenode -format -clusterid hd260
(The -clusterid must match the existing cluster's ID so that ns2 joins the same federated cluster as ns1.)

11) Start the namenode service on master
[hadoop@master hadoop]$ hadoop-daemon.sh start namenode

12) Sync the namenode metadata to hadoop04
[hadoop@hadoop04 hadoop]$ hdfs namenode -bootstrapStandby

13) Start the namenode service on hadoop04
[hadoop@hadoop04 hadoop]$ hadoop-daemon.sh start namenode

14) Start the namenode service on slave1

[hadoop@slave1 ~]$ hadoop-daemon.sh start namenode

15) Sync the namenode metadata on slave2    <== here, bootstrapStandby has the same effect as formatting

[hadoop@slave2 hadoop]$  hdfs namenode -bootstrapStandby

16) Start the namenode on slave2
[hadoop@slave2 hadoop]$ hadoop-daemon.sh start namenode

17) Start the datanodes
[hadoop@master hadoop]$ hadoop-daemons.sh start datanode

18) Start zkfc on all four namenode hosts

[hadoop@master hadoop]$ hadoop-daemons.sh --hostnames 'master slave1 slave2 hadoop04' start zkfc

19) Check the status of each NameNode in its web UI (port 50070)

[Figures 2-5: NameNode status pages for the four NameNodes]


The states are as expected; the configuration succeeded.
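Besides the web UIs, the active/standby state of each NameNode can be queried on the command line. A sketch assuming the service IDs defined in hdfs-site.xml above (it needs the live cluster, so no output is shown):

```shell
# Query the HA state of each NameNode in each federated nameservice.
hdfs haadmin -ns ns1 -getServiceState nn1
hdfs haadmin -ns ns1 -getServiceState nn2
hdfs haadmin -ns ns2 -getServiceState nn3
hdfs haadmin -ns ns2 -getServiceState nn4
```

Each pair should report one "active" and one "standby".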
