HDFS HA Setup (ZKFC Automatic Failover)

For the basic cluster setup, see this post: hadoop cluster setup notes.
On top of that basic cluster, configure the files below.
HA configuration:
hdfs-site.xml


  

<configuration>
  <!-- replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- logical name of the nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- the NameNodes that make up the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of each NameNode -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>chdp11:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>chdp12:8020</value>
  </property>
  <!-- HTTP (web UI) address of each NameNode -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>chdp11:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>chdp12:50070</value>
  </property>
  <!-- JournalNodes that hold the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://chdp11:8485;chdp12:8485;chdp13:8485/mycluster</value>
  </property>
  <!-- class HDFS clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- fence the old active NameNode over SSH during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/root/.ssh/id_rsa</value>
  </property>
  <!-- enable ZKFC automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
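Note the format of the qjournal URI in dfs.namenode.shared.edits.dir: the JournalNode hosts are separated by semicolons (not commas), and the trailing path segment is the journal ID, which here matches the nameservice name. A small parsing sketch (pure shell, no cluster needed) makes the structure explicit:

```shell
# Pull apart the qjournal URI from dfs.namenode.shared.edits.dir above.
URI="qjournal://chdp11:8485;chdp12:8485;chdp13:8485/mycluster"

hosts=${URI#qjournal://}   # strip the scheme
hosts=${hosts%/*}          # strip the journal ID, leaving the host list
journal=${URI##*/}         # the journal ID (matches the nameservice)

IFS=';'
for h in $hosts; do
  echo "journalnode: $h"
done
unset IFS
echo "journal id: $journal"
```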

core-site.xml (trash is configured here; simply remove those properties if you don't need them)



<configuration>
  <!-- clients address the nameservice, not a single NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- where each JournalNode stores its edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/jn</value>
  </property>
  <!-- ZooKeeper ensemble used by the ZKFCs -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>chdp11:2181,chdp12:2181,chdp13:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>60</value>
    <description>Number of minutes after which the checkpoint
    gets deleted. If zero, the trash feature is disabled.
    This option may be configured both on the server and the
    client. If trash is disabled server side then the client
    side configuration is checked. If trash is enabled on the
    server side then the value configured on the server is
    used and the client configuration value is ignored.
    </description>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>0</value>
    <description>Number of minutes between trash checkpoints.
    Should be smaller or equal to fs.trash.interval. If zero,
    the value is set to the value of fs.trash.interval.
    Every time the checkpointer runs it creates a new checkpoint
    out of current and removes checkpoints created more than
    fs.trash.interval minutes ago.
    </description>
  </property>
</configuration>



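The two trash settings above interact: deleted files survive in trash for fs.trash.interval minutes, and fs.trash.checkpoint.interval = 0 means the checkpointer's period falls back to fs.trash.interval. A tiny sketch of that fallback rule, using the values configured above:

```shell
# How the trash settings from core-site.xml resolve.
TRASH_INTERVAL=60        # minutes a trash checkpoint survives
CHECKPOINT_INTERVAL=0    # 0 means "fall back to fs.trash.interval"

if [ "$CHECKPOINT_INTERVAL" -eq 0 ]; then
  CHECKPOINT_INTERVAL=$TRASH_INTERVAL
fi
echo "checkpointer runs every ${CHECKPOINT_INTERVAL} min; checkpoints older than ${TRASH_INTERVAL} min are deleted"
```

With trash enabled like this, files removed via the HDFS shell (e.g. `hadoop fs -rm`) are moved into the user's `.Trash` directory instead of being deleted immediately.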
Follow-up steps (I use full paths):
(1) Stop all HDFS services:
/usr/SFT/HA/hadoop-2.7.2/sbin/stop-dfs.sh
(2) Start the ZooKeeper cluster on each ZooKeeper node (note: zkServer.sh ships with ZooKeeper, not Hadoop, so run it from your ZooKeeper installation's bin directory):
<ZooKeeper install dir>/bin/zkServer.sh start
(3) Initialize the HA state in ZooKeeper:
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs zkfc -formatZK
(4) Start the HDFS services:
/usr/SFT/HA/hadoop-2.7.2/sbin/start-dfs.sh
(5) Start the NameNode on the standby machine:
/usr/SFT/HA/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode
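Once everything is up, you can check which NameNode is active with `hdfs haadmin -getServiceState nn1` (and likewise nn2). A minimal sketch of that check, where `query_state` is a hypothetical stub standing in for the real `hdfs haadmin -getServiceState` call so the loop runs without a live cluster:

```shell
# Find the active NameNode among nn1 and nn2.
# query_state is a stub; on a real cluster replace its body with:
#   hdfs haadmin -getServiceState "$1"
query_state() {
  case "$1" in
    nn1) echo active ;;    # assumed: nn1 came up as active
    nn2) echo standby ;;
  esac
}

active_nn=""
for nn in nn1 nn2; do
  if [ "$(query_state "$nn")" = "active" ]; then
    active_nn="$nn"
  fi
done
echo "active NameNode: $active_nn"
```

To exercise automatic failover, kill the active NameNode process on its host and confirm the standby transitions to active within a few seconds.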
