Hadoop 2.2.0 HDFS HA Configuration

Note: the following describes an HDFS HA setup based on QJM (Quorum Journal Manager).

 

1.1 zookeeper cluster configuration

Here I deployed the zookeeper ensemble on 4 machines, with the following IPs and hostnames:

10.0.0.131 Namenode1-v2
10.0.0.132 Namenode2-v2
10.0.0.133 Datanode1-v2
10.0.0.134 Datanode2-v2

 

 

First create the zookeeper data directory, for example:

mkdir -p /local/zookeeper

Also create the log directory:

mkdir -p /local/zookeeper/log

Then edit your environment (e.g. ~/.bash_profile) and add the following variables:

export ZOO_HOME=/home/oracle/zookeeper-3.4.5
export ZOO_LOG_DIR=/local/zookeeper/log

Create the configuration file under $ZOO_HOME/conf:

touch zoo.cfg

Add the following settings to zoo.cfg:

tickTime=2000
dataDir=/local/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=Namenode1-v2:31316:31317
server.2=Namenode2-v2:31316:31317
server.3=Datanode1-v2:31316:31317
server.4=Datanode2-v2:31316:31317

 

On each of the 4 servers, create a file named myid under the dataDir (/local/zookeeper), containing 1, 2, 3 and 4 respectively:

# on Namenode1-v2 run:
echo 1 > /local/zookeeper/myid
# on Namenode2-v2 run:
echo 2 > /local/zookeeper/myid
# on Datanode1-v2 run:
echo 3 > /local/zookeeper/myid
# on Datanode2-v2 run:
echo 4 > /local/zookeeper/myid
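Writing the four myid files by hand is easy to get wrong. The per-host commands above can be folded into a small helper; this is only a sketch, the write_myid function is a hypothetical name, the host-to-id mapping mirrors the server.N entries in zoo.cfg, and the demo writes into a scratch directory instead of /local/zookeeper.

```shell
#!/bin/sh
# Sketch: derive the myid value from the hostname (mapping mirrors zoo.cfg)
# and write it into the ZooKeeper data directory.
write_myid() {
    host="$1"
    case "$host" in
        Namenode1-v2) id=1 ;;
        Namenode2-v2) id=2 ;;
        Datanode1-v2) id=3 ;;
        Datanode2-v2) id=4 ;;
        *) echo "unknown host: $host" >&2; return 1 ;;
    esac
    echo "$id" > "$ZK_DATA_DIR/myid"
}

# demo: write the myid for Datanode1-v2 into a scratch directory
# (on a real node you would use ZK_DATA_DIR=/local/zookeeper and "$(hostname)")
ZK_DATA_DIR="$(mktemp -d)"
write_myid Datanode1-v2
cat "$ZK_DATA_DIR/myid"    # prints 3
```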

Finally, start the zookeeper service on each node:

cd $ZOO_HOME
./bin/zkServer.sh start

You can check whether it started successfully with the jps command:

$ jps
4567 QuorumPeerMain

Seeing the QuorumPeerMain process means zookeeper started successfully.

To test whether the ensemble is up, run the following command in $ZOO_HOME; if it reports no errors, the ensemble was created successfully:

zkCli.sh -server localhost:2181

 

1.2 HDFS 2.2.0 HA configuration

 

1.2.1 core-site.xml

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/local/temp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>Namenode1-v2:2181,Namenode2-v2:2181,Datanode1-v2:2181,Datanode2-v2:2181</value>
  </property>

  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>1000</value>
    <description>ms</description>
  </property>

</configuration>
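To sanity-check the values in core-site.xml (and the other *-site.xml files below), a small awk helper can pull out a single property. This is a hypothetical utility, not part of Hadoop; the demo runs against a minimal core-site.xml written to a scratch file.

```shell
#!/bin/sh
# Sketch: extract one property value from a Hadoop *-site.xml file.
get_prop() {
    # $1 = property name, $2 = xml file
    awk -v want="$1" '
        /<name>/  { gsub(/.*<name>|<\/name>.*/, "");  name = $0 }
        /<value>/ { gsub(/.*<value>|<\/value>.*/, ""); if (name == want) print }
    ' "$2"
}

# demo against a minimal core-site.xml in a scratch file
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
EOF
get_prop fs.defaultFS "$cfg"    # prints hdfs://mycluster
```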

 

1.2.2 hdfs-site.xml

<configuration>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/local/dfs/name</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/local/dfs/data</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
    <description>Logical name for this new nameservice</description>
  </property>

  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
    <description>Unique identifiers for each NameNode in the nameservice</description>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>Namenode1-v2:8020</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>Namenode2-v2:8020</value>
  </property>

  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
    <value>Namenode1-v2:53310</value>
  </property>

  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
    <value>Namenode2-v2:53310</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>Namenode1-v2:50070</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>Namenode2-v2:50070</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://Namenode1-v2:8485;Namenode2-v2:8485;Datanode1-v2:8485/mycluster</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/oracle/.ssh/id_rsa_nn2</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/local/journaldata</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
    <value>60000</value>
  </property>

  <property>
    <name>ipc.client.connect.timeout</name>
    <value>60000</value>
  </property>

  <property>
    <name>dfs.image.transfer.bandwidthPerSec</name>
    <value>4194304</value>
  </property>

</configuration>

 

One point in the configuration above deserves special mention: dfs.ha.fencing.ssh.private-key-files points to a local file. We configured two namenodes for HDFS HA, nn1 and nn2. Under nn2's ~/.ssh/ directory you need to copy over the id_rsa file from nn1's ~/.ssh/ directory (and vice versa), renaming it to something like id_rsa_nn1 so that it does not overwrite the local file.
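The copy-and-rename step can be sketched as a small helper. stage_peer_key is a hypothetical name, and the demo below uses a scratch directory standing in for ~/.ssh with a fake key file; on the real nodes you would first fetch the peer's id_rsa (e.g. with scp).

```shell
#!/bin/sh
# Sketch: place the peer NameNode's private key under a distinct name so it
# cannot clobber this node's own ~/.ssh/id_rsa.
stage_peer_key() {
    src="$1"      # the peer's id_rsa, already fetched (e.g. via scp)
    ssh_dir="$2"  # normally ~/.ssh
    peer="$3"     # e.g. nn1 or nn2, yielding id_rsa_nn1 / id_rsa_nn2
    cp "$src" "$ssh_dir/id_rsa_$peer"
    chmod 600 "$ssh_dir/id_rsa_$peer"   # sshfence needs a private key ssh will accept
}

# demo in a scratch directory standing in for ~/.ssh
demo_dir="$(mktemp -d)"
printf 'FAKE-PRIVATE-KEY\n' > "$demo_dir/peer_id_rsa"
stage_peer_key "$demo_dir/peer_id_rsa" "$demo_dir" nn2
ls "$demo_dir"
```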

 

1.2.3 yarn-site.xml

<configuration>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>Namenode1-v2:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>Namenode1-v2:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>Namenode1-v2:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>Namenode1-v2:8033</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>Namenode1-v2:8088</value>
  </property>

</configuration>

 

 

1.3 install phase

 

 

 

Installation environment:

Hardware: 5 servers, 2 namenodes and 3 datanodes, as follows:

10.0.0.131 Namenode1-v2  namenode, zookeeper, journalnode, zkfc, ResourceManager
10.0.0.132 Namenode2-v2  namenode, zookeeper, journalnode, zkfc
10.0.0.133 Datanode1-v2  datanode, zookeeper, journalnode, NodeManager, HRegionServer
10.0.0.134 Datanode2-v2  datanode, zookeeper, journalnode, NodeManager, HRegionServer
10.0.0.135 Datanode3-v2  datanode, hive, journalnode, NodeManager, HRegionServer

 

0. First, start zookeeper on each node:

./bin/zkServer.sh start

1. Then, on one of the namenode nodes, run the following command to initialize the HA namespace in ZooKeeper:

$HADOOP_HOME/bin/hdfs zkfc -formatZK

 

2. On each node, start the journal daemon with:

$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode

 

3. On the primary namenode node, format the namenode and journalnode directories with ./bin/hadoop namenode -format:

$HADOOP_HOME/bin/hadoop namenode -format mycluster

4. On the primary namenode node, start the namenode process:

$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

 

5. On the standby node, run the first command below. It formats the standby namenode's directory and copies the metadata over from the primary namenode, without re-formatting the journalnode directories. Then start the standby namenode process with the second command:

$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

 

6. On both namenode nodes, run:

./sbin/hadoop-daemon.sh start zkfc

7. On all datanode nodes, start the datanode with:

$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
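Steps 0-7 above can be folded into a single first-boot script. This is only a sketch: it assumes ZOO_HOME and HADOOP_HOME are set, it shows the per-node commands as if run from one place (a real rollout would dispatch them to the right hosts), and the run wrapper defaults to dry-run mode, printing each command so the sequence can be reviewed; set DRY_RUN= to actually execute.

```shell
#!/bin/sh
# Sketch of the first-time bootstrap sequence (steps 0-7 above).
# DRY_RUN defaults to 1: commands are printed, not executed.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

bootstrap() {
    # 0. start zookeeper (on every zookeeper node)
    run "$ZOO_HOME/bin/zkServer.sh" start
    # 1. initialize the HA namespace in ZooKeeper (once, on a namenode)
    run "$HADOOP_HOME/bin/hdfs" zkfc -formatZK
    # 2. start journalnodes (on every journalnode node)
    run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start journalnode
    # 3-4. format and start the primary namenode
    run "$HADOOP_HOME/bin/hadoop" namenode -format mycluster
    run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
    # 5. on the standby: copy metadata over, then start it
    run "$HADOOP_HOME/bin/hdfs" namenode -bootstrapStandby
    run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
    # 6. zkfc on both namenodes
    run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start zkfc
    # 7. datanodes
    run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start datanode
}

bootstrap
```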

 

1.4 startup phase

On subsequent startups, a single command brings up all processes and services:

$HADOOP_HOME/sbin/start-dfs.sh

Then visit the following two addresses to check the state of the two namenodes:

http://Namenode1-v2:50070/dfshealth.jsp
http://Namenode2-v2:50070/dfshealth.jsp

 

1.5 stop phase

To stop all HDFS-related processes and services, run:

$HADOOP_HOME/sbin/stop-dfs.sh

 

1.6 Testing HDFS HA failover

On either namenode machine, find the namenode's process ID with jps, kill it with kill -9, and watch whether the other node's namenode transitions from standby to active.

$ jps
7456 JournalNode
7783 QuorumPeerMain
6783 NameNode
4090 Jps
5567 DFSZKFailoverController
$ kill -9 6783

Then watch the zkfc log on the machine whose namenode was previously standby. If a line like the following appears at the end, the failover succeeded:

2013-12-31 16:14:41,114 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at Namenode2-v2:53310 to active state

Now restart the killed namenode process:

$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

The last line of the corresponding zkfc log will read:

2013-12-31 16:14:55,683 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at Namenode1-v2:53310 to standby state

You can kill the namenode process back and forth between the two machines to verify the HDFS HA configuration.

 

1.7 Switching the active NameNode

On one of the namenode machines, run the following command (nn2 is the NameNode ID defined by dfs.ha.namenodes.mycluster; note that with dfs.ha.automatic-failover.enabled set to true, haadmin will refuse a manual transition unless you also pass --forcemanual):

hdfs haadmin -transitionToActive nn2

Then open http://Namenode1-v2:50070 and http://Namenode2-v2:50070 in a browser; you will see that the Namenode2-v2 node has become active while Namenode1-v2 is still standby.

If you want to exercise a NameNode failover, run:

hdfs haadmin -failover --forceactive nn1 nn2

 

If you want to upload data, you also need to change the value of fs.default.name in core-site.xml to hdfs://Namenode1-v2:9000. (Note that pointing clients at a single namenode this way bypasses the HA nameservice; a client with the HA settings above should be able to use hdfs://mycluster directly.)

This covers the hadoop-ha part of my environment; I hope we can all keep up this enthusiasm for learning. Follow-up documents will cover hbase, hive and sqoop, then flume as a case study, and finally a unified management script for the whole hadoop stack.

 

 
