1. Cluster plan:
drguo1                  jdk, hadoop             NameNode, ResourceManager, DFSZKFailoverController
drguo2                  jdk, hadoop             NameNode, ResourceManager, DFSZKFailoverController
drguo3                  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
drguo4                  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
drguo5  192.168.80.153  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
(The plan table lost its formatting when published and only the drguo5 row survived intact; the other rows are reconstructed from the configuration below — drguo1/drguo2 carry the active/standby NameNodes and ResourceManagers, while drguo3/4/5 carry ZooKeeper, the JournalNodes, DataNodes, and NodeManagers.)
2. Preparation:
Prepare five machines: configure static IPs, hostnames, and hostname-to-IP mappings; disable the firewall; install the JDK and set its environment variables (see http://blog.csdn.net/dr_guo/article/details/50886667 if unsure); create the user and group; and set up passwordless SSH (if you hit errors, see http://blog.csdn.net/dr_guo/article/details/50967442).
Note: comment out the 127.0.1.1 line in /etc/hosts, otherwise jps will show a running DataNode while the web UI reports 0 live nodes.
After commenting it out everything worked. Apparently some people get by without commenting it; I don't know why.
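For reference, /etc/hosts on every node might look like the sketch below. Only drguo5's IP appears in the cluster plan above; the other addresses are placeholders — substitute your own:

```
127.0.0.1       localhost
# 127.0.1.1     drguo1          <- comment this line out (see the note above)
192.168.80.149  drguo1          # placeholder IPs, except drguo5's
192.168.80.150  drguo2
192.168.80.151  drguo3
192.168.80.152  drguo4
192.168.80.153  drguo5
```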
3. Set up the ZooKeeper cluster (drguo3/drguo4/drguo5)
See the earlier post: ZooKeeper fully-distributed cluster setup (ZooKeeper完全分布式集群搭建).
4. Build the Hadoop HA cluster
Download the latest stable Hadoop from the official site (http://apache.opencas.org/hadoop/common/stable/) — 2.7.2 at the time of writing — and put it under /opt/Hadoop.
Change the owner (user:group) of the opt directory; I simply changed /opt's owner and group to guo. The details are in the ZooKeeper cluster post mentioned above.
Set the environment variables.
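The post doesn't list the exact variables; a minimal sketch, assuming Hadoop lives at the path used above (add this to /etc/profile or ~/.bashrc and `source` it):

```shell
# Hadoop environment -- paths assume the layout used in this post
export HADOOP_HOME=/opt/Hadoop/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Having sbin on the PATH is what allows running start-dfs.sh and hadoop-daemon.sh without a full path, as in the transcripts that follow.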
Edit hadoop-env.sh under /opt/Hadoop/hadoop-2.7.2/etc/hadoop.
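The edit itself isn't shown in the post; in Hadoop 2.x the usual (often the only necessary) change is to hard-code JAVA_HOME, because the value inherited over SSH by remote daemons is unreliable. The JDK path below is an example, not taken from the post:

```shell
# in hadoop-env.sh: replace "export JAVA_HOME=${JAVA_HOME}" with a literal path
export JAVA_HOME=/opt/jdk1.8.0_73   # example path -- use your actual JDK directory
```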
Edit core-site.xml:

<configuration>
	<!-- point the default FS at the logical nameservice, not a single host -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://ns1/</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/opt/Hadoop/hadoop-2.7.2/tmp</value>
	</property>
	<!-- ZooKeeper quorum used for automatic failover -->
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>drguo3:2181,drguo4:2181,drguo5:2181</value>
	</property>
</configuration>
Edit hdfs-site.xml:

<configuration>
	<!-- logical name of the nameservice -->
	<property>
		<name>dfs.nameservices</name>
		<value>ns1</value>
	</property>
	<!-- the two NameNodes under ns1 -->
	<property>
		<name>dfs.ha.namenodes.ns1</name>
		<value>nn1,nn2</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn1</name>
		<value>drguo1:9000</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.ns1.nn1</name>
		<value>drguo1:50070</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn2</name>
		<value>drguo2:9000</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.ns1.nn2</name>
		<value>drguo2:50070</value>
	</property>
	<!-- where the NameNodes share edit logs: the JournalNode quorum -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://drguo3:8485;drguo4:8485;drguo5:8485/ns1</value>
	</property>
	<!-- local disk path for JournalNode data -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/opt/Hadoop/hadoop-2.7.2/journaldata</value>
	</property>
	<!-- automatic failover via ZKFC -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.ns1</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- fencing: try sshfence, fall back to a no-op shell (one method per line) -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence
shell(/bin/true)</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/guo/.ssh/id_rsa</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>
</configuration>
First rename mapred-site.xml.template to mapred-site.xml, then edit it:

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>

Edit yarn-site.xml:

<configuration>
	<!-- ResourceManager HA -->
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yrc</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm1</name>
		<value>drguo1</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm2</name>
		<value>drguo2</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>drguo3:2181,drguo4:2181,drguo5:2181</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>
Edit slaves:
drguo3
drguo4
drguo5
Copy the entire Hadoop directory to drguo2/3/4/5. Delete share/doc before copying (the documentation isn't needed on the other nodes) — the copy goes much faster.
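The copy can be scripted roughly as below. This is a dry run — the commands are printed rather than executed; drop the leading `echo` to actually run them (hostnames as in this post, passwordless SSH assumed):

```shell
# dry run: print the commands that would copy Hadoop to the other nodes
HADOOP_DIR=/opt/Hadoop/hadoop-2.7.2
echo rm -rf "$HADOOP_DIR/share/doc"                 # skip the docs: faster copy
for host in drguo2 drguo3 drguo4 drguo5; do
    echo scp -r "$HADOOP_DIR" "$host":/opt/Hadoop/
done
```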
5. Start the ZooKeeper cluster (drguo3/drguo4/drguo5)
guo@drguo3:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo3:~$ jps
2005 Jps
1994 QuorumPeerMain
guo@drguo3:~$ ssh drguo4
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Fri Mar 25 14:04:43 2016 from 192.168.80.151
guo@drguo4:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo4:~$ jps
1977 Jps
1966 QuorumPeerMain
guo@drguo4:~$ exit
logout
Connection to drguo4 closed.
guo@drguo3:~$ ssh drguo5
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Fri Mar 25 14:04:56 2016 from 192.168.80.151
guo@drguo5:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo5:~$ jps
2041 Jps
2030 QuorumPeerMain
guo@drguo5:~$ exit
logout
Connection to drguo5 closed.
guo@drguo3:~$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader
6. Start the JournalNodes (on drguo3, drguo4, and drguo5). Note: only this first time do they need to be started by hand; afterwards, starting HDFS with start-dfs.sh starts the JournalNodes as well.
guo@drguo3:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo3.out
guo@drguo3:~$ jps
2052 Jps
2020 JournalNode
1963 QuorumPeerMain
guo@drguo3:~$ ssh drguo4
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Fri Mar 25 00:09:08 2016 from 192.168.80.149
guo@drguo4:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo4.out
guo@drguo4:~$ jps
2103 Jps
2071 JournalNode
1928 QuorumPeerMain
guo@drguo4:~$ exit
logout
Connection to drguo4 closed.
guo@drguo3:~$ ssh drguo5
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Thu Mar 24 23:52:17 2016 from 192.168.80.152
guo@drguo5:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo5.out
guo@drguo5:~$ jps
2276 JournalNode
2308 Jps
1959 QuorumPeerMain
guo@drguo5:~$ exit
logout
Connection to drguo5 closed.
7. Format HDFS and start the cluster (on drguo1)
Format the NameNode:
guo@drguo1:/opt$ hdfs namenode -format
Ran into a problem here again — once more the Chinese comments in the config files were to blame; after deleting them on drguo1/2/3 as well, the problem went away.
Copy the freshly formatted metadata (the tmp directory, i.e. hadoop.tmp.dir) to the standby NameNode drguo2:
guo@drguo1:/opt/Hadoop/hadoop-2.7.2$ scp -r tmp/ drguo2:/opt/Hadoop/hadoop-2.7.2/
Initialize the HA state in ZooKeeper:
guo@drguo1:/opt$ hdfs zkfc -formatZK
Start HDFS, then YARN:
guo@drguo1:/opt$ start-dfs.sh
guo@drguo1:/opt$ start-yarn.sh
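Once everything is up, the HA state can be checked from the command line — a sketch, since the output depends on your cluster (these are standard hdfs haadmin / yarn rmadmin subcommands, using the nn1/nn2 and rm1/rm2 ids configured above):

```
hdfs haadmin -getServiceState nn1    # prints active or standby
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```

You can also open http://drguo1:50070 and http://drguo2:50070 — one NameNode should report active and the other standby.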