Cluster plan:
10.241.95.109  master  jdk, hadoop: NameNode, ZKFC, ResourceManager
10.241.95.107  h107    jdk, hadoop: NameNode, ZKFC, ResourceManager, ZooKeeper, JournalNode
10.241.95.110  slave1  jdk, hadoop: DataNode, NodeManager
10.241.95.111  slave2  jdk, hadoop: DataNode, NodeManager
10.241.95.105  h105    jdk, hadoop: DataNode, NodeManager, ZooKeeper, JournalNode
10.241.95.106  h106    jdk, hadoop: DataNode, NodeManager, ZooKeeper, JournalNode
1: Set each server's hostname
Target file: /etc/hosts, identical on all 6 machines (do not alias any cluster hostname to the 127.0.0.1 line, or that node's daemons will bind to loopback)
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.241.95.109 master
10.241.95.110 slave1
10.241.95.111 slave2
10.241.95.105 h105
10.241.95.106 h106
10.241.95.107 h107
2: Set JAVA_HOME and HADOOP_HOME
Target file: /etc/profile, identical on all 6 machines
export JAVA_HOME=/usr/java/jdk1.8.0_201
export HADOOP_HOME=/opt/app/hadoop-3.1.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
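After editing, reload the profile and confirm both tools resolve (a quick sanity check):
source /etc/profile
java -version
hadoop version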
3: Set up passwordless SSH login
Run ssh-keygen to generate a key pair.
Append the contents of /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys, then copy it to every machine; with 6 machines that means 6 key pairs, and every host must trust all 6 (see the sketch below).
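One way to wire this up, assuming root logins and the hostnames from the plan above (ssh-copy-id appends the local public key to the remote authorized_keys):
# run this on each of the 6 machines
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
for h in master h107 slave1 slave2 h105 h106; do
  ssh-copy-id root@$h      # prompts for the password once per host
done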
4: Hadoop configuration files
Identical on all 6 machines. Four files need editing (example snippets follow this list):
core-site.xml
hdfs-site.xml (this is where dfs.ha.fencing.methods is set to sshfence, with shell(/bin/true) as a fallback so failover never hangs on fencing)
mapred-site.xml
yarn-site.xml
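A sketch of the HDFS side, assuming a nameservice named mycluster with NameNode ids nn1/nn2 on master/h107, the ZooKeeper and JournalNode quorum on h107/h105/h106 from the plan, and the hadoop.tmp.dir referenced in step 6 (the property names are standard Hadoop 3.x; the values are illustrative):

core-site.xml:
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>hadoop.tmp.dir</name><value>/home/hadoop/hadoop-3.1.2/tmp</value></property>
  <property><name>ha.zookeeper.quorum</name><value>h107:2181,h105:2181,h106:2181</value></property>
</configuration>

hdfs-site.xml:
<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>master:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>h107:8020</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>master:9870</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>h107:9870</value></property>
  <!-- shared edits log lives on the three JournalNodes -->
  <property><name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://h107:8485;h105:8485;h106:8485/mycluster</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/home/hadoop/journaldata</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <!-- try sshfence first; /bin/true is the always-succeed fallback -->
  <property><name>dfs.ha.fencing.methods</name><value>sshfence
shell(/bin/true)</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
</configuration>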
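And a sketch of the YARN/MapReduce side, again with illustrative ids (rm1 on master, rm2 on h107) and the same ZooKeeper quorum:

mapred-site.xml:
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>

yarn-site.xml:
<configuration>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>yarncluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>master</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>h107</value></property>
  <!-- hadoop.zk.address replaces the older yarn.resourcemanager.zk-address in 3.x -->
  <property><name>hadoop.zk.address</name><value>h107:2181,h105:2181,h106:2181</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>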
5: Set the worker list
Applies to: master, h107
Target file: workers (in Hadoop 3.x this replaces the 2.x slaves file; it is plain text with one hostname per line, not XML)
slave1
slave2
h106
h105
6: Format HDFS
On master: hdfs namenode -format
The JournalNodes on h107, h105, and h106 must already be running, since formatting writes to the shared qjournal edits directory; see the sketch below.
Afterwards, copy the primary node's metadata to the standby machine.
Formatting creates a directory under the hadoop.tmp.dir path configured in core-site.xml; here that is /home/hadoop/hadoop-3.1.2/tmp on the primary node, so copy /home/hadoop/hadoop-3.1.2/tmp into /home/hadoop/hadoop-3.1.2/ on the standby (h107).
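The sequence, using the Hadoop 3.x per-daemon start command; the bootstrapStandby alternative pulls the metadata over RPC instead of scp:
# on h107, h105, h106: bring up the JournalNodes first
hdfs --daemon start journalnode
# on master: format the primary NameNode
hdfs namenode -format
# copy the metadata to the standby by hand...
scp -r /home/hadoop/hadoop-3.1.2/tmp root@h107:/home/hadoop/hadoop-3.1.2/
# ...or run this on h107 instead
hdfs namenode -bootstrapStandby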
7: Initialize the failover state in ZooKeeper
On master, with ZooKeeper already running on h107, h105, and h106:
hdfs zkfc -formatZK
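For example, with the stock ZooKeeper control script (assumed to be on PATH):
# on h107, h105, h106
zkServer.sh start
# then on master
hdfs zkfc -formatZK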
8: Start Hadoop
On any one machine in the cluster (the start scripts reach every other node over the passwordless SSH set up in step 3):
start-dfs.sh
start-yarn.sh
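Once the scripts return, a few standard commands confirm that HA actually came up (the service ids nn1/nn2 and rm1/rm2 are the ones assumed in the step 4 sketches):
jps                                  # each node should show the daemons planned for it
hdfs haadmin -getServiceState nn1    # expect one active, one standby
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2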
At this point the Hadoop high-availability cluster setup is complete.