Cluster plan (IP, hostname, roles):
192.168.182.12 (bigdata12): NameNode1 (primary), ResourceManager1 (primary), JournalNode, ZooKeeper
192.168.182.13 (bigdata13): NameNode2 (standby), ResourceManager2 (standby), JournalNode, ZooKeeper
192.168.182.14 (bigdata14): DataNode1, NodeManager1, ZooKeeper
192.168.182.15 (bigdata15): DataNode2, NodeManager2
I'm using the jdk-8u144-linux-x64.tar.gz package here.
tar -zxvf jdk-8u144-linux-x64.tar.gz -C ~/training
vi ~/.bash_profile
export JAVA_HOME=/root/training/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH
source ~/.bash_profile
java -version
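If the environment variables are set correctly, java -version should print something close to the following (build numbers may differ slightly):
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)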
Configure hostname resolution by editing /etc/hosts (on every machine):
vi /etc/hosts
192.168.182.12 bigdata12
192.168.182.13 bigdata13
192.168.182.14 bigdata14
192.168.182.15 bigdata15
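The same /etc/hosts entries need to exist on all four machines. A quick sketch to check that every hostname resolves and answers (hostnames as assumed in the plan above):
for h in bigdata12 bigdata13 bigdata14 bigdata15; do ping -c 1 $h; done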
ssh-keygen -t rsa
Meaning: this generates an RSA key pair (public key and private key) used by SSH's asymmetric-key authentication.
Note: the following four commands need to be run once on every machine.
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata12
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata13
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata14
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata15
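To confirm that passwordless SSH now works, a quick check like the following (run from each node) should print every hostname without prompting for a password:
for h in bigdata12 bigdata13 bigdata14 bigdata15; do ssh $h hostname; done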
Install and configure ZooKeeper on the master node (bigdata12).
I'm using the zookeeper-3.4.10.tar.gz package here.
tar -zxvf zookeeper-3.4.10.tar.gz -C ~/training
Append the following to ~/.bash_profile:
export ZOOKEEPER_HOME=/root/training/zookeeper-3.4.10
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source ~/.bash_profile
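A fresh ZooKeeper 3.4.10 extract ships only conf/zoo_sample.cfg; if conf/zoo.cfg does not exist yet, copy the sample first:
cd /root/training/zookeeper-3.4.10/conf
cp zoo_sample.cfg zoo.cfg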
Edit the ZooKeeper configuration and set the data directory and the server list:
vi /root/training/zookeeper-3.4.10/conf/zoo.cfg
dataDir=/root/training/zookeeper-3.4.10/tmp
server.1=bigdata12:2888:3888
server.2=bigdata13:2888:3888
server.3=bigdata14:2888:3888
Create the data directory and write a myid file under /root/training/zookeeper-3.4.10/tmp:
mkdir -p /root/training/zookeeper-3.4.10/tmp
echo 1 > /root/training/zookeeper-3.4.10/tmp/myid
scp -r /root/training/zookeeper-3.4.10/ bigdata13:/root/training
scp -r /root/training/zookeeper-3.4.10/ bigdata14:/root/training
On bigdata13 and bigdata14, edit the myid file and change the 1 to 2 and 3 respectively (or do it over SSH as sketched below):
vi myid
On bigdata13 enter: 2
On bigdata14 enter: 3
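Alternatively, the same change can be made from bigdata12 over SSH, for example:
ssh bigdata13 "echo 2 > /root/training/zookeeper-3.4.10/tmp/myid"
ssh bigdata14 "echo 3 > /root/training/zookeeper-3.4.10/tmp/myid"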
Install and configure Hadoop on the master node (bigdata12); I'm using hadoop-2.7.3 here, extracted to /root/training (add its bin and sbin directories to the PATH in ~/.bash_profile the same way as above):
tar -zxvf hadoop-2.7.3.tar.gz -C ~/training
In etc/hadoop/hadoop-env.sh, set JAVA_HOME:
export JAVA_HOME=/root/training/jdk1.8.0_144
Edit core-site.xml:
<configuration>
    <property><name>fs.defaultFS</name><value>hdfs://ns1</value></property>
    <property><name>hadoop.tmp.dir</name><value>/root/training/hadoop-2.7.3/tmp</value></property>
    <property><name>ha.zookeeper.quorum</name><value>bigdata12:2181,bigdata13:2181,bigdata14:2181</value></property>
</configuration>
Edit hdfs-site.xml:
<configuration>
    <property><name>dfs.nameservices</name><value>ns1</value></property>
    <property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>bigdata12:9000</value></property>
    <property><name>dfs.namenode.http-address.ns1.nn1</name><value>bigdata12:50070</value></property>
    <property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>bigdata13:9000</value></property>
    <property><name>dfs.namenode.http-address.ns1.nn2</name><value>bigdata13:50070</value></property>
    <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://bigdata12:8485;bigdata13:8485/ns1</value></property>
    <property><name>dfs.journalnode.edits.dir</name><value>/root/training/hadoop-2.7.3/journal</value></property>
    <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
    <property><name>dfs.client.failover.proxy.provider.ns1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
    <property><name>dfs.ha.fencing.methods</name><value>sshfence
shell(/bin/true)</value></property>
    <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
    <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
</configuration>
Edit mapred-site.xml:
<configuration>
    <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>
Edit yarn-site.xml to configure YARN HA:
<configuration>
    <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
    <property><name>yarn.resourcemanager.cluster-id</name><value>yrc</value></property>
    <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
    <property><name>yarn.resourcemanager.hostname.rm1</name><value>bigdata12</value></property>
    <property><name>yarn.resourcemanager.hostname.rm2</name><value>bigdata13</value></property>
    <property><name>yarn.resourcemanager.zk-address</name><value>bigdata12:2181,bigdata13:2181,bigdata14:2181</value></property>
    <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>
Edit the slaves file to list the DataNode/NodeManager hosts:
bigdata14
bigdata15
scp -r /root/training/hadoop-2.7.3/ root@bigdata13:/root/training/
scp -r /root/training/hadoop-2.7.3/ root@bigdata14:/root/training/
scp -r /root/training/hadoop-2.7.3/ root@bigdata15:/root/training/
Start ZooKeeper on each of the three ZooKeeper nodes (bigdata12, bigdata13, bigdata14):
zkServer.sh start
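Once all three instances are up, zkServer.sh status should report one leader and two followers, for example:
zkServer.sh status
# on one node:       Mode: leader
# on the other two:  Mode: follower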
Start the JournalNode daemon on bigdata12 and bigdata13:
hadoop-daemon.sh start journalnode
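After this step, jps on bigdata12 and bigdata13 should show a JournalNode process (alongside QuorumPeerMain for ZooKeeper):
jps
# expected processes (PIDs will differ):
# JournalNode
# QuorumPeerMain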
Format HDFS on bigdata12:
hdfs namenode -format
Copy /root/training/hadoop-2.7.3/tmp on bigdata12 to the same path on bigdata13 (run from inside the tmp directory):
scp -r dfs/ root@bigdata13:/root/training/hadoop-2.7.3/tmp
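As an alternative to copying the metadata by hand, the standby NameNode is often initialized with the bootstrapStandby command on bigdata13 (this normally requires the first NameNode to be running, so it is usually done after HDFS has been started on bigdata12):
hdfs namenode -bootstrapStandby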
Format the ZKFC state in ZooKeeper:
hdfs zkfc -formatZK
Log: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
This log line shows that a /hadoop-ha/ns1 znode was created in ZooKeeper; it is used to store the NameNodes' HA state.
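The new znode can also be inspected from the ZooKeeper command-line client:
zkCli.sh -server bigdata12:2181
# inside the ZooKeeper shell:
ls /hadoop-ha
# expected output: [ns1]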
On bigdata12, start the whole cluster:
start-all.sh
Log:
Starting namenodes on [bigdata12 bigdata13]
bigdata12: starting namenode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-namenode-bigdata12.out
bigdata13: starting namenode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-namenode-bigdata13.out
bigdata14: starting datanode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-datanode-bigdata14.out
bigdata15: starting datanode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-datanode-bigdata15.out
bigdata13: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata13.out
bigdata12: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata12.out
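After start-all.sh finishes, jps on each node should roughly show the following processes (based on the role plan at the top; the standby ResourceManager is started in the next step):
jps
# bigdata12: NameNode, DFSZKFailoverController, JournalNode, ResourceManager, QuorumPeerMain
# bigdata13: NameNode, DFSZKFailoverController, JournalNode, QuorumPeerMain
# bigdata14: DataNode, NodeManager, QuorumPeerMain
# bigdata15: DataNode, NodeManager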
On bigdata13, start the standby ResourceManager manually (start-all.sh only starts the ResourceManager on the node where it is run):
yarn-daemon.sh start resourcemanager
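To verify the HA state, query both NameNodes and both ResourceManagers; one of each pair should report active and the other standby:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2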
At this point, the HA architecture of the Hadoop cluster has been set up successfully.