Introduction to the Hadoop HA Cluster
This tutorial walks through building a Hadoop HA cluster. A few notes on the HA setup:
- In hadoop2.0, HDFS typically runs two NameNodes, one in the active state and one in standby. The active NameNode serves all client requests; the standby serves none and only mirrors the active NameNode's state, so it can take over quickly if the active fails. hadoop2.0 officially provides two HDFS HA solutions, NFS and QJM; here we use the simpler QJM. In this scheme the active and standby NameNodes synchronize metadata through a group of JournalNodes: an edit is considered committed once it has been written to a majority of the JournalNodes, which is why an odd number of JournalNodes is normally deployed.
- hadoop-2.2.0 still had one single point of failure: there was only one ResourceManager. Starting with hadoop-2.4.1 this is solved by running two ResourceManagers, one active and one standby, with their state coordinated through ZooKeeper.
- A ZooKeeper cluster is also configured here for ZKFC (DFSZKFailoverController) failover: when the active NameNode/ResourceManager goes down, the standby NameNode/ResourceManager is switched to active automatically (a client-side illustration follows this list).
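Because clients address the logical nameservice ns1 rather than a physical NameNode, a failover is invisible to them. A minimal illustration, assuming the cluster built below is up and running:
bin/hdfs dfs -ls hdfs://ns1/
# the failover proxy provider configured in hdfs-site.xml resolves ns1 to whichever
# NameNode is currently active, so the same command works before and after a failover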
Versions
Software | Version |
---|---|
OS | CentOS-7-x86_64-DVD-1810.iso |
Hadoop | hadoop-2.8.4 |
Zookeeper | zookeeper-3.4.10 |
System Setup
Cluster role assignment
Node | Roles |
---|---|
master1 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
master2 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
node1 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node2 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node3 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
Configure hosts [all]
vi /etc/hosts
192.168.56.101 node1
192.168.56.102 node2
192.168.56.103 node3
192.168.56.201 master1
192.168.56.202 master2
Create the hadoop user [all]
useradd hadoop
passwd hadoop
chmod -v u+w /etc/sudoers
vi /etc/sudoers
hadoop ALL=(ALL) ALL    # add this line below the root entry
chmod -v u-w /etc/sudoers
Set the hostname [all]
hostnamectl set-hostname $hostname [master1|master2|node1|node2|node3]
systemctl reboot -i
Passwordless SSH login [all]
ssh-keygen -t rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
scp .ssh/authorized_keys $next_node:~/.ssh/
# run the three steps above on each node in turn (master1 -> master2 -> node1 -> node2 -> node3),
# then copy the final authorized_keys, which now holds all five keys, back to every node
sudo vi /etc/ssh/sshd_config
RSAAuthentication yes    # deprecated and ignored by the OpenSSH 7.4 shipped with CentOS 7; safe to omit
StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
systemctl restart sshd.service
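Afterwards, every node should be able to reach every other node without a password; a quick round-trip check, run as the hadoop user:
for h in master1 master2 node1 node2 node3; do
  ssh $h hostname    # should print each hostname with no password prompt
done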
Stop and disable the firewall [all]
sudo systemctl stop firewalld
sudo firewall-cmd --state
sudo systemctl disable firewalld.service
SELinux policy [all]
vi /etc/selinux/config
SELINUX=disabled
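The change in /etc/selinux/config only takes effect after a reboot; to also drop SELinux to permissive mode for the current session:
sudo setenforce 0
getenforce    # now reports "Permissive"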
Software Installation
Install the JDK [all]
sudo mkdir -p /opt/env
sudo chown -R hadoop:hadoop /opt/env
tar -xvf jdk-8u121-linux-i586.tar.gz -C /opt/env/
# note: this is the 32-bit JDK; it runs on x86_64 but explains the "Client VM" and
# native-library warnings in the logs below. The x64 build (jdk-8u121-linux-x64.tar.gz) avoids them.
sudo vi /etc/profile
export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile
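A quick check that the right JDK is picked up:
java -version    # should report java version "1.8.0_121"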
Install ZooKeeper [node1 node2 node3]
sudo mkdir -p /opt/zookeeper
sudo chown -R hadoop:hadoop /opt/zookeeper
tar -zxvf /tmp/zookeeper-3.4.10.tar.gz -C /opt/zookeeper/
cd /opt/zookeeper/zookeeper-3.4.10
cp conf/zoo_sample.cfg conf/zoo.cfg    # zoo.cfg does not ship by default
vi conf/zoo.cfg
dataDir=/opt/zookeeper/zookeeper-3.4.10/data
......
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
mkdir data
echo $zk_id [1|2|3] > data/myid
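The number written to myid must match that host's server.N entry in zoo.cfg, i.e.:
# on node1:
echo 1 > data/myid
# on node2:
echo 2 > data/myid
# on node3:
echo 3 > data/myid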
Install Hadoop [all]
sudo mkdir -p /opt/hadoop/data
sudo chown -R hadoop:hadoop /opt/hadoop/
tar -zxvf hadoop-2.8.4.tar.gz -C /opt/hadoop/
mkdir /opt/hadoop/journaldata    # matches dfs.journalnode.edits.dir below (used on node1-3)
Configure Hadoop [all]
All of the files below live in /opt/hadoop/hadoop-2.8.4/etc/hadoop, and the property blocks go inside each file's <configuration> element.
vi core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns1/</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/data</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <value>3000</value>
</property>
vi hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>master1:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1.nn1</name>
  <value>master1:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>master2:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1.nn2</name>
  <value>master2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1:8485;node2:8485;node3:8485/ns1</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/hadoop/journaldata</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///opt/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///opt/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
vi mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>0.0.0.0:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>0.0.0.0:19888</value>
</property>
vi yarn-site.xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yrc</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2</value>
</property>
<!-- If we want to launch more than one RM in a single node, we need this configuration.
     Set the value per node: rm1 on master1, rm2 on master2. -->
<property>
  <name>yarn.resourcemanager.ha.id</name>
  <value>$ResourceManager_Id [rm1|rm2]</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-state-store.address</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Optional setting. The default value is /yarn-leader-election -->
<property>
  <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
  <value>/yarn-leader-election</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
vi hadoop-env.sh
vi mapred-env.sh
vi yarn-env.sh
# append the same block of exports to each of the three files:
export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.4
export HADOOP_PID_DIR=/opt/hadoop/hadoop-2.8.4/pids
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
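A quick sanity check that this Hadoop build runs under the installed JDK (using the full path, since the exports above live in Hadoop's env scripts rather than the login shell):
/opt/hadoop/hadoop-2.8.4/bin/hadoop version    # should print "Hadoop 2.8.4"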
vi masters
master2
vi slaves
node1
node2
node3
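Every node needs the same configuration; instead of editing five copies, the finished files can be pushed out from master1. A sketch:
for h in master2 node1 node2 node3; do
  scp /opt/hadoop/hadoop-2.8.4/etc/hadoop/* $h:/opt/hadoop/hadoop-2.8.4/etc/hadoop/
done
# remember: yarn.resourcemanager.ha.id must then be corrected to rm2 on master2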
Starting the Cluster
Start ZooKeeper [node1 node2 node3]
./zkServer.sh start
[hadoop@node1 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[hadoop@node2 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node3 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
Format the NameNode [master1]
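With QJM, the format also writes to the shared edits directory, so the JournalNodes must be reachable before formatting. Since start-dfs.sh has not run yet at this point, start them by hand first:
sbin/hadoop-daemon.sh start journalnode    # run on node1, node2 and node3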
bin/hdfs namenode -format
[hadoop@master2 ~]$ ll /opt/hadoop/data/dfs/name/current/
total 16
-rw-rw-r--. 1 hadoop hadoop 323 Jul 11 01:17 fsimage_0000000000000000000
-rw-rw-r--. 1 hadoop hadoop 62 Jul 11 01:17 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 hadoop hadoop 2 Jul 11 01:17 seen_txid
-rw-rw-r--. 1 hadoop hadoop 219 Jul 11 01:17 VERSION
Format ZKFC [master1]
bin/hdfs zkfc -formatZK    # run once; this creates the /hadoop-ha/ns1 znode in ZooKeeper
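Whether the HA znode was created can be verified with the ZooKeeper CLI from any ZK node:
/opt/zookeeper/zookeeper-3.4.10/bin/zkCli.sh -server node1:2181
# inside the CLI:
ls /hadoop-ha    # should print [ns1]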
Start the NameNode/ResourceManager [master1]
sbin/start-dfs.sh
[hadoop@master1 sbin]$ sh start-dfs.sh
which: no start-dfs.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
19/07/25 01:00:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master2: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master2.out
master1: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master1.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node3.out
Starting journal nodes [node1 node2 node3]
node2: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node2.out
node3: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node3.out
node1: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node1.out
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
19/07/25 01:01:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master2: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master2.out
master1: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5940 Jps
5869 DFSZKFailoverController
sbin/start-yarn.sh
[hadoop@master1 sbin]$ sh start-yarn.sh
starting yarn daemons
which: no start-yarn.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-resourcemanager-master1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5994 ResourceManager
6092 Jps
5869 DFSZKFailoverController
At this point, on the DataNodes:
[hadoop@node1 hadoop-2.8.4]$ jps
3808 QuorumPeerMain
5062 Jps
4506 DataNode
4620 JournalNode
4732 NodeManager
At this point, on master2:
[hadoop@master2 sbin]$ jps
6092 Jps
5869 DFSZKFailoverController
Bootstrap the standby NameNode [master2]
bin/hdfs namenode -bootstrapStandby    # copies the active NameNode's metadata over from master1
Start the standby NameNode [master2]
sbin/hadoop-daemon.sh start namenode
Start the standby ResourceManager [master2]
sbin/yarn-daemon.sh start resourcemanager    # start-yarn.sh only launches the RM on the node where it is run
[hadoop@master2 hadoop-2.8.4]$ jps
4233 Jps
3885 DFSZKFailoverController
4189 ResourceManager
4030 NameNode
Check the ResourceManager state
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm2
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
19/07/25 01:48:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm1
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
19/07/25 01:48:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
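The NameNodes can be checked the same way; with automatic failover enabled, killing the active daemon should promote the other one within seconds:
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2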
enjoy