Hadoop 3.2 + CentOS 7: five-node HA (master/slave) cluster configuration

Prerequisites:

hadoop-3.2.0 + jdk1.8 + CentOS 7 + zookeeper-3.4.5

These are the base packages I used to build the cluster.

I. Environment Preparation

Node roles:

master1: JDK, NameNode, DFSZKFailoverController (zkfc)
master2: JDK, NameNode, DFSZKFailoverController (zkfc)
slave1:  JDK, Hadoop, ZooKeeper, DataNode, NodeManager, JournalNode, QuorumPeerMain
slave2:  JDK, Hadoop, ZooKeeper, DataNode, NodeManager, JournalNode, QuorumPeerMain
slave3:  JDK, Hadoop, ZooKeeper, DataNode, NodeManager, JournalNode, QuorumPeerMain

Notes:

A Hadoop HA cluster normally contains two NameNodes, one in active state and one in standby state. The active NameNode serves client requests; the standby NameNode does not, and only synchronizes the active NameNode's state so that it can take over quickly if the active one fails.

Hadoop officially provides two HDFS HA solutions, NFS and QJM; here we use the simpler QJM. In this scheme the active and standby NameNodes share edit-log metadata through a group of JournalNodes, and a write is considered successful once it has reached a majority of the JournalNodes, which is why an odd number of JournalNodes is normally configured. A ZooKeeper ensemble is also set up for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically switched to active.

Earlier Hadoop versions still had another single point of failure: only one ResourceManager. Hadoop 3.2.0 addresses this with ResourceManager HA, where one ResourceManager is active and the others are standby, with their state coordinated through ZooKeeper.

On each of the five virtual machines, disable the firewall and change the hostname:

systemctl stop firewalld
systemctl disable firewalld

vim /etc/hostname
#set the hostname on each of the five VMs in turn (one name per machine), then save:
master1
master2
slave1
slave2
slave3
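
Alternatively, CentOS 7 can set the hostname without editing the file directly; run the matching command on each VM:

hostnamectl set-hostname master1  #on master1; use master2/slave1/slave2/slave3 on the other machines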

Configure the hosts file (IP address first, then hostname):

vim /etc/hosts
#add the following entries
192.168.60.10 master1
192.168.60.11 master2
192.168.60.12 slave1
192.168.60.13 slave2
192.168.60.14 slave3
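
A quick sanity check that the mapping works (assuming the five VMs are already reachable on this subnet):

ping -c 1 master2  #repeat for the other hostnames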

Set up passwordless SSH:

ssh-keygen -t rsa  #run on every VM
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys root@master2:/root/.ssh/
#repeat the steps above on each node in turn, then distribute the final authorized_keys file to every node
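
A shorter route, if you prefer: ssh-copy-id appends the local public key to the remote authorized_keys for you, so a loop like the following (run on every node, assuming the hostnames above already resolve) gives the same result:

for host in master1 master2 slave1 slave2 slave3; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host  #asks for the password once per host
done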

II. Installation Steps

Installing JDK 1.8:

1. Extract the archive

tar -zxvf jdk1.8.tar.gz -C /usr/local  #choose your own directory

2. Configure environment variables

vim /etc/profile

export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile  #reload the profile
java -version  #verify
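
The tarball extracts to a versioned directory rather than /usr/local/jdk, so JAVA_HOME above will not resolve as-is; a minimal fix is a symlink (the directory name jdk1.8.0_201 is only an example, use whatever the archive actually produced):

ln -s /usr/local/jdk1.8.0_201 /usr/local/jdk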

Installing ZooKeeper:

1. Extract ZooKeeper

tar -zxvf zookeeper-3.4.5.tar.gz -C /usr/local/soft  #choose your own directory

2. Edit the configuration file

cd /usr/local/soft/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
#change the following line
dataDir=/usr/local/soft/zookeeper-3.4.5/tmp
#then append at the end:
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=slave3:2888:3888
#save and exit

3. Create the tmp directory under the ZooKeeper directory

mkdir /usr/local/soft/zookeeper-3.4.5/tmp
#then create an empty file
touch /usr/local/soft/zookeeper-3.4.5/tmp/myid
#and write the ID into it
echo 1 > /usr/local/soft/zookeeper-3.4.5/tmp/myid

4. Copy the configured ZooKeeper to the other nodes (first create the soft directory on slave2 and slave3: mkdir /usr/local/soft/)

scp -r /usr/local/soft/zookeeper-3.4.5/ slave2:/usr/local/soft
scp -r /usr/local/soft/zookeeper-3.4.5/ slave3:/usr/local/soft

5. Remember to change the myid contents on the other nodes

slave2:
echo 2 > /usr/local/soft/zookeeper-3.4.5/tmp/myid
slave3:
echo 3 > /usr/local/soft/zookeeper-3.4.5/tmp/myid

6. Start the ZooKeeper cluster (on all three machines)

cd /usr/local/soft/zookeeper-3.4.5/bin/
./zkServer.sh start
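
A quick way to confirm each node joined the ensemble (QuorumPeerMain is ZooKeeper's server main class):

jps  #slave1, slave2 and slave3 should each show a QuorumPeerMain process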

Hadoop cluster configuration:

1. Extract the archive

tar -zxvf hadoop-3.2.0.tar.gz -C /usr/local/soft/

2. Add environment variables

vim /etc/profile

export HADOOP_HOME=/usr/local/soft/hadoop-3.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile  #reload the profile
hadoop version  #verify

3. Configure hadoop-env.sh by adding JAVA_HOME

export JAVA_HOME=/usr/local/jdk
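
hadoop-env.sh, like all the configuration files edited in the following steps, lives under $HADOOP_HOME/etc/hadoop/. One way to append the line, assuming the install path used in this guide:

echo 'export JAVA_HOME=/usr/local/jdk' >> /usr/local/soft/hadoop-3.2.0/etc/hadoop/hadoop-env.sh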

4. Configure core-site.xml

<configuration>
    <!-- default filesystem: the HDFS nameservice defined in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- Hadoop working/temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/soft/hadoop-3.2.0/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>slave1:2181,slave2:2181,slave3:2181</value>
    </property>
</configuration>

5. Configure hdfs-site.xml

<configuration>
    <!-- logical name of the HDFS nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- the two NameNodes under ns1 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>master1:9000</value>
    </property>
    <!-- HTTP (web UI) address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>master1:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>master2:9000</value>
    </property>
    <!-- HTTP (web UI) address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>master2:50070</value>
    </property>
    <!-- JournalNodes that store the shared NameNode edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://slave1:8485;slave2:8485;slave3:8485/ns1</value>
    </property>
    <!-- local directory where each JournalNode keeps its data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/soft/hadoop-3.2.0/journal</value>
    </property>
    <!-- enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- failover proxy provider used by HDFS clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- private key used by sshfence; point this at the key of the user running HDFS -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connection timeout in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>

6. Configure mapred-site.xml

<configuration>
    <!-- run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
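
Optional but often useful on Hadoop 3.x: the bundled MapReduce examples can fail to locate the MapReduce application master unless HADOOP_MAPRED_HOME is passed to the job environment. If you hit that, properties along these lines (the path assumes this guide's install directory) are commonly added to mapred-site.xml:

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/soft/hadoop-3.2.0</value>
</property>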

7. Configure yarn-site.xml

<configuration>
    <!-- enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- logical ids of the ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2,rm3</value>
    </property>
    <!-- hostname behind each ResourceManager id -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>slave1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>slave2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm3</name>
        <value>slave3</value>
    </property>
    <!-- ZooKeeper quorum used to store ResourceManager state -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>slave1:2181,slave2:2181,slave3:2181</value>
    </property>
    <!-- shuffle service needed by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

8. Configure the workers file

slave1
slave2
slave3

9. Edit sbin/start-dfs.sh, sbin/stop-dfs.sh, sbin/start-yarn.sh and sbin/stop-yarn.sh (required when running the daemons as root)

Add to the dfs scripts:

HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root

Add to the yarn scripts:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

10. Distribute Hadoop 3.2.0 to the other nodes (run from /usr/local/soft)

scp -r hadoop-3.2.0 root@master2:/usr/local/soft/
scp -r hadoop-3.2.0 root@slave1:/usr/local/soft/
scp -r hadoop-3.2.0 root@slave2:/usr/local/soft/
scp -r hadoop-3.2.0 root@slave3:/usr/local/soft/

III. Starting the Cluster

Make sure the ZooKeeper cluster is running (started earlier; if not, start it now):

cd /usr/local/soft/zookeeper-3.4.5/bin/
./zkServer.sh start
#check the status: one leader and two followers
./zkServer.sh status

Start the JournalNodes (run on slave1, slave2 and slave3):

cd /usr/local/soft/hadoop-3.2.0
sbin/hadoop-daemon.sh start journalnode
#on Hadoop 3.x the equivalent non-deprecated form is: bin/hdfs --daemon start journalnode
#verify with jps: an extra JournalNode process should appear

Format HDFS

#on master1, run:
hdfs namenode -format
#formatting writes metadata under the hadoop.tmp.dir directory from core-site.xml (here /usr/local/soft/hadoop-3.2.0/tmp); copy that tmp directory to the same location on master2 so the standby NameNode starts from the same metadata
scp -r tmp/ root@master2:/usr/local/soft/hadoop-3.2.0
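
As an alternative to copying tmp/ by hand, the standby NameNode can pull the formatted metadata itself; a sketch (run on master2, assuming the NameNode on master1 has already been started):

hdfs namenode -bootstrapStandby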

Format ZK (run on master1)

hdfs zkfc -formatZK

 

Start the cluster (run on master1)

sbin/start-dfs.sh
sbin/start-yarn.sh
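
If everything came up, a quick check against the role list in Part I:

jps
#master1 and master2 should show NameNode and DFSZKFailoverController
#slave1/2/3 should show DataNode, NodeManager, JournalNode and QuorumPeerMain
#the NameNode web UIs are at http://master1:50070 and http://master2:50070 (one active, one standby), as configured in hdfs-site.xml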

 

 

Verify YARN:
Run the WordCount program from the examples shipped with Hadoop:

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /profile /out
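
Note that the /profile input path has to exist in HDFS before the job runs; a minimal sketch that uploads /etc/profile as the input and inspects the output afterwards (part-r-00000 is the default name of the first reducer's output file):

hdfs dfs -put /etc/profile /profile   #run this before the wordcount job
hdfs dfs -cat /out/part-r-00000       #run this after the job finishes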
