Set up a Hadoop cluster inside Docker containers on an Alibaba Cloud instance.
1. OS environment
CentOS 8 (64-bit)
hostname | Description |
---|---|
root@dfxMachine | Docker host |
root@cluster-master | Hadoop master |
root@cluster-slave1 | Hadoop slave1 |
root@cluster-slave2 | Hadoop slave2 |
root@cluster-slave3 | Hadoop slave3 |
2. Network configuration
hostname | IP |
---|---|
cluster-master | 172.18.0.2 |
cluster-slave1 | 172.18.0.3 |
cluster-slave2 | 172.18.0.4 |
cluster-slave3 | 172.18.0.5 |
3. Install Docker
Install Docker CE on the Alibaba Cloud host running Ubuntu 16.04 LTS (64-bit).
4. Pull the CentOS image
Run docker pull centos, which pulls the centos:latest image by default, then run docker images to confirm that the image is present locally.
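Since centos:latest moves over time, you may prefer to pin the CentOS 8 tag explicitly so the containers match the OS listed in section 1. A minimal sketch (the tag name is an assumption about what is available on Docker Hub; if you pin a tag here, use the same tag in the docker run commands in step 5):
root@dfxMachine:~# docker pull centos:8
root@dfxMachine:~# docker images centos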
5. Create a subnet and create containers with fixed IPs
- The containers need fixed IPs, so first create a dedicated subnet in Docker:
docker network create --subnet=172.18.0.0/16 netgroup
- Once the subnet exists, create the containers with fixed IPs.
cluster-master (the -p flags map container ports to the host so the web management pages can be reached later):
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-master -h cluster-master -p 18088:18088 -p 9870:9870 --net netgroup --ip 172.18.0.2 centos /usr/sbin/init
cluster-slave[1-3]
slave1:
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave1 -h cluster-slave1 --net netgroup --ip 172.18.0.3 centos /usr/sbin/init
slave2:
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave2 -h cluster-slave2 --net netgroup --ip 172.18.0.4 centos /usr/sbin/init
slave3:
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave3 -h cluster-slave3 --net netgroup --ip 172.18.0.5 centos /usr/sbin/init
- Use
docker exec -it cluster-master /bin/bash
to open a shell inside cluster-master.
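To confirm that the subnet and fixed IPs were applied as intended, the containers can be inspected from the host; this is an optional check, not part of the original steps:
root@dfxMachine:~# docker network inspect netgroup
root@dfxMachine:~# docker ps --format "{{.Names}}: {{.Status}}"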
6. Install OpenSSH and configure passwordless login
1. Install OpenSSH on cluster-master:
- Install the openssh packages
[root@cluster-master /]# yum -y install openssh openssh-server openssh-clients
[root@cluster-master /]# systemctl start sshd
- Make ssh accept new host keys automatically, so the master adds known_hosts entries without prompting: open /etc/ssh/ssh_config, uncomment the StrictHostKeyChecking line, change ask to no, then save and exit.
[root@cluster-master /]# vi /etc/ssh/ssh_config
[root@cluster-master /]# systemctl restart sshd
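The same edit can be scripted with sed instead of vi. A small sketch, assuming the stock CentOS ssh_config where the line appears as "#   StrictHostKeyChecking ask":
[root@cluster-master /]# sed -i 's/^#\s*StrictHostKeyChecking ask/StrictHostKeyChecking no/' /etc/ssh/ssh_config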
2. Install OpenSSH on each of the three slaves in the same way:
[root@cluster-slave1 /]# yum -y install openssh openssh-server openssh-clients
[root@cluster-slave1 /]# systemctl start sshd
3. Distribute the cluster-master public key:
- On the master, run ssh-keygen -t rsa and press Enter through the prompts. This creates the ~/.ssh directory containing id_rsa (the private key) and id_rsa.pub (the public key). Then redirect id_rsa.pub into authorized_keys:
[root@cluster-master /]# ssh-keygen -t rsa
[root@cluster-master /]# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
- Copy the file from the master container to /root on the Docker host
root@dfxMachine:~# docker cp [master ID]:/root/.ssh/authorized_keys /root/
- From the host, copy authorized_keys into /root/.ssh on each of slave1-3. Before this step, create the .ssh directory under /root in each slave.
root@dfxMachine:~# docker cp authorized_keys [slave1-3 ID]:/root/.ssh/authorized_keys
Switch back to the master and test passwordless login to slave1-3.
Note: if you know the root password of slave1-3, you can also distribute the key like this:
[root@cluster-master /]# ssh root@cluster-slave1 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave1:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave2 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave2:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave3 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave3:~/.ssh
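Whichever method you use, a quick loop from the master confirms that each slave now accepts a key-based login without a password prompt (an optional check, not from the original):
[root@cluster-master /]# for h in cluster-slave1 cluster-slave2 cluster-slave3; do ssh root@$h hostname; done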
7. Install Ansible
Use the standard package installation; this way the Ansible configuration is placed under /etc/ansible.
- Install with:
[root@cluster-master /]# yum -y install epel-release
[root@cluster-master /]# yum -y install ansible
- Edit the hosts inventory with vi
[root@cluster-master /]# vi /etc/ansible/hosts
- Add the following to the inventory:
[cluster]
cluster-master
cluster-slave1
cluster-slave2
cluster-slave3
[master]
cluster-master
[slaves]
cluster-slave1
cluster-slave2
cluster-slave3
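After saving the inventory, a quick connectivity check confirms that Ansible can reach every node over ssh. This is a small optional check, assuming the passwordless login from step 6 is in place:
[root@cluster-master /]# ansible cluster -m ping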
8. Configure hosts on the master
/etc/hosts is rewritten when the container starts, so changes made directly to it are lost after a restart. To make the container regain the cluster hosts after every restart, rewrite /etc/hosts from a startup script instead.
- Open ~/.bashrc with vi
[root@cluster-master ~]# vi ~/.bashrc
- Append the following, then save and exit
:>/etc/hosts
cat >>/etc/hosts<<EOF
127.0.0.1   localhost
172.18.0.2  cluster-master
172.18.0.3  cluster-slave1
172.18.0.4  cluster-slave2
172.18.0.5  cluster-slave3
EOF
- Run source ~/.bashrc to apply it
- Run cat /etc/hosts to confirm that the file now contains the expected entries
[root@cluster-master ~]# cat /etc/hosts
127.0.0.1 localhost
172.18.0.2 cluster-master
172.18.0.3 cluster-slave1
172.18.0.4 cluster-slave2
172.18.0.5 cluster-slave3
9. Distribute .bashrc to the slaves with Ansible
On the master, run ansible cluster-slave1 -m copy -a "src=~/.bashrc dest=~/", and repeat for cluster-slave2 and cluster-slave3.
You can also run ansible cluster -m copy -a "src=~/.bashrc dest=~/" to distribute it to the whole group at once (my inventory was misconfigured at the time, which is why I used the clumsy per-host method above).
10. Software environment setup
- Download JDK 1.8 and extract it
[root@cluster-master opt]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz"
[root@cluster-master opt]# tar -xzvf jdk-8u141-linux-x64.tar.gz
- Download Hadoop 3 into /opt. The Beijing Institute of Technology mirror is recommended; the default source can take more than six hours, though the mirror carries fewer versions. After the download, extract it under /opt and create a symlink:
[root@cluster-master opt]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
[root@cluster-master opt]# tar -xzvf hadoop-3.2.1.tar.gz
[root@cluster-master opt]# ln -s hadoop-3.2.1 hadoop
11. Configure the Java and Hadoop environment variables
Edit ~/.bashrc and add the following
# hadoop
export HADOOP_HOME=/opt/hadoop-3.2.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
# java
export JAVA_HOME=/opt/jdk1.8.0_141
export PATH=$JAVA_HOME/bin:$PATH
Run the following to apply the changes
[root@cluster-master opt]# source ~/.bashrc
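A quick sanity check that both JAVA_HOME and HADOOP_HOME landed on the PATH (the exact version strings depend on your downloads):
[root@cluster-master opt]# java -version
[root@cluster-master opt]# hadoop version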
12. Edit the Hadoop configuration files
Run cd $HADOOP_HOME/etc/hadoop/ to enter the configuration directory, then edit each of the following files with vi.
- 1. Edit core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://cluster-master:9000</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
    </property>
</configuration>
- 2. Edit hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>staff</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
- 3. Edit mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>cluster-master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>cluster-master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>cluster-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>cluster-master:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/jobhistory/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/jobhistory/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
- 4. Edit yarn-site.xml:
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>cluster-master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>cluster-master:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>cluster-master:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>cluster-master:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>cluster-master:18141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>cluster-master:18088</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
</configuration>
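With the environment from step 11 loaded, you can ask Hadoop which filesystem URI it resolved; a small optional check that the edits are being picked up (it should print hdfs://cluster-master:9000):
[root@cluster-master hadoop]# hdfs getconf -confKey fs.defaultFS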
13. Package the Hadoop files
Bundle the hadoop symlink and the hadoop-3.2.1 directory into a single archive so Ansible can distribute it to the slave hosts
[root@cluster-master opt]# tar -cvf hadoop-dis.tar hadoop hadoop-3.2.1
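Optionally, list the archive before distributing it to make sure both the symlink and the release directory are inside:
[root@cluster-master opt]# tar -tf hadoop-dis.tar | head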
14. Distribute .bashrc and hadoop-dis.tar to the slave hosts with ansible-playbook
---
- hosts: cluster
  tasks:
    - name: copy .bashrc to slaves
      copy: src=~/.bashrc dest=~/
      notify:
        - exec source
    - name: copy hadoop-dis.tar to slaves
      unarchive: src=/opt/hadoop-dis.tar dest=/opt
  handlers:
    - name: exec source
      shell: source ~/.bashrc
Save the YAML above as hadoop-dis.yaml and run the command below; hadoop-dis.tar is automatically unpacked into /opt on each slave host.
[root@cluster-master opt]# ansible-playbook hadoop-dis.yaml
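To confirm that the playbook unpacked the archive on every slave, a quick spot check over the slaves group (optional, not part of the original steps):
[root@cluster-master opt]# ansible slaves -m shell -a "ls -d /opt/hadoop*"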
15. Format the NameNode
[root@cluster-master opt]# hadoop namenode -format
A few failures may show up during the NameNode format; don't let them make you doubt yourself. As long as a line like "common.Storage: Storage directory ... has been successfully formatted" appears (the directory is the dfs.namenode.name.dir configured above), the format succeeded.
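Note that hadoop namenode is a deprecated entry point in Hadoop 3; if you prefer, the equivalent current form is:
[root@cluster-master opt]# hdfs namenode -format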
16. Start the Hadoop cluster
At this point the Hadoop journey can begin. Starting the cluster is straightforward: $HADOOP_HOME/sbin contains the start and stop scripts listed below:
[root@cluster-master opt]# cd $HADOOP_HOME/sbin
[root@cluster-master sbin]# ls -l
total 112
drwxr-xr-x 4 1001 1001 4096 Sep 10 16:33 FederationStateStore
-rwxr-xr-x 1 1001 1001 2756 Sep 10 16:01 distribute-exclude.sh
-rwxr-xr-x 1 1001 1001 1983 Sep 10 15:57 hadoop-daemon.sh
-rwxr-xr-x 1 1001 1001 2522 Sep 10 15:57 hadoop-daemons.sh
-rwxr-xr-x 1 1001 1001 1542 Sep 10 16:04 httpfs.sh
-rwxr-xr-x 1 1001 1001 1500 Sep 10 15:58 kms.sh
-rwxr-xr-x 1 1001 1001 1841 Sep 10 16:36 mr-jobhistory-daemon.sh
-rwxr-xr-x 1 1001 1001 2086 Sep 10 16:01 refresh-namenodes.sh
-rwxr-xr-x 1 1001 1001 1779 Sep 10 15:57 start-all.cmd
-rwxr-xr-x 1 1001 1001 2221 Sep 10 15:57 start-all.sh
-rwxr-xr-x 1 1001 1001 1880 Sep 10 16:01 start-balancer.sh
-rwxr-xr-x 1 1001 1001 1401 Sep 10 16:01 start-dfs.cmd
-rwxr-xr-x 1 1001 1001 5288 Dec 23 12:27 start-dfs.sh
-rwxr-xr-x 1 1001 1001 1793 Sep 10 16:01 start-secure-dns.sh
-rwxr-xr-x 1 1001 1001 1571 Sep 10 16:33 start-yarn.cmd
-rwxr-xr-x 1 1001 1001 3436 Dec 23 12:29 start-yarn.sh
-rwxr-xr-x 1 1001 1001 1770 Sep 10 15:57 stop-all.cmd
-rwxr-xr-x 1 1001 1001 2166 Sep 10 15:57 stop-all.sh
-rwxr-xr-x 1 1001 1001 1783 Sep 10 16:01 stop-balancer.sh
-rwxr-xr-x 1 1001 1001 1455 Sep 10 16:01 stop-dfs.cmd
-rwxr-xr-x 1 1001 1001 3898 Sep 10 16:01 stop-dfs.sh
-rwxr-xr-x 1 1001 1001 1756 Sep 10 16:01 stop-secure-dns.sh
-rwxr-xr-x 1 1001 1001 1642 Sep 10 16:33 stop-yarn.cmd
-rwxr-xr-x 1 1001 1001 3083 Sep 10 16:33 stop-yarn.sh
-rwxr-xr-x 1 1001 1001 1982 Sep 10 15:57 workers.sh
-rwxr-xr-x 1 1001 1001 1814 Sep 10 16:33 yarn-daemon.sh
-rwxr-xr-x 1 1001 1001 2328 Sep 10 16:33 yarn-daemons.sh
If you run ./start-dfs.sh directly at this point, it will fail to start:
[root@cluster-master sbin]# ./start-dfs.sh
You first need to add the following lines in the blank space at the top of start-dfs.sh and start-yarn.sh.
- At the top of start-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
- At the top of start-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Then run the start script again:
[root@cluster-master sbin]# ./start-all.sh
17. Verification
References:
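The original post leaves the verification details to the references below. As a minimal sketch, check the Java daemons on each node with jps and query HDFS; which node runs DataNode and NodeManager depends on the workers file under $HADOOP_HOME/etc/hadoop:
[root@cluster-master sbin]# jps
[root@cluster-master sbin]# ssh cluster-slave1 jps
[root@cluster-master sbin]# hdfs dfsadmin -report
The web UIs should then be reachable through the ports mapped in step 5: the HDFS NameNode UI at http://<host-IP>:9870 and the YARN ResourceManager UI at http://<host-IP>:18088.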
- https://blog.csdn.net/qq_32635069/article/details/80859790
- https://segmentfault.com/a/1190000019391526?utm_source=tag-newest#item-5-9
- https://www.jianshu.com/p/0c7b6de487ce