1. Operating system: CentOS 7, 64-bit
hostname | IP |
---|---|
cluster-master | 172.20.0.2 |
cluster-slave1 | 172.20.0.3 |
cluster-slave2 | 172.20.0.4 |
cluster-slave3 | 172.20.0.5 |
After installing Docker, pull the CentOS image:
$ docker pull daocloud.io/library/centos:7
To follow the cluster layout, the containers need fixed IPs, so first create a Docker network with a fixed subnet:
$ docker network create --subnet=172.20.0.0/16 netgroup
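As a quick sanity check, the new network can be inspected to confirm it carries the expected range:
$ docker network inspect netgroup | grep -i subnet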
Once the subnet exists, containers with fixed IPs can be created.
#cluster-master
# -p publishes container ports to the host; used later to reach the web UIs
docker run -d --privileged -ti --name cluster-master -h cluster-master -p 18088:18088 -p 9870:9870 --net netgroup --ip 172.20.0.2 daocloud.io/library/centos:7 /usr/sbin/init
#cluster-slaves
docker run -d --privileged -ti --name cluster-slave1 -h cluster-slave1 --net netgroup --ip 172.20.0.3 daocloud.io/library/centos:7 /usr/sbin/init
docker run -d --privileged -ti --name cluster-slave2 -h cluster-slave2 --net netgroup --ip 172.20.0.4 daocloud.io/library/centos:7 /usr/sbin/init
docker run -d --privileged -ti --name cluster-slave3 -h cluster-slave3 --net netgroup --ip 172.20.0.5 daocloud.io/library/centos:7 /usr/sbin/init
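To verify that each container received its fixed address, docker inspect can print the IP per container (the template below simply iterates the networks each container is attached to):
$ docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
    cluster-master cluster-slave1 cluster-slave2 cluster-slave3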
Open a console and enter the Docker container:
docker exec -it cluster-master /bin/bash
1. cluster-master
Installation:
# cluster-master needs extra configuration changes (master only)
#cluster-master
# install OpenSSH
$ yum -y install openssh openssh-server openssh-clients
$ systemctl start sshd
#### have ssh accept new host keys automatically
#### on the master, make ssh add hosts to known_hosts automatically on login
$ vi /etc/ssh/ssh_config
# find the original line: StrictHostKeyChecking ask
# change StrictHostKeyChecking to no
# save the file
$ systemctl restart sshd
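If you prefer a non-interactive edit over vi, a sed one-liner along these lines should make the same change (assuming the stock CentOS 7 ssh_config, where the option ships commented out):
$ sed -i 's/^#\?[[:space:]]*StrictHostKeyChecking.*/StrictHostKeyChecking no/' /etc/ssh/ssh_config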
2. Install OpenSSH on each of the slaves
# install OpenSSH
$ yum -y install openssh openssh-server openssh-clients
$ systemctl start sshd
3. Distribute the cluster-master public key
On the master, run ssh-keygen -t rsa and press Enter through every prompt. When it finishes, the ~/.ssh directory contains id_rsa (the private key) and id_rsa.pub (the public key); redirect id_rsa.pub into authorized_keys.
$ ssh-keygen -t rsa
# press Enter at every prompt
$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
The root password on each slave has to be set first (the ssh/scp steps below will prompt for it):
$ passwd
# enter and confirm a password for root (e.g. root)
Once the key files exist, use scp to distribute the public key to the slave hosts:
$ ssh root@cluster-slave1 'mkdir ~/.ssh'
$ scp ~/.ssh/authorized_keys root@cluster-slave1:~/.ssh
$ ssh root@cluster-slave2 'mkdir ~/.ssh'
$ scp ~/.ssh/authorized_keys root@cluster-slave2:~/.ssh
$ ssh root@cluster-slave3 'mkdir ~/.ssh'
$ scp ~/.ssh/authorized_keys root@cluster-slave3:~/.ssh
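As an alternative, ssh-copy-id (shipped with openssh-clients) can replace each mkdir/scp pair and also sets the permissions on ~/.ssh for you:
$ ssh-copy-id root@cluster-slave1
$ ssh-copy-id root@cluster-slave2
$ ssh-copy-id root@cluster-slave3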
After distribution, test that passwordless login works:
[root@cluster-master /]# ssh root@cluster-slave1
[root@cluster-slave1 ~]# exit
logout
[root@cluster-master /]# yum -y install epel-release
[root@cluster-master /]# yum -y install ansible
# the Ansible configuration (including the hosts inventory) lives under /etc/ansible
Now edit Ansible's hosts inventory:
$ vi /etc/ansible/hosts
[cluster]
cluster-master
cluster-slave1
cluster-slave2
cluster-slave3
[master]
cluster-master
[slaves]
cluster-slave1
cluster-slave2
cluster-slave3
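With the inventory saved and the SSH keys from the previous step in place, an ad-hoc ping confirms Ansible can reach every node:
$ ansible cluster -m ping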
Because /etc/hosts is rewritten by Docker every time a container starts, direct edits do not survive a restart. To make each container regenerate the cluster hosts entries after a restart, rewrite /etc/hosts from ~/.bashrc.
Append the following to ~/.bashrc:
$ vi ~/.bashrc
:>/etc/hosts
cat >>/etc/hosts<<EOF
127.0.0.1 localhost
172.20.0.2 cluster-master
172.20.0.3 cluster-slave1
172.20.0.4 cluster-slave2
172.20.0.5 cluster-slave3
EOF
$ source ~/.bashrc
This applies the configuration; /etc/hosts now contains the required entries:
[root@cluster-master ansible]# cat /etc/hosts
127.0.0.1 localhost
172.20.0.2 cluster-master
172.20.0.3 cluster-slave1
172.20.0.4 cluster-slave2
172.20.0.5 cluster-slave3
Then distribute .bashrc to the rest of the cluster with Ansible:
ansible cluster -m copy -a "src=~/.bashrc dest=~/"
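To check that the copy landed, the tail of the file can be inspected on the slaves (the /etc/hosts rewrite itself only happens once a new login shell sources it):
$ ansible slaves -m shell -a "tail -n 8 ~/.bashrc"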
Download JDK 1.8 and extract it under /opt:
$ docker cp jdk-8u211-linux-x64.tar.gz cluster-master:/opt/
# then, inside cluster-master:
$ cd /opt
$ tar -zxvf jdk-8u211-linux-x64.tar.gz
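The JAVA_HOME exported below points to /opt/jdk8, while this tarball normally extracts to a versioned directory (jdk1.8.0_211), so a matching symlink is presumably needed, e.g.:
$ ln -s /opt/jdk1.8.0_211 /opt/jdk8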
Download Hadoop 3 into /opt, extract the archive, and create a symlink:
$ wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.2/hadoop-3.3.2.tar.gz --no-check-certificate
$ tar -xzvf hadoop-3.3.2.tar.gz
$ ln -s hadoop-3.3.2 hadoop
Edit the ~/.bashrc file:
# hadoop
export HADOOP_HOME=/opt/hadoop-3.3.2
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
#java
export JAVA_HOME=/opt/jdk8
export PATH=$JAVA_HOME/bin:$PATH
Apply the changes:
$ source ~/.bashrc
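A quick check that both tools now resolve from the updated PATH:
$ java -version
$ hadoop version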
Switch to the Hadoop configuration directory:
$ cd $HADOOP_HOME/etc/hadoop/
1. Edit core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://cluster-master:9000</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
    </property>
</configuration>
2. Edit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>staff</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
3. Edit mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>cluster-master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>cluster-master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>cluster-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>cluster-master:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/jobhistory/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/jobhistory/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
4. Edit yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>cluster-master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>cluster-master:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>cluster-master:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>cluster-master:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>cluster-master:18141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>cluster-master:18088</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
</configuration>
Package the Hadoop directory (and its symlink) so the playbook below can push it to the slaves:
$ cd /opt
$ tar -cvf hadoop-dis.tar hadoop hadoop-3.3.2
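Optionally list the archive to confirm that both the symlink and the release directory were captured:
$ tar -tf hadoop-dis.tar | head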
---
- hosts: cluster
  tasks:
    - name: copy .bashrc to slaves
      copy: src=~/.bashrc dest=~/
      notify:
        - exec source
    - name: copy hadoop-dis.tar to slaves
      unarchive: src=/opt/hadoop-dis.tar dest=/opt
  handlers:
    - name: exec source
      shell: source ~/.bashrc
Save the YAML above as hadoop-dis.yaml and run it:
ansible-playbook hadoop-dis.yaml
hadoop-dis.tar is unpacked automatically into /opt on each slave host.
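An ad-hoc check confirms the archive really was unpacked on every slave:
$ ansible slaves -m command -a "ls /opt"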
Hadoop 3 refuses to start daemons as root unless the corresponding *_USER variables are defined, so add them to hadoop-env.sh:
$ vi /opt/hadoop-3.3.2/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Format the namenode on cluster-master:
$ hdfs namenode -format
If the output contains wording like "successfully formatted", the format succeeded.
$ hdfs --daemon start datanode
$ start-all.sh
After startup, use jps to check whether the daemons are running:
# master node
9697 NodeManager
8947 NameNode
9076 DataNode
9573 ResourceManager
9318 SecondaryNameNode
10041 Jps
# slave node
944 DataNode
1020 Jps
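Beyond jps, the HDFS view from the master should report three live datanodes (the exact wording of the report can vary slightly between Hadoop versions):
$ hdfs dfsadmin -report | grep -E "Live datanodes|Name:"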
Daemon | Web UI | Notes |
---|---|---|
NameNode | http://nn_host:port/ | Default HTTP port is 9870. |
ResourceManager | http://rm_host:port/ | HTTP port is 18088 here (set in yarn-site.xml; the stock default is 8088). |
MapReduce JobHistory Server | http://jhs_host:port/ | Default HTTP port is 19888. |
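Because ports 9870 and 18088 were published when cluster-master was created, the NameNode and ResourceManager UIs should also be reachable from the Docker host; a 200 (or a redirect code) means the UI is up:
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9870
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:18088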
Shutting down the services
Stop the cluster:
$ stop-all.sh
**Problem 1:** ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation
Fix: define the daemon users in hadoop-env.sh (the same settings shown earlier):
$ vi /opt/hadoop-3.3.2/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
**Problem 2:** When starting the Hadoop cluster, the DataNodes on the slave nodes did not start.
The cause turned out to be that this command had been run again before starting the cluster:
hadoop namenode -format
Re-formatting regenerates the namenode metadata, so the VERSION information (clusterID) on the master no longer matches what the datanodes recorded, and they fail to register.
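To confirm this is the cause, compare the clusterID recorded by the namenode with the one a datanode stored (paths follow the dfs.namenode.name.dir and dfs.datanode.data.dir values set earlier); the two IDs will differ:
# on cluster-master
> grep clusterID /home/hadoop/tmp/dfs/name/current/VERSION
# on a slave
> ssh root@cluster-slave1 "grep clusterID /home/hadoop/data/current/VERSION"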
Step 1: stop the cluster:
> stop-all.sh
Step 2: delete the logs folder and the tmp logs folder:
> rm -rf /opt/hadoop-3.3.2/logs
> rm -rf /tmp/logs
Step 3: check whether the namenode VERSION file exists and, if so, delete it:
> cd /home/hadoop/tmp/dfs/name/current
> cat VERSION
If the VERSION file is present, delete it:
> rm -f VERSION
Step 4: delete the VERSION file on all slave nodes:
> cd /home/hadoop/data/current
> cat VERSION
> rm -f VERSION
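After clearing the stale metadata, the namenode usually has to be re-formatted and the cluster restarted before the datanodes will register again:
> hdfs namenode -format
> start-all.sh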