Build a cluster test environment with Docker on a Tencent Cloud host.
1. Operating system: CentOS 7.2, 64-bit
| hostname | IP |
| --- | --- |
| cluster-master | 172.18.0.2 |
| cluster-slave1 | 172.18.0.3 |
| cluster-slave2 | 172.18.0.4 |
| cluster-slave3 | 172.18.0.5 |
Install Docker:
curl -sSL https://get.daocloud.io/docker | sh
## Switch the registry to a mirror
### See this article for reference: http://www.jianshu.com/p/34d3b4568059
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://67e93489.m.daocloud.io
## Start Docker and enable it at boot
systemctl enable docker
systemctl start docker
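To confirm the daemon is up and the mirror took effect, a quick check (assuming systemd and a Docker recent enough to list mirrors in docker info):
systemctl status docker
docker info | grep -A1 "Registry Mirrors"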
docker pull daocloud.io/library/centos:latest
Use docker images to check the downloaded image (not docker ps, which lists containers rather than images).
Per the cluster layout, each container needs a fixed IP, so first create a Docker subnet that allows fixed IPs with the following command:
docker network create --subnet=172.18.0.0/16 netgroup
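A quick way to verify the subnet (a sketch using docker network inspect):
docker network inspect netgroup | grep Subnet
# expected: "Subnet": "172.18.0.0/16"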
Once the Docker subnet exists, containers with fixed IPs can be created.
#cluster-master
#-p maps host ports to container ports; used later to reach the web management UIs
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-master -h cluster-master -p 18088:18088 -p 9870:9870 --net netgroup --ip 172.18.0.2 daocloud.io/library/centos /usr/sbin/init
#cluster-slaves
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave1 -h cluster-slave1 --net netgroup --ip 172.18.0.3 daocloud.io/library/centos /usr/sbin/init
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave2 -h cluster-slave2 --net netgroup --ip 172.18.0.4 daocloud.io/library/centos /usr/sbin/init
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave3 -h cluster-slave3 --net netgroup --ip 172.18.0.5 daocloud.io/library/centos /usr/sbin/init
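The three slave commands differ only in name and IP, so an equivalent, more compact sketch is a loop:
for i in 1 2 3; do
    docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup \
        --name cluster-slave$i -h cluster-slave$i \
        --net netgroup --ip 172.18.0.$((i+2)) \
        daocloud.io/library/centos /usr/sbin/init
done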
Start a console and enter the Docker container:
docker exec -it cluster-master /bin/bash
1. cluster-master
Install:
# cluster-master additionally needs a config-file change (special case, see below)
# cluster-master
# install openssh
[root@cluster-master /]# yum -y install openssh openssh-server openssh-clients
[root@cluster-master /]# systemctl start sshd
#### Make SSH accept new host keys automatically
#### Configure the master so SSH logins add hosts to known_hosts without prompting
[root@cluster-master /]# vi /etc/ssh/ssh_config
# change the original StrictHostKeyChecking ask
# to StrictHostKeyChecking no
# then save
[root@cluster-master /]# systemctl restart sshd
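If you would rather not edit the file by hand, appending the option works too (a sketch; on CentOS the appended line falls under the trailing Host * stanza and so applies to all hosts):
echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config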
2. Install OpenSSH on each of the slaves
# install openssh
[root@cluster-slave1 /]# yum -y install openssh openssh-server openssh-clients
[root@cluster-slave1 /]# systemctl start sshd
3. Distribute cluster-master's public key
On the master, run ssh-keygen -t rsa and press Enter through every prompt. This creates the ~/.ssh directory containing id_rsa (the private key) and id_rsa.pub (the public key); then redirect id_rsa.pub into the file authorized_keys:
ssh-keygen -t rsa
# press Enter at every prompt
[root@cluster-master /]# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
Once the file is generated, use scp to distribute the public key to the cluster's slave hosts:
[root@cluster-master /]# ssh root@cluster-slave1 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave1:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave2 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave2:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave3 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave3:~/.ssh
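sshd is strict about permissions on ~/.ssh; if the slaves still ask for a password, tightening the modes is the usual fix (a sketch run from the master):
for h in cluster-slave1 cluster-slave2 cluster-slave3; do
    ssh root@$h 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
done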
After distribution, test that passwordless login works (ssh root@cluster-slave1).
Install Ansible on the master:
[root@cluster-master /]# yum -y install epel-release
[root@cluster-master /]# yum -y install ansible
# Ansible's configuration (including its inventory) lives under /etc/ansible
Now edit Ansible's inventory file:
vi /etc/ansible/hosts
[cluster]
cluster-master
cluster-slave1
cluster-slave2
cluster-slave3
[master]
cluster-master
[slaves]
cluster-slave1
cluster-slave2
cluster-slave3
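Ansible's ping module gives a cheap end-to-end check that the inventory and the SSH trust built above are wired up correctly:
ansible cluster -m ping
# every host should reply with "pong"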
Because Docker rewrites /etc/hosts when a container starts, direct edits are lost after a restart. To have each container regain the cluster hosts after a restart, rewrite /etc/hosts from a startup script: append the following to ~/.bashrc:
:>/etc/hosts
cat >>/etc/hosts<<EOF
127.0.0.1 localhost
172.18.0.2 cluster-master
172.18.0.3 cluster-slave1
172.18.0.4 cluster-slave2
172.18.0.5 cluster-slave3
EOF
Run
source ~/.bashrc
to apply the change; /etc/hosts now contains the required entries:
[root@cluster-master ansible]# cat /etc/hosts
127.0.0.1 localhost
172.18.0.2 cluster-master
172.18.0.3 cluster-slave1
172.18.0.4 cluster-slave2
172.18.0.5 cluster-slave3
Use Ansible to distribute the updated .bashrc to the whole cluster:
ansible cluster -m copy -a "src=~/.bashrc dest=~/"
Download JDK 1.8 and unpack it into the /opt directory.
Download Hadoop 3 into /opt, unpack the archive, and create a symlink.
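One way to fetch the Hadoop tarball (the URL is an assumption; any Apache archive mirror that carries 3.2.0 will do):
cd /opt
curl -O https://archive.apache.org/dist/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz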
tar -xzvf hadoop-3.2.0.tar.gz
ln -s hadoop-3.2.0 hadoop
Edit the ~/.bashrc file:
# hadoop
export HADOOP_HOME=/opt/hadoop-3.2.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
# java
export JAVA_HOME=/opt/jdk8
export PATH=$JAVA_HOME/bin:$PATH
Apply the file:
source ~/.bashrc
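A quick sanity check that the environment change took effect (assumes the JDK really is at /opt/jdk8 and Hadoop at /opt/hadoop-3.2.0):
hadoop version
java -version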
cd $HADOOP_HOME/etc/hadoop/
1. Edit core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://cluster-master:9000</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
    </property>
</configuration>
2. Edit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>staff</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
3. Edit mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>cluster-master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>cluster-master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>cluster-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>cluster-master:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/jobhistory/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/jobhistory/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
4. Edit yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>cluster-master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>cluster-master:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>cluster-master:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>cluster-master:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>cluster-master:18141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>cluster-master:18088</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
</configuration>
Pack the configured Hadoop directory (and its symlink) into a tarball for distribution; run this in /opt:
tar -cvf hadoop-dis.tar hadoop hadoop-3.2.0
---
- hosts: cluster
  tasks:
    - name: copy .bashrc to slaves
      copy: src=~/.bashrc dest=~/
      notify:
        - exec source
    - name: copy hadoop-dis.tar to slaves
      unarchive: src=/opt/hadoop-dis.tar dest=/opt
  handlers:
    - name: exec source
      shell: source ~/.bashrc
Save the YAML above as hadoop-dis.yaml, then run
ansible-playbook hadoop-dis.yaml
hadoop-dis.tar is unpacked automatically into /opt on the slave hosts.
Format the NameNode:
hadoop namenode -format
If the output contains wording like "successfully formatted", the format worked. (hadoop namenode is the legacy entry point; hdfs namenode -format is the Hadoop 3 form.)
Start all of the services:
cd $HADOOP_HOME/sbin
start-all.sh
After startup, use the jps command to check that everything came up: on the master you should see processes such as NameNode, SecondaryNameNode and ResourceManager, and on each slave DataNode and NodeManager.
Note:
In practice the DataNode service on the slave nodes did not start. Inspecting the directory tree on a slave showed that the directories named in the configuration files had never been created, namely:
- core-site.xml: hadoop.tmp.dir = /home/hadoop/tmp (a base for other temporary directories)
- hdfs-site.xml: dfs.namenode.name.dir = /home/hadoop/tmp/dfs/name and dfs.datanode.data.dir = /home/hadoop/data
Create these directories manually on every node (a one-shot sketch follows), then on the master delete them together with the logs directory under $HADOOP_HOME, and re-format the NameNode:
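A sketch of creating the missing directories on every node in one shot with Ansible (paths copied from the configs above):
ansible cluster -m shell -a "mkdir -p /home/hadoop/tmp/dfs/name /home/hadoop/data"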
hadoop namenode -format
Then start the cluster services again:
start-all.sh
This time the services should be visible on the slave nodes as well.
Visit
http://host:18088 (YARN ResourceManager web UI)
http://host:9870 (HDFS NameNode web UI)
to check whether the services are up.
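From the cloud host itself the two UIs can also be probed without a browser (a sketch; localhost works thanks to the -p mappings on the master container, swap in the host's public address when testing remotely):
curl -I http://localhost:18088
curl -I http://localhost:9870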
Reposted from: https://www.jianshu.com/p/d7fa21504784