Setting up a fully distributed Hadoop cluster with Docker

1. Prepare the environment

  1) Start the pre-configured virtual machine
  2) Connect to the virtual machine with Xshell
  3) Check that the network works

2. Install Docker

1) Check the system kernel

[root@docker108 ~]# cat /etc/redhat-release

CentOS Linux release 7.5.1804 (Core)

[root@docker108 ~]# uname -a

Linux docker108 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[root@docker108 ~]# uname -r

3.10.0-862.el7.x86_64

Prerequisites for installing Docker:

The machine must have a 64-bit CPU; Docker does not support 32-bit CPUs.

The Linux kernel must be 3.8 or newer; on CentOS the kernel must be at least 3.10.

The kernel must support a suitable storage driver, such as AUFS, vfs, btrfs, or the default Device Mapper.

2) Install Docker with yum

[root@docker108 ~]# yum -y install docker

3) Start the Docker service

[root@docker108 ~]# systemctl start docker

4) Enable the Docker service at boot

[root@docker108 ~]# systemctl enable docker

5) Check the Docker service status

[root@docker108 ~]# systemctl status docker

3. Configure a Docker registry mirror (image accelerator)

1) Configure the mirror list

[root@docker108 ~]# vim /etc/docker/daemon.json

{
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.ccs.tencentyun.com",
    "https://ahunh7pc.mirror.aliyuncs.com"
  ]
}

2) Restart the Docker service

[root@docker108 ~]# systemctl daemon-reload

[root@docker108 ~]# systemctl restart docker

[root@docker108 ~]# systemctl status docker

3) Check Docker information

[root@docker108 ~]# docker info
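A quick optional check (a sketch, not part of the original steps): docker info should now list the mirrors from daemon.json and show which storage driver is in use.

docker info | grep -A 4 "Registry Mirrors"   # the mirrors configured above should appear here
docker info | grep "Storage Driver"          # e.g. overlay2 or devicemapper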

4. Pull the CentOS base image

[root@docker108 ~]# docker pull centos:7.5.1804

[root@docker108 ~]# mkdir software

[root@docker108 ~]# cd software

Upload jdk-8u144-linux-x64.tar.gz into this directory with Xftp.

5. Build the centos-jdk:1.0 image

[root@docker108 software]# mkdir docker-jdk

[root@docker108 software]# cp jdk-8u144-linux-x64.tar.gz docker-jdk/

[root@docker108 software]# cd docker-jdk

[root@docker108 docker-jdk]# touch Dockerfile

[root@docker108 docker-jdk]# vim Dockerfile

FROM centos:7.5.1804

RUN mkdir -p /opt/software

RUN mkdir -p /opt/module

COPY jdk-8u144-linux-x64.tar.gz /opt/software/

RUN tar -xzvf /opt/software/jdk-8u144-linux-x64.tar.gz -C /opt/module

RUN rm -rf /opt/software/jdk-8u144-linux-x64.tar.gz

ENV JAVA_HOME=/opt/module/jdk1.8.0_144

ENV PATH=$PATH:$JAVA_HOME/bin

[root@docker108 docker-jdk]# docker build -t centos-jdk:1.0 ./
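As an optional sanity check (a sketch, not in the original steps), the JDK inside the new image can be verified by running java directly, since the Dockerfile put $JAVA_HOME/bin on PATH:

docker run --rm centos-jdk:1.0 java -version   # should report java version "1.8.0_144"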

6. Build the centos-jdk-ssh:1.0 image

[root@docker108 software]# mkdir docker-jdk-ssh

[root@docker108 software]# cd docker-jdk-ssh

[root@docker108 docker-jdk-ssh]# touch Dockerfile

[root@docker108 docker-jdk-ssh]# vim Dockerfile

FROM centos-jdk:1.0

MAINTAINER AlexMK [email protected]

RUN curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

RUN sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

RUN yum makecache

RUN yum install -y openssh-server openssh-clients vim net-tools

RUN sed -i '/^HostKey/'d /etc/ssh/sshd_config

RUN echo 'HostKey /etc/ssh/ssh_host_rsa_key'>>/etc/ssh/sshd_config

RUN ssh-keygen -t rsa -b 2048 -f /etc/ssh/ssh_host_rsa_key

RUN echo 'root:000000' | chpasswd

EXPOSE 22

RUN mkdir -p /opt

RUN echo '#!/bin/bash' >> /opt/run.sh

RUN echo '/usr/sbin/sshd -D' >> /opt/run.sh

RUN chmod +x /opt/run.sh

CMD ["/opt/run.sh"]

[root@docker108 docker-jdk-ssh]# docker build -t centos-jdk-ssh:1.0 ./
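Optionally (a sketch, not in the original steps; ssh-test is just a throwaway container name), the image can be smoke-tested to confirm that sshd starts as the container's main process:

docker run -d --name ssh-test centos-jdk-ssh:1.0
docker exec ssh-test ps -ef | grep sshd   # expect a "/usr/sbin/sshd -D" process
docker rm -f ssh-test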

Check the images:

[root@docker108 docker-jdk-ssh]# docker images

REPOSITORY       TAG        IMAGE ID       CREATED       SIZE

centos-jdk-ssh   1.0        b3c314c5189c   10 days ago   1.17GB

centos-jdk       1.0        a47dec29d0de   10 days ago   761MB

nginx            1.19       519e12e2a84a   2 weeks ago   133MB

mysql            5.7        cd0f0b1e283d   4 weeks ago   449MB

centos           7.5.1804   cf49811e3cdb   2 years ago   200MB

[root@docker108 docker-jdk-ssh]# docker ps

7. Run the containers

Run containers from the centos-jdk-ssh:1.0 image, giving each container a name and a hostname. To make them easier to remember and operate, each container's name and hostname are set to the same value.

[root@docker108 docker-jdk-ssh]# docker run --name hadoop102 -h hadoop102 -d centos-jdk-ssh:1.0

[root@docker108 docker-jdk-ssh]# docker run --name hadoop103 -h hadoop103 -d centos-jdk-ssh:1.0

[root@docker108 docker-jdk-ssh]# docker run --name hadoop104 -h hadoop104 -d centos-jdk-ssh:1.0

Check the running containers:

[root@docker108 docker-jdk-ssh]# docker ps

8. Inspect the container network

[root@docker108 docker-jdk-ssh]# docker network inspect bridge
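If only the container names and bridge IPs are needed, a Go-template filter is handy (a sketch, not in the original steps):

docker network inspect bridge -f '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'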

9. Entering the containers with docker exec (works, but not recommended)

[root@docker108 docker-jdk-ssh]# docker exec -it hadoop102 /bin/bash

[root@hadoop102 /]# exit

[root@docker108 docker-jdk-ssh]# docker exec -it hadoop103 /bin/bash

[root@hadoop103 /]# exit

[root@docker108 docker-jdk-ssh]# docker exec -it hadoop104 /bin/bash

[root@hadoop104 /]# exit

10. Entering a container via port mapping (not recommended)

[root@docker108 docker-jdk-ssh]# docker run --name hadoop105 -h hadoop105 -p 522:22 -d centos-jdk-ssh:1.0
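With the mapping in place, the container can be reached through the host's port 522 (a sketch; 192.168.2.108 is the Docker host's address used later in docker_network.sh, and the root password 000000 is the one set in the Dockerfile):

ssh -p 522 root@192.168.2.108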

[root@hadoop105 ~]# exit

11. Goal: connect directly to the containers hadoop102, hadoop103, and hadoop104 over SSH

192.168.2.102:22

192.168.2.103:22

192.168.2.104:22

Set up the container network with a script:

[root@docker108 ~]# cd /usr/local/bin/

[root@docker108 bin]# touch docker_network.sh

[root@docker108 bin]# chmod 755 docker_network.sh

[root@docker108 bin]# vim docker_network.sh

#!/bin/bash
# Start the three Hadoop containers
docker start hadoop102
docker start hadoop103
docker start hadoop104

# Create the bridge br0 and move the host address 192.168.2.108 from ens33 onto it
brctl addbr br0; \
ip link set dev br0 up; \
ip addr del 192.168.2.108/24 dev ens33; \
ip addr add 192.168.2.108/24 dev br0; \
brctl addif br0 ens33; \
ip route add default via 192.168.2.2 dev br0

sleep 5

# Attach each container to br0 with a fixed IP (gateway 192.168.2.2)
pipework br0 hadoop102 192.168.2.102/24@192.168.2.2
pipework br0 hadoop103 192.168.2.103/24@192.168.2.2
pipework br0 hadoop104 192.168.2.104/24@192.168.2.2

(If the virtual machine was a minimal install, the two dependencies pipework and bridge-utils need to be installed first:

[root@docker108 bin]# yum install -y bridge-utils)

Run the script:

[root@docker108 bin]# docker_network.sh

Running docker_network.sh assigns the fixed IP addresses to the containers.
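To confirm the script worked, check the new interface inside a container and ping it from the host (a sketch; pipework normally names the extra interface eth1, and ifconfig is available because net-tools was installed in the image):

docker exec hadoop102 ifconfig eth1   # should show 192.168.2.102
ping -c 2 192.168.2.102               # run on the host docker108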

12. Enter each container and configure the hostname-to-IP mappings

[root@docker108 ~]# ssh hadoop102

[root@hadoop102 ~]# vim /etc/hosts

192.168.2.102 hadoop102
192.168.2.103 hadoop103
192.168.2.104 hadoop104

[root@hadoop102 ~]# exit

[root@docker108 ~]# ssh hadoop103

[root@hadoop103 ~]# vim /etc/hosts

192.168.2.102 hadoop102
192.168.2.103 hadoop103
192.168.2.104 hadoop104

[root@hadoop103 ~]# exit

[root@docker108 ~]# ssh hadoop104

[root@hadoop104 ~]# vim /etc/hosts

192.168.2.102 hadoop102
192.168.2.103 hadoop103
192.168.2.104 hadoop104

[root@hadoop104 ~]# exit
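A quick check (a sketch, not in the original steps): hostname resolution should now work inside each container, for example:

ssh hadoop102 "getent hosts hadoop103 hadoop104"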

13. Set up passwordless SSH: generate a key pair in each container and copy the public key to the other containers (the passwordless-login script in section 19 can also be used directly)

[root@docker108 ~]# ssh hadoop102

[root@hadoop102 ~]# ssh-keygen -t rsa

[root@hadoop102 ~]# ssh-copy-id hadoop102

[root@hadoop102 ~]# ssh-copy-id hadoop103

[root@hadoop102 ~]# ssh-copy-id hadoop104

[root@hadoop102 ~]# exit

[root@docker108 ~]# ssh hadoop103

[root@hadoop103 ~]# ssh-keygen -t rsa

[root@hadoop103 ~]# ssh-copy-id hadoop102

[root@hadoop103 ~]# ssh-copy-id hadoop103

[root@hadoop103 ~]# ssh-copy-id hadoop104

[root@hadoop103 ~]# exit

[root@docker108 ~]# ssh hadoop104

[root@hadoop104 ~]# ssh-keygen -t rsa

[root@hadoop104 ~]# ssh-copy-id hadoop102

[root@hadoop104 ~]# ssh-copy-id hadoop103

[root@hadoop104 ~]# ssh-copy-id hadoop104

[root@hadoop104 ~]# exit
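To confirm passwordless login really works, a short loop can be run inside each container (a sketch; BatchMode=yes makes ssh fail instead of prompting for a password):

for dst in hadoop102 hadoop103 hadoop104; do
    ssh -o BatchMode=yes $dst hostname
done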

14. Configure the JDK environment variables

[root@hadoop102 ~]# vim /etc/profile

Add the JDK paths at the end of the file:

##JAVA_HOME

export JAVA_HOME=/opt/module/jdk1.8.0_144

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

:wq (save and exit)

Reload the file so the changes take effect:

[root@hadoop102 ~]# source /etc/profile

Test whether the JDK is installed correctly:

[root@hadoop102 ~]# java -version

15. Install and deploy Hadoop

Basic configuration files:

hadoop-env.sh

export JAVA_HOME=/opt/module/jdk1.8.0_144

export HADOOP_LOG_DIR=/opt/module/hadoop/logs

yarn-env.sh

export JAVA_HOME=/opt/module/jdk1.8.0_144

export YARN_LOG_DIR=/opt/module/hadoop/logs

mapred-env.sh

export JAVA_HOME=/opt/module/jdk1.8.0_144

core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop102:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
</property>

hdfs-site.xml

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop104:50090</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

yarn-site.xml

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop103</value>
</property>

mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

slaves

hadoop102

hadoop103

hadoop104

Distribute all of the files above across the cluster (the xsync.sh distribution script from section 19 can also be used):

[root@hadoop102 ~]# cd /opt/module/hadoop-2.7.2/

[root@hadoop102 hadoop-2.7.2]# pwd

/opt/module/hadoop-2.7.2

[root@hadoop102 hadoop-2.7.2]# rsync -rvl etc/hadoop/* root@hadoop103:/opt/module/hadoop-2.7.2/etc/hadoop/

[root@hadoop102 hadoop-2.7.2]# rsync -rvl etc/hadoop/* root@hadoop104:/opt/module/hadoop-2.7.2/etc/hadoop/

16. If this is the first time the cluster is started, format the NameNode

[root@hadoop102 hadoop-2.7.2]# bin/hdfs namenode -format

17. Start Hadoop

[root@hadoop102 hadoop-2.7.2]# sbin/start-dfs.sh

[root@hadoop103 hadoop-2.7.2]# sbin/start-yarn.sh

Check the Java processes with jps (the cluster-wide xjps.sh script from section 19 can also be used).
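With the configuration above (NameNode on hadoop102, ResourceManager on hadoop103, SecondaryNameNode on hadoop104), xjps.sh should show roughly this layout:

xjps.sh
# hadoop102: NameNode, DataNode, NodeManager
# hadoop103: ResourceManager, DataNode, NodeManager
# hadoop104: SecondaryNameNode, DataNode, NodeManager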

18. Access the web UIs from a browser

192.168.2.102:50070

192.168.2.103:8088
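A quick reachability check from the Docker host (a sketch, not in the original steps):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.2.102:50070   # NameNode web UI
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.2.103:8088    # ResourceManager web UI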

19. Scripts

All of the scripts are created inside the hadoop102 container, in /usr/local/bin:

[root@hadoop102 ~]# cd /usr/local/bin

Script 1: the distribution script (xsync.sh)

Note: this script requires rsync to be installed in every container: yum install -y rsync

#!/bin/bash
# 1. Get the number of arguments; exit if none were given
pcount=$#
if((pcount==0)); then
echo no args;
exit;
fi

# 2. Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname

# 3. Get the absolute path of the parent directory
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir

# 4. Get the current user
user=`whoami`

# 5. Loop over the target hosts
for((host=103; host<=104; host++)); do
    #echo $pdir/$fname $user@hadoop$host:$pdir
    echo --------------- hadoop$host ----------------
    rsync -rvl $pdir/$fname $user@hadoop$host:$pdir
done
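Example usage (a sketch): run it on hadoop102 and pass a file or directory; it is pushed to the same path on hadoop103 and hadoop104.

xsync.sh /opt/module/hadoop-2.7.2/etc/hadoop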

Script 2: the cluster jps script (xjps.sh)

#!/bin/bash

   

user=`whoami`

for((host=102; host<=104; host++)); do

    echo "=====  $user@hadoop$host  ===="

    ssh $user@hadoop$host 'jps'

done

Script 3: the passwordless-login script (auto-ssh-sshpass.sh)

#!/bin/bash
user=`whoami`
passwd=000000

#yum install -y sshpass
echo "Starting passwordless-login setup......"

for((current=102; current<=104; current++));do
    for((host=102; host<=104; host++));do
        sshpass -p $passwd ssh -q -o StrictHostKeyChecking=no $user@hadoop$current "sshpass -p $passwd ssh-copy-id -o StrictHostKeyChecking=no $user@hadoop$host"
    done
done
echo "Done: passwordless login is configured!"
