1. Preparation
(1) Clone three more guest machines (four in total, counting the existing one).
(2) Set the hostname on each machine
vi /etc/hostname
Set the four hostnames to "s200" (master node), "s201", "s202", and "s203" (worker nodes).
(3) Configure the IP address
vi /etc/sysconfig/network-scripts/ifcfg-ens33
Assign the four machines the addresses 192.168.231.200, 192.168.231.201, 192.168.231.202, and 192.168.231.203 respectively.
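The resulting interface file might look like the sketch below (shown for s200). BOOTPROTO and IPADDR are the lines that matter for this step; the gateway/DNS values are assumptions based on the nameserver configured in step (5):

```ini
TYPE=Ethernet
# Static address instead of DHCP
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
# Bring the interface up at boot
ONBOOT=yes
# Use .201 / .202 / .203 on the other three hosts
IPADDR=192.168.231.200
NETMASK=255.255.255.0
GATEWAY=192.168.231.2
DNS1=192.168.231.2
```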
(4) Reboot each machine
reboot
(5) Edit the resolv.conf file on all four machines
vi /etc/resolv.conf
Set the content to: nameserver 192.168.231.2
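The later steps address the machines by hostname (ssh s201, scp ... gao@s202:...), which requires name resolution. If the nameserver above does not resolve these names, each host also needs the name-to-IP mappings in /etc/hosts; a sketch assuming the addresses above:

```
127.0.0.1   localhost
192.168.231.200 s200
192.168.231.201 s201
192.168.231.202 s202
192.168.231.203 s203
```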
2. Configure SSH
(1) Recursively delete all files under the user's .ssh directory on every host
rm -rf /home/gao/.ssh/*
(2) Generate a key pair on the s200 host
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
(3) Copy s200's public key file id_rsa.pub to hosts s201 ~ s203, placing it at /home/gao/.ssh/authorized_keys (so that ssh s200 itself also works without a password, append the key locally as well: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys)
scp id_rsa.pub gao@s201:/home/gao/.ssh/authorized_keys
scp id_rsa.pub gao@s202:/home/gao/.ssh/authorized_keys
scp id_rsa.pub gao@s203:/home/gao/.ssh/authorized_keys
3. Configure core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml, then distribute them to the other three machines
cd /soft/hadoop/etc/full
(1) Configure core-site.xml
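The original notes do not include the file contents. A minimal sketch of core-site.xml for this cluster, assuming the NameNode runs on the master host s200 (the value is an assumption, not the author's exact file):

```xml
<?xml version="1.0"?>
<configuration>
    <!-- Default filesystem URI: the NameNode on the master host s200 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s200/</value>
    </property>
</configuration>
```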
(2) Configure hdfs-site.xml
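A minimal hdfs-site.xml sketch; a replication factor of 3 matches the three DataNodes in this cluster, but the author's actual value may differ:

```xml
<?xml version="1.0"?>
<configuration>
    <!-- One replica per worker; this cluster has three DataNodes -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
```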
(3) Configure mapred-site.xml
First copy mapred-site.xml.template to mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
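A minimal mapred-site.xml sketch for running MapReduce on YARN (assumed here, since a yarn-site.xml is configured in the next step):

```xml
<?xml version="1.0"?>
<configuration>
    <!-- Run MapReduce jobs on YARN rather than the local runner -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```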
(4) Configure yarn-site.xml
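A minimal yarn-site.xml sketch, assuming the ResourceManager runs on s200 alongside the NameNode:

```xml
<?xml version="1.0"?>
<configuration>
    <!-- ResourceManager on the master host s200 -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>s200</value>
    </property>
    <!-- Shuffle service required by MapReduce on YARN -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```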
(5) Configure the slaves file with the three worker hostnames:
s201
s202
s203
Copy the slaves file to the other three hosts:
scp slaves gao@s201:/soft/hadoop/etc/full/
scp slaves gao@s202:/soft/hadoop/etc/full/
scp slaves gao@s203:/soft/hadoop/etc/full/
(6) Configure hadoop-env.sh
export JAVA_HOME=/soft/jdk
(7) Distribute the configuration files above to the other three hosts
cd /soft/hadoop/etc/
scp -r full gao@s201:/soft/hadoop/etc/
scp -r full gao@s202:/soft/hadoop/etc/
scp -r full gao@s203:/soft/hadoop/etc/
(8) Delete the old symbolic links
cd /soft/hadoop/etc
rm hadoop
ssh s201 rm /soft/hadoop/etc/hadoop
ssh s202 rm /soft/hadoop/etc/hadoop
ssh s203 rm /soft/hadoop/etc/hadoop
(9) Create new symbolic links
cd /soft/hadoop/etc/
ln -s full hadoop
ssh s201 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
ssh s202 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
ssh s203 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
(10) Delete the temporary directory files
cd /tmp
rm -rf hadoop-gao/*
ssh s201 rm -rf /tmp/hadoop-gao/*
ssh s202 rm -rf /tmp/hadoop-gao/*
ssh s203 rm -rf /tmp/hadoop-gao/*
(11) Delete the Hadoop logs
cd /soft/hadoop/logs
rm -rf *
ssh s201 rm -rf /soft/hadoop/logs/*
ssh s202 rm -rf /soft/hadoop/logs/*
ssh s203 rm -rf /soft/hadoop/logs/*
(12) Format the file system (run on s200; the older hadoop namenode -format form still works but is deprecated in favor of the hdfs command)
hdfs namenode -format
(13) Start the Hadoop daemons
start-all.sh
Afterwards, jps on s200 should show NameNode, SecondaryNameNode, and ResourceManager, and jps on s201 ~ s203 should show DataNode and NodeManager.
4. Scripts
(1) Run a command on every host
(a) Create the xcall.sh file
touch xcall.sh
(b) Make it executable
chmod a+x xcall.sh
(c) Move it to /usr/local/bin
sudo mv xcall.sh /usr/local/bin
(d) Write the script
cd /usr/local/bin
vi xcall.sh
Script contents:
#!/bin/bash
# Run the given command on hosts s200 - s203 over ssh.
params=$@
for (( i = 200; i <= 203; i++ )); do
    echo ============== s$i $params ============
    ssh s$i "$params"
done
(2) Sync a file to every host
(a) Create the script file xsync.sh
touch xsync.sh
(b) Make it executable
chmod a+x xsync.sh
(c) Move it to /usr/local/bin
sudo mv xsync.sh /usr/local/bin
(d) Install rsync (on every host)
sudo yum -y install rsync
(e) Edit the script file
cd /usr/local/bin
vi xsync.sh
Script contents:
#!/bin/bash
# Sync the given file or directory to hosts s201 - s203 with rsync.
if [[ $# -lt 1 ]]; then
    echo no params
    exit
fi
p=$1
dir=$(dirname "$p")
filename=$(basename "$p")
cd "$dir"
# Physical absolute path of the parent directory (symlinks resolved)
fullpath=$(pwd -P)
user=$(whoami)
for (( i = 201; i <= 203; i++ )); do
    echo ======= s$i =======
    rsync -lr "$filename" ${user}@s$i:$fullpath
done
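The dirname/basename/pwd -P path resolution that xsync.sh relies on can be checked in isolation, without rsync or ssh; a local sketch using a throwaway path under /tmp (the path is hypothetical):

```shell
# Hypothetical test path; xsync.sh would receive this as $1.
p=/tmp/xsync-demo/sub/file.txt
mkdir -p "$(dirname "$p")" && touch "$p"

dir=$(dirname "$p")          # /tmp/xsync-demo/sub
filename=$(basename "$p")    # file.txt
cd "$dir"
# pwd -P resolves symlinks, so the remote side gets a real absolute path
fullpath=$(pwd -P)
echo "$fullpath/$filename"
```

Running xsync.sh against such a path would then rsync file.txt into the same absolute directory on each worker, which is why the script syncs by filename after cd'ing into the parent directory.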