Installing a Hadoop High-Availability Cluster with Shell Scripts

Table of contents

  • 1. Create a virtual machine
  • 2. Clone two more virtual machines
  • 3. Start the cluster
  • 4. The scripts
    • 1. JDK
    • 2. Hadoop and ZooKeeper
    • 3. One-command cluster startup
  • `Note: the psmisc package must be installed, otherwise automatic NameNode failover will not work`

Cluster layout

| hadoop01 (192.168.56.120) | hadoop02 (192.168.56.121) | hadoop03 (192.168.56.122) |
| --- | --- | --- |
| QuorumPeerMain | QuorumPeerMain | QuorumPeerMain |
| JournalNode | JournalNode | JournalNode |
| NameNode | NameNode | NodeManager |
| ResourceManager | ResourceManager | DataNode |
| NodeManager | NodeManager | |
| DFSZKFailoverController | DFSZKFailoverController | |
| DataNode | DataNode | |

The scripts and related files are available at the link below; extraction code: sweh
Related files

1. Create a virtual machine

  • 1. Prepare a clean CentOS 7 system: set the static IP to 192.168.56.120 and the hostname to hadoop01, disable the firewall, restart the network, and connect with MobaXterm.
  • 2. Create the package directory: `mkdir /opt/software`
  • 3. Create the install directory: `mkdir /opt/install`
  • 4. Copy the scripts and the related configuration files and tarballs into the package directory.
  • 5. Make the scripts executable: `chmod 777 install*`
  • 6. Run the JDK install script: `/opt/software/installJdk.sh`
  • 7. Run the Hadoop/ZooKeeper install script: `/opt/software/installHadoop.sh`
  • 8. Shut down the virtual machine.
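Step 1 above is manual. On a fresh CentOS 7 box it boils down to a handful of commands; the sketch below writes them to a helper file rather than executing them, since they need root on the target VM (the hostname and directory values are the ones this article uses):

```shell
#!/bin/bash
# Sketch of the manual prep from step 1, written to a file for review.
# Adjust the hostname per node; the static IP itself is set in the
# ifcfg file for your interface before restarting the network.
cat > /tmp/prep-hadoop01.sh <<'EOF'
hostnamectl set-hostname hadoop01
systemctl stop firewalld && systemctl disable firewalld
systemctl restart network        # re-read the static-IP config
mkdir -p /opt/software /opt/install
EOF
echo "wrote /tmp/prep-hadoop01.sh"
```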

2. Clone two more virtual machines

  • 1. On one clone, set the static IP to 192.168.56.121 and the hostname to hadoop02.
  • 2. On the other, set the static IP to 192.168.56.122 and the hostname to hadoop03.
  • 3. Change ZooKeeper's myid to 2 on hadoop02 and to 3 on hadoop03.
  • 4. Set up passwordless SSH among the three machines:
  • 1) Generate a key pair: `ssh-keygen -t rsa -P ""`
  • 2) Add the public key to the trusted list: `cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys`
  • 3) Copy the key to the other nodes: `ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 [email protected]` (each machine must do this for the other two)
  • 4) On each of the three machines, run:

```shell
ssh -o StrictHostKeyChecking=no `hostname`
```
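With three nodes, the key exchange in step 4 comes to six `ssh-copy-id` runs. The dry-run sketch below only prints the commands (using the root account and port 22 from the step above); run each printed command on the node named in brackets:

```shell
#!/bin/bash
# Print every ssh-copy-id invocation needed for full mutual passwordless SSH.
# Nothing is executed here -- this is a checklist generator.
{
for src in hadoop01 hadoop02 hadoop03; do
  for dst in hadoop01 hadoop02 hadoop03; do
    [ "$src" = "$dst" ] && continue
    echo "[$src] ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@$dst"
  done
done
} | tee /tmp/ssh_copy_cmds.txt
```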

3. Start the cluster

  • 1. On all three nodes: `zkServer.sh start`, then `hadoop-daemon.sh start journalnode`.
  • 2. On hadoop01: `hadoop namenode -format`, then `hadoop-daemon.sh start namenode`.
  • 3. On hadoop02: `hdfs namenode -bootstrapStandby` (the standby only needs this sync on the very first startup).
  • 4. On hadoop01: `hdfs zkfc -formatZK`.
  • 5. On all three nodes, stop the JournalNode processes: `hadoop-daemon.sh stop journalnode`.
  • 6. On hadoop01: `hadoop-daemon.sh stop namenode`.
  • 7. On hadoop01: `start-dfs.sh`.
  • 8. On hadoop01: `start-yarn.sh`.
  • 9. On hadoop02: `yarn-daemon.sh start resourcemanager`.
  • 10. After startup the processes look like this:
    (screenshots: `jps` output on hadoop01, hadoop02, and hadoop03)
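The screenshots above just show `jps` on each node. As a text checklist, derived from the cluster layout table at the top of the article, each node should be running the following daemons:

```shell
#!/bin/bash
# Expected daemons per node after a successful start, per the cluster layout.
# Compare each line against the output of `jps` on that machine.
expected_hadoop01="QuorumPeerMain JournalNode NameNode DFSZKFailoverController ResourceManager NodeManager DataNode"
expected_hadoop02="QuorumPeerMain JournalNode NameNode DFSZKFailoverController ResourceManager NodeManager DataNode"
expected_hadoop03="QuorumPeerMain JournalNode NodeManager DataNode"
for h in hadoop01 hadoop02 hadoop03; do
  var="expected_$h"
  echo "$h: ${!var}"
done | tee /tmp/expected_daemons.txt
```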

4. The scripts

1. JDK

```shell
#!/bin/bash
# Install JDK 1.8. Create /opt/software and /opt/install first,
# then put the JDK tarball into /opt/software.
tar -zxvf /opt/software/jdk-8u221-linux-x64.tar.gz -C /opt/install/
echo 'export JAVA_HOME=/opt/install/jdk1.8.0_221' >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar' >> /etc/profile
echo 'export JRE_HOME=$JAVA_HOME/jre' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin' >> /etc/profile
# Note: sourcing here only affects this script's shell; new logins
# pick up /etc/profile automatically.
source /etc/profile
java -version
```

2. Hadoop and ZooKeeper

```shell
#!/bin/bash
# Put the ZooKeeper and Hadoop tarballs, plus the prepared configuration
# files, into /opt/software before running.
tar -zxvf /opt/software/zookeeper-3.4.6.tar.gz -C /opt/install/
tar -zxvf /opt/software/hadoop-2.6.0-cdh5.14.2.tar.gz -C /opt/install/
mv /opt/install/hadoop-2.6.0-cdh5.14.2 /opt/install/hadoop
echo 'export HADOOP_HOME=/opt/install/hadoop' >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export YARN_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> /etc/profile
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> /etc/profile
echo 'export ZK_HOME=/opt/install/zookeeper-3.4.6' >> /etc/profile
echo 'export PATH=$PATH:$ZK_HOME/bin' >> /etc/profile
source /etc/profile
# Point hadoop-env.sh at the installed JDK (JAVA_HOME is set on lines 24-26).
sed -i '24,26s/\${JAVA_HOME}/\/opt\/install\/jdk1.8.0_221/gi' /opt/install/hadoop/etc/hadoop/hadoop-env.sh
# Overwrite the stock configuration with the prepared files.
cat $PWD/core-site.xml > /opt/install/hadoop/etc/hadoop/core-site.xml
cat $PWD/hdfs-site.xml > /opt/install/hadoop/etc/hadoop/hdfs-site.xml
cat $PWD/mapred-site.xml > /opt/install/hadoop/etc/hadoop/mapred-site.xml
cat $PWD/yarn-site.xml > /opt/install/hadoop/etc/hadoop/yarn-site.xml
cat $PWD/zoo.cfg > /opt/install/zookeeper-3.4.6/conf/zoo.cfg
cat $PWD/slaves > /opt/install/hadoop/etc/hadoop/slaves
# ZooKeeper data/log directories and this node's myid (1 on hadoop01;
# change to 2 and 3 on the clones).
mkdir /opt/install/zookeeper-3.4.6/zkData
mkdir /opt/install/zookeeper-3.4.6/zkLog
echo '1' > /opt/install/zookeeper-3.4.6/zkData/myid
echo '192.168.56.120 hadoop01' >> /etc/hosts
echo '192.168.56.121 hadoop02' >> /etc/hosts
echo '192.168.56.122 hadoop03' >> /etc/hosts
# Replace the bundled native libraries with 64-bit builds.
tar -xvf $PWD/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib/native
tar -xvf $PWD/hadoop-native-64-2.6.0.tar -C $HADOOP_HOME/lib
echo 'Hadoop version:'
hadoop version
```
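The article does not show the `zoo.cfg` that the script copies in. A typical three-node version matching the directories the script creates (`zkData`, `zkLog`) and the hostnames above would look like the sketch below; the values are assumptions, and the downloaded file is authoritative:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/install/zookeeper-3.4.6/zkData
dataLogDir=/opt/install/zookeeper-3.4.6/zkLog
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
```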

3. One-command cluster startup

```shell
#!/usr/bin/env bash
# First-time bootstrap: starts ZooKeeper and the JournalNodes, formats HDFS
# and the ZK failover state, then brings the whole cluster up.
# WARNING: `namenode -format` and `zkfc -formatZK` wipe existing metadata --
# run this script only on the very first startup.
for s in hadoop01 hadoop02 hadoop03
do
	ssh $s "source /etc/profile; zkServer.sh start; hadoop-daemon.sh start journalnode"
done
sleep 2
# Format and start the active NameNode on hadoop01.
ssh hadoop01 "source /etc/profile; hadoop namenode -format; hadoop-daemon.sh start namenode"
sleep 2
# Sync the standby NameNode's metadata (first startup only).
ssh hadoop02 "source /etc/profile; hdfs namenode -bootstrapStandby"
sleep 2
ssh hadoop01 "source /etc/profile; hdfs zkfc -formatZK; hadoop-daemon.sh stop journalnode"
sleep 2
ssh hadoop02 "source /etc/profile; hadoop-daemon.sh stop journalnode"
sleep 2
ssh hadoop03 "source /etc/profile; hadoop-daemon.sh stop journalnode"
sleep 2
# Restart everything cleanly through the cluster start scripts.
ssh hadoop01 "source /etc/profile; hadoop-daemon.sh stop namenode; start-dfs.sh; start-yarn.sh"
sleep 2
# Standby ResourceManager on hadoop02.
ssh hadoop02 "source /etc/profile; yarn-daemon.sh start resourcemanager"
sleep 2
for s in hadoop01 hadoop02 hadoop03
do
	echo "=================$s-jps================="
	ssh $s "source /etc/profile; jps"
done
```
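One caveat: the script above runs the format steps unconditionally, so running it a second time would wipe the HDFS metadata. A minimal guard sketch, where the directory argument is a placeholder for whatever `dfs.namenode.name.dir` is set to in your `hdfs-site.xml`:

```shell
#!/bin/bash
# Guard sketch: only format the NameNode if its metadata dir is empty.
# A formatted NameNode dir contains a current/ subdirectory.
check_formatted() {
  if [ -d "$1/current" ]; then
    echo "already formatted, skipping format"
  else
    echo "would run: hadoop namenode -format"
  fi
}
# Example with a fresh (unformatted) placeholder directory:
check_formatted "$(mktemp -d)/name"
```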

Note: the psmisc package must be installed, otherwise automatic NameNode failover will not work (the sshfence fencing method relies on `fuser`, which psmisc provides).

```shell
yum install -y psmisc
```
