Starting and stopping a vanilla Apache Hadoop cluster with a single script each

Both scripts live on hadoop01, which is the master node (the NameNode).

Prerequisite: passwordless SSH login is configured across all five nodes, hadoop01 through hadoop05.
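If the key exchange is not in place yet, a minimal setup sketch (run as root on hadoop01, and repeat on any other node that also needs to log in everywhere; assumes the default key path and that ssh-copy-id is available):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # generate a key pair with no passphrase
for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05; do
    ssh-copy-id root@$h                      # append the public key to $h's authorized_keys
done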

Startup script:

echo "首先启动ZK"
ssh root@hadoop03 sh /usr/local/zookeeper/bin/zkServer.sh start
ssh root@hadoop04 sh /usr/local/zookeeper/bin/zkServer.sh start
ssh root@hadoop05 sh /usr/local/zookeeper/bin/zkServer.sh start
echo "再在hadoop01上启动dfs和yarn"
ssh root@hadoop01 sh /root/test.sh
ssh root@hadoop01 sh /usr/local/hadoop/sbin/start-dfs.sh
ssh root@hadoop01 sh /usr/local/hadoop/sbin/start-yarn.sh
echo "在hadoop02上启动journalnode"
ssh root@hadoop02 sh /root/test.sh
ssh root@hadoop02 sh /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
echo "最后在hadoop01上启动historyserver、kafka、spark、hbase"
ssh root@hadoop01 sh /root/test.sh
ssh root@hadoop01 sh /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
ssh root@hadoop01 sh /usr/local/kafka/bin/kafka-server-start.sh
ssh root@hadoop01 sh /usr/local/spark/sbin/start-all.sh
ssh root@hadoop01 sh /usr/local/hbase/bin/start-hbase.sh
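In an HA layout the NameNodes depend on a healthy ZooKeeper quorum, so it can be worth confirming the ZK state right after the script finishes. A minimal check, assuming the same install path as above:

for h in hadoop03 hadoop04 hadoop05; do
    echo "=== $h ==="
    ssh root@$h sh /usr/local/zookeeper/bin/zkServer.sh status   # should print Mode: leader or follower
done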


Shutdown script:

echo "先在hadoop01上关闭hbase、kafka、spark、historyserver、yarn"
ssh root@hadoop01 sh /root/test.sh
ssh root@hadoop01 sh /usr/local/hbase/bin/stop-hbase.sh
ssh root@hadoop01 sh /usr/local/kafka/bin/kafka-server-stop.sh
ssh root@hadoop01 sh /usr/local/spark/sbin/stop-all.sh
ssh root@hadoop01 sh /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh stop historyserver
ssh root@hadoop01 sh /usr/local/hadoop/sbin/stop-yarn.sh

echo "在hadoop02上关闭journalnode"
ssh root@hadoop02 sh /root/test.sh
ssh root@hadoop02 sh /usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode
echo "在hadoop01上关闭dfs"
ssh root@hadoop01 sh /root/test.sh
ssh root@hadoop01 sh /usr/local/hadoop/sbin/stop-dfs.sh
echo "最后关闭ZK"
ssh root@hadoop03 sh /usr/local/zookeeper/bin/zkServer.sh stop
ssh root@hadoop04 sh /usr/local/zookeeper/bin/zkServer.sh stop
ssh root@hadoop05 sh /usr/local/zookeeper/bin/zkServer.sh stop


Running jps on each node confirms that all the processes have been shut down; the small loop below automates that check.
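A sketch that runs jps over all five nodes (same hostnames as in the scripts above; assumes jps is on root's non-interactive PATH):

for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05; do
    echo "=== $h ==="
    ssh root@$h jps   # after shutdown, only the Jps process itself should be listed
done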