Bringing up the ZooKeeper, Hadoop, Spark, and Kylin clusters together

Once the Kylin environment is set up, the next problem is bringing the whole cluster up in one shot: logging in to each node and typing the commands one by one would wear you out, so we use shell scripts to start and stop the cluster.
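All of the scripts below rely on the same pattern: passwordless SSH from the control node to node1/node2/node3, fanning the same command out to every node. A minimal sketch of that fan-out pattern (the node list and profile sourcing mirror the scripts below; SSH_CMD is an added knob, not in the original, so the loop can be dry-run locally):

```shell
#!/bin/bash
# Fan-out helper: run one command on every worker node over SSH.
# NODES matches the node names used in this post; adjust to your cluster.
# SSH_CMD can be overridden (e.g. SSH_CMD=echo) to dry-run without SSH.
NODES="root@node1 root@node2 root@node3"
SSH_CMD="${SSH_CMD:-ssh}"

run_on_all() {
    local cmd="$1"
    for node in $NODES
    do
        echo "--------${node}--------"
        # Sourcing both profiles ensures the remote PATH and JAVA_HOME are set
        # even though SSH runs a non-interactive, non-login shell.
        $SSH_CMD "$node" "source /etc/profile; source ~/.bash_profile; $cmd"
    done
}
```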

Starting the ZooKeeper and Hadoop clusters

#!/bin/bash
echo "================     Starting ZooKeeper             ================"
echo "================           root@repo             ================"
zkServer.sh start
for node in root@node1 root@node2 root@node3
do
        echo "--------"$node"--------"
        # Do not omit "source /etc/profile;source ~/.bash_profile;" -- ZooKeeper
        # once started and then shut itself down because this was left out.
        ssh $node "source /etc/profile;source ~/.bash_profile;/home/hadoop/zk-cluster.sh"
done
echo "================     Starting HDFS                  ================"
start-dfs.sh
echo "================     Starting YARN                  ================"
ssh root@node2 'start-yarn.sh'
echo "================     Starting JobHistoryServer      ================"
ssh root@node1 'mr-jobhistory-daemon.sh start historyserver'
echo "================           root@repo             ================"
jps
for i in root@node1 root@node2 root@node3
do
        echo "================           $i             ================"
        ssh $i '/usr/local/jdk1.8.0_201/bin/jps'
done
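The loop at the top of this script calls /home/hadoop/zk-cluster.sh on each node, but the post never shows that helper. A plausible reconstruction (an assumption: all it has to do is start the local ZooKeeper server and report its role). The sketch writes it to /tmp for illustration; on the nodes it would be deployed as /home/hadoop/zk-cluster.sh:

```shell
#!/bin/bash
# Hypothetical contents of the per-node helper invoked by the loop above.
# Written to /tmp here for illustration; deploy as /home/hadoop/zk-cluster.sh.
cat > /tmp/zk-cluster.sh <<'EOF'
#!/bin/bash
# Start the local ZooKeeper server and report its role (leader/follower).
zkServer.sh start
zkServer.sh status
EOF
chmod +x /tmp/zk-cluster.sh
```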

Starting the Kylin cluster:

#!/bin/bash
echo "================     Starting HBase                 ================"
start-hbase.sh
echo "================     Starting Spark                 ================"
/home/hadoop/apps/spark-2.2.3/sbin/start-spark-all.sh
echo "================     Starting Kylin                 ================"
ssh root@node1 'source /etc/profile;source ~/.bash_profile;/home/hadoop/apps/kylin-2.6.1/bin/kylin.sh start'
echo "================     Done                           ================"
echo "================           repo             ================"
jps
for i in root@node1 root@node2 root@node3
do
        echo "================           $i             ================"
        ssh $i '/usr/local/jdk1.8.0_201/bin/jps'
done
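kylin.sh start returns before the web UI is actually serving requests, so jps alone can be misleading. A quick reachability probe (a sketch: the host name and Kylin's default web port 7070 are assumptions for this cluster; bash's /dev/tcp redirection is used so no extra tools are needed):

```shell
#!/bin/bash
# Probe a TCP port to see whether a service is accepting connections.
# Kylin's web UI listens on port 7070 by default.
port_open() {
    local host="$1" port="$2"
    # bash opens a real TCP connection for redirections to /dev/tcp/<host>/<port>
    (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if port_open node1 7070; then
    echo "Kylin web UI is reachable"
else
    echo "Kylin web UI is not reachable yet"
fi
```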


Stopping the Kylin cluster

#!/bin/bash

echo "================     Stopping Kylin                 ================"
ssh root@node1 'source /etc/profile;source ~/.bash_profile;/home/hadoop/apps/kylin-2.6.1/bin/kylin.sh stop'
echo "================     Stopping Spark                 ================"
/home/hadoop/apps/spark-2.2.3/sbin/stop-spark-all.sh
echo "================     Stopping HBase                 ================"
stop-hbase.sh
echo "================     Done                           ================"
echo "================           repo             ================"
jps
for i in root@node1 root@node2 root@node3
do
        echo "================           $i             ================"
        ssh $i '/usr/local/jdk1.8.0_201/bin/jps'
done

Stopping the Hadoop cluster

#!/bin/bash
echo "================     Stopping all node services     ================"
echo "================     Stopping ZooKeeper             ================"
for i in root@node1 root@node2 root@node3
do
        echo "================           $i             ================"
        ssh $i 'zkServer.sh stop'
done
echo "================     Stopping HDFS                  ================"
stop-dfs.sh
echo "================     Stopping YARN                  ================"
ssh root@node2 'stop-yarn.sh'
echo "================     Stopping JobHistoryServer      ================"
ssh root@node1 'mr-jobhistory-daemon.sh stop historyserver'
echo "================           root@repo             ================"
jps
for i in root@node1 root@node2 root@node3
do
        echo "================           $i             ================"
        ssh $i 'jps'
done
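All four scripts end by printing jps output for every node, which leaves you scanning process lists by eye. A small checker can flag anything missing instead (a sketch; the daemon names are the standard jps labels, e.g. QuorumPeerMain for ZooKeeper, NameNode/DataNode for HDFS, ResourceManager/NodeManager for YARN, HMaster/HRegionServer for HBase):

```shell
#!/bin/bash
# Check that each expected daemon name appears in a jps listing.
# Usage: check_daemons "$(ssh $node jps)" DataNode NodeManager ...
check_daemons() {
    local jps_output="$1"
    shift
    local missing=0
    for daemon in "$@"
    do
        if ! printf '%s\n' "$jps_output" | grep -q "$daemon"; then
            echo "MISSING: $daemon"
            missing=1
        fi
    done
    return $missing
}
```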

