A simple example of the Spark standalone cluster manager on Huawei Cloud servers running CentOS 7.2

1. Three cloud servers
192.168.1.114   spark01
192.168.1.185   spark02
192.168.1.50    spark03

2. Configure passwordless SSH login
The setup consists mainly of five small scripts and config snippets:
ip.txt
#entries to be appended to /etc/hosts
192.168.1.114   spark01
192.168.1.185   spark02
192.168.1.50    spark03

yes.txt
#suppress the interactive 'yes' host-key confirmation during passwordless login
#append to the end of /etc/ssh/ssh_config
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
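
For reference, the same effect can be had per command without editing /etc/ssh/ssh_config globally (the scripts below rely on the global setting, so this is only an optional alternative):
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@spark02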

every.sh
#!/bin/bash
cat ~/ip.txt >> /etc/hosts
#the lines below are already baked into my prepared image, so they are commented out here
#cat ~/env.txt >> /etc/profile 
#source /etc/profile
#cat ~/yes.txt >>  /etc/ssh/ssh_config 
#systemctl restart sshd.service

env.txt
#environment variables
JAVA_HOME=/usr/local/src/app/jdk1.8.0_131
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
HADOOP_HOME=/usr/local/src/app/hadoop-2.8.1
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
export JAVA_HOME  PATH CLASSPATH  HADOOP_HOME
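
Since these entries are already baked into /etc/profile on my image (see the commented-out lines in every.sh), an optional quick check that they took effect is:
[root@spark01 ~]# source /etc/profile
[root@spark01 ~]# echo $JAVA_HOME
/usr/local/src/app/jdk1.8.0_131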

Running this single script on the master node completes the entire passwordless-login setup:
only01.sh
#!/bin/bash
#install sshpass so the password can be supplied non-interactively
yum install -y epel-release
yum repolist
yum install -y sshpass

sh ~/every.sh
num=$1
#generate a key pair on every node and collect the public keys
for i in $( seq 1 $num )
do
   sshpass -p '3363018tiaN' ssh spark0${i} "ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa"
   sshpass -p '3363018tiaN' scp root@spark0${i}:~/.ssh/id_rsa.pub  ~/.ssh/p${i}
   cat ~/.ssh/p${i} >>  ~/.ssh/authorized_keys
done


#distribute the collected authorized_keys to the other nodes
for i in $( seq 2 $num )
do
   scp ~/.ssh/authorized_keys  root@spark0${i}:~/.ssh
done
#distribute the scripts and txt files to the other nodes
for i in $( seq 2 $num )
do
   scp ~/* root@spark0${i}:~/
done

#run every.sh on each node and set its hostname
for i in $( seq 2 $num )
do
   sshpass -p '3363018tiaN' ssh spark0${i} "sh ~/every.sh"
   sshpass -p '3363018tiaN' ssh spark0${i} "hostname spark0${i}"
done

hostname spark01

3. Upload the five files above to spark01
#yes.txt and env.txt were already baked into my image, so only the remaining files appear here
[root@spark01 ~]# ls
anaconda-ks.cfg  every.sh  ip.txt  only01.sh
[root@spark01 ~]# sh only01.sh 3
All that remains is typing the password at the prompts; after that, passwordless login between the nodes is complete.
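
To make sure key-based login really works before moving on, an optional check like the one below can be run from spark01 (BatchMode makes ssh fail instead of prompting if key authentication is broken; the script name check.sh is just a suggestion):
#!/bin/bash
#check.sh -- optional verification, run on spark01
for h in spark01 spark02 spark03
do
   ssh -o BatchMode=yes root@${h} hostname || echo "passwordless login to ${h} failed"
done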

4. Configure Spark
[root@spark01 ~]# cd /usr/local/src/app/spark-2.2.0-bin-hadoop2.7/
[root@spark01 spark-2.2.0-bin-hadoop2.7]# cd conf/
[root@spark01 conf]# ls
docker.properties.template  log4j.properties.template    slaves.template               spark-env.sh.template
fairscheduler.xml.template  metrics.properties.template  spark-defaults.conf.template
[root@spark01 conf]# cp slaves.template slaves
[root@spark01 conf]# cp spark-env.sh.template spark-env.sh
[root@spark01 conf]# vim slaves
spark02
spark03
[root@spark01 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/src/app/jdk1.8.0_131
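
Only JAVA_HOME is strictly required for this setup; spark-env.sh.template documents further optional variables. A sketch with a few commonly used ones (the resource values are placeholders, adjust them to the actual server flavor) could be:
export JAVA_HOME=/usr/local/src/app/jdk1.8.0_131
#bind the master explicitly to its hostname
export SPARK_MASTER_HOST=spark01
#cap what each worker offers to the cluster (placeholder values)
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g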
Copy the configuration to spark02 and spark03:
[root@spark01 spark-2.2.0-bin-hadoop2.7]# scp conf/* root@spark02:/usr/local/src/app/spark-2.2.0-bin-hadoop2.7/conf
[root@spark01 spark-2.2.0-bin-hadoop2.7]# scp conf/* root@spark03:/usr/local/src/app/spark-2.2.0-bin-hadoop2.7/conf
Start the cluster on the master node:
[root@spark01 spark-2.2.0-bin-hadoop2.7]# sh sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/src/app/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-spark01.out
spark03: Warning: Permanently added 'spark03,192.168.1.50' (ECDSA) to the list of known hosts.
spark02: Warning: Permanently added 'spark02,192.168.1.185' (ECDSA) to the list of known hosts.
spark03: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/src/app/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark03.out
spark02: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/src/app/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark02.out
[root@spark01 spark-2.2.0-bin-hadoop2.7]# jps
6050 Jps
5976 Master
[root@spark01 spark-2.2.0-bin-hadoop2.7]# 

5. Check the worker nodes
[root@spark02 conf]# jps
6050 Jps
5958 Worker
[root@spark02 conf]# 
[root@spark03 conf]# jps
6002 Worker
6119 Jps
[root@spark03 conf]# 
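
With the Master and both Workers running, the cluster can be smoke-tested. The master web UI should be reachable at http://spark01:8080, and the bundled SparkPi example can be submitted against the standalone master (spark://spark01:7077 is the default master URL; the jar path below matches the Spark 2.2.0 / Scala 2.11 layout):
[root@spark01 spark-2.2.0-bin-hadoop2.7]# bin/spark-submit \
  --master spark://spark01:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.2.0.jar 10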
