Spark 2.3.1 Cluster Setup (Master, Slave, Slave)

The base configuration is the same as in the previous post.

For the full setup tutorial, first see the Xiamen University Database Lab blog series: Spark 2.0 Distributed Cluster Environment Setup.

Two configuration files need attention. First, create the slaves file from its template:

cd /usr/local/spark/
cp ./conf/slaves.template ./conf/slaves

The slaves file lists the Worker nodes. Edit it and replace the default localhost with the following:

slave1
slave2
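
The hostnames slave1 and slave2 must resolve on every node. A minimal /etc/hosts sketch, assuming the master IP 192.168.137.129 used below in spark-env.sh; the two slave IPs are placeholders:

# /etc/hosts on the master and both slaves — the slave IPs are assumptions
192.168.137.129  master
192.168.137.130  slave1
192.168.137.131  slave2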

Next, configure the spark-env.sh file. Copy spark-env.sh.template to spark-env.sh:
cp ./conf/spark-env.sh.template ./conf/spark-env.sh

Edit spark-env.sh and add the following:

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=192.168.137.129
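
(In Spark 2.x, SPARK_MASTER_IP is deprecated in favor of SPARK_MASTER_HOST; 2.3.1 still accepts the old name but logs a warning.)

Both files must be identical on every node. A sketch for pushing them out from the master, assuming the same /usr/local/spark path and the lockey user on all machines:

# Run from /usr/local/spark; copy the edited config to both workers
for host in slave1 slave2; do
  scp conf/slaves conf/spark-env.sh "lockey@${host}:/usr/local/spark/conf/"
done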

With the configuration in place, start the master:

lockey@master:/usr/local$ ./spark/sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-lockey-org.apache.spark.deploy.master.Master-1-master.out
lockey@ubuntu-lockey:/usr/local/spark$ jps
16371 Master
16421 Jps
15063 SecondaryNameNode
14840 NameNode
15210 ResourceManager
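
Before starting the workers, note that start-slaves.sh connects over SSH to every host listed in conf/slaves, so the master needs passwordless SSH to each worker. A one-time setup sketch, run as lockey on the master (skip the keygen if a key already exists):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # key pair with no passphrase
ssh-copy-id lockey@slave1                  # install the public key on each worker
ssh-copy-id lockey@slave2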

Then start the two slaves (run this command on the master):

lockey@master:/usr/local$ ./spark/sbin/start-slaves.sh
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-lockey-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-lockey-org.apache.spark.deploy.worker.Worker-1-slave2.out
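
If a Worker fails to come up, inspect its log at the path printed above, for example on slave1:

# Show the most recent Worker log entries
tail -n 50 /usr/local/spark/logs/spark-lockey-org.apache.spark.deploy.worker.Worker-1-slave1.out

Once both workers are up, jps on each slave should show a Worker process: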


lockey@slave1:/usr/local$ jps
1832 Jps
1578 NodeManager
1435 DataNode
1787 Worker

Now let's verify through the web UI; the master UI listens on port 8080:
[Figure 1: master web UI on port 8080]

After starting the slaves, the page shows the two additional Workers:
[Figure 2: master web UI with two alive Workers]
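
If you prefer the command line, the master UI also exposes the same cluster state as JSON (assuming the default port 8080):

# Dumps master state, including the list of alive workers
curl -s http://192.168.137.129:8080/json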

Let's also take a look at a worker's web UI (port 8081 by default):
[Figure 3: worker web UI]

With that, our minimal Spark cluster is up and running.
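
As a final end-to-end check, you can submit the bundled SparkPi example to the new cluster. A sketch: the master URL assumes the default spark://master:7077, and the jar path matches the stock Spark 2.3.1 layout (built against Scala 2.11):

# Compute pi on the cluster using 100 tasks
/usr/local/spark/bin/spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark/examples/jars/spark-examples_2.11-2.3.1.jar 100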

Shutting down the Spark cluster

Stop the Master node:
cd /usr/local/spark/
sbin/stop-master.sh

Stop the Worker nodes:
sbin/stop-slaves.sh
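
Spark also ships a combined script that stops the master and all workers in one step:

# Run from /usr/local/spark
sbin/stop-all.sh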

Shut down the Hadoop cluster:
cd /usr/local/hadoop/
sbin/stop-all.sh
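
Note that Hadoop 2.x marks stop-all.sh as deprecated; the explicit equivalent is to stop YARN and HDFS separately:

sbin/stop-yarn.sh
sbin/stop-dfs.sh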
