Spark 2.2.0 Installation and Configuration

Prerequisites:

Java 1.8
Scala 2.11.8
Hadoop 2.7.3
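
Before going further, confirm each dependency is installed and on the PATH; a quick sanity check (the expected versions are the ones listed above):

java -version      # expect 1.8.x
scala -version     # expect 2.11.8
hadoop version     # expect 2.7.3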

Notes:

Host mappings:
192.168.238.100 node01
192.168.238.101 node02
192.168.238.102 node03
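
These entries belong in /etc/hosts on every node; one way to append them (requires root; a sketch):

sudo tee -a /etc/hosts <<'EOF'
192.168.238.100 node01
192.168.238.101 node02
192.168.238.102 node03
EOF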

The master runs on node01; the workers run on node02 and node03.
Set up passwordless SSH from node01 to node02 and node03 first.
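
One way to set that up (assuming the bingo user from the scp commands below exists on all three nodes):

ssh-keygen -t rsa        # on node01, accept the defaults
ssh-copy-id bingo@node02
ssh-copy-id bingo@node03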

Edit the configuration

cd /export/servers/spark-2.2.0/conf/
spark-env.sh

cp spark-env.sh.template spark-env.sh
vi spark-env.sh

Append at the end:
export JAVA_HOME=/export/servers/jdk1.8.0_102
export SCALA_HOME=/export/servers/scala-2.11.8
export HADOOP_HOME=/export/servers/hadoop-2.7.3
export HADOOP_CONF_DIR=/export/servers/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=node01
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_MEMORY=1g
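
Note: Spark 2.x prefers SPARK_MASTER_HOST; SPARK_MASTER_IP still works in 2.2.0, but the master logs a deprecation warning for it.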

slaves

cp slaves.template slaves
vi slaves

Replace the localhost entry with:
node02
node03

Copy the Spark directory to node02 and node03:

scp -r /export/servers/spark-2.2.0/ bingo@node02:/export/servers/
scp -r /export/servers/spark-2.2.0/ bingo@node03:/export/servers/

Add the environment variables on each node:

vi /etc/profile
# append:
export SPARK_HOME=/export/servers/spark-2.2.0
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

Reload the environment on each node:
source /etc/profile
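
To confirm the variables took effect on each node:

echo $SPARK_HOME          # should print /export/servers/spark-2.2.0
spark-submit --version    # should report version 2.2.0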

Starting and stopping (on node01):

Start the master:
start-master.sh
Start all workers:
start-slaves.sh
Stop:
stop-master.sh
stop-slaves.sh
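
Spark's sbin also ships combined scripts that start or stop the master together with every worker listed in conf/slaves:

start-all.sh
stop-all.sh

(Only Spark's bin and sbin were added to the PATH above, so these names resolve to Spark's scripts rather than Hadoop's identically named, deprecated ones.)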

Web UI:
http://node01:8080/
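
To verify the cluster end to end, attach a shell to the master and run a trivial job; the sum of 1..100 below is a sanity check:

spark-shell --master spark://node01:7077
scala> sc.parallelize(1 to 100).sum()   # returns 5050.0

The application should also appear on the web UI above.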
