Spark Standalone (Single Machine)

Please read "Cluster Setup Must-Read: Initial Preparation" first.

 

Required file: spark-2.1.2-bin-hadoop2.7.tgz

 

1. Install spark-2.1.2

$ sudo mkdir -p /colony/spark    # create the Spark installation directory

$ sudo tar -zxvf spark-2.1.2-bin-hadoop2.7.tgz -C /colony/spark/

$ sudo chown -R hadoop /colony    # give ownership of the directory to the hadoop user

$ sudo vim /etc/profile    # configure the environment variables

 

export SPARK_HOME=/colony/spark/spark-2.1.2-bin-hadoop2.7

export PATH=$PATH:$JAVA_HOME/bin:$SPARK_HOME/bin

 

$ source /etc/profile    # apply the changes to the current shell (source is a shell builtin, so it is not run with sudo)
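To sanity-check the installation at this point, the bundled spark-submit script should already be on the PATH and able to report its version:

$ spark-submit --version    # should print Spark version 2.1.2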

2. Edit the configuration files (/colony/spark/spark-2.1.2-bin-hadoop2.7/conf/)

1) Rename the template files

$ cd /colony/spark/spark-2.1.2-bin-hadoop2.7/conf

$ mv spark-env.sh.template spark-env.sh

$ mv slaves.template slaves
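The contents of spark-env.sh are not shown here. A minimal sketch for a single machine might look like the following; the JAVA_HOME path and the memory/core values are assumptions, so adjust them to your environment:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64    # assumed JDK location; use your own
export SPARK_MASTER_HOST=localhost                    # bind the master to this machine
export SPARK_WORKER_MEMORY=1g                         # example value: memory per worker
export SPARK_WORKER_CORES=1                           # example value: cores per worker

For a single machine, the slaves file can keep its default content, a single line reading localhost. With both files in place, the standalone master and worker can be started with the scripts Spark ships in sbin/:

$ /colony/spark/spark-2.1.2-bin-hadoop2.7/sbin/start-all.sh

$ jps    # a Master and a Worker process should now be listed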

3. Start spark-shell

$ /colony/spark/spark-2.1.2-bin-hadoop2.7/bin/spark-shell
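Run as above, spark-shell starts in local mode. To attach it to the standalone cluster started earlier, pass the master URL (assuming the master is on this machine and listening on the default port 7077):

$ /colony/spark/spark-2.1.2-bin-hadoop2.7/bin/spark-shell --master spark://localhost:7077

Once the prompt appears, a one-line Scala job serves as a smoke test:

scala> sc.parallelize(1 to 100).reduce(_ + _)    // should return 5050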


This completes the Spark Standalone installation. Thanks for reading!
