Installing Spark on CentOS 7

1. First, make sure Hadoop is installed

For an installation guide, see:
http://blog.csdn.net/sunweijm/article/details/78399726

2. Install Spark

Download the package, or copy it over from another host:

[hadoop@vdevops sparkDownload]$ wget http://mirror.bit.edu.cn/apache/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz  # download the package
[hadoop@vdevops sparkDownload]$ sudo tar -zxf /home/hadoop/sparkDownload/spark-2.2.0-bin-hadoop2.7.tgz -C /usr/local  # extract to /usr/local
[hadoop@vdevops local]$ sudo mv ./spark-2.2.0-bin-hadoop2.7/ ./spark  # rename the directory
[hadoop@vdevops local]$ sudo chown -R hadoop:hadoop ./spark  # give the hadoop user ownership

Next, set Spark's classpath in ./conf/spark-env.sh so that Spark can find Hadoop's jars (and access HDFS):

[hadoop@vdevops local]$ cd spark
[hadoop@vdevops spark]$ cp ./conf/spark-env.sh.template ./conf/spark-env.sh

[hadoop@vdevops spark]$ vim ./conf/spark-env.sh

Add the following line to the file:

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

Submitting a Python script to Spark (the script lives in /home/hadoop/sparkMyApp/test1):
From the /home/hadoop/sparkMyApp/test1 directory, run:

/usr/local/spark/bin/spark-submit   --master local[*]   SimpleApp.py

(To copy the test1 folder and its contents from this host into the sparkMyApp directory on host test2, use scp:
scp -r test1 hadoop@test2:~/sparkMyApp/)
