[Spark] Spark 3.0.0 on YARN: Installation and Testing

Versions:

hive 3.1.1

spark 3.0.0

hadoop 3.0.0

Spark supports Hadoop 3 starting from version 3.0.0, and Hive 3 supports Hadoop 3 as well, so the three make a good combination.

 

3.1. Download Spark 3.0.0

https://mirror.bit.edu.cn/apache/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz

Version support: Scala 2.12 (you can actually run without installing Scala, but older Scala versions likely will not work), Java 8+.
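For reference, the download and unpack steps look like this (the install prefix under $HOME is just an example; adjust to your environment). This requires network access, so treat it as a sketch:

```shell
# Fetch the Hadoop 3.2 build of Spark 3.0.0 from the mirror given above
wget https://mirror.bit.edu.cn/apache/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz

# Unpack into the home directory (illustrative location)
tar -xzf spark-3.0.0-bin-hadoop3.2.tgz -C "$HOME"
```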

 

3.2. Spark 3 Configuration

 

3.2.1. spark-env.sh

export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://xx/user/spark/applicationHistory_xx0001"
export HADOOP_HOME=~/hadoop
export SPARK_HOME=/home/xx/spark-3.0.0-bin-hadoop3.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:$HADOOP_HOME/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_CLASSPATH=$SPARK_CLASSPATH:$HADOOP_HOME/share/hadoop/yarn/:$HADOOP_HOME/share/hadoop/yarn/lib/:$HADOOP_HOME/share/hadoop/common/:$HADOOP_HOME/share/hadoop/common/lib/:$HADOOP_HOME/share/hadoop/hdfs/:$HADOOP_HOME/share/hadoop/hdfs/lib/:$HADOOP_HOME/share/hadoop/mapreduce/:$HADOOP_HOME/share/hadoop/mapreduce/lib/:$HADOOP_HOME/share/hadoop/tools/lib/:$SPARK_HOME/lib
export SPARK_LOCAL_IP=172.x.x.2
 

3.2.2. spark-defaults.conf

spark.eventLog.enabled           true
spark.yarn.historyServer.address 172.23.4.152:12345
spark.eventLog.compress true
spark.history.ui.port 12345

spark.eventLog.dir hdfs://xx/user/spark/applicationHistory_szhadoop310001
spark.history.fs.logDirectory hdfs://xx/user/spark/applicationHistory_szhadoop310001
# note: spark.port.min/max are not standard Spark property names (unknown keys are ignored),
# and spark.shuffle.consolidateFiles has had no effect since hash shuffle was removed in Spark 2.0
spark.port.min  30000
spark.port.max  40000
spark.executor.extraLibraryPath /home/hadoop/cdh5/hadoop/latest/lib/native
spark.shuffle.consolidateFiles true
spark.executor.extraJavaOptions -Xss24m
spark.driver.extraJavaOptions -Xss24m
spark.yarn.submit.file.replication 3
spark.sql.parquet.compression.codec     gzip

spark.history.fs.cleaner.enabled true
spark.history.fs.cleaner.interval 1d
spark.history.fs.cleaner.maxAge 3d
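One step the settings above imply but do not show: the event-log directory must already exist on HDFS before applications write to it, and the history server must be started to serve the UI on port 12345. A rough sketch, using the paths from the config above (these are cluster commands and assume SPARK_HOME is set as in spark-env.sh):

```shell
# Create the event log directory on HDFS (path taken from spark-defaults.conf above)
hdfs dfs -mkdir -p /user/spark/applicationHistory_szhadoop310001
# Sticky-bit 1777 lets every user write its own logs but not delete others'
hdfs dfs -chmod 1777 /user/spark/applicationHistory_szhadoop310001

# Start the history server; it reads the spark.history.* settings above
$SPARK_HOME/sbin/start-history-server.sh
```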
 

3.2.3. Symlink the relevant config files into spark/conf

core-site.xml 

hdfs-site.xml 

hive-site.xml 

mapred-site.xml 

yarn-site.xml 

These are the five configuration files from Hadoop and Hive.

If you want to query Hive tables, remember to copy (or symlink) hive-site.xml.
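The symlinking can be scripted. This is a sketch only: the three directory variables are placeholders for your actual Hadoop, Hive, and Spark install locations, not paths from this setup:

```shell
# Sketch: link Hadoop/Hive client configs into Spark's conf dir.
# All three paths below are assumptions; point them at your real installs.
HADOOP_CONF="${HADOOP_CONF:-$HOME/hadoop/etc/hadoop}"
HIVE_CONF="${HIVE_CONF:-$HOME/hive/conf}"
SPARK_CONF="${SPARK_CONF:-$HOME/spark-3.0.0-bin-hadoop3.2/conf}"

mkdir -p "$SPARK_CONF"
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  ln -sf "$HADOOP_CONF/$f" "$SPARK_CONF/$f"   # -f replaces stale links on re-run
done
ln -sf "$HIVE_CONF/hive-site.xml" "$SPARK_CONF/hive-site.xml"
```

Using `ln -sf` keeps the script idempotent, so re-running it after a Hadoop upgrade refreshes the links in place.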

 

3.3. Testing

spark-shell --master yarn --deploy-mode client --queue root \
  --num-executors 10 --driver-memory 3g --executor-memory 4g --executor-cores 3 \
  --conf spark.speculation=true \
  --conf spark.speculation.quantile=0.75 \
  --conf spark.speculation.multiplier=1.5 \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars /home/xx/module/hive/lib/mysql-connector-java-5.1.38.jar

scala> spark.sql("show tables").show(false)
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
|default |student  |false      |
+--------+---------+-----------+

scala> import spark.sql
import spark.sql

scala> sql("select * from student").show(false)
+---+----+
|id |name|
+---+----+
|1  |tony|
|1  |tony|
+---+----+

 

The test passed. The next step is to validate actual business workloads.
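Before moving on to business jobs, the bundled SparkPi example makes a convenient end-to-end smoke test of the YARN setup (a cluster command; the queue matches the spark-shell invocation above, while the resource sizes are just illustrative):

```shell
# Submit the built-in SparkPi example in cluster mode; the jar name is the
# one shipped in the Spark 3.0.0 / Scala 2.12 distribution
$SPARK_HOME/bin/spark-submit --master yarn --deploy-mode cluster \
  --queue root --num-executors 2 --executor-memory 2g \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.0.jar 100
```

If this succeeds, "Pi is roughly 3.14..." appears in the driver log, and the run should also show up in the history server UI configured above.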

 
