Big Data IMF Legend: Installing and Deploying the Spark History Server on a Distributed Cluster, with Troubleshooting

Configuring the Spark History Server
 
1. In Spark's conf directory, /usr/local/spark-1.6.0-bin-hadoop2.6/conf, rename spark-defaults.conf.template to spark-defaults.conf:

mv spark-defaults.conf.template spark-defaults.conf

root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cd /usr/local/spark-1.6.0-bin-hadoop2.6/conf
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/conf# mv spark-defaults.conf.template spark-defaults.conf






2. Add the following settings to spark-defaults.conf:
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:9000/historyserverforSpark
spark.history.ui.port            18080
spark.history.fs.logDirectory    hdfs://master:9000/historyserverforSpark
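Here spark.eventLog.enabled turns on event logging for every application, spark.eventLog.dir is where running applications write their event logs, and spark.history.fs.logDirectory is where the History Server reads them from; in this setup both point to the same HDFS path. Optionally, the log directory can be kept from growing without bound by the History Server's cleaner. The lines below are only a sketch; the values shown (check once a day, keep seven days of logs) are illustrative and not part of the original configuration:

spark.history.fs.cleaner.enabled    true
spark.history.fs.cleaner.interval   1d
spark.history.fs.cleaner.maxAge     7d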






3. Distribute /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf to the worker nodes:




root@master:/usr/local/setup_scripts# vi sparkhistory_scp.sh




#!/bin/sh
# Push spark-defaults.conf to worker nodes 2 through 9.
for i in 2 3 4 5 6 7 8 9
do
  scp -rq /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf \
      root@192.168.189.$i:/usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf
done




root@master:/usr/local/setup_scripts# chmod u+x sparkhistory_scp.sh
root@master:/usr/local/setup_scripts# ./sparkhistory_scp.sh
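As an alternative sketch, the same distribution can be driven by the hostnames already listed in Spark's conf/slaves file instead of hard-coded IP suffixes; this assumes passwordless SSH for root to every worker, just like the script above:

#!/bin/sh
# Push spark-defaults.conf to every host listed in conf/slaves (comment lines skipped).
for host in $(grep -v '^#' /usr/local/spark-1.6.0-bin-hadoop2.6/conf/slaves)
do
  scp -q /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf \
      root@"$host":/usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf
done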


4. Start the History Server:
root@master:/usr/local/setup_scripts# cd /usr/local/spark-1.6.0-bin-hadoop2.6/sbin
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin#  ./start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# 






root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# cat /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
Spark Command: /usr/local/jdk1.8.0_60/bin/java -cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.history.HistoryServer
========================================
16/02/07 17:49:55 INFO history.HistoryServer: Registered signal handlers for [TERM, HUP, INT]
16/02/07 17:49:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/02/07 17:49:56 INFO spark.SecurityManager: Changing view acls to: root
16/02/07 17:49:56 INFO spark.SecurityManager: Changing modify acls to: root
16/02/07 17:49:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:235)
        at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: java.lang.IllegalArgumentException: Log directory specified does not exist: hdfs://master:9000/historyserverforSpark.
        at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:168)
        at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:120)
        at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:116)
        at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:49)
        ... 6 more
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# 
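The last "Caused by" line is the key: the HDFS path configured in spark.history.fs.logDirectory does not exist yet, so FsHistoryProvider refuses to start. A quick check from the shell (assuming the Hadoop client is on the PATH, as in the transcripts above) confirms the directory is missing:

hadoop fs -ls hdfs://master:9000/historyserverforSpark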






5. Problem resolution: create the historyserverforSpark directory on HDFS.




root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# hadoop fs -mkdir /historyserverforSpark
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# hadoop fs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2016-02-07 18:24 /historyserverforSpark
drwx-wx-wx   - root supergroup          0 2016-02-07 17:37 /tmp
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# 
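If applications later write event logs as a different user than the one running the History Server, the directory may also need wider permissions. A hedged example, only needed in that case (the all-root setup above does not require it):

hadoop fs -chmod 777 /historyserverforSpark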






root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# ./start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
root@master:/usr/local/spark-1.6.0-bin-hadoop2.6/sbin# jps
5378 NameNode
7925 Jps
5608 SecondaryNameNode
5742 ResourceManager
7887 HistoryServer
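Before opening a browser, a quick sanity check from the shell (a sketch, assuming curl is installed on master) confirms the web UI is answering on the configured port 18080:

curl -I http://master:18080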






6. Browse http://192.168.189.1:18080/ in a web browser. The page shows:




 1.6.0 History Server
Event log directory: hdfs://master:9000/historyserverforSpark
No completed applications found!


Did you specify the correct logging directory? Please verify your setting of spark.history.fs.logDirectory and whether you have the permissions to access it.
It is also possible that your application did not run to completion or did not stop the SparkContext.


Show incomplete applications


(Figure 1: screenshot of the History Server web UI at http://192.168.189.1:18080/)
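The "No completed applications found!" message simply means nothing has finished running since event logging was enabled. As a sketch, running a short example job and letting it complete should make it appear on the next refresh; this assumes a standalone Spark master at spark://master:7077 (use --master yarn instead if jobs are submitted to YARN) and the examples jar bundled with the 1.6.0 distribution:

cd /usr/local/spark-1.6.0-bin-hadoop2.6
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master spark://master:7077 \
    lib/spark-examples-1.6.0-hadoop2.6.0.jar 100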





