Build server: ip
Build directory: /data10/spark/
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
cd $SPARK_HOME (the Spark source directory)
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phadoop-provided -Phive -Phive-thriftserver -Pnetlib-lgpl -DskipTests clean package
spark-2.0.0
./dev/make-distribution.sh --name dev --tgz -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phadoop-provided -Phive -Phive-thriftserver -Pnetlib-lgpl
spark-2.0.1
./dev/make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn
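make-distribution.sh writes the result into the source root as a tarball named after the --name argument; a minimal unpack sketch for the spark-2.0.1 command above (artifact name assumed from the version and --name value):
# extract the built distribution; the directory it produces is what is referred to as spark-2.x.x-bin below
tar -xzf spark-2.0.1-bin-custom-spark.tgz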
1. spark.yarn.archive: the configured value is a directory whose jars are identical to the contents of spark-2.x.x-bin/jars,
i.e. upload spark-2.x.x-bin/jars to the designated HDFS directory, e.g. hdfs://ns1/spark/jars/spark2.x.x_jars (see the sketch below)
2. After publishing a new version, be sure to update the value of spark.yarn.archive; the other settings can follow the spark-2.0.0 configuration
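A minimal upload sketch for step 1, assuming the extracted distribution is at /data0/spark/spark-2.0.1-bin (path assumed) and the target matches the spark.yarn.archive value used below:
# create the archive directory on HDFS and push every jar from the distribution into it
hadoop fs -mkdir -p hdfs://ns1/spark/jars/spark2.0.1_jars
hadoop fs -put /data0/spark/spark-2.0.1-bin/jars/* hdfs://ns1/spark/jars/spark2.0.1_jars/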
spark-defaults.conf
##################### common for yarn #####################
spark.yarn.archive hdfs://ns1/spark/jars/spark2.0.1_jars
spark.yarn.historyServer.address yz724.hadoop.data.sina.com.cn:18080
spark.eventLog.enabled true
spark.eventLog.dir hdfs://ns1/spark/logs
spark.yarn.report.interval 3000
spark.yarn.maxAppAttempts 2
spark.yarn.submit.file.replication 10
spark.rdd.compress true
spark.dynamicAllocation.enabled false
spark.ui.port 4050
spark.kryoserializer.buffer.max 128m
spark.task.maxFailures 10
### common shuffle ###
spark.shuffle.service.enabled false
spark.shuffle.io.maxRetries 20
spark.shuffle.io.retryWait 5s
##################### driver #####################
#spark.driver.extraJavaOptions=-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:MaxDirectMemorySize=1g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
spark.driver.extraJavaOptions=-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:MaxDirectMemorySize=1g
spark.driver.extraLibraryPath=/usr/local/hadoop-2.4.0/lib/native
spark.driver.maxResultSize=1g
#spark.jars=/data0/spark/spark-2.0.0-bin/jars/javax.servlet-api-3.1.0.jar
##################### executor #####################
spark.executor.extraJavaOptions=-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:MaxDirectMemorySize=300m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
spark.executor.extraLibraryPath /usr/local/hadoop-2.4.0/lib/native
spark.yarn.executor.memoryOverhead 400
spark.executor.logs.rolling.strategy time
spark.executor.logs.rolling.time.interval daily
spark.executor.logs.rolling.maxRetainedFiles 7
##################### spark sql #####################
spark.sql.hive.metastorePartitionPruning true
##################### spark streaming #####################
spark.streaming.kafka.maxRetries 20
##################### spark tasks #####################
spark.scheduler.executorTaskBlacklistTime 100000
spark-env.sh
HADOOP_CONF_DIR=/usr/local/hadoop-2.4.0/etc/hadoop
SPARK_PRINT_LAUNCH_COMMAND=true
LD_LIBRARY_PATH=/usr/local/hadoop-2.4.0/lib/native
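With the build installed and the configuration above in place, a job can be submitted to YARN. A minimal sketch (the installation path, example class, and jar name are assumptions):
# run the bundled SparkPi example in yarn-cluster mode against this configuration
/data0/spark/spark-2.0.1-bin/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /data0/spark/spark-2.0.1-bin/examples/jars/spark-examples_2.11-2.0.1.jar 100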
An exception like the following appears when the submitting user cannot write to /tmp on HDFS:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=chaochao1, access=WRITE, inode="/tmp":hadoop:supergroup:drwxr-xr-x
Before the change (hive-site.xml):
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>16B78E6FED30A530</value>
  <description>password to use against metastore database</description>
</property>
After the change:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive2</value>
  <description>password to use against metastore database</description>
</property>
Note: if the above is not modified, the following exception occurs:
java.sql.SQLException: Access denied for user 'hive2'@'10.39.3.142' (using password: YES)
1. Copy mysql-connector-java-5.1.15-bin.jar to spark-2.x.x-bin/jars/
2. Upload mysql-connector-java-5.1.15-bin.jar to the HDFS directory configured by spark.yarn.archive (a copy sketch follows the stack trace below)
Without the MySQL driver jar on the classpath, the exception looks like this:
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
... 88 more
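A minimal sketch of both steps, assuming the driver jar is in the current directory, the distribution is at /data0/spark/spark-2.0.1-bin, and spark.yarn.archive points at hdfs://ns1/spark/jars/spark2.0.1_jars (all paths assumed):
# 1. put the JDBC driver into the local jars directory
cp mysql-connector-java-5.1.15-bin.jar /data0/spark/spark-2.0.1-bin/jars/
# 2. upload the same jar into the spark.yarn.archive directory on HDFS
hadoop fs -put mysql-connector-java-5.1.15-bin.jar hdfs://ns1/spark/jars/spark2.0.1_jars/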
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>rm1.hadoop.data.sina.com.cn:9008</value>
  <final>true</final>
</property>
The rm2 entry is optional:
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>rm2.hadoop.data.sina.com.cn:9008</value>
  <final>true</final>
</property>
Since version 1.5.0, Spark has bundled Hive 1.2.1 by default. Starting with Hive 0.14, the permission applied to the scratchdir changed (from 755 to 733). Consequently, if Hive 0.13 applications are already deployed in production, submitting HiveSQL through Spark 1.6 fails with permission-mismatch exceptions. The current workaround is to modify Hive's org.apache.hadoop.hive.ql.session.SessionState class (community 1.2.1 version) so that createRootHDFSDir assigns 755 permissions, which keeps it backward compatible, and then copy the recompiled SessionState class into Spark's assembly and examples packages. Related patch reference: since the Hive 1.2.1 that spark-2.0.0 depends on has already been patched, do the following:
1. Replace the Hive-related jars in spark-2.x.x-bin/jars with the Hive-related jars from 10.39.3.142:/data0/spark/spark-2.0.0/jars/
2. Replace the Hive-related jars in the directory configured by spark.yarn.archive with the Hive-related jars from 10.39.3.142:/data0/spark/spark-2.0.0/jars/ (see the sketch below)
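A minimal replacement sketch, run on 10.39.3.142 and assuming the new distribution is at /data0/spark/spark-2.0.1-bin with spark.yarn.archive pointing at hdfs://ns1/spark/jars/spark2.0.1_jars (paths assumed):
# overwrite the local Hive jars with the patched ones
cp /data0/spark/spark-2.0.0/jars/hive-*.jar /data0/spark/spark-2.0.1-bin/jars/
# refresh the Hive jars in the spark.yarn.archive directory on HDFS
hadoop fs -rm hdfs://ns1/spark/jars/spark2.0.1_jars/hive-*.jar
hadoop fs -put /data0/spark/spark-2.0.1-bin/jars/hive-*.jar hdfs://ns1/spark/jars/spark2.0.1_jars/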
a. The configuration under /data0/spark/spark-2.0.0
b. Official build documentation: http://spark.apache.org/docs/latest/building-spark.html
c. Spark build configuration