Build Spark 1.3.1 with CDH Hadoop

1. Find the CDH version

[root@web02 spark1.3]# hadoop version
Hadoop 2.0.0-cdh4.7.0
Subversion file:///var/lib/jenkins/workspace/CDH4.7.0-Packaging-Hadoop/build/cdh4/hadoop/2.0.0-cdh4.7.0/source/hadoop-common-project/hadoop-common -r c2ecdeb17590d43320f2151f64e48603aa4849f7
Compiled by jenkins on Wed May 28 09:41:14 PDT 2014
From source with checksum f60207d0daa9f943f253cc8932d598c8
This command was run using /app/home/hadoop/src/hadoop-2.0.0-cdh4.7.0/share/hadoop/common/hadoop-common-2.0.0-cdh4.7.0.jar

The corresponding version number:
2.0.0-cdh4.7.0

Build reference: http://spark.apache.org/docs/latest/building-spark.html

Before building, set the environment variable:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

Maven build (use Apache Maven 3.0.4 or later)

Specify the build profiles; the available ones include yarn, hive, and hive-thriftserver.

Apache Hadoop version

mvn -Dhadoop.version=1.2.1 -DskipTests clean package
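
For a YARN-enabled build against Apache Hadoop 2.x, the upstream build docs pair a hadoop profile with a matching version, along these lines (the hadoop-2.4 profile and the 2.4.0 version here are illustrative):

mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package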

CDH version

/app/hadoop/shengli/apache-maven-3.0.5/bin/mvn -Dhadoop.version=2.0.0-cdh4.7.0 -Pyarn -Phive -Phive-thriftserver -DskipTests clean package
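
Assuming a default Scala 2.10 build of Spark 1.3.1, the resulting assembly jar should land under assembly/target/scala-2.10/; the exact file name depends on the hadoop.version used:

ls assembly/target/scala-2.10/spark-assembly-1.3.1-hadoop2.0.0-cdh4.7.0.jar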

Running test cases:
mvn -Dhadoop.version=… -DwildcardSuites=org.apache.spark.repl.ReplSuite test
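
With the CDH version and profiles from the build above filled in (a sketch, not verified against CDH 4.7.0), that might look like:

mvn -Dhadoop.version=2.0.0-cdh4.7.0 -Pyarn -Phive -Phive-thriftserver -DwildcardSuites=org.apache.spark.repl.ReplSuite test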

SBT build

Build against the corresponding version:

build/sbt -Pyarn -Phadoop-2.3 assembly
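
The -Phadoop-2.3 profile above is the upstream example. The sbt build reads the same Maven profiles and properties, so building against the CDH version in this guide should work along these lines (assumed, not verified against CDH 4.7.0):

build/sbt -Pyarn -Phive -Phive-thriftserver -Dhadoop.version=2.0.0-cdh4.7.0 assembly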

Running Spark on YARN requires a Spark binary distribution; see the build steps above.

SPARK_YARN_USER_ENV

  This optional variable lets users set environment variables for Spark on YARN.

  For example: SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"

SPARK_JAR

  Sets the location of the Spark assembly jar on HDFS.

  For example: export SPARK_JAR=hdfs:///some/path

Set this variable on every Hadoop NodeManager node.

Put the Spark assembly on HDFS instead of uploading the spark jar with every submit, using SPARK_JAR=hdfs://xxx/xx/spark-assembly-1.3.1-hadoop2.5.0.jar
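
Uploading the assembly might look like the following; the HDFS directory is illustrative, and the jar name should match what your build actually produced:

hadoop fs -mkdir -p /user/spark/share/lib
hadoop fs -put assembly/target/scala-2.10/spark-assembly-1.3.1-hadoop2.0.0-cdh4.7.0.jar /user/spark/share/lib/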

For example:

SPARK_JAR=hdfs://xxxxxx/xxxxxxx/spark-assembly-1.3.1-hadoop2.5.0.jar \
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --num-executors 3 \
    --driver-memory 1g \
    --executor-memory 3g \
    --executor-cores 2 \
    lib/spark-examples*.jar \
    10
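
In yarn-cluster mode the driver runs inside the cluster, so SparkPi's result appears in the application logs rather than on the console. Assuming YARN log aggregation is enabled, it can be retrieved with:

yarn logs -applicationId <application ID printed by spark-submit>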
