Apache Zeppelin(2) Zeppelin and Spark Yarn Cluster

Recently I have been trying to debug something on Zeppelin; when an error happens, we need to go to the log files for more information.

Check the log files under /opt/zeppelin/logs:
zeppelin-carl-carl-mac.local.log
zeppelin-interpreter-spark-carl-carl-mac.local.log
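
For example, to watch both log files while reproducing an error (the path assumes the /opt/zeppelin install location used above), something like this works:
> tail -f /opt/zeppelin/logs/zeppelin-*.log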

Error Message:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.deploy.SparkHadoopUtil$
        at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1959)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:104)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:179)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:310)
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:163)
        at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:269)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:272)

Build Zeppelin again:
> mvn clean package -Pspark-1.4 -Dhadoop.version=2.6.0 -Phadoop-2.6 -Pyarn -DskipTests

Error Message:
ERROR [2015-06-30 17:04:43,588] ({Thread-43} JobProgressPoller.java[run]:57) - Can not get or update progress
org.apache.zeppelin.interpreter.InterpreterException: java.lang.IllegalStateException: Pool not open
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:286)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:110)
        at org.apache.zeppelin.notebook.Paragraph.progress(Paragraph.java:179)
        at org.apache.zeppelin.scheduler.JobProgressPoller.run(JobProgressPoller.java:54)
Caused by: java.lang.IllegalStateException: Pool not open
        at org.apache.commons.pool2.impl.BaseGenericObjectPool.assertOpen(BaseGenericObjectPool.java:662)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:412)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:284)

Error Message:
ERROR [2015-06-30 17:18:05,297] ({sparkDriver-akka.actor.default-dispatcher-4} Logging.scala[logError]:75) - Lost executor 13 on ubuntu-dev1: remote Rpc client disassociated
INFO [2015-06-30 17:18:05,297] ({sparkDriver-akka.actor.default-dispatcher-4} Logging.scala[logInfo]:59) - Re-queueing tasks for 13 from TaskSet 3.0
WARN [2015-06-30 17:18:05,298] ({sparkDriver-akka.actor.default-dispatcher-4} Logging.scala[logWarning]:71) - Lost task 0.3 in stage 3.0 (TID 14, ubuntu-dev1): ExecutorLostFailure (executor 13 lost)
ERROR [2015-06-30 17:18:05,298] ({sparkDriver-akka.actor.default-dispatcher-4} Logging.scala[logError]:75) - Task 0 in stage 3.0 failed 4 times; aborting job

Solution:
After I fetched the latest Zeppelin source from GitHub and built it myself again, everything works.

Some of the configuration is as follows:
> less conf/zeppelin-env.sh
export MASTER="yarn-client"
export HADOOP_CONF_DIR="/opt/hadoop/etc/hadoop/"

export SPARK_HOME="/opt/spark"
. ${SPARK_HOME}/conf/spark-env.sh
export ZEPPELIN_CLASSPATH="${SPARK_CLASSPATH}"

Start the YARN cluster, then start Zeppelin with this command:
> bin/zeppelin-daemon.sh start

Check the YARN cluster:
http://ubuntu-master:8088/cluster/apps

Visit the Zeppelin UI:
http://ubuntu-master:8080/

Check the Interpreter settings to make sure we are using yarn-client mode and that the other settings are correct.
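
As a quick sanity check (a minimal sketch; sc is the SparkContext that the Spark interpreter provides in a notebook paragraph), you can print the master and version to confirm that the interpreter really runs in yarn-client mode and that the Hadoop configuration is picked up:

println("master    = " + sc.master)                                  // should print yarn-client
println("version   = " + sc.version)                                 // should match the Spark build, e.g. 1.4.x
println("defaultFS = " + sc.hadoopConfiguration.get("fs.defaultFS")) // confirms HADOOP_CONF_DIR is visible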

Place this simple example there:
val threshold = "book1"                                   // the value we want to match
val products = Seq("book1", "book2", "book3", "book4")
val rdd = sc.makeRDD(products, 2)                         // RDD with 2 partitions
val result = rdd.filter { p =>
  p.equals(threshold)
}.count()                                                 // only "book1" matches, so result will be 1
println("!!!!!!!!!!!!!!================result=" + result)

Run that simple example; Zeppelin will start something like a spark-shell context on the YARN cluster. That application keeps running, and after that we can visit the Spark application UI from this URL:
http://ubuntu-master:4040/

We can see all the Spark jobs and executors there.

If you plan to try a more complex example like the one below, you need to open the Interpreter settings and increase the memory.
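
For example (a hedged sketch: these are standard Spark properties, not Zeppelin-specific ones, and the values are only illustrative, so adjust them to your cluster), you could set something like this in the spark interpreter properties:
spark.driver.memory     2g
spark.executor.memory   2g

Here is the more complex example: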
import org.apache.spark.SparkContext
import org.apache.spark.mllib.classification.{SVMModel, SVMWithSGD}
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD

val data:RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "file:///opt/spark/data/mllib/sample_libsvm_data.txt")

// Split data into training (60%) and test (40%).
val splits:Array[RDD[LabeledPoint]] = data.randomSplit(Array(0.6, 0.4), seed = 11L)
val training:RDD[LabeledPoint] = splits(0).cache()
val test:RDD[LabeledPoint] = splits(1)

// Run training algorithm to build the model
val numIterations = 100
val model = SVMWithSGD.train(training, numIterations)

// Clear the default threshold.
model.clearThreshold()

// Compute raw scores on the test set.
val scoreAndLabels: RDD[(Double, Double)] = test.map { point =>
  val score = model.predict(point.features)
  (score, point.label)
}

scoreAndLabels.take(10).foreach { case (score, label) =>
  println("Score = " + score + " Label = " + label)
}

// Get evaluation metrics.
val metrics = new BinaryClassificationMetrics(scoreAndLabels)
val auROC = metrics.areaUnderROC()

println("Area under ROC = " + auROC)


Reference:
http://sillycat.iteye.com/blog/2216604

