Streaming job troubleshooting: Container killed on request


User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 34.0 failed 3 times, most recent failure: Lost task 0.2 in stage 34.0 (TID 12636, s25.hdp.cn): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Container marked as failed: container_1516175032931_3532_02_000012 on host: s25.hdp.cn. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
Driver stacktrace:
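For context, exit status 143 usually means the container's JVM was terminated with SIGTERM (128 + signal number 15), which matches the "Killed by external signal" line above. A quick way to confirm which signal number 15 is, assuming a standard Linux shell:

# 143 - 128 = 15; look up signal 15 by number
kill -l 15
# prints: TERM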

I went to check the YARN logs, but since the job had been running for such a long time, going through them with yarn logs -applicationId was very cumbersome. I then wanted to look at the earlier logs in the UI, but unfortunately we had not started the history server, so those were gone. All the error told me was that some container had been killed, so I first went to the corresponding node and checked the NodeManager log there, where I found only a single relevant entry.

I then wanted to look at the container logs on that node, but they were already gone; I later learned that a container's local logs are removed once the task finishes.
So I found the yarn logs invocation that accepts a containerId:

yarn logs -applicationId application_1516175032931_3532 -containerId container_1516175032931_3532_02_000012 -nodeAddress s25.hdp.cn | less

This brought up the log:


(Screenshot: container log showing the executor OutOfMemoryError)
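If the aggregated container log is long, one way to locate the error quickly (a sketch assuming standard grep, reusing the same application and container IDs as above) is to filter the yarn logs output directly:

# show a few lines of context around any OOM message in the container log
yarn logs -applicationId application_1516175032931_3532 \
  -containerId container_1516175032931_3532_02_000012 \
  -nodeAddress s25.hdp.cn | grep -i -B 2 -A 10 "OutOfMemoryError"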

As the log shows, the failure was caused by an OOM in the executor, so spark.executor.memory needs to be increased.
Of course, at this point you should also check whether the program itself has a problem. Typically this happens when the data volume spikes suddenly, or network jitter makes tasks run much longer, so most objects cannot be released and an OOM eventually occurs. The suggestion is to increase memory first; if the problem still appears after increasing memory, then focus on reviewing the code. You can also add JVM options (javaOpts) through conf, for example to print GC details:

./bin/spark-submit --name "My app" --master local[4] --conf spark.eventLog.enabled=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
