Configuring JMX monitoring for Spark executors

Monitoring Spark over JMX is a bit more cumbersome than monitoring Storm:

Method 1:

First, add the following to spark-defaults.conf. Note that port 8711 cannot be reused, which means you cannot start two executors on the same node without a port conflict; this is less convenient than Storm.

spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=8711
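To avoid changing spark-defaults.conf globally, the same options can also be passed per job through spark-submit's --conf flag. Below is a minimal sketch that assembles the option string and prints the resulting command; the application jar name (my-app.jar) and the port are illustrative placeholders:

```shell
#!/usr/bin/env bash
# Sketch: build the executor JMX options per job instead of hard-coding them
# in spark-defaults.conf. Jar name and port are placeholders.
JMX_PORT=8711
EXECUTOR_OPTS="-XX:+PrintGCDetails -Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.port=${JMX_PORT}"

# Print the command rather than running it, so the options can be inspected first.
echo spark-submit --conf "spark.executor.extraJavaOptions=${EXECUTOR_OPTS}" my-app.jar
```

Remember that the single-port limitation still applies: with one fixed port, only one executor per node can be monitored.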

Then configure metrics.properties. Here only the executors are monitored; you can replace executor with * to cover all instances:

executor.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
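For reference, the wildcard variant mentioned above would look like the following, which attaches the JMX sink and JVM source to every Spark instance (master, worker, driver, and executors) instead of executors only:

```
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```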


Method 2 (this appears to have a problem: it can only monitor the SparkSubmit process, not the CoarseGrainedExecutorBackend processes):

Alternatively, configure the spark-class file directly. The downside is that you must manually change the port number every time you start a driver, and the master and workers must already be running; otherwise startup fails with a port error.

if [ -n "$SPARK_SUBMIT_BOOTSTRAP_DRIVER" ]; then
  # This is used only if the properties file actually contains these special configs
  # Export the environment variables needed by SparkSubmitDriverBootstrapper
  export RUNNER
  export CLASSPATH
  export JAVA_OPTS
  #export JAVA_OPTS="-XX:MaxPermSize=128m $OUR_JAVA_OPTS -Dcom.sun.management.jmxremote.port=8300 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
  export OUR_JAVA_MEM
  export SPARK_CLASS=1
  shift # Ignore main class (org.apache.spark.deploy.SparkSubmit) and use our own
  exec "$RUNNER" org.apache.spark.deploy.SparkSubmitDriverBootstrapper "$@"
else
  # Append the JMX options to the JVM flags; change the port (8300 here)
  # before each launch to avoid conflicts with an already-running JVM.
  export JAVA_OPTS="-XX:MaxPermSize=128m $OUR_JAVA_OPTS -Dcom.sun.management.jmxremote.port=8300 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"

  # Note: The format of this command is closely echoed in SparkSubmitDriverBootstrapper.scala
  if [ -n "$SPARK_PRINT_LAUNCH_COMMAND" ]; then
    echo -n "Spark Command: " 1>&2
    echo "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@" 1>&2
    echo -e "========================================\n" 1>&2
  fi
  exec "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@"
fi
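Once the JVM is up, it helps to confirm that the JMX port is actually listening before pointing jconsole or jvisualvm at it. A minimal sketch using bash's built-in /dev/tcp pseudo-device (host and port are placeholders):

```shell
#!/usr/bin/env bash
# Sketch: probe a JMX port before attaching a client. Host/port are placeholders.
check_jmx_port() {
  local host="$1" port="$2"
  # The /dev/tcp redirect fails when nothing is listening on host:port.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_jmx_port localhost 8711
```

If the probe reports "open", attach with, e.g., jconsole localhost:8711.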


