Spark Job History

Version: 2.3.0

Normally the web UI is available at http://<driver-node>:4040. If multiple SparkContexts are running on the same host, they bind to successive ports starting with 4040 (4041, 4042, etc.).
Note that this information is only available while the application is running. To view it after an application finishes, the history server must be configured.

Configuration Parameters

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://hmcluster/user/spark/eventLog
spark.history.fs.logDirectory    hdfs://hmcluster/user/spark/eventLog

Whether you keep the default directory (file:/tmp/spark-events) or, as here, point these settings at a directory on HDFS, the directory is not created automatically: it must exist before applications start logging events (see the commands below).

Note: spark.eventLog.dir and spark.history.fs.logDirectory should be configured to point at the same directory.
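
A minimal sketch of creating the HDFS directory used above (the path and the spark user/group come from the examples on this page, not from Spark defaults; adjust ownership and permissions to your cluster's policy):

hdfs dfs -mkdir -p /user/spark/eventLog           # create the event log directory
hdfs dfs -chown spark:spark /user/spark/eventLog  # let the Spark user write to it
hdfs dfs -chmod 1777 /user/spark/eventLog         # world-writable with sticky bit, so any submitting user can log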

Starting the History Server

./sbin/start-history-server.sh
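
To confirm the daemon actually started: it appears in jps as HistoryServer, and its log is written under $SPARK_HOME/logs by default (the exact file name includes the local user and host, hence the wildcards):

jps | grep HistoryServer
tail -f $SPARK_HOME/logs/spark-*-org.apache.spark.deploy.history.HistoryServer-*.out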

The server can then be reached in a browser at http://<server-url>:18080, which lists both completed and incomplete applications.
When the filesystem provider class is used (the spark.history.provider setting), the spark.history.fs.logDirectory option must also be set.
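
The same listing is available from the history server's REST API under /api/v1, which is convenient for scripting (replace <server-url> with your host):

curl http://<server-url>:18080/api/v1/applications                      # all applications
curl "http://<server-url>:18080/api/v1/applications?status=completed"   # completed only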

Environment Variables

SPARK_DAEMON_MEMORY: Memory to allocate to the history server (default: 1g).
SPARK_DAEMON_JAVA_OPTS: JVM options for the history server (default: none).
SPARK_DAEMON_CLASSPATH: Classpath for the history server (default: none).
SPARK_PUBLIC_DNS: The public address for the history server. If this is not set, links to application history may use the internal address of the server, resulting in broken links (default: none).
SPARK_HISTORY_OPTS: spark.history.* configuration options for the history server (default: none).
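
These are typically exported in conf/spark-env.sh before running start-history-server.sh. A sketch with illustrative values (the heap size and option values here are examples, not recommendations):

export SPARK_DAEMON_MEMORY=2g
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://hmcluster/user/spark/eventLog -Dspark.history.retainedApplications=30"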

Configuring spark-defaults.conf

spark.history.provider (default: org.apache.spark.deploy.history.FsHistoryProvider): Name of the class implementing the application history backend. Currently there is only one implementation, provided by Spark, which looks for application logs stored in the file system.

spark.history.fs.logDirectory (default: file:/tmp/spark-events): For the filesystem history provider, the URL to the directory containing application event logs to load. This can be a local file:// path, an HDFS path hdfs://namenode/shared/spark-logs, or that of an alternative filesystem supported by the Hadoop APIs.

spark.history.fs.update.interval (default: 10s): The period at which the filesystem history provider checks for new or updated logs in the log directory. A shorter interval detects new applications faster, at the expense of more server load re-reading updated applications. As soon as an update has completed, listings of the completed and incomplete applications will reflect the changes.

spark.history.retainedApplications (default: 50): The number of applications to retain UI data for in the cache. If this cap is exceeded, then the oldest applications will be removed from the cache. If an application is not in the cache, it will have to be loaded from disk if it is accessed from the UI.

spark.history.ui.maxApplications (default: Int.MaxValue): The number of applications to display on the history summary page. Application UIs are still available by accessing their URLs directly even if they are not displayed on the history summary page.

spark.history.ui.port (default: 18080): The port to which the web interface of the history server binds.

spark.history.kerberos.enabled (default: false): Indicates whether the history server should use kerberos to log in. This is required if the history server is accessing HDFS files on a secure Hadoop cluster. If this is true, it uses the configs spark.history.kerberos.principal and spark.history.kerberos.keytab.

spark.history.kerberos.principal (default: none): Kerberos principal name for the History Server.

spark.history.kerberos.keytab (default: none): Location of the kerberos keytab file for the History Server.

spark.history.ui.acls.enable (default: false): Specifies whether ACLs should be checked to authorize users viewing the applications. If enabled, access control checks are made regardless of what the individual application had set for spark.ui.acls.enable when the application was run. The application owner will always have authorization to view their own application, and any users specified via spark.ui.view.acls and groups specified via spark.ui.view.acls.groups when the application was run will also have authorization to view that application. If disabled, no access control checks are made.

spark.history.ui.admin.acls (default: empty): Comma-separated list of users/administrators that have view access to all the Spark applications in the history server. By default only the users permitted to view the application at run time can access the related application history; with this, the configured users/administrators also have permission to access it. Putting a "*" in the list means any user can have the privilege of admin.

spark.history.ui.admin.acls.groups (default: empty): Comma-separated list of groups that have view access to all the Spark applications in the history server. By default only the groups permitted to view the application at run time can access the related application history; with this, the configured groups also have permission to access it. Putting a "*" in the list means any group can have the privilege of admin.

spark.history.fs.cleaner.enabled (default: false): Specifies whether the history server should periodically clean up event logs from storage.

spark.history.fs.cleaner.interval (default: 1d): How often the filesystem job history cleaner checks for files to delete. Files are only deleted if they are older than spark.history.fs.cleaner.maxAge.

spark.history.fs.cleaner.maxAge (default: 7d): Job history files older than this will be deleted when the filesystem history cleaner runs.

spark.history.fs.numReplayThreads (default: 25% of available cores): Number of threads that will be used by the history server to process event logs.

spark.history.store.path (default: none): Local directory where to cache application history data. If set, the history server will store application data on disk instead of keeping it in memory. The data written to disk will be re-used in the event of a history server restart.
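
Pulling the pieces together, a sample spark-defaults.conf fragment might look like this (the HDFS path matches the earlier examples; the update and cleaner values are illustrative, not recommendations):

spark.eventLog.enabled              true
spark.eventLog.dir                  hdfs://hmcluster/user/spark/eventLog
spark.history.fs.logDirectory       hdfs://hmcluster/user/spark/eventLog
spark.history.fs.update.interval    30s
spark.history.fs.cleaner.enabled    true
spark.history.fs.cleaner.interval   1d
spark.history.fs.cleaner.maxAge     7d
spark.history.retainedApplications  50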

Notes:

  1. The history server displays both completed and incomplete Spark jobs. If an application made multiple attempts after failures, the failed attempts are displayed as well.
  2. Incomplete applications are only updated intermittently, at the interval set by spark.history.fs.update.interval; on large clusters this should be set to a larger value. The best way to watch a running application is through its own web UI.
  3. Applications that exited without updating their status are shown as incomplete even though they are no longer running. This can happen when an application crashes.
  4. The way to signal that a Spark application has completed is to call sc.stop() explicitly, as in the sketch below.
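
A minimal Scala sketch of point 4 (the object and app names are hypothetical; it assumes spark-core on the classpath):

import org.apache.spark.{SparkConf, SparkContext}

object HistoryDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("history-demo"))
    try {
      // trivial work so the application emits some events
      println(sc.parallelize(1 to 100).sum())
    } finally {
      sc.stop() // explicitly marks the event log, and thus the history entry, as completed
    }
  }
}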
