Spark 2.4.0 Task Scheduling (TaskScheduler) Source Code Analysis

As careful readers may have noticed, TaskScheduler already appeared in our DAGScheduler analysis. TaskScheduler is responsible for submitting TaskSets to the cluster and reporting the computation results back to the DAGScheduler.
The task scheduler is defined by the trait org.apache.spark.scheduler.TaskScheduler, which has only one implementation, org.apache.spark.scheduler.TaskSchedulerImpl.

TaskScheduler

This interface allows different task schedulers to be plugged in. The DAGScheduler builds a TaskSet for each stage and hands it to the TaskScheduler, which is responsible for sending the tasks to the cluster, running them, retrying them on failure, mitigating stragglers, and returning events to the DAGScheduler.
TaskScheduler mainly defines members such as the application ID (appId), the root scheduling pool (rootPool), and the scheduling mode (schedulingMode), as well as methods for submitting, cancelling, and killing tasks.

private[spark] trait TaskScheduler {

  // The application ID for this scheduler
  private val appId = "spark-application-" + System.currentTimeMillis
  // The root scheduling pool
  def rootPool: Pool
  // The scheduling mode: first-in-first-out (FIFO) or fair scheduling (FAIR); see the SchedulingMode enumeration
  def schedulingMode: SchedulingMode

  def start(): Unit

  // Called after the system has successfully initialized (typically in the SparkContext). YARN uses this to bootstrap resource allocation based on preferred locations, wait for slave registrations, etc.
  def postStartHook() { }

  // Disconnect from the cluster
  def stop(): Unit

  // Submit a set of tasks to run
  def submitTasks(taskSet: TaskSet): Unit

  // Kill all the tasks in a stage and fail the stage and all jobs that depend on it. Throws UnsupportedOperationException if the backend does not support killing tasks.
  def cancelTasks(stageId: Int, interruptThread: Boolean): Unit

  // Kill a task attempt. Throws UnsupportedOperationException if the backend does not support killing tasks.
  def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String): Boolean

  // Kill all running task attempts in a stage. Throws UnsupportedOperationException if the backend does not support killing tasks.
  def killAllTaskAttempts(stageId: Int, interruptThread: Boolean, reason: String): Unit

  // Set the DAG scheduler for upcalls. This is guaranteed to be set before submitTasks is called.
  def setDAGScheduler(dagScheduler: DAGScheduler): Unit

  // Get the default level of parallelism to use in the cluster, as a hint for sizing jobs
  def defaultParallelism(): Int

  // Executor heartbeat receiver: updates metrics for in-progress tasks
  def executorHeartbeatReceived(
      execId: String,
      accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
      blockManagerId: BlockManagerId,
      executorUpdates: ExecutorMetrics): Boolean

  // Get the application ID associated with the job
  def applicationId(): String = appId

  // Process a lost executor
  def executorLost(executorId: String, reason: ExecutorLossReason): Unit

  // Process a removed worker
  def workerRemoved(workerId: String, host: String, message: String): Unit

  // Get the attempt ID of the application associated with the job
  def applicationAttemptId(): Option[String]
}
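
Where do rootPool and schedulingMode come from in practice? As a minimal, illustrative sketch (not taken from the Spark source), the snippet below switches the scheduler from the default FIFO mode to FAIR through the spark.scheduler.mode property, which TaskSchedulerImpl reads when it is constructed:

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative sketch: schedulingMode is driven by the spark.scheduler.mode
// property (FIFO by default, FAIR as the alternative).
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("scheduling-mode-demo")
  .set("spark.scheduler.mode", "FAIR") // rootPool is then built as a fair-scheduling pool
val sc = new SparkContext(conf)
// Jobs submitted concurrently (e.g. from separate threads) now share the
// available cores fairly instead of running strictly first-in-first-out.
sc.stop()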

TaskSchedulerImpl

Now let us turn to the implementation of TaskScheduler and trace the task scheduler's call chain from its origin in SparkContext.

When is the TaskSchedulerImpl object constructed?

The entry point of the call chain is SparkContext's createTaskScheduler method. In createTaskScheduler, a TaskSchedulerImpl object is constructed according to the user-specified run mode (the spark.master parameter), and its initialize method is called immediately afterwards.
Construction entry point:

 // Create and start the scheduler
 val (sched, ts) = SparkContext.createTaskScheduler(this, master, deployMode)

Construction source:

  private def createTaskScheduler(
      sc: SparkContext,
      master: String,
      deployMode: String): (SchedulerBackend, TaskScheduler) = {
    import SparkMasterRegex._

    // When running in local mode, failed tasks are not retried
    val MAX_LOCAL_TASK_FAILURES = 1

    // Build the task scheduler according to the run mode
    master match {
      // local mode
      case "local" =>
        val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
        val backend = new LocalSchedulerBackend(sc.getConf, scheduler, 1)
        // initialize the scheduler with the backend
        scheduler.initialize(backend)
        (backend, scheduler)
      // local[N] / local[*] mode
      case LOCAL_N_REGEX(threads) =>
        def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
        ...
        val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
        val backend = new LocalSchedulerBackend(sc.getConf, scheduler, threadCount)
        scheduler.initialize(backend)
        (backend, scheduler)
      // local[N, maxFailures] mode (e.g. local[1,1])
      case LOCAL_N_FAILURES_REGEX(threads, maxFailures) =>
        def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
        ...
        val scheduler = new TaskSchedulerImpl(sc, maxFailures.toInt, isLocal = true)
        val backend = new LocalSchedulerBackend(sc.getConf, scheduler, threadCount)
        scheduler.initialize(backend)
        (backend, scheduler)
      // spark://... standalone mode
      case SPARK_REGEX(sparkUrl) =>
        val scheduler = new TaskSchedulerImpl(sc)
        val masterUrls = sparkUrl.split(",").map("spark://" + _)
        val backend = new StandaloneSchedulerBackend(scheduler, sc, masterUrls)
        scheduler.initialize(backend)
        (backend, scheduler)
      // local-cluster mode
      case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave, memoryPerSlave) =>
        ...
        val scheduler = new TaskSchedulerImpl(sc)
        val localCluster = new LocalSparkCluster(
          numSlaves.toInt, coresPerSlave.toInt, memoryPerSlaveInt, sc.conf)
        val masterUrls = localCluster.start()
        val backend = new StandaloneSchedulerBackend(scheduler, sc, masterUrls)
        scheduler.initialize(backend)
        backend.shutdownCallback = (backend: StandaloneSchedulerBackend) => {
          localCluster.stop()
        }
        (backend, scheduler)
      // any other master URL: handled by an external cluster manager (e.g. YARN, Mesos, Kubernetes)
      case masterUrl =>
        ...
        val scheduler = cm.createTaskScheduler(sc, masterUrl)
        val backend = cm.createSchedulerBackend(sc, masterUrl, scheduler)
        cm.initialize(scheduler, backend)
        (backend, scheduler)
    }
  }
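
The match above relies on the regexes in SparkMasterRegex. Purely as an illustration (the patterns below are approximations written for this article, not the Spark source itself), here is how a few representative spark.master strings fall into the branches we just saw:

// Illustrative sketch: approximate SparkMasterRegex-style patterns, used only
// to show which branch of createTaskScheduler a given master string would take.
val LOCAL_N_REGEX          = """local\[([0-9]+|\*)\]""".r
val LOCAL_N_FAILURES_REGEX = """local\[([0-9]+|\*)\s*,\s*([0-9]+)\]""".r
val SPARK_REGEX            = """spark://(.*)""".r

def classify(master: String): String = master match {
  case "local"                      => "single thread, failed tasks are not retried"
  case LOCAL_N_REGEX(threads)       => s"local mode with $threads threads"
  case LOCAL_N_FAILURES_REGEX(n, f) => s"local mode, $n threads, up to $f task failures"
  case SPARK_REGEX(url)             => s"standalone cluster at spark://$url"
  case other                        => s"delegated to an external cluster manager: $other"
}

classify("local[4]")        // local mode with 4 threads
classify("spark://m:7077")  // standalone cluster at spark://m:7077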

When is TaskSchedulerImpl started?

Once SparkContext has handed the TaskSchedulerImpl instance over to the DAGScheduler, it can be started:

    _taskScheduler.start()

What does the start method do?

override def start() {
    backend.start()

    if (!isLocal && conf.get(SPECULATION_ENABLED)) {
      logInfo("Starting speculative execution thread")
      speculationScheduler.scheduleWithFixedDelay(new Runnable {
        override def run(): Unit = Utils.tryOrStopSparkContext(sc) {
          checkSpeculatableTasks()
        }
      }, SPECULATION_INTERVAL_MS, SPECULATION_INTERVAL_MS, TimeUnit.MILLISECONDS)
    }
  }
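
The speculation branch only runs for cluster deployments with speculation enabled. As an illustrative configuration sketch (the property names are standard Spark settings, the values here are just examples), these are the knobs that drive it:

import org.apache.spark.SparkConf

// Illustrative sketch: the properties that gate and tune the speculative
// execution thread started in TaskSchedulerImpl.start().
val conf = new SparkConf()
  .set("spark.speculation", "true")            // enable speculative execution (disabled by default)
  .set("spark.speculation.interval", "100ms")  // how often checkSpeculatableTasks() is scheduled
  .set("spark.speculation.multiplier", "1.5")  // a task this many times slower than the median is a candidate
  .set("spark.speculation.quantile", "0.75")   // fraction of tasks that must finish before speculation starts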

As we can see, taskScheduler.start() calls backend.start(). What happens inside backend.start()? We will get to that later when we analyze SchedulerBackend.

When does TaskSchedulerImpl submit tasks?

In the DAGScheduler's submitMissingTasks method, TaskSchedulerImpl's submitTasks method is called:

  /** Called when stage's parents are available and we can now do its task. */
  private def submitMissingTasks(stage: Stage, jobId: Int) {
  // a large amount of source code is omitted here
   ...
    // build the tasks for this stage's TaskSet
    val tasks: Seq[Task[_]] = try { ... }

    if (tasks.size > 0) {
      logInfo(s"Submitting ${tasks.size} missing tasks from $stage (${stage.rdd}) (first 15 " +
        s"tasks are for partitions ${tasks.take(15).map(_.partitionId)})")
      // the task scheduler submits the tasks
      taskScheduler.submitTasks(new TaskSet(
        tasks.toArray, stage.id, stage.latestInfo.attemptNumber, jobId, properties))
    } else {
      ...
      submitWaitingChildStages(stage)
    }
  }
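
To tie the chain together, here is a tiny, illustrative driver (assuming a local Spark 2.4.x installation): any action triggers DAGScheduler.submitMissingTasks, which packages one task per missing partition into a TaskSet and hands it to TaskSchedulerImpl.submitTasks.

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative driver: a single action on a 4-partition RDD yields one ResultStage,
// which submitMissingTasks turns into a TaskSet containing 4 tasks.
val sc = new SparkContext(
  new SparkConf().setMaster("local[2]").setAppName("taskset-demo"))

val sum = sc.parallelize(1 to 100, numSlices = 4).map(_ * 2).reduce(_ + _)
// The driver log then shows a line like
// "Submitting 4 missing tasks from ResultStage 0 ...", which is the logInfo call seen above.
sc.stop()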

At this point, by following TaskSchedulerImpl's call chain, we have seen how the task scheduler is constructed, initialized, and started, and how tasks are submitted. Note that both initialization and startup depend on SchedulerBackend. What exactly is SchedulerBackend? That is the subject of the next article.
