Spark Streaming: The Cleanup Work After Each Batch Completes

  Goal of this post: understand how cleanup is performed after the jobs of each batch finish running.
  Start from where jobs are submitted, namely JobGenerator.generateJobs(), and follow that thread to the entry point of the cleanup work. Once a batch's jobs are generated successfully they are submitted for execution, and the call chain is: JobGenerator.generateJobs() --> jobScheduler.submitJobSet() --> JobHandler.run() --> the eventLoop posts a JobCompleted event --> jobScheduler.handleJobCompletion(). That last method holds the logic that runs when a job finishes, and it is worth a quick walk-through; see the comments below:

private def handleJobCompletion(job: Job, completedTime: Long) {
    val jobSet = jobSets.get(job.time) // the JobSet holding all jobs generated for this batch
    jobSet.handleJobCompletion(job) // the job is done: remove it from the JobSet's incompleteJobs
    job.setEndTime(completedTime)
    listenerBus.post(StreamingListenerOutputOperationCompleted(job.toOutputOperationInfo)) // notify listeners / the UI
    logInfo("Finished job " + job.id + " from job set of time " + jobSet.time)
    if (jobSet.hasCompleted) { // have all jobs of this batch completed?
      jobSets.remove(jobSet.time) // every job of this batch is done, drop its JobSet
      jobGenerator.onBatchCompletion(jobSet.time) // entry point of the post-batch cleanup work!
      logInfo("Total delay: %.3f s for time %s (execution: %.3f s)".format(
        jobSet.totalDelay / 1000.0, jobSet.time.toString,
        jobSet.processingDelay / 1000.0
      ))
      listenerBus.post(StreamingListenerBatchCompleted(jobSet.toBatchInfo))
    }
    // if any job failed, report the error; otherwise do nothing
    job.result match {
      case Failure(e) =>
        reportError("Error running job " + job, e)
      case _ =>
    }
  }
  
  // a job has finished: remove it from incompleteJobs
  def handleJobCompletion(job: Job) {
    incompleteJobs -= job
    if (hasCompleted) processingEndTime = System.currentTimeMillis()
  }
  
  def hasCompleted: Boolean = incompleteJobs.isEmpty

  The snippet above shows three methods; handleJobCompletion (on JobSet) and hasCompleted are just helpers, and the real entry point is clearly jobGenerator.onBatchCompletion(jobSet.time). Tracing further gives the route: eventLoop.post(ClearMetadata(time)) --> jobGenerator.clearMetadata(). So clearMetadata is the core method of the cleanup work we are after; its code and comments are below:

private def clearMetadata(time: Time) {
    // clear the RDDs generated inside each DStream
    ssc.graph.clearMetadata(time)

    // If checkpointing is enabled, then checkpoint,
    // else mark batch to be fully processed
    // if checkpointing is enabled, post a DoCheckpoint event to perform the checkpoint
    if (shouldCheckpoint) {
      eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
    } else {
      // If checkpointing is not enabled, then delete metadata information about
      // received blocks (block data not saved in any case). Otherwise, wait for
      // checkpointing of this batch to complete.
      val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
      // clean up the block metadata recorded by the ReceiverTracker
      jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
      // clean up the per-batch input info recorded by the InputInfoTracker
      jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
      // record the latest fully processed batch time
      markBatchFullyProcessed(time)
    }
  }

  As covered in earlier posts, each batch's data flows from the input streams to the output streams, generating an RDD inside every DStream along the way. The ssc.graph.clearMetadata(time) step cleans up the RDDs produced during this processing. The rule is based on the rememberDuration configured for the application: any RDD that has lived longer than that is removed from its DStream's generatedRDDs map. The cleanup walks backwards from each output stream along the dependency chain, clearing the data of every DStream it depends on.
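  To make that walk concrete, here is a minimal, self-contained sketch of the idea rather than Spark's actual DStream code: SimpleDStream, rememberDurationMs and the println calls are illustrative stand-ins for DStream, rememberDuration and the real removal/unpersist logic.

import scala.collection.mutable

// Hypothetical, simplified model: each stream drops the entries of its
// generatedRDDs map that are older than (batch time - remember duration),
// then asks its dependencies to do the same.
class SimpleDStream(val rememberDurationMs: Long,
                    val dependencies: Seq[SimpleDStream]) {
  // batch time (ms) -> placeholder for the RDD generated for that batch
  val generatedRDDs = mutable.HashMap[Long, String]()

  def clearMetadata(batchTimeMs: Long): Unit = {
    val threshold = batchTimeMs - rememberDurationMs
    val oldKeys = generatedRDDs.keys.filter(_ <= threshold).toSeq
    oldKeys.foreach { t =>
      println(s"dropping RDD of batch $t (threshold $threshold)")
      generatedRDDs -= t // in Spark this is also where the old RDD may be unpersisted
    }
    // walk backwards along the dependency chain, output stream -> input stream
    dependencies.foreach(_.clearMetadata(batchTimeMs))
  }
}

object ClearMetadataDemo extends App {
  val input  = new SimpleDStream(rememberDurationMs = 2000, dependencies = Nil)
  val output = new SimpleDStream(rememberDurationMs = 2000, dependencies = Seq(input))
  Seq(1000L, 2000L, 3000L, 4000L).foreach { t =>
    input.generatedRDDs(t)  = s"rdd-in-$t"
    output.generatedRDDs(t) = s"rdd-out-$t"
  }
  output.clearMetadata(4000L) // drops the batches at 1000 and 2000 on both streams
}

  Running the demo removes the two oldest batches from both streams, which is exactly how a short rememberDuration bounds how many RDD references each DStream keeps alive.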
  As for jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration), its main job is to delete the block metadata recorded by the ReceivedBlockTracker inside the ReceiverTracker; in addition, if the WAL is enabled, the cleanup is also reflected in the WAL. There is one detail here worth noting; first look at the code of cleanupOldBlocksAndBatches:

def cleanupOldBlocksAndBatches(cleanupThreshTime: Time) {
    // Clean up old block and batch metadata
    receivedBlockTracker.cleanupOldBatches(cleanupThreshTime, waitForCompletion = false)

    // Signal the receivers to delete old block data
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      logInfo(s"Cleanup old received batch data: $cleanupThreshTime")
      synchronized {
        if (isTrackerStarted) {
          endpoint.send(CleanupOldBlocks(cleanupThreshTime))
        }
      }
    }
  }

  In this code, the if condition says: if writing each Block to a WAL on the Receiver side is enabled, then after the batch finishes the block data held by the Receivers should be deleted; otherwise the receiver-side blocks are kept. The reason is clearly fault tolerance. If block generation on the Receiver was logged to the WAL, a failed run can rebuild the blocks directly from the log, so there is no need to keep blocks that have already been processed; keeping them would save the time to restore them, but that trades space for time, and not deleting them would make writing the WAL pointless. Conversely, if the receiver-side WAL is not enabled, the old blocks must be kept, otherwise data would be missing during recovery. The official documentation addresses this configuration directly and is quoted after the short sketch below.
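  As a quick illustration, here is a self-contained toy model of the two-step cleanup in cleanupOldBlocksAndBatches above: the tracker's per-batch block metadata is always forgotten, while deletion of the receivers' block data is only signalled when the receiver WAL is enabled. ToyBlockTracker and its members are made-up names, not Spark's ReceivedBlockTracker.

import scala.collection.mutable

final class ToyBlockTracker(walEnabled: Boolean) {
  // batch time (ms) -> ids of the blocks allocated to that batch
  private val timeToBlocks = mutable.SortedMap[Long, Seq[String]]()

  def allocate(batchTimeMs: Long, blockIds: Seq[String]): Unit =
    timeToBlocks(batchTimeMs) = blockIds

  def cleanupOldBatches(threshTimeMs: Long): Unit = {
    // step 1: always drop the tracker-side metadata of batches older than the threshold
    val stale = timeToBlocks.keys.filter(_ < threshTimeMs).toSeq
    stale.foreach { t =>
      println(s"forgetting batch $t (blocks: ${timeToBlocks(t).mkString(", ")})")
      timeToBlocks -= t
    }
    // step 2: only when the receiver WAL is on can the block data itself be dropped,
    // because the log can replay those blocks after a failure
    if (walEnabled) println(s"signal receivers: delete block data older than $threshTimeMs")
  }
}

object CleanupDemo extends App {
  val tracker = new ToyBlockTracker(walEnabled = true)
  tracker.allocate(1000L, Seq("blk-1", "blk-2"))
  tracker.allocate(2000L, Seq("blk-3"))
  tracker.cleanupOldBatches(2000L) // forgets batch 1000; batch 2000 is kept
}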

Since Spark 1.2, we have introduced write-ahead logs for achieving strong fault-tolerance guarantees. If enabled, all the data received from a receiver gets written into a write-ahead log in the configuration checkpoint directory. This prevents data loss on driver recovery, thus ensuring zero data loss (discussed in detail in the Fault-tolerance Semantics section). This can be enabled by setting the configuration parameter spark.streaming.receiver.writeAheadLog.enable to true. However, these stronger semantics may come at the cost of the receiving throughput of individual receivers. This can be corrected by running more receivers in parallel to increase aggregate throughput. Additionally, it is recommended that the replication of the received data within Spark be disabled when the write-ahead log is enabled as the log is already stored in a replicated storage system. This can be done by setting the storage level for the input stream to StorageLevel.MEMORY_AND_DISK_SER.

  This passage says that with the WAL enabled, the throughput of a single Receiver drops. The likely reason: without the WAL, the Receiver only needs to assemble a Block and then replicate it asynchronously; with the WAL, the block data must first be written to the log successfully before the Block is available, so every block pays the extra WAL write latency. The remedy is to run more Receivers in parallel, as in the configuration sketch below.
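  For reference, a minimal configuration sketch of the setup the quoted passage recommends; the master, host, ports and checkpoint path are placeholders, not values from this article.

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WalConfigSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[4]")             // placeholder master for a local run
      .setAppName("receiver-wal-sketch")
      .set("spark.streaming.receiver.writeAheadLog.enable", "true") // enable the receiver WAL

    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("hdfs:///tmp/wal-sketch-checkpoint") // WAL files live under the checkpoint dir

    // With the WAL on, in-Spark replication is unnecessary: single-replica serialized storage
    val streams = (0 until 3).map { i => // several receivers to recover aggregate throughput
      ssc.socketTextStream("localhost", 9990 + i, StorageLevel.MEMORY_AND_DISK_SER)
    }
    ssc.union(streams).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}

  Running several socket receivers on different ports is only meant to illustrate "more receivers in parallel"; the actual sources depend on your deployment.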
  Back to the main thread: the above covered the cleanup path taken when checkpointing is not required, after the data in all DStreams of the DStreamGraph has been cleared. Next comes the checkpointing branch of JobGenerator.clearMetadata, which simply posts a DoCheckpoint event; the corresponding handler is:

private def doCheckpoint(time: Time, clearCheckpointDataLater: Boolean) {
    if (shouldCheckpoint && (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)) {
      // checkpointing is enabled and this batch time falls on the configured checkpoint interval
      logInfo("Checkpointing graph for time " + time)
      // update the checkpoint data of every DStream in the graph
      ssc.graph.updateCheckpointData(time)
      // persist the streaming application's own running state as a Checkpoint
      checkpointWriter.write(new Checkpoint(ssc, time), clearCheckpointDataLater)
    } else if (clearCheckpointDataLater) {
      markBatchFullyProcessed(time)
    }
  }
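
  As a small illustration of the isMultipleOf gate above, here is a toy snippet with made-up values for zeroTime, the batch interval and the checkpoint interval; the real values come from the StreamingContext.

object CheckpointGateSketch extends App {
  // Toy times in milliseconds
  val zeroTime = 0L
  val batchInterval = 5000L        // 5 s batches
  val checkpointInterval = 10000L  // checkpoint every 10 s

  (1 to 4).map(i => zeroTime + i * batchInterval).foreach { t =>
    // mirrors (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)
    val shouldWrite = (t - zeroTime) % checkpointInterval == 0
    println(s"batch at $t ms -> write checkpoint? $shouldWrite") // true only at 10000 and 20000
  }
}

  So in this toy setup only every other batch actually writes a checkpoint; the remaining batches fall into the else branch and, when clearCheckpointDataLater is set, just mark the batch as fully processed.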

  At this point the main cleanup work and the follow-up checkpoint backup are done. What exactly gets checkpointed deserves a deeper look, but that topic is large enough to warrant its own write-up, which we may do in a later post.
