Table of Contents
Preface
The HeartbeatReceiver Class
Declaration and Construction
Meaning of Selected Member Fields
Methods Provided by HeartbeatReceiver
Startup
Listening for Executor Addition and Removal
Message Handling and Replies
Handling Executor Heartbeats
Cleaning Up Timed-Out Executors
Summary
Following the order of SparkContext initialization, the next component to look at should be the heartbeat receiver, HeartbeatReceiver. The author is still recovering from influenza B and not in the best shape, so please point out any omissions or mistakes in this article.
We already know that executors must periodically send heartbeat signals to the driver to show that they are alive. HeartbeatReceiver is therefore held by the driver; it handles the heartbeat messages from each executor and monitors their status. This article takes a brief look at the implementation details of HeartbeatReceiver.
Code #15.1 - Class definition and member fields of o.a.s.HeartbeatReceiver
private[spark] class HeartbeatReceiver(sc: SparkContext, clock: Clock)
  extends SparkListener with ThreadSafeRpcEndpoint with Logging {
  def this(sc: SparkContext) {
    this(sc, new SystemClock)
  }
  sc.listenerBus.addToManagementQueue(this)
  override val rpcEnv: RpcEnv = sc.env.rpcEnv
  private[spark] var scheduler: TaskScheduler = null
  private val executorLastSeen = new mutable.HashMap[String, Long]
  private val slaveTimeoutMs =
    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs", "120s")
  private val executorTimeoutMs =
    sc.conf.getTimeAsSeconds("spark.network.timeout", s"${slaveTimeoutMs}ms") * 1000
  private val timeoutIntervalMs =
    sc.conf.getTimeAsMs("spark.storage.blockManagerTimeoutIntervalMs", "60s")
  private val checkTimeoutIntervalMs =
    sc.conf.getTimeAsSeconds("spark.network.timeoutInterval", s"${timeoutIntervalMs}ms") * 1000
  private var timeoutCheckingTask: ScheduledFuture[_] = null
  private val eventLoopThread =
    ThreadUtils.newDaemonSingleThreadScheduledExecutor("heartbeat-receiver-event-loop-thread")
  private val killExecutorThread = ThreadUtils.newDaemonSingleThreadExecutor("kill-executor-thread")
As we can see, the HeartbeatReceiver class extends the SparkListener abstract class and mixes in the ThreadSafeRpcEndpoint trait, which means it is both a listener and a (thread-safe) RPC endpoint. Since we have already examined Spark's listener mechanism and RPC environment in depth, none of this should pose any difficulty.
HeartbeatReceiver has two constructor parameters: a SparkContext, and SystemClock, an implementation of the o.a.s.util.Clock trait. SystemClock is a thin wrapper around the system time System.currentTimeMillis().
When a HeartbeatReceiver is constructed, it also adds itself to the executor-management (executorManagement) queue of the LiveListenerBus so that it receives listener events.
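As a quick illustration, the Clock abstraction can be pictured roughly like this (a minimal sketch; the real o.a.s.util.Clock trait has a few more methods):

// Minimal sketch of the Clock abstraction (simplified; not the exact Spark source).
trait Clock {
  def getTimeMillis(): Long
}

// SystemClock simply delegates to the JVM system clock, which is what HeartbeatReceiver uses by default.
class SystemClock extends Clock {
  override def getTimeMillis(): Long = System.currentTimeMillis()
}

Injecting the clock through the constructor keeps the timeout logic easy to test, since a test can substitute a manually advanced clock instead of relying on real wall-clock time.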
executorLastSeen: maps each executor ID to the timestamp of the most recent heartbeat received from that executor.
slaveTimeoutMs: corresponds to the configuration item spark.storage.blockManagerSlaveTimeoutMs, the timeout for the BlockManager on an executor; defaults to 120s.
executorTimeoutMs: corresponds to spark.network.timeout, the timeout for the executor itself; defaults to the value of spark.storage.blockManagerSlaveTimeoutMs (the fallback chain is sketched right after this list).
timeoutIntervalMs: corresponds to spark.storage.blockManagerTimeoutIntervalMs, the interval at which the BlockManager on an executor is checked for timeout; defaults to 60s.
checkTimeoutIntervalMs: corresponds to spark.network.timeoutInterval, the interval at which executors are checked for timeout; defaults to the value of spark.storage.blockManagerTimeoutIntervalMs.
timeoutCheckingTask: holds the ScheduledFuture returned when the executor-timeout check task is scheduled.
eventLoopThread: a scheduled thread pool with a single daemon thread named heartbeat-receiver-event-loop-thread; it is the event-processing thread of the whole HeartbeatReceiver.
killExecutorThread: an ordinary thread pool with a single daemon thread named kill-executor-thread, used to asynchronously execute the tasks that kill executors.
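To make the default-value fallback between these configuration items concrete, here is a standalone sketch that mimics how executorTimeoutMs falls back to slaveTimeoutMs when spark.network.timeout is not set. It uses a plain Map as a toy stand-in for SparkConf, so treat it as an illustration of the logic rather than Spark code:

// Toy sketch of the timeout fallback chain (hypothetical code, not Spark source).
object TimeoutFallbackSketch {
  // Stand-in for SparkConf: only explicitly set values appear here, in milliseconds.
  val userConf: Map[String, Long] = Map.empty // e.g. Map("spark.network.timeout" -> 300000L)

  def main(args: Array[String]): Unit = {
    // spark.storage.blockManagerSlaveTimeoutMs defaults to 120s.
    val slaveTimeoutMs =
      userConf.getOrElse("spark.storage.blockManagerSlaveTimeoutMs", 120000L)
    // spark.network.timeout falls back to slaveTimeoutMs when not set explicitly.
    val executorTimeoutMs =
      userConf.getOrElse("spark.network.timeout", slaveTimeoutMs)
    println(s"executor timeout = $executorTimeoutMs ms") // 120000 ms unless overridden
  }
}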
Next, let's walk through the methods provided by the HeartbeatReceiver class to see how it actually works.
As an RPC endpoint, HeartbeatReceiver implements the RpcEndpoint.onStart() method, which is invoked when the Dispatcher in the RPC environment registers the endpoint. The code is as follows.
Code #15.2 - The o.a.s.HeartbeatReceiver.onStart() method
override def onStart(): Unit = {
  timeoutCheckingTask = eventLoopThread.scheduleAtFixedRate(new Runnable {
    override def run(): Unit = Utils.tryLogNonFatalError {
      Option(self).foreach(_.ask[Boolean](ExpireDeadHosts))
    }
  }, 0, checkTimeoutIntervalMs, TimeUnit.MILLISECONDS)
}
As we can see, when HeartbeatReceiver starts, it has eventLoopThread schedule a task at the fixed interval given by spark.network.timeoutInterval and assigns the returned ScheduledFuture to timeoutCheckingTask. The task does only one thing: it sends an ExpireDeadHosts message to the HeartbeatReceiver itself and waits for the reply. We will see later how this message is handled.
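The periodic self-check pattern used in onStart() can be reproduced in isolation with the JDK's scheduled executor (a generic sketch that does not use Spark's RPC classes; the 60-second interval is hard-coded here purely for illustration):

import java.util.concurrent.{Executors, ScheduledFuture, TimeUnit}

object PeriodicCheckSketch {
  def main(args: Array[String]): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    // Schedule a recurring "expire dead hosts" style check every 60 seconds, starting immediately,
    // and keep the ScheduledFuture so the check can be cancelled on shutdown.
    val task: ScheduledFuture[_] = scheduler.scheduleAtFixedRate(new Runnable {
      override def run(): Unit = println("checking for timed-out executors...")
    }, 0, 60, TimeUnit.SECONDS)

    Thread.sleep(5000)  // let the check run at least once
    task.cancel(true)   // cancel the recurring check when shutting down
    scheduler.shutdownNow()
  }
}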
As a listener, HeartbeatReceiver implements the SparkListener.onExecutorAdded() and onExecutorRemoved() methods to be notified when executors are added or removed. The code is as follows.
Code #15.3 - The o.a.s.HeartbeatReceiver.onExecutorAdded()/onExecutorRemoved() methods
override def onExecutorAdded(executorAdded: SparkListenerExecutorAdded): Unit = {
  addExecutor(executorAdded.executorId)
}

override def onExecutorRemoved(executorRemoved: SparkListenerExecutorRemoved): Unit = {
  removeExecutor(executorRemoved.executorId)
}
The addExecutor() and removeExecutor() methods they call are shown below.
Code #15.4 - The o.a.s.HeartbeatReceiver.addExecutor()/removeExecutor() methods
def addExecutor(executorId: String): Option[Future[Boolean]] = {
  Option(self).map(_.ask[Boolean](ExecutorRegistered(executorId)))
}

def removeExecutor(executorId: String): Option[Future[Boolean]] = {
  Option(self).map(_.ask[Boolean](ExecutorRemoved(executorId)))
}
As we can see, when HeartbeatReceiver observes that an executor has been added or removed, it sends itself an ExecutorRegistered or ExecutorRemoved message carrying the executor ID and waits for the reply.
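The ask-and-reply round trip behind addExecutor()/removeExecutor() can be pictured with plain Scala Futures (a generic sketch; ask() here is a hypothetical stand-in for the RPC ask call, not Spark's RpcEndpointRef API):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object AskReplySketch {
  // Stand-in for self.ask[Boolean](message): returns a Future completed by the endpoint's reply.
  def ask(message: Any): Future[Boolean] = Future { println(s"handling $message"); true }

  def main(args: Array[String]): Unit = {
    val reply = ask("ExecutorRegistered(exec-1)")
    println(s"reply = ${Await.result(reply, 5.seconds)}")
  }
}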
This logic is, naturally, implemented in the RpcEndpoint.receiveAndReply() method.
Code #15.5 - The o.a.s.HeartbeatReceiver.receiveAndReply() method
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  case ExecutorRegistered(executorId) =>
    executorLastSeen(executorId) = clock.getTimeMillis()
    context.reply(true)
  case ExecutorRemoved(executorId) =>
    executorLastSeen.remove(executorId)
    context.reply(true)
  case TaskSchedulerIsSet =>
    scheduler = sc.taskScheduler
    context.reply(true)
  case ExpireDeadHosts =>
    expireDeadHosts()
    context.reply(true)
  case heartbeat @ Heartbeat(executorId, accumUpdates, blockManagerId) =>
    // covered in the next section
}
Let's look in detail at how each kind of message is handled (simplified definitions of these messages are sketched right after this list).
ExecutorRegistered: put the executor ID into the executorLastSeen map together with the current timestamp obtained from SystemClock, then reply true.
ExecutorRemoved: remove the entry for this executor ID from the executorLastSeen map, then reply true.
TaskSchedulerIsSet: this message means that the TaskScheduler has been created and is ready; it is sent during SparkContext initialization (see Code #2.12). On receiving it, HeartbeatReceiver keeps its own reference to the TaskScheduler instance and replies true.
ExpireDeadHosts: as the name suggests, this message asks the receiver to clean up executors that have timed out because they have not sent a heartbeat for too long; it triggers a call to expireDeadHosts() and a reply of true. The expireDeadHosts() method is covered at the end of this article.
Heartbeat: this is the heartbeat signal sent from an executor to the driver; the next section looks at how it is handled.
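For reference, the messages above are ordinary case classes/objects defined alongside HeartbeatReceiver. Simplified (visibility modifiers dropped, so treat this as a sketch of the Spark 2.x definitions rather than the exact source), they look roughly like this:

import org.apache.spark.storage.BlockManagerId
import org.apache.spark.util.AccumulatorV2

// Simplified sketch of the heartbeat-related messages.
case class ExecutorRegistered(executorId: String)
case class ExecutorRemoved(executorId: String)
case object TaskSchedulerIsSet
case object ExpireDeadHosts

// Sent from an executor to the driver; accumUpdates carries per-task accumulator updates.
case class Heartbeat(
    executorId: String,
    accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
    blockManagerId: BlockManagerId)

// The driver's reply; reregisterBlockManager tells the executor to re-register its BlockManager.
case class HeartbeatResponse(reregisterBlockManager: Boolean)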
Let's continue with the receiveAndReply() method shown above.
Code #15.6 - The o.a.s.HeartbeatReceiver.receiveAndReply() method (continued)
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  // ...
  case heartbeat @ Heartbeat(executorId, accumUpdates, blockManagerId) =>
    if (scheduler != null) {
      if (executorLastSeen.contains(executorId)) {
        executorLastSeen(executorId) = clock.getTimeMillis()
        eventLoopThread.submit(new Runnable {
          override def run(): Unit = Utils.tryLogNonFatalError {
            val unknownExecutor = !scheduler.executorHeartbeatReceived(
              executorId, accumUpdates, blockManagerId)
            val response = HeartbeatResponse(reregisterBlockManager = unknownExecutor)
            context.reply(response)
          }
        })
      } else {
        logDebug(s"Received heartbeat from unknown executor $executorId")
        context.reply(HeartbeatResponse(reregisterBlockManager = true))
      }
    } else {
      logWarning(s"Dropping $heartbeat because TaskScheduler is not ready yet")
      context.reply(HeartbeatResponse(reregisterBlockManager = true))
    }
}
As we can see, when the TaskScheduler is available and the executorLastSeen map already contains the executor ID, the receiver updates the timestamp and submits to eventLoopThread a task that calls TaskScheduler.executorHeartbeatReceived() (this method notifies the BlockManagerMaster that the executor's BlockManager is still alive) and then replies with a HeartbeatResponse message. Note that executorHeartbeatReceived() returns a Boolean indicating whether the driver knows about the BlockManager held by this executor; if it does not, the HeartbeatResponse must indicate that the BlockManager needs to be re-registered.
If the executorLastSeen map does not contain the executor ID, or the TaskScheduler is still null, the receiver directly replies with a HeartbeatResponse that asks the executor to re-register its BlockManager.
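For completeness, on the executor side the heartbeat loop reacts to the reregisterBlockManager flag in the reply. The following is a condensed, self-contained sketch of that idea; askDriver() and reregisterBlockManager() are hypothetical stand-ins, and the real logic lives in Executor.reportHeartBeat() with retries and error handling omitted here:

// Condensed sketch of the executor-side handling of HeartbeatResponse (hypothetical helpers).
case class HeartbeatResponse(reregisterBlockManager: Boolean)

object ExecutorHeartbeatSketch {
  // Stand-in for the RPC ask to the driver's HeartbeatReceiver endpoint.
  def askDriver(): HeartbeatResponse = HeartbeatResponse(reregisterBlockManager = false)

  // Stand-in for re-registering the executor's BlockManager with the driver.
  def reregisterBlockManager(): Unit = println("re-registering BlockManager with the driver")

  def main(args: Array[String]): Unit = {
    val response = askDriver()
    // If the driver does not recognise this executor's BlockManager, register it again.
    if (response.reregisterBlockManager) {
      reregisterBlockManager()
    }
  }
}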
The cleanup logic is implemented by the expireDeadHosts() method.
Code #15.7 - The o.a.s.HeartbeatReceiver.expireDeadHosts() method
private def expireDeadHosts(): Unit = {
  logTrace("Checking for hosts with no recent heartbeats in HeartbeatReceiver.")
  val now = clock.getTimeMillis()
  for ((executorId, lastSeenMs) <- executorLastSeen) {
    if (now - lastSeenMs > executorTimeoutMs) {
      logWarning(s"Removing executor $executorId with no recent heartbeats: " +
        s"${now - lastSeenMs} ms exceeds timeout $executorTimeoutMs ms")
      scheduler.executorLost(executorId, SlaveLost("Executor heartbeat " +
        s"timed out after ${now - lastSeenMs} ms"))
      killExecutorThread.submit(new Runnable {
        override def run(): Unit = Utils.tryLogNonFatalError {
          sc.killAndReplaceExecutor(executorId)
        }
      })
      executorLastSeen.remove(executorId)
    }
  }
}
This method iterates over the executorLastSeen map and compares each executor's last heartbeat timestamp with the current time. If the difference exceeds spark.network.timeout, the executor is considered timed out, and the following steps are taken (a condensed standalone sketch of this expiry pattern follows the list):
Call TaskScheduler.executorLost() to remove the timed-out executor from the scheduling system.
Submit a task to the killExecutorThread pool that calls SparkContext.killAndReplaceExecutor(), killing the timed-out executor asynchronously.
Remove the timed-out executor's entry from the executorLastSeen map.
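Finally, the expiry pattern itself, scanning a last-seen map and dropping entries older than the timeout, can be reproduced in a few lines of standalone Scala (a toy sketch with the scheduler and kill side effects replaced by println; not Spark source):

import scala.collection.mutable

object ExpireDeadHostsSketch {
  val executorTimeoutMs = 120000L
  val executorLastSeen = mutable.HashMap(
    "exec-1" -> 0L,                          // heartbeat long ago -> will be expired
    "exec-2" -> System.currentTimeMillis())  // just heartbeated -> stays

  def main(args: Array[String]): Unit = {
    val now = System.currentTimeMillis()
    // Collect the expired entries first, then remove them from the map.
    val expired = executorLastSeen.filter { case (_, lastSeenMs) => now - lastSeenMs > executorTimeoutMs }
    expired.keys.foreach { executorId =>
      // In HeartbeatReceiver, this is where executorLost() is called and
      // killAndReplaceExecutor() is submitted to killExecutorThread.
      println(s"executor $executorId timed out")
      executorLastSeen.remove(executorId)
    }
    println(s"still tracked: ${executorLastSeen.keys.mkString(", ")}")
  }
}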