How Spark Broadcast Works: TorrentBroadcast Internals

Original source (please credit the author): http://www.jianshu.com/p/ebfabfbb8c0e

Broadcast sends data from one node to every other node in the cluster. Spark has had two implementations: HttpBroadcast (removed in Spark 2.1.0) and TorrentBroadcast.

Driver side:
The driver first serializes the data into a byte array, then splits it into data blocks of BLOCK_SIZE (set by spark.broadcast.blockSize, 4 MB by default). After splitting, it stores the block metadata (the meta info) in its own blockManager with StorageLevel memory + disk, and notifies its blockManagerMaster that the meta info is in place. This notification matters: the blockManagerMaster is reachable from the driver and from every executor, so anything registered there becomes cluster-wide information. The driver then stores each data block in its blockManager, again with StorageLevel memory + disk, and again notifies the blockManagerMaster that the blocks are stored. At this point the driver's part is done.
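The driver-side flow above (serialize, then split into 4 MB pieces) can be sketched in plain Scala. This is an illustrative sketch, not Spark's API: plain Java serialization stands in for Spark's configured serializer (and optional compression), and the names `DriverBlockify`, `serialize`, and `blockify` are made up.

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Illustrative sketch of the driver-side steps: serialize the value to a
// byte array, then cut it into pieces of spark.broadcast.blockSize
// (4 MB by default). Plain Java serialization stands in for Spark's
// configured serializer here.
object DriverBlockify {
  val blockSize: Int = 4 * 1024 * 1024 // default spark.broadcast.blockSize

  def serialize(value: Serializable): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(value)
    oos.close()
    bos.toByteArray
  }

  // Split the serialized bytes into fixed-size pieces; the last piece
  // may be smaller than blockSize.
  def blockify(bytes: Array[Byte]): Seq[Array[Byte]] =
    bytes.grouped(blockSize).toSeq
}
```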

Executor side:
When an executor receives a serialized task, it first deserializes it. If the task references a TorrentBroadcast variable, deserialization triggers TorrentBroadcast.readBroadcastBlock(). The executor first asks its local blockManager whether it already holds the data; if so, it reads the data locally. Otherwise, it goes through its local blockManager to the driver's blockManagerMaster to fetch the meta info for the data blocks, and then the BT process begins.

BT process: the task first allocates a local ByteBuffer for the data blocks it is about to fetch, then shuffles the fetch order of the blocks; with 5 data blocks, for example, the shuffled order might be 3-1-2-4-5. It fetches the blocks one by one in that order, each fetch going through "local blockManager → driver's/executor's blockManager → data". Every time a block arrives, it is stored in the executor's blockManager, and the blockManagerMaster on the driver is told that this data block now has one more location. This notification is crucial: the blockManagerMaster now knows the block exists in multiple places in the cluster, so when a task on a different node fetches the same block it has two sources to choose from and picks one at random. As this continues, the BT effect kicks in: the more downloaders there are, the more servers there are for each data block, and the transfer becomes peer-to-peer.
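The BT effect can be modeled with a toy simulation. Everything here is illustrative (the location table, `seed`, `fetchAll` are made-up stand-ins for the BlockManagerMaster bookkeeping): after a node fetches a piece it registers itself as a holder, so later fetchers pick a source at random from a growing set.

```scala
import scala.collection.mutable
import scala.util.Random

// Toy model of BT-style dissemination: a location table maps each piece
// to the set of nodes holding it. After a node fetches a piece it
// registers itself, mimicking the "report to blockManagerMaster" step,
// so every fetch increases the number of available servers.
object TorrentSim {
  // pieceId -> set of node ids that currently hold the piece
  val locations = mutable.Map.empty[Int, mutable.Set[String]]

  // Initially only the driver holds every piece.
  def seed(numPieces: Int): Unit =
    (0 until numPieces).foreach(p => locations(p) = mutable.Set("driver"))

  def fetchAll(node: String): Unit =
    for (p <- Random.shuffle(locations.keys.toSeq)) { // shuffled fetch order
      val source = Random.shuffle(locations(p).toSeq).head // random holder
      // ... fetch piece p from `source` over the network ...
      locations(p) += node // report: this node now also holds the piece
    }
}
```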

Finally, the reassembled data is stored in the blockManager of the task's executor, with StorageLevel memory + disk.
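Reassembly on the executor (the counterpart of Spark's `TorrentBroadcast.unBlockifyObject`) concatenates the fetched pieces in index order and deserializes the result. A minimal sketch, again with plain Java serialization standing in for Spark's serializer:

```scala
import java.io.{ByteArrayInputStream, ObjectInputStream}

// Sketch of reassembly on the executor: concatenate the pieces in index
// order, then deserialize the combined byte array back into the value.
// Spark's real implementation streams the chunks and may decompress;
// this is a simplified stand-in.
object ExecutorUnblockify {
  def unblockify(blocks: Seq[Array[Byte]]): AnyRef = {
    val combined = blocks.reduce(_ ++ _)
    val ois = new ObjectInputStream(new ByteArrayInputStream(combined))
    try ois.readObject() finally ois.close()
  }
}
```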

  1. A BroadcastManager is created in SparkEnv
    val broadcastManager = new BroadcastManager(isDriver, conf, securityManager)
  2. SparkContext's broadcast() method creates the broadcast variable (a TorrentBroadcast; before Spark 2.1.0 it could also be an HttpBroadcast)
    def broadcast[T: ClassTag](value: T): Broadcast[T] = {
     assertNotStopped()
     if (classOf[RDD[_]].isAssignableFrom(classTag[T].runtimeClass)) {
       // This is a warning instead of an exception in order to avoid breaking user programs that
       // might have created RDD broadcast variables but not used them:
       logWarning("Can not directly broadcast RDDs; instead, call collect() and "
         + "broadcast the result (see SPARK-5063)")
     }
     val bc = env.broadcastManager.newBroadcast[T](value, isLocal)
     val callSite = getCallSite
     logInfo("Created broadcast " + bc.id + " from " + callSite.shortForm)
     cleaner.foreach(_.registerBroadcastForCleanup(bc))
     bc
    }
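    The RDD guard in broadcast() relies on a runtime-class check via ClassTag. The same pattern can be demonstrated standalone; `RuntimeClassCheck`, `Base`, and `Derived` below are self-contained stand-ins for RDD and its subclasses, not Spark code.

```scala
import scala.reflect.{ClassTag, classTag}

// Demonstrates the check broadcast() performs: test whether the
// broadcast value's runtime class is (a subclass of) a forbidden type.
// RDD is replaced by a stand-in Base class to keep this self-contained.
object RuntimeClassCheck {
  class Base
  class Derived extends Base

  def looksLikeBase[T: ClassTag]: Boolean =
    classOf[Base].isAssignableFrom(classTag[T].runtimeClass)
}
```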
  3. TorrentBroadcast的writeBlocks()方法
    private def writeBlocks(value: T): Int = {
     // Store a copy of the broadcast variable in the driver so that tasks run on the driver
     // do not create a duplicate copy of the broadcast variable's value.
     SparkEnv.get.blockManager.putSingle(broadcastId, value, StorageLevel.MEMORY_AND_DISK,
       tellMaster = false)
     val blocks =
       TorrentBroadcast.blockifyObject(value, blockSize, SparkEnv.get.serializer, compressionCodec)
     blocks.zipWithIndex.foreach { case (block, i) =>
       SparkEnv.get.blockManager.putBytes(
         BroadcastBlockId(id, "piece" + i),
         block,
         StorageLevel.MEMORY_AND_DISK_SER,
         tellMaster = true)
     }
     blocks.length
    }
     writeBlocks() first calls putSingle() to store the whole value in the blockManager (with tellMaster = false, since this copy only serves tasks running on the driver), then calls blockifyObject() to split the value into blocks, and writes each block to the blockManager with tellMaster = true so the master learns where each piece lives.
  4. TorrentBroadcast的readBlocks()方法

    private def readBlocks(): Array[ByteBuffer] = {
     // Fetch chunks of data. Note that all these chunks are stored in the BlockManager and reported
     // to the driver, so other executors can pull these chunks from this executor as well.
     val blocks = new Array[ByteBuffer](numBlocks)
     val bm = SparkEnv.get.blockManager
    
      // fetch the pieces in a shuffled order to spread load across sources
     for (pid <- Random.shuffle(Seq.range(0, numBlocks))) {
       val pieceId = BroadcastBlockId(id, "piece" + pid)
       logDebug(s"Reading piece $pieceId of $broadcastId")
       // First try getLocalBytes because there is a chance that previous attempts to fetch the
       // broadcast blocks have already fetched some of the blocks. In that case, some blocks
       // would be available locally (on this executor).
       def getLocal: Option[ByteBuffer] = bm.getLocalBytes(pieceId)
       def getRemote: Option[ByteBuffer] = bm.getRemoteBytes(pieceId).map { block =>
         // If we found the block from remote executors/driver's BlockManager, put the block
         // in this executor's BlockManager.
         SparkEnv.get.blockManager.putBytes(
           pieceId,
           block,
           StorageLevel.MEMORY_AND_DISK_SER,
           tellMaster = true)
         block
       }
       val block: ByteBuffer = getLocal.orElse(getRemote).getOrElse(
         throw new SparkException(s"Failed to get $pieceId of $broadcastId"))
       blocks(pid) = block
     }
     blocks
    }

     readBlocks() first checks whether a piece is available locally; if not, it fetches it from the driver or another executor. Below is the remote-fetch method (BlockManager.doGetRemote()).

    private def doGetRemote(blockId: BlockId, asBlockResult: Boolean): Option[Any] = {
     require(blockId != null, "BlockId is null")
     val locations = Random.shuffle(master.getLocations(blockId))
     for (loc <- locations) {
       logDebug(s"Getting remote block $blockId from $loc")
       val data = blockTransferService.fetchBlockSync(
         loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer()
    
       if (data != null) {
         if (asBlockResult) {
           return Some(new BlockResult(
             dataDeserialize(blockId, data),
             DataReadMethod.Network,
             data.limit()))
         } else {
           return Some(data)
         }
       }
       logDebug(s"The value of block $blockId is null")
     }
     logDebug(s"Block $blockId not found")
     None
    }
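    The shuffle-and-try-each-location pattern in doGetRemote() can be distilled into a small generic helper. This is a sketch of the control flow only; `fetchFrom` is a hypothetical stand-in for the actual blockTransferService.fetchBlockSync() call.

```scala
import scala.util.Random

// Generic sketch of doGetRemote's retry loop: shuffle the known
// locations (for load balancing), try each in turn, and return the
// first successful fetch, or None if every location fails.
object RemoteFetchSketch {
  def getRemote[A](locations: Seq[String])(fetchFrom: String => Option[A]): Option[A] = {
    for (loc <- Random.shuffle(locations)) {
      fetchFrom(loc) match {
        case Some(data) => return Some(data)
        case None       => // this location had no data; try the next one
      }
    }
    None
  }
}
```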
