Lesson 5: A Case-Driven Walkthrough of the Spark Streaming Framework's Runtime Source Code

Part 1: Case study code for computing the hottest items in each category online and dynamically

package com.dt.spark.sparkstreaming
 import com.robinspark.utils.ConnectionPool
 import org.apache.spark.SparkConf
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.hive.HiveContext
 import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
 import org.apache.spark.streaming.{Seconds, StreamingContext}

 /**
  * Use Spark Streaming + Spark SQL to compute online, in real time, the hottest items in each category of an
  * e-commerce site, e.g. the three hottest phones in the phone category and the three hottest TVs in the TV
  * category. This kind of computation is extremely valuable in real production environments.
  *
  * @author DT大数据梦工厂
  * Sina Weibo: http://weibo.com/ilovepains/
  *
  * Implementation: Spark Streaming + Spark SQL. Spark Streaming can use ML, SQL, GraphX and the rest of Spark
  * because interfaces such as foreachRDD and transform operate directly on RDDs. With RDDs as the foundation,
  * all of Spark's other functionality can be used as simply as calling an ordinary API.
  * The input data format is assumed to be: user item category, e.g. "Rocky Samsung Android".
  */
 object OnlineTheTop3ItemForEachCategory2DB {
   def main(args: Array[String]){
     /**
      * Step 1: Create Spark's configuration object SparkConf and set the runtime configuration of the
      * Spark application. For example, setMaster sets the URL of the Master of the Spark cluster the
      * program will connect to; setting it to "local" runs the program locally, which is especially
      * suitable for beginners with very limited hardware (for example, only 1 GB of memory).
      */
     val conf = new SparkConf() // create the SparkConf object
     conf.setAppName("OnlineTheTop3ItemForEachCategory2DB") // application name, visible in the monitoring UI
     conf.setMaster("spark://Master:7077") // run the program on the Spark cluster
     //conf.setMaster("local[2]")
     // Set the batchDuration interval, which controls how often jobs are generated, and create the
     // entry point for Spark Streaming execution.
    val ssc = new StreamingContext(conf, Seconds(5))

     ssc.checkpoint("/root/Documents/SparkApps/checkpoint")

     val userClickLogsDStream = ssc.socketTextStream("Master", 9999)

     val formattedUserClickLogsDStream = userClickLogsDStream.map(clickLog =>
         (clickLog.split(" ")(2) + "_" + clickLog.split(" ")(1), 1))

 // val categoryUserClickLogsDStream = formattedUserClickLogsDStream.reduceByKeyAndWindow((v1:Int, v2: Int) => v1 + v2,
 // (v1:Int, v2: Int) => v1 - v2, Seconds(60), Seconds(20))

     val categoryUserClickLogsDStream = formattedUserClickLogsDStream.reduceByKeyAndWindow(_+_,
       _-_, Seconds(60), Seconds(20))

     categoryUserClickLogsDStream.foreachRDD { rdd => {
       if (rdd.isEmpty()) {
         println("No data inputted!!!")
       } else {
         val categoryItemRow = rdd.map(reducedItem => {
           val category = reducedItem._1.split("_")(0)
           val item = reducedItem._1.split("_")(1)
           val click_count = reducedItem._2
           Row(category, item, click_count)
         })

         val structType = StructType(Array(
           StructField("category", StringType, true),
           StructField("item", StringType, true),
           StructField("click_count", IntegerType, true)
         ))

         val hiveContext = new HiveContext(rdd.context)
         val categoryItemDF = hiveContext.createDataFrame(categoryItemRow, structType)

         categoryItemDF.registerTempTable("categoryItemTable")

          val resultDataFrame = hiveContext.sql("SELECT category,item,click_count FROM (SELECT category,item,click_count,row_number()" +
            " OVER (PARTITION BY category ORDER BY click_count DESC) rank FROM categoryItemTable) subquery " +
            " WHERE rank <= 3")
          resultDataFrame.show()

          val resultRowRDD = resultDataFrame.rdd

         resultRowRDD.foreachPartition { partitionOfRecords => {

            if (partitionOfRecords.isEmpty) {
              println("This RDD is not empty, but this partition has no records")
           } else {
             // ConnectionPool is a static, lazily initialized pool of connections
             val connection = ConnectionPool.getConnection()
             partitionOfRecords.foreach(record => {
               val sql = "insert into categorytop3(category,item,client_count) values('" + record.getAs("category") + "','" +
                 record.getAs("item") + "'," + record.getAs("click_count") + ")"
               val stmt = connection.createStatement();
               stmt.executeUpdate(sql);

             })
             ConnectionPool.returnConnection(connection) // return to the pool for future reuse

           }
         }
         }
       }
     }
     }
     /**
      * Calling start on the StreamingContext actually starts the JobScheduler's start method, which runs a
      * message loop. Inside JobScheduler.start, a JobGenerator and a ReceiverTracker are constructed and
      * their start methods are called:
      * 1. Once started, JobGenerator keeps generating jobs according to the batchDuration.
      * 2. Once started, ReceiverTracker first launches the Receivers in the Spark cluster (more precisely, a
      *    ReceiverSupervisor is started first on the Executor). When a Receiver receives data, the
      *    ReceiverSupervisor stores it on the Executor and sends the data's metadata to the ReceiverTracker
      *    on the Driver, which manages the received metadata through a ReceivedBlockTracker.
      * Each batch interval produces a concrete Job. The Job here is not a Job in the Spark Core sense; it is
      * just the RDD DAG generated from the DStreamGraph and, in Java terms, roughly a Runnable instance. To
      * run it, the Job is submitted to the JobScheduler, which uses a thread pool to run it on a separate
      * thread that submits the work to the cluster (the real execution is triggered by the RDD action inside
      * that thread). Why a thread pool?
      * 1. Jobs are generated continuously, so a thread pool improves efficiency, just as Executors run Tasks
      *    through a thread pool.
      * 2. If FAIR scheduling is configured for the jobs, multiple threads are also needed.
      */
     ssc.start()
     ssc.awaitTermination()
   }
 }
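
To exercise this program, something has to be listening on Master:9999 and emitting lines in the user item category format. A common way to test this kind of socket source (assuming netcat is available on the Master node) is:

nc -lk 9999
Rocky Samsung Android
Mike iPhone Phone
Lucy Huawei Phone

Each batch then produces per-category top-3 rankings that are printed by show() and written to the categorytop3 table.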

Part 2: Source Code Analysis

  1. Creating the StreamingContext: val ssc = new StreamingContext(conf, Seconds(5))
def this(conf: SparkConf, batchDuration: Duration) = {
   this(StreamingContext.createNewSparkContext(conf), null, batchDuration)
 }
private[streaming] def createNewSparkContext(conf: SparkConf): SparkContext = {
   new SparkContext(conf)
 }

As the code above shows, creating a StreamingContext also creates a SparkContext underneath, which is why we say Spark Streaming runs on top of Spark Core.
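
As a small illustration of that relationship, a StreamingContext can equally be built on top of an already created SparkContext (a sketch; the application name and the five-second batch interval are just illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

val sparkConf = new SparkConf().setAppName("StreamingOnCore")
val sc = new SparkContext(sparkConf)           // the Spark Core entry point
val ssc = new StreamingContext(sc, Seconds(5)) // Streaming simply wraps the existing SparkContext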
2. Creating the socket input stream: val userClickLogsDStream = ssc.socketTextStream("Master", 9999)

def socketTextStream(
    hostname: String,
    port: Int,
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[String] = withNamedScope("socket text stream") {
   socketStream[String](hostname, port, SocketReceiver.bytesToLines, storageLevel)
 }
def socketStream[T: ClassTag](
    hostname: String,
    port: Int,
    converter: (InputStream) => Iterator[T],
    storageLevel: StorageLevel
  ): ReceiverInputDStream[T] = {
   new SocketInputDStream[T](this, hostname, port, converter, storageLevel)
 }

As the code above shows, StreamingContext's socketTextStream method creates an instance of the SocketInputDStream class.

private[streaming]
class SocketInputDStream[T: ClassTag](
     ssc_ : StreamingContext,
     host: String,
     port: Int,
     bytesToObjects: InputStream => Iterator[T],
     storageLevel: StorageLevel
   ) extends ReceiverInputDStream[T](ssc_) {

   def getReceiver(): Receiver[T] = {
     new SocketReceiver(host, port, bytesToObjects, storageLevel)
   }
 }
abstract class ReceiverInputDStream[T: ClassTag](ssc_ : StreamingContext)
   extends InputDStream[T](ssc_) {
abstract class InputDStream[T: ClassTag] (ssc_ : StreamingContext)
   extends DStream[T](ssc_) {

The code above shows SocketInputDStream's inheritance chain: SocketInputDStream -> ReceiverInputDStream -> InputDStream -> DStream, which confirms that DStream is the core abstraction of Spark Streaming.
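
What the whole hierarchy has in common is DStream's core contract: for every batch time, produce the RDD for that batch. The following is an illustrative stand-in rather than Spark's own class, sketching the three abstract members each DStream subclass ends up providing:

import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Duration, Time}

abstract class SketchDStream[T] {
  def slideDuration: Duration                   // how often this stream produces an RDD
  def dependencies: List[SketchDStream[_]]      // parent streams in the DStreamGraph
  def compute(validTime: Time): Option[RDD[T]]  // the RDD for the batch at validTime, if any
}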

The SocketInputDStream source shows that its getReceiver method returns a new instance of the SocketReceiver class.

private[streaming]
class SocketReceiver[T: ClassTag](
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends Receiver[T](storageLevel) with Logging {

  def onStart() {
    // Start the thread that receives data over a connection
    new Thread("Socket Receiver") {
      setDaemon(true)
      override def run() { receive() }
    }.start()
  }

  def onStop() {
    // There is nothing much to do as the thread calling receive()
    // is designed to stop by itself if isStopped() returns false
  }

  /** Create a socket connection and receive data until receiver is stopped */
  def receive() {
    var socket: Socket = null
    try {
      logInfo("Connecting to " + host + ":" + port)
      socket = new Socket(host, port)
      logInfo("Connected to " + host + ":" + port)
      val iterator = bytesToObjects(socket.getInputStream())
      while(!isStopped && iterator.hasNext) {
        store(iterator.next)
      }
      if (!isStopped()) {
        restart("Socket data stream had no more data")
      } else {
        logInfo("Stopped receiving")
      }
    } catch {
      case e: java.net.ConnectException =>
        restart("Error connecting to " + host + ":" + port, e)
      case NonFatal(e) =>
        logWarning("Error receiving data", e)
        restart("Error receiving data", e)
    } finally {
      if (socket != null) {
        socket.close()
        logInfo("Closed socket to " + host + ":" + port)
      }
    }
  }
}

As the code above shows, SocketReceiver's onStart method starts a new thread that continuously receives data from the socket.
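
The same onStart / receive-on-a-thread / store pattern applies to user-defined receivers as well. Below is a minimal sketch built on the public Receiver API (the class name and the constant record it "receives" are made up purely for illustration):

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class ConstantReceiver extends Receiver[String](StorageLevel.MEMORY_AND_DISK_SER_2) {
  def onStart(): Unit = {
    // Like SocketReceiver, do the actual receiving on a separate daemon thread
    new Thread("Constant Receiver") {
      setDaemon(true)
      override def run(): Unit = {
        while (!isStopped()) {
          store("Rocky Samsung Android") // hand one record to Spark Streaming for storage
          Thread.sleep(1000)
        }
      }
    }.start()
  }
  def onStop(): Unit = { /* the receiving thread exits once isStopped() returns true */ }
}

Such a receiver would be plugged in the same way the socket stream is, e.g. val lines = ssc.receiverStream(new ConstantReceiver).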
3. Based on the business logic, apply various transformation operations to the SocketInputDStream.
4. Based on the business logic, apply foreachRDD to the DStream produced by those transformations. foreachRDD is an output operation, and inside it RDD actions are invoked (a small sketch of this RDD bridge follows below).
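
The practical significance of steps 3 and 4 is that transform and foreachRDD expose the underlying RDDs, so arbitrary RDD (and hence Spark SQL, MLlib, GraphX) code can be mixed into the stream. A tiny sketch of the transform side, reusing ssc and userClickLogsDStream from Part 1 (the blacklist is invented purely for illustration):

val blacklistRDD = ssc.sparkContext.parallelize(Seq(("BadUser", true)))

val cleanedDStream = userClickLogsDStream.transform { rdd =>
  rdd.map(line => (line.split(" ")(0), line))            // key each click log line by user
     .leftOuterJoin(blacklistRDD)                        // a plain RDD join against static data
     .filter { case (_, (_, banned)) => banned.isEmpty } // keep users not on the blacklist
     .map { case (_, (line, _)) => line }
}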
5. Actually start the streaming application: ssc.start()

/**
 * Start the execution of the streams.
 *
 * @throws IllegalStateException if the StreamingContext is already stopped.
 */
  def start(): Unit = synchronized {
    state match {
      case INITIALIZED =>
        startSite.set(DStream.getCreationSite())
        StreamingContext.ACTIVATION_LOCK.synchronized {
          StreamingContext.assertNoOtherContextIsActive()
          try {
            validate()

            // Start the streaming scheduler in a new thread, so that thread local properties
            // like call sites and job groups can be reset without affecting those of the
            // current thread.
            ThreadUtils.runInNewThread("streaming-start") {
              sparkContext.setCallSite(startSite.get)
              sparkContext.clearJobGroup()
              sparkContext.setLocalProperty(SparkContext.SPARK_JOB_INTERRUPT_ON_CANCEL, "false")
              scheduler.start()
            }
            state = StreamingContextState.ACTIVE
          } catch {
            case NonFatal(e) =>
              logError("Error starting the context, marking it as stopped", e)
              scheduler.stop(false)
              state = StreamingContextState.STOPPED
              throw e
          }
          StreamingContext.setActiveContext(this)
        }
        shutdownHookRef = ShutdownHookManager.addShutdownHook(
          StreamingContext.SHUTDOWN_HOOK_PRIORITY)(stopOnShutdown)
        // Registering Streaming Metrics at the start of the StreamingContext
        assert(env.metricsSystem != null)
        env.metricsSystem.registerSource(streamingSource)
        uiTab.foreach(_.attach())
        logInfo("StreamingContext started")
      case ACTIVE =>
        logWarning("StreamingContext has already been started")
      case STOPPED =>
        throw new IllegalStateException("StreamingContext has already been stopped")
    }
  }

Calling start on the StreamingContext actually invokes the JobScheduler's start method, which runs a message loop. Inside JobScheduler.start, a JobGenerator and a ReceiverTracker are constructed and their start methods are called:
1) Once started, JobGenerator keeps generating jobs according to the batchDuration.
2) Once started, ReceiverTracker first launches the Receivers in the Spark cluster (more precisely, a ReceiverSupervisor is started first on the Executor).
After a Receiver receives data, the ReceiverSupervisor stores it on the Executor and sends the data's metadata to the ReceiverTracker on the Driver, which manages the received metadata through a ReceivedBlockTracker. Each batch interval produces a concrete Job (the Job here mainly wraps the business logic, such as the example code above). It is not a Job in the Spark Core sense; it is merely the RDD DAG generated from the DStreamGraph and, in Java terms, roughly a Runnable instance. To run it, the Job must be submitted to the JobScheduler, which uses a thread pool to obtain a separate thread in which the Job is submitted to the cluster (the real work is triggered by the RDD action inside that thread). Why a thread pool?
a) Jobs are generated continuously, so a thread pool improves efficiency, just as Executors run Tasks through a thread pool.
b) If FAIR scheduling is configured for the jobs, multiple threads are also needed.
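
A conceptual sketch of the "a streaming Job is essentially a Runnable handed to a thread pool" point above (these are not Spark's actual classes, just a simplified illustration):

import java.util.concurrent.Executors

// A streaming "job" is essentially a piece of work whose run() triggers an RDD action.
class SketchJob(body: () => Unit) { def run(): Unit = body() }

// JobScheduler-style submission: a small thread pool runs each job on its own thread,
// and the RDD action inside then launches the real Spark Core job on the cluster.
val jobExecutor = Executors.newFixedThreadPool(1) // cf. spark.streaming.concurrentJobs, default 1
def submitJob(job: SketchJob): Unit =
  jobExecutor.execute(new Runnable { override def run(): Unit = job.run() })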

  6. Tracing scheduler.start():
  def start(): Unit = synchronized {
    if (eventLoop != null) return // scheduler has already been started

    logDebug("Starting JobScheduler")
    eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
      override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)

      override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
    }
    eventLoop.start()

    // attach rate controllers of input streams to receive batch completion updates
    for {
      inputDStream <- ssc.graph.getInputStreams
      rateController <- inputDStream.rateController
    } ssc.addStreamingListener(rateController)

    listenerBus.start(ssc.sparkContext)
    receiverTracker = new ReceiverTracker(ssc)
    inputInfoTracker = new InputInfoTracker(ssc)
    receiverTracker.start()
    jobGenerator.start()
    logInfo("Started JobScheduler")
  }

JobScheduler.start creates and starts an EventLoop.
7. EventLoop:

/**
 * An event loop to receive events from the caller and process all events in the event thread. It
 * will start an exclusive event thread to process all events.
 *
 * Note: The event queue will grow indefinitely. So subclasses should make sure `onReceive` can
 * handle events in time to avoid the potential OOM.
 */
private[spark] abstract class EventLoop[E](name: String) extends Logging {

  private val eventQueue: BlockingQueue[E] = new LinkedBlockingDeque[E]()

  private val stopped = new AtomicBoolean(false)

  private val eventThread = new Thread(name) {
    setDaemon(true)

    override def run(): Unit = {
      try {
        while (!stopped.get) {
          val event = eventQueue.take()
          try {
            onReceive(event)
          } catch {
            case NonFatal(e) => {
              try {
                onError(e)
              } catch {
                case NonFatal(e) => logError("Unexpected error in " + name, e)
              }
            }
          }
        }
      } catch {
        case ie: InterruptedException => // exit even if eventQueue is not empty
        case NonFatal(e) => logError("Unexpected error in " + name, e)
      }
    }

  }

  def start(): Unit = {
    if (stopped.get) {
      throw new IllegalStateException(name + " has already been stopped")
    }
    // Call onStart before starting the event thread to make sure it happens before onReceive
    onStart()
    eventThread.start()
  }

  def stop(): Unit = {
    if (stopped.compareAndSet(false, true)) {
      eventThread.interrupt()
      var onStopCalled = false
      try {
        eventThread.join()
        // Call onStop after the event thread exits to make sure onReceive happens before onStop
        onStopCalled = true
        onStop()
      } catch {
        case ie: InterruptedException =>
          Thread.currentThread().interrupt()
          if (!onStopCalled) {
            // ie is thrown from `eventThread.join()`. Otherwise, we should not call `onStop` since
            // it's already called.
            onStop()
          }
      }
    } else {
      // Keep quiet to allow calling `stop` multiple times.
    }
  }

  /** Put the event into the event queue. The event thread will process it later. */
  def post(event: E): Unit = {
    eventQueue.put(event)
  }

  /** Return if the event thread has already been started but not yet stopped. */
  def isActive: Boolean = eventThread.isAlive

  /** Invoked when `start()` is called but before the event thread starts. */
  protected def onStart(): Unit = {}

  /** Invoked when `stop()` is called and the event thread exits. */
  protected def onStop(): Unit = {}

  /**
   * Invoked in the event thread when polling events from the event queue.
   *
   * Note: Should avoid calling blocking actions in `onReceive`, or the event thread will be blocked
   * and cannot process events in time. If you want to call some blocking actions, run them in
   * another thread.
   */
  protected def onReceive(event: E): Unit

  /**
   * Invoked if `onReceive` throws any non fatal error. Any non fatal error thrown from `onError`
   * will be ignored.
   */
  protected def onError(e: Throwable): Unit

}

EventLoop creates the eventThread to receive events; when an event arrives, it calls JobScheduler's processEvent(event) method.
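
The pattern is easy to see in isolation. Since EventLoop is private[spark], the following is a self-contained stand-in rather than the real class, but it shows the same mechanics: a blocking queue plus one dedicated daemon thread that drains it and dispatches each event (DemoEvent and Tick are invented names):

import java.util.concurrent.LinkedBlockingDeque

sealed trait DemoEvent
case object Tick extends DemoEvent

val queue = new LinkedBlockingDeque[DemoEvent]()
val eventThread = new Thread("demo-event-loop") {
  setDaemon(true)
  override def run(): Unit =
    while (true) queue.take() match {   // blocks until an event is posted
      case Tick => println("tick handled on the event thread")
    }
}
eventThread.start()
queue.put(Tick) // post(): enqueue and return immediately; processing happens asynchronously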

  8. JobScheduler's processEvent(event) method:
  private def processEvent(event: JobSchedulerEvent) {
    try {
      event match {
        case JobStarted(job, startTime) => handleJobStart(job, startTime)
        case JobCompleted(job, completedTime) => handleJobCompletion(job, completedTime)
        case ErrorReported(m, e) => handleError(m, e)
      }
    } catch {
      case e: Throwable =>
        reportError("Error in job scheduler", e)
    }
  }

As we can see, JobScheduler itself uses an event loop on a dedicated thread to listen for job start, job completion, job failure and other events. JobScheduler is the scheduler for the whole job and is a message-driven system.
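
A simplified sketch of where these messages come from (paraphrasing the flow, not the literal Spark source): the handler that runs each job posts a "job started" message before executing it and a "job completed" message afterwards, so the scheduler stays purely message-driven. The Sketch* names below are stand-ins for the private JobStarted / JobCompleted events:

sealed trait SketchSchedulerEvent
case class SketchJobStarted(time: Long) extends SketchSchedulerEvent
case class SketchJobCompleted(time: Long) extends SketchSchedulerEvent

def runOneJob(post: SketchSchedulerEvent => Unit)(runJob: () => Unit): Unit = {
  post(SketchJobStarted(System.currentTimeMillis()))   // tell the scheduler the job has started
  runJob()                                             // the RDD action fires here
  post(SketchJobCompleted(System.currentTimeMillis())) // ...and tell it the job has finished
}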

  9. JobScheduler.start also does the following:
   // attach rate controllers of input streams to receive batch completion updates
    for {
      inputDStream <- ssc.graph.getInputStreams
      rateController <- inputDStream.rateController
    } ssc.addStreamingListener(rateController)

There can be multiple input streams, and a RateController controls the input rate of each.
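
These rate controllers are what the backpressure settings feed. Assuming the conf object from Part 1, the relevant configuration keys (available since Spark 1.5) look like this:

conf.set("spark.streaming.backpressure.enabled", "true") // let the RateControllers adapt the ingest rate
conf.set("spark.streaming.receiver.maxRate", "10000")    // upper bound, in records per second per receiver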
10. JobScheduler.start also calls listenerBus.start(ssc.sparkContext), where val listenerBus = new StreamingListenerBus().

/** Asynchronously passes StreamingListenerEvents to registered StreamingListeners. */
private[spark] class StreamingListenerBus
  extends AsynchronousListenerBus[StreamingListener, StreamingListenerEvent]("StreamingListenerBus")
  with Logging {

  private val logDroppedEvent = new AtomicBoolean(false)

  override def onPostEvent(listener: StreamingListener, event: StreamingListenerEvent): Unit = {
    event match {
      case receiverStarted: StreamingListenerReceiverStarted =>
        listener.onReceiverStarted(receiverStarted)
      case receiverError: StreamingListenerReceiverError =>
        listener.onReceiverError(receiverError)
      case receiverStopped: StreamingListenerReceiverStopped =>
        listener.onReceiverStopped(receiverStopped)
      case batchSubmitted: StreamingListenerBatchSubmitted =>
        listener.onBatchSubmitted(batchSubmitted)
      case batchStarted: StreamingListenerBatchStarted =>
        listener.onBatchStarted(batchStarted)
      case batchCompleted: StreamingListenerBatchCompleted =>
        listener.onBatchCompleted(batchCompleted)
      case outputOperationStarted: StreamingListenerOutputOperationStarted =>
        listener.onOutputOperationStarted(outputOperationStarted)
      case outputOperationCompleted: StreamingListenerOutputOperationCompleted =>
        listener.onOutputOperationCompleted(outputOperationCompleted)
      case _ =>
    }
  }
}
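
User code can hook into this same bus through the public listener API; a small sketch, again assuming the ssc from Part 1 (the println body is purely illustrative):

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

ssc.addStreamingListener(new StreamingListener {
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    // processingDelay is an Option[Long] in milliseconds
    println(s"Batch ${batchCompleted.batchInfo.batchTime} finished, " +
      s"processing delay = ${batchCompleted.batchInfo.processingDelay.getOrElse(-1L)} ms")
  }
})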
  11. JobScheduler.start also does receiverTracker = new ReceiverTracker(ssc) and receiverTracker.start().
  12. JobScheduler.start also does inputInfoTracker = new InputInfoTracker(ssc).
  13. JobScheduler.start also calls jobGenerator.start() (a small sketch of the idea follows).
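
Finally, a sketch of the idea behind jobGenerator.start() (paraphrased, not the literal Spark source): a recurring timer fires once per batchDuration and posts a "generate jobs" event to the event loop, which is why a new batch of jobs appears at a fixed interval:

import java.util.{Timer, TimerTask}

val batchDurationMs = 5000L // matches Seconds(5) in Part 1
val generatorTimer = new Timer("sketch-job-generator", true) // daemon timer thread
generatorTimer.schedule(new TimerTask {
  override def run(): Unit =
    println(s"post GenerateJobs(time = ${System.currentTimeMillis()}) to the event loop")
}, 0L, batchDurationMs)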

This post is based on teacher 王家林 (Wang Jialin)'s course "源码版本定制发行班" (the Spark source-code customization and release class); many thanks to him!
Wang Jialin's Sina Weibo: http://weibo.com/ilovepains
Wang Jialin's blog: http://blog.sina.com.cn/ilovepains

Everyone is welcome to exchange technical knowledge. Let's learn together and make progress together!
The author's Weibo: http://weibo.com/keepstriving
