Why Spark Streaming Failed to Monitor an HDFS Directory (Solved)

For my graduation project I needed some big-data plumbing: I used Sqoop to incrementally pull data from a MySQL database and write it into HDFS, and I wanted Spark Streaming to monitor the HDFS directory so that newly added data could be read and processed in near real time.
So I started by testing Spark Streaming's directory monitoring, and ran into a strange problem: the same program worked on another machine, but on my laptop it produced no output at all, with no error message of any kind. I asked in several chat groups and got no answer, and searching Baidu turned up other people asking the same question but no working solution.
So I am writing up my fix here, both as a record and as a reference for anyone who hits this problem later.
The code is very simple:

package bigdata.project.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object sparkstreaming {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("sparkstreaming")
    // One batch every 5 seconds
    val ssc = new StreamingContext(sparkConf, Seconds(5))
    // Watch the HDFS directory for newly created files
    val lines = ssc.textFileStream("hdfs://hadoop:9000/ershoufang")
    // Word count over the comma-separated fields of each new file
    val cleandata = lines.flatMap(_.split(",")).map((_, 1)).reduceByKey(_ + _)

    cleandata.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

The root cause: the VM's clock and the physical machine's clock were out of sync, and by a large margin, so the Spark Streaming job running in IDEA on the physical machine never detected new data in the VM's HDFS directory. textFileStream selects files by modification time, so with a large clock skew, newly written files never fall inside the current batch window and are silently ignored.

The fix: simply synchronize the VM's clock with the physical machine's clock.
This problem cost me more than a week. It is very well hidden, because nothing ever reports an error.
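Under the hood, textFileStream only processes files whose modification time falls inside the current batch window, which is computed from the driver's clock. A minimal shell sketch of that selection rule (greatly simplified relative to Spark's actual FileInputDStream logic; the /tmp file names are made up for illustration):

```shell
# Simplified sketch of textFileStream's file selection: a file is processed only
# if its modification time falls inside the current batch window, measured with
# the driver's (physical machine's) clock.
touch /tmp/fresh.txt                   # mtime = "now" on this machine
touch -d '2 hours ago' /tmp/stale.txt  # simulates a VM clock two hours behind

batch_end=$(date +%s)   # the driver's notion of "now"
batch_len=5             # batch interval in seconds, as in the code above

for f in /tmp/fresh.txt /tmp/stale.txt; do
  mtime=$(stat -c %Y "$f")
  if [ "$mtime" -gt $((batch_end - batch_len)) ] && [ "$mtime" -le "$batch_end" ]; then
    echo "$f: selected"
  else
    echo "$f: ignored (mtime outside the batch window)"
  fi
done
```

Spark's real logic also remembers recently seen files and applies an ignore threshold, but the core point stands: with a large clock skew, new files never land inside the window, and they are skipped without any error.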

How to synchronize the clocks
[The original post included two screenshots of the VM's date/time settings here.] Once the VM and the physical machine agree on the time, the problem is solved.
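The screenshots are missing from this copy, but on a typical Linux VM the same thing can be done from the command line. A sketch, assuming the VM hostname is hadoop (as in the HDFS URI above), ntpdate is installed, and the VM can reach a public NTP server; substitute your own NTP server or use the GUI date/time settings instead:

```shell
# 1) Measure the skew: print epoch seconds on both machines and compare.
date +%s                  # run on the physical machine
# ssh hadoop 'date +%s'   # run the same on the VM; a large difference is the problem

# 2) On the VM, sync the clock (ntpdate and the server choice are assumptions):
# ntpdate pool.ntp.org
# hwclock --systohc       # persist to the hardware clock so it survives reboots
```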
Log output from a successful run:

19/05/03 15:57:45 INFO scheduler.DAGScheduler: Missing parents: List()
19/05/03 15:57:45 INFO scheduler.DAGScheduler: Submitting ResultStage 179 (ShuffledRDD[227] at reduceByKey at sparkstreaming.scala:12), which has no missing parents
19/05/03 15:57:45 INFO memory.MemoryStore: Block broadcast_95 stored as values in memory (estimated size 2.8 KB, free 1993.1 MB)
19/05/03 15:57:45 INFO memory.MemoryStore: Block broadcast_95_piece0 stored as bytes in memory (estimated size 1712.0 B, free 1993.1 MB)
19/05/03 15:57:45 INFO storage.BlockManagerInfo: Added broadcast_95_piece0 in memory on 192.168.56.1:57656 (size: 1712.0 B, free: 1993.9 MB)
19/05/03 15:57:45 INFO spark.SparkContext: Created broadcast 95 from broadcast at DAGScheduler.scala:1006
19/05/03 15:57:45 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 179 (ShuffledRDD[227] at reduceByKey at sparkstreaming.scala:12) (first 15 tasks are for partitions Vector(1))
19/05/03 15:57:45 INFO scheduler.TaskSchedulerImpl: Adding task set 179.0 with 1 tasks
19/05/03 15:57:45 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 179.0 (TID 92, localhost, executor driver, partition 1, ANY, 4621 bytes)
19/05/03 15:57:45 INFO executor.Executor: Running task 0.0 in stage 179.0 (TID 92)
-------------------------------------------
Time: 1556870265000 ms
-------------------------------------------
(西红,1)
(96,1)
(null,9)
(一吻,1)

19/05/03 15:57:45 INFO storage.ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
19/05/03 15:57:45 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
19/05/03 15:57:45 INFO executor.Executor: Finished task 0.0 in stage 179.0 (TID 92). 1218 bytes result sent to driver
19/05/03 15:57:45 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 179.0 (TID 92) in 5 ms on localhost (executor driver) (1/1)
19/05/03 15:57:45 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 179.0, whose tasks have all completed, from pool 
19/05/03 15:57:45 INFO scheduler.DAGScheduler: ResultStage 179 (print at sparkstreaming.scala:14) finished in 0.007 s
19/05/03 15:57:45 INFO scheduler.DAGScheduler: Job 89 finished: print at sparkstreaming.scala:14, took 0.015823 s
19/05/03 15:57:45 INFO scheduler.JobScheduler: Finished job streaming job 1556870265000 ms.0 from job set of time 1556870265000 ms
19/05/03 15:57:45 INFO scheduler.JobScheduler: Total delay: 0.166 s for time 1556870265000 ms (execution: 0.101 s)
19/05/03 15:57:45 INFO rdd.ShuffledRDD: Removing RDD 221 from persistence list
19/05/03 15:57:45 INFO rdd.MapPartitionsRDD: Removing RDD 220 from persistence list
19/05/03 15:57:45 INFO storage.BlockManager: Removing RDD 221
