Spark Streaming: troubleshooting the error caused by multiple threads using one cached KafkaConsumer object

Symptoms

[INFO ] 2020-06-28 23:23:22,092 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Removing CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0) from cache
[INFO ] 2020-06-28 23:23:22,092 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)
[INFO ] 2020-06-28 23:23:22,095 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Initial fetch for spark-executor-GPBPAnalysis-group-prodtest gshopper_logs 0 1804944
[WARN ] 2020-06-28 23:23:22,096 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Putting block rdd_722699_5 failed due to an exception
[WARN ] 2020-06-28 23:23:22,096 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Block rdd_722699_5 could not be removed as it was not found on disk or in memory
[WARN ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Putting block rdd_722700_5 failed due to an exception
[INFO ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Removed CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0) from cache
[WARN ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Block rdd_722700_5 could not be removed as it was not found on disk or in memory
[INFO ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)
[ERROR] 2020-06-28 23:23:22,098 method:org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
Exception in task 1007.2 in stage 243931.0 (TID 61808749)
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
        at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2286)
        at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2270)
        at org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1543)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.seek(CachedKafkaConsumer.scala:95)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:69)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:223)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:189)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:215)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1038)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
        at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1055)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
        at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.sc

From the log above, "Block rdd_722700_5 could not be removed as it was not found on disk or in memory" indicates that caching the RDD block failed.

"Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)" indicates that the lookup of the KafkaConsumer in the executor-side consumer cache failed.

Inference: because the cached KafkaConsumer lookup failed and some job failed and was retried, a new KafkaConsumer was created for the same partition while another task was still using one, which triggered "java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access".

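For context on why concurrent access fails immediately instead of silently corrupting state: KafkaConsumer guards every public method (poll, seek, ...) with a lightweight thread-ownership check rather than a lock. Below is a simplified Scala sketch of that guard, modeled on KafkaConsumer.acquire; it is not the actual source (the real implementation also keeps a reentrancy refcount):

    import java.util.ConcurrentModificationException
    import java.util.concurrent.atomic.AtomicLong

    object ConsumerGuardSketch {
      private val NoCurrentThread = -1L
      // Id of the thread currently inside the consumer, or -1 when idle.
      private val currentThread = new AtomicLong(NoCurrentThread)

      // Called at the top of every public consumer method (seek, poll, ...).
      def acquire(): Unit = {
        val threadId = Thread.currentThread().getId
        if (threadId != currentThread.get() &&
            !currentThread.compareAndSet(NoCurrentThread, threadId))
          throw new ConcurrentModificationException(
            "KafkaConsumer is not safe for multi-threaded access")
      }

      // Called on the way out, releasing ownership.
      def release(): Unit = currentThread.set(NoCurrentThread)
    }

So two Spark tasks that end up holding the same CachedKafkaConsumer and both call seek() produce exactly the stack trace above.
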
Possible scenarios:

First, because the cache was punctured (the consumer entry was evicted), any job that re-executes will re-create a KafkaConsumer for the same partition.

1. Speculative execution (spark.speculation, disabled by default) launches a second copy of a slow task, which reads the same partition concurrently.

2. A job fails and is re-executed from the beginning.

3. An RDD is reused (acted on more than once) inside foreachRDD, as illustrated in the sketch after this list.

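A minimal Scala sketch of scenario 3, assuming a direct stream of String records (the object and method names are illustrative, not from the original job). Two actions on the same RDD mean each KafkaRDD partition is computed twice; if the cache() put fails, as in the "Putting block ... failed" warnings above, the second action re-reads the partition from Kafka and re-acquires the cached consumer:

    import org.apache.kafka.clients.consumer.ConsumerRecord
    import org.apache.spark.streaming.dstream.InputDStream

    object ForeachRddReuse {
      // `stream` is assumed to come from KafkaUtils.createDirectStream.
      def process(stream: InputDStream[ConsumerRecord[String, String]]): Unit = {
        stream.foreachRDD { rdd =>
          val values = rdd.map(_.value())
          values.cache()                  // this put is what fails in the warnings above
          println(values.count())         // action 1: computes each KafkaRDD partition
          values.foreach(v => println(v)) // action 2: on a cache miss, recomputes the same
                                          // partition and re-acquires the cached consumer
          values.unpersist()
        }
      }
    }
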
Solution:

1. Enable checkpointing to cut the RDD lineage (see the sketch below).

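A minimal sketch of the fix; the broker address, checkpoint directory, and intervals are illustrative (the checkpoint interval must be a multiple of the batch interval). Checkpointing a derived DStream materializes its RDDs to reliable storage and truncates the lineage, so a retried task recovers from the checkpoint instead of recomputing the KafkaRDD and re-acquiring the cached consumer:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    object CheckpointedJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("GPBPAnalysis")
        val ssc  = new StreamingContext(conf, Seconds(10))

        // Checkpoint directory; use HDFS or S3 in production.
        ssc.checkpoint("hdfs:///checkpoint/gpbp-analysis")

        val kafkaParams = Map[String, Object](
          "bootstrap.servers"  -> "broker:9092", // illustrative
          "key.deserializer"   -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id"           -> "GPBPAnalysis-group-prodtest")

        val stream = KafkaUtils.createDirectStream[String, String](
          ssc, PreferConsistent,
          Subscribe[String, String](Seq("gshopper_logs"), kafkaParams))

        // Checkpointing the derived DStream cuts its lineage at the checkpoint,
        // so retries no longer reach back into the KafkaRDD.
        val parsed = stream.map(_.value())
        parsed.checkpoint(Seconds(50))

        parsed.foreachRDD(rdd => println(rdd.count()))
        ssc.start()
        ssc.awaitTermination()
      }
    }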