requirement failed: Block broadcast_487 is already present in the MemoryStore

Scenario:

A Spark SQL job that had always run without problems failed today at the office with the errors below.

The first run failed with the following error:

Caused by: java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1382 in stage 320.0 failed 4 times, most recent failure: Lost task 1382.3 in stage 320.0 (TID 165140, leaptest-dn03.bjev.com): java.io.IOException: java.lang.IllegalArgumentException: requirement failed: Block broadcast_487 is already present in the MemoryStore
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1258)
    at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:174)
    at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:65)
    at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:65)
    at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:89)
    at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:72)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:283)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: requirement failed: Block broadcast_487 is already present in the MemoryStore
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:182)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:919)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:910)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:910)
    at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:700)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1213)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:194)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1251)
    ... 12 more

Driver stacktrace:
    at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:279)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.executeOneSql(SparkExecDao.java:370)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.executeOneSqlProxy(SparkExecDao.java:341)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.sparkExecute(SparkExecDao.java:255)
    ... 82 more
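
The requirement that fails at the top of this trace is an invariant inside Spark's MemoryStore (see the putIteratorAsValues frame): a given block id may be stored at most once. When a task is retried and attempts to re-cache broadcast_487 after an earlier, partially completed put, the require trips. A minimal illustrative sketch of that invariant in Scala (simplified names, not the actual Spark source):

    import scala.collection.mutable

    // Simplified stand-in for Spark's MemoryStore, showing only the
    // uniqueness check that appears in the stack trace above.
    class SimpleMemoryStore {
      private val entries = mutable.HashMap[String, Array[AnyRef]]()

      def putIteratorAsValues(blockId: String, values: Iterator[AnyRef]): Unit = {
        // A block id may be inserted at most once; a retried put of the
        // same broadcast block fails here with exactly the message seen
        // in the log ("requirement failed: " is prepended by require).
        require(!entries.contains(blockId),
          s"Block $blockId is already present in the MemoryStore")
        entries(blockId) = values.toArray
      }
    }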

On repeated runs, a different error occasionally appeared:

Caused by: java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 342.0 failed 4 times, most recent failure: Lost task 3.3 in stage 342.0 (TID 166270, leaptest-dn03.bjev.com): java.io.FileNotFoundException: /swap/hadoop/yarn/local/usercache/hive/appcache/application_1557227972884_8104/blockmgr-927821e7-2b49-465d-86de-9d2707e2517b/03/temp_shuffle_be51c50d-54ac-4a5b-b840-59129fed41fb (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:221)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:181)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:283)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:279)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.executeOneSql(SparkExecDao.java:370)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.executeOneSqlProxy(SparkExecDao.java:341)
    at com.lenovo.lps.farseer.priest2.ext.SparkExecDao.sparkExecute(SparkExecDao.java:255)
    ... 100 more
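
This second trace points at the real problem: the shuffle writer could not create a temp file under the block manager directory on /swap because that device was full. A quick way to see which of the configured local directories has run out of space is to print their usable space, e.g. with a small Scala sketch like this (the paths are the ones from this cluster's yarn-site.xml; treat them as an example):

    import java.io.File

    // Print the usable space of each configured YARN local directory so
    // the exhausted one stands out. Paths come from this cluster's
    // yarn-site.xml and will differ on other clusters.
    object DiskSpaceCheck {
      def main(args: Array[String]): Unit = {
        val dirs = Seq("/swap/hadoop/yarn/local", "/data/hadoop/yarn/local")
        for (d <- dirs) {
          val gb = new File(d).getUsableSpace / (1024.0 * 1024 * 1024)
          println(f"$d%-30s usable: $gb%.1f GB")
        }
      }
    }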

The second exception reveals the root cause: when the Hadoop cluster was deployed, the installation engineers configured directories on the swap partition as YARN's local and log directories, i.e. these two settings in yarn-site.xml (yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs). The small swap partition presumably filled up with shuffle temp files and container logs, which explains the "No space left on device" error; the earlier broadcast error is then most likely a secondary symptom of tasks being retried after those I/O failures.

   <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/swap/hadoop/yarn/local,/data/hadoop/yarn/local</value>
   </property>

   <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/swap/hadoop/yarn/log,/data/hadoop/yarn/log</value>
   </property>

Removing the swap-partition directories from these settings and restarting YARN fixes the problem:

   <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/data/hadoop/yarn/local</value>
   </property>

   <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/data/hadoop/yarn/log</value>
   </property>
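
As a sanity check before the restart, you can load the edited file and print the effective values, for example with this rough Scala sketch (the path /etc/hadoop/conf/yarn-site.xml is an assumption about the cluster layout; adjust as needed):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    // Load only the edited yarn-site.xml (no defaults) and echo the two
    // settings, confirming the /swap directories are gone.
    object YarnDirCheck {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration(false)
        conf.addResource(new Path("file:///etc/hadoop/conf/yarn-site.xml"))
        println("yarn.nodemanager.local-dirs = " + conf.get("yarn.nodemanager.local-dirs"))
        println("yarn.nodemanager.log-dirs   = " + conf.get("yarn.nodemanager.log-dirs"))
      }
    }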
