Spark runtime error: ERROR CoarseGrainedExecutorBackend: Driver disassociated! Shutting down

The error log:

INFO MemoryStore: Will not store rdd_4_5 as it would require dropping another block from the same RDD
WARN MemoryStore: Not enough space to cache rdd_4_5 in memory! (computed 68.8 MB so far)
INFO MemoryStore: Memory use = 302.6 KB (blocks) + 123.7 MB (scratch space shared across 3 tasks(s)) = 124.0 MB. Storage limit = 134.6 MB.
INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 2126 bytes result sent to driver
INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 2126 bytes result sent to driver
INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 2126 bytes result sent to driver
INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
INFO MemoryStore: MemoryStore cleared
INFO BlockManager: BlockManager stopped
WARN CoarseGrainedExecutorBackend: An unknown (leen:45970) driver disconnected.
INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.

ERROR CoarseGrainedExecutorBackend: Driver 192.168.230.135:45970 disassociated! Shutting down.
INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.


This error is fairly cryptic: the message alone does not reveal the cause, but at root it is a memory problem. There are two ways to resolve it:

1. Increase executor-memory and reduce executor-cores. With more heap per executor and fewer tasks competing for it, the problem goes away.
2. Increase the executor memory overhead (spark.yarn.executor.memoryOverhead on YARN; spark.executor.memoryOverhead since Spark 2.3). This can mask the symptom but does not address the root cause, so if the cluster has the resources, prefer option 1.
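As a concrete sketch, option 1 maps to the spark-submit flags below. The memory/core values, the application class, and the jar name are all illustrative; tune them to your cluster's capacity.

```shell
# Option 1: larger heap per executor, fewer concurrent tasks per executor,
# so each running task gets a bigger share of execution/storage memory.
# com.example.MyApp and my-app.jar are placeholders for your application.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --executor-cores 2 \
  --conf spark.yarn.executor.memoryOverhead=512 \
  --class com.example.MyApp \
  my-app.jar
```

Equivalently, executor-memory and executor-cores can be set as spark.executor.memory and spark.executor.cores in spark-defaults.conf.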
