Spark Streaming error: Failed to send RPC 6254780973500208805 to /10.11.10.10:48838: java.nio.channels.ClosedChannelException

21/04/09 06:33:44 ERROR client.TransportClient: Failed to send RPC 6254780973500208805 to /10.11.10.10:48838: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
	at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
21/04/09 06:33:44 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, vf-uat-pdc-04, 36905, None)
21/04/09 06:33:44 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
21/04/09 06:33:44 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 10.11.10.7:55572, but got no response. Marking as slave lost.
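Before changing any configuration, it helps to confirm that YARN really killed the container for exceeding its memory limit. A quick check, assuming YARN log aggregation is enabled (the application ID below is a placeholder, not from the original log):

```shell
# Pull the aggregated logs for the finished application and look for kill messages.
# application_1617900000000_0001 is a placeholder ID; use your own.
yarn logs -applicationId application_1617900000000_0001 | grep -i "killed"
```

If you see a message like "Container killed by YARN for exceeding memory limits", the memory explanation below applies.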

Cause:
1. Spark was given too little memory, so YARN killed the Spark application's containers.
2. The cluster was restarted…
Fix:
1. Add the configuration below to yarn-site.xml, then restart Hadoop (YARN).
2. Increase the memory when submitting the job.

# Method 1: disable the memory checks in yarn-site.xml
	<property>
		<name>yarn.nodemanager.pmem-check-enabled</name>
		<value>false</value>
	</property>
	<property>
		<name>yarn.nodemanager.vmem-check-enabled</name>
		<value>false</value>
	</property>
# Method 2: request more memory when submitting the job
--driver-memory 2g \
--executor-memory 2g \
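For context, a minimal spark-submit invocation showing where these flags fit. This is only a sketch: the master/deploy mode, class name, and jar path are placeholders, not from the original post.

```shell
# Sketch: class name and jar path are hypothetical placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --executor-memory 2g \
  --class com.example.MyStreamingApp \
  myapp.jar
```

Note that disabling the pmem/vmem checks (Method 1) only stops YARN from killing containers; if the job genuinely needs more memory, raising --driver-memory / --executor-memory (Method 2) is the more direct fix.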
