Hadoop example jobs hang when running on YARN

Problem

In local (standalone) mode, the example jobs (pi/grep/wordcount) all run fine.

In pseudo-distributed mode, the same examples hang right away.

[root@master-node sbin]# start-dfs.sh              --start the HDFS daemons
[root@master-node sbin]# start-yarn.sh             --start the YARN daemons
[root@master-node sbin]# jps                       --check the running Java processes
12880 NameNode
13764 Jps
13431 NodeManager
13322 ResourceManager
12988 DataNode
[root@master-node sbin]# hadoop jar /opt/module/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar pi 5 5        --run the pi example
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
19/09/04 18:42:42 INFO client.RMProxy: Connecting to ResourceManager at master-node/192.168.159.10:8032
19/09/04 18:42:43 INFO input.FileInputFormat: Total input paths to process : 5
19/09/04 18:42:44 INFO mapreduce.JobSubmitter: number of splits:5
19/09/04 18:42:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567593642473_0001
19/09/04 18:42:46 INFO impl.YarnClientImpl: Submitted application application_1567593642473_0001
19/09/04 18:42:46 INFO mapreduce.Job: The url to track the job: http://master-node:8088/proxy/application_1567593642473_0001/
19/09/04 18:42:46 INFO mapreduce.Job: Running job: job_1567593642473_0001
19/09/04 18:43:03 INFO mapreduce.Job: Job job_1567593642473_0001 running in uber mode : false
19/09/04 18:43:03 INFO mapreduce.Job:  map 0% reduce 0%

The job hangs at this point. Check disk usage, then inspect yarn-root-nodemanager-master-node.log for WARN/ERROR messages.

[root@master-node logs]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda5        17G  8.1G  7.7G  52% /
tmpfs           931M     0  931M   0% /dev/shm
/dev/sda1       194M   34M  151M  19% /boot
/dev/sda2       2.0G   68M  1.9G   4% /home
[root@master-node logs]# cat yarn-root-nodemanager-master-node.log | grep WARN
2019-09-04 18:40:44,573 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
2019-09-04 18:40:44,786 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (1.8 G). Thrashing might happen.
2019-09-04 18:45:12,623 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000001 is : 137
2019-09-04 18:47:58,519 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000002 is : 65
2019-09-04 18:47:58,647 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000005 is : 65
2019-09-04 18:47:58,648 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000004 is : 65
2019-09-04 18:47:58,804 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000006 is : 65
2019-09-04 18:47:58,805 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1567593642473_0001_01_000003 is : 65
2019-09-04 18:47:59,425 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1567593642473_0001_01_000005 and exit code: 65
2019-09-04 18:47:59,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1567593642473_0001_01_000004 and exit code: 65
2019-09-04 18:47:59,425 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1567593642473_0001_01_000003 and exit code: 65
2019-09-04 18:47:59,426 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1567593642473_0001_01_000002 and exit code: 65
2019-09-04 18:47:59,425 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1567593642473_0001_01_000006 and exit code: 65
2019-09-04 18:48:02,010 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 65
2019-09-04 18:48:02,016 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 65
2019-09-04 18:48:02,019 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 65
2019-09-04 18:48:02,019 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 65
2019-09-04 18:48:02,071 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 65
2019-09-04 18:48:07,304 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000001
2019-09-04 18:48:27,426 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000006
2019-09-04 18:48:27,427 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000004
2019-09-04 18:48:27,427 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000002
2019-09-04 18:48:27,428 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000003
2019-09-04 18:48:27,429 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed  TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1567593642473_0001    CONTAINERID=container_1567593642473_0001_01_000005

[root@master-node logs]# cat yarn-root-resourcemanager-master-node.log |grep WARN
2019-09-05 10:03:16,373 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2019-09-05 10:03:16,376 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start

From the logs, there are mainly four kinds of warning messages:

1. 2019-09-04 18:40:44,573 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.

2. 2019-09-04 18:40:44,786 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (1.8 G). Thrashing might happen. (see the yarn-site.xml sketch after this list)

3. ...exit code 65

Are you using a single-node cluster? In such a case, if you feed in a large file and there is not enough memory, the container may fail to initialize. Verify the Pig configuration files as well.

4. maximum-am-resource-percent is likely set too low

2019-09-05 10:03:16,373 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
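
Warning 2 says the NodeManager believes it can hand out 8 GB to containers (8192 MB is the default for yarn.nodemanager.resource.memory-mb) on a VM with only about 1.8 GB of RAM. A yarn-site.xml sketch in the right direction is shown below; the 1536 MB value is purely an illustrative assumption, and on a node this small the ApplicationMaster and task memory settings would also need to be shrunk to fit.

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1536</value>
    <!-- assumed value: keep total container memory below this VM's ~1.8 GB of physical RAM -->
  </property>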

Cause analysis

These are the figures involved when YARN kills a container for running beyond its virtual memory limit:
1.0 GB: the physical memory actually used by the task
1 GB: the default value of mapreduce.map.memory.mb
2.8 GB: the virtual memory used by the process
2.1 GB: mapreduce.map.memory.mb multiplied by yarn.nodemanager.vmem-pmem-ratio

Here yarn.nodemanager.vmem-pmem-ratio is the allowed ratio of virtual memory to physical memory per container; it is set in yarn-site.xml and defaults to 2.1.

Clearly the container was using 2.8 GB of virtual memory while it was only allowed 2.1 GB, so YARN killed the container.

The figures above come from a map task; the same can happen in a reduce task, in which case the limit is mapreduce.reduce.memory.mb * yarn.nodemanager.vmem-pmem-ratio.
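
As a quick sanity check with the default values (1024 MB for mapreduce.map.memory.mb, ratio 2.1):

virtual memory limit per map container = 1024 MB * 2.1 = 2150.4 MB ≈ 2.1 GB
2.8 GB used > 2.1 GB allowed, so the container is killed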

Note:
Physical memory: the actual hardware (RAM).
Virtual memory: a block of logical memory backed by disk space; the disk space used for it is called swap space. (It is a strategy for coping with insufficient physical memory.)
When physical memory runs low, Linux falls back on the swap partition: the kernel writes memory pages that are not currently needed out to swap, freeing physical memory for other uses; when the original contents are needed again, they are read back from swap into physical memory.

Solutions

1. Disable the virtual memory check (not recommended):
Set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml (or in the job configuration):

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description>Whether virtual memory limits will be enforced for containers.</description>
  </property>

Besides the virtual memory limit, the physical memory limit may also be exceeded; the physical memory check can likewise be turned off by setting yarn.nodemanager.pmem-check-enabled to false.
Personally I don't think this approach is a good one: if the program has a memory leak, disabling the checks could end up bringing down the whole cluster.
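
For completeness, a sketch of the corresponding property for the physical memory check mentioned above (the same caveat applies):

  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
    <description>Whether physical memory limits will be enforced for containers.</description>
  </property>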

2. Increase mapreduce.map.memory.mb or mapreduce.reduce.memory.mb (recommended)
In my view this should be the first option to consider: it not only raises the virtual memory limit, it also covers what is probably the more common case, where physical memory itself is simply not enough. A sketch follows.
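
A minimal mapred-site.xml sketch along these lines (the 2048 MB values are an assumption for illustration; size them to your data and make sure they still fit under the NodeManager's total container memory):

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>

With the default ratio of 2.1 this raises the virtual memory limit to about 4.3 GB per container, comfortably above the 2.8 GB seen in the error.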

3. Increase yarn.nodemanager.vmem-pmem-ratio somewhat, so that each unit of physical memory is allowed more virtual memory; just don't push this parameter to an unreasonable value. An example is sketched below.
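
For instance, raising the ratio in yarn-site.xml (the value 3 is an assumed example; with the default 1024 MB map container it allows about 3 GB of virtual memory, enough to cover the 2.8 GB above):

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>3</value>
    <!-- virtual-to-physical memory ratio allowed per container; the default is 2.1 -->
  </property>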

4. If the task's memory usage is wildly out of proportion, what deserves more attention is whether the program has a memory leak or the data is skewed; fix such problems in the program first.
