Solving "Error: GC overhead limit exceeded"

Prerequisite: the hardware running the MR job has to be up to the task. My machine has an i7 processor and 8 GB of RAM. While joining a large table against a small table over 20,000,000 rows, the CPU looked like this:

(screenshot of CPU usage omitted)

CPU usage spiked to 99%, with memory usage around 70%.

See also "MR task exception in Eclipse":
http://blog.csdn.net/xiaoshunzi111/article/details/52882234
 
 
  
I hit this problem when running HiBench on hadoop-2.2.0; the error messages are listed below:

 

14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
14/03/07 14:00:26 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000020_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:40 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000008_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000015_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:03 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000023_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000026_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:01:35 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000019_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000007_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:00 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:23 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000021_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:33 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000029_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:38 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000010_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:41 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000018_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000014_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:47 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000028_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:50 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000002_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
14/03/07 14:02:51 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000005_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
14/03/07 14:02:55 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000006_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:57 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000027_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
14/03/07 14:03:04 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000009_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000017_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000022_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
14/03/07 14:03:10 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000001_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
14/03/07 14:03:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000024_0, Status : FAILED
 

I then added the mapred.child.java.opts parameter to mapred-site.xml (the XML tags were stripped when the original message was rendered; this is the standard Hadoop property layout):

    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
    </property>
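On YARN (Hadoop 2.x), the heap set via mapred.child.java.opts must also fit inside the container size the task requests, or the NodeManager will kill the container anyway. A sketch of the two settings paired together in mapred-site.xml; the 2048/1536 values are illustrative assumptions for an 8 GB machine, not from the original post:

```xml
<!-- mapred-site.xml: a hedged sketch; sizes are illustrative, tune to your cluster -->
<property>
    <!-- physical memory YARN reserves for each map container -->
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <!-- JVM heap for map tasks; keep it comfortably below the container size above -->
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1536m</value>
</property>
```

The rule of thumb is that -Xmx should be roughly 75-80% of the container's memory.mb, leaving headroom for JVM overhead (metaspace, thread stacks, direct buffers).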

Then another error occurred, as shown below:

 

14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/07 11:21:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000002_0, Status : FAILED
Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000004 :
       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
       |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
       |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
Container killed on request. Exit code is 143
14/03/07 11:22:02 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000001_0, Status : FAILED
Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000003 :
       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
       |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
       |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3
Container killed on request. Exit code is 143
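The dump above tells the story: the container was granted 1 GB of physical memory, which with the default yarn.nodemanager.vmem-pmem-ratio of 2.1 allows only 2.1 GB of virtual memory, yet the task JVM was launched with -Xmx2048m and its virtual usage reached 2.7 GB, so the NodeManager killed it (exit code 143). Two common workarounds in yarn-site.xml are raising the ratio or disabling the virtual-memory check; a hedged sketch (the value 4 is an illustrative assumption):

```xml
<!-- yarn-site.xml: a hedged sketch of two common workarounds; either is usually enough -->
<property>
    <!-- allow more virtual memory per MB of physical container memory (default is 2.1) -->
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>
<property>
    <!-- or skip the virtual-memory check entirely -->
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
```

Alternatively, keep the check and simply make the JVM heap consistent with the container: with a 1 GB container, an -Xmx of around 800m stays within both the physical and virtual limits. Restart the NodeManagers after changing yarn-site.xml.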
