Hadoop Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded

How to resolve a classic Hadoop error message.

The error message:
17/12/08 10:08:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
The full command and stack trace:
[wor@ha-hadoop06-prd ~]$ hadoop dfs -ls /log/shu_hai/2017-12-06-04*
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/12/08 10:08:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
        at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:830)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:849)
        at org.apache.hadoop.fs.Globber.listStatus(Globber.java:69)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:217)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
The stack trace shows the failure happens on the client side, while the shell expands the glob and builds the file listing in memory. My first suspicion was that the server itself was short on memory, so I checked with the command below, but that turned out not to be the cause:
[wor@ha-hadoop06-prd ~]$ cat /proc/meminfo
MemTotal:       132042876 kB
MemFree:        104218204 kB
Buffers:          364980 kB
Cached:          7875876 kB
SwapCached:            0 kB
Active:         16876228 kB
Inactive:        6703604 kB
Active(anon):   10954008 kB
Inactive(anon):  4390920 kB
Active(file):    5922220 kB
Inactive(file):  2312684 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      32997368 kB
SwapFree:       32997368 kB
Dirty:               624 kB

Since the machine has plenty of free memory, the OutOfMemoryError must come from the heap limit of the client JVM itself. A search (via Baidu) pointed at the cause: the Hadoop command-line client runs with a default heap of only 512 MB, which is too small to hold this listing, and it needs to be increased in hadoop-env.sh. The configuration before and after the change is sketched below.
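A minimal sketch of the change, assuming a stock Hadoop 2.x hadoop-env.sh under $HADOOP_CONF_DIR where the client heap is set through HADOOP_CLIENT_OPTS; the 4096m value is only an example and should be sized for your own cluster:

# In $HADOOP_CONF_DIR/hadoop-env.sh -- before the change (the Hadoop 2.x default):
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"

# After the change -- raise the client heap (4096m is just an example value):
export HADOOP_CLIENT_OPTS="-Xmx4096m $HADOOP_CLIENT_OPTS"

Note that HADOOP_CLIENT_OPTS only affects client-side commands such as the hadoop/hdfs shell; the daemons (NameNode, DataNode, etc.) keep their own *_OPTS heap settings.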
After that change the listing ran without errors; working out the finer details of why will take deeper study.
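For a quick test, or when editing hadoop-env.sh is not an option, the same heap bump can usually be applied to a single invocation: with the stock setting above, the value from the environment is appended after -Xmx512m and the JVM honours the last -Xmx it sees. Something like:

[wor@ha-hadoop06-prd ~]$ HADOOP_CLIENT_OPTS="-Xmx4096m" hdfs dfs -ls /log/shu_hai/2017-12-06-04*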

References:

http://blog.csdn.net/memray/article/details/14453961

http://blog.sina.com.cn/s/blog_81e6c30b0101b4ix.html

