Hadoop Operations Notes, Part 6

Over the past couple of days a business team kept reporting that a Hive job over one particular day's logs was failing. Looking at the logs, the errors varied, but they were basically all OOMs. After chasing it for an afternoon I finally tracked down the cause, so I'm writing it up here.


The problem was odd: the OOM happened in the map phase. In theory, each map task processes a single data block of at most 128 MB, so how could it blow the heap? Normally it's the reduce side that runs out of memory. Let's start with the error logs reported by the TaskTracker on each Hadoop node.

Node 1

2013-05-08 20:59:06,467 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
    at sun.nio.cs.StreamDecoder.<init>(StreamDecoder.java:231)
    at sun.nio.cs.StreamDecoder.<init>(StreamDecoder.java:211)
    at sun.nio.cs.StreamDecoder.forInputStreamReader(StreamDecoder.java:50)
    at java.io.InputStreamReader.<init>(InputStreamReader.java:57)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:211)
    at org.apache.hadoop.util.Shell.run(Shell.java:182)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
    at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:710)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:443)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:426)
    at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:267)
    at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:260)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)


Node 2

2013-05-08 20:56:37,012 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.OutOfMemoryError: Java heap space
    at java.io.BufferedReader.<init>(BufferedReader.java:80)
    at java.io.BufferedReader.<init>(BufferedReader.java:91)
    at org.apache.hadoop.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:396)
    at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:151)
    at org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
    at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:808)
    at org.apache.hadoop.mapred.Task.updateCounters(Task.java:830)
    at org.apache.hadoop.mapred.Task.access$600(Task.java:66)
    at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:666)
    at java.lang.Thread.run(Thread.java:662)


Node 3

2013-05-08 21:02:26,489 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at com.sun.org.apache.xerces.internal.xni.XMLString.toString(XMLString.java:185)
    at com.sun.org.apache.xerces.internal.parsers.AbstractDOMParser.characters(AbstractDOMParser.java:1185)
    at com.sun.org.apache.xerces.internal.xinclude.XIncludeHandler.characters(XIncludeHandler.java:1085)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:464)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
    at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:232)
    at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1119)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1063)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:416)
    at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:521)
    at org.apache.hadoop.io.nativeio.NativeIO.ensureInitialized(NativeIO.java:120)
    at org.apache.hadoop.io.nativeio.NativeIO.getOwner(NativeIO.java:103)
    at org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:116)
    at org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:191)
    at org.apache.hadoop.mapred.TaskLogsTruncater.getAllLogsFileDetails(TaskLogsTruncater.java:342)
    at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:134)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:260)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)


Each node reported a different error, which is a real headache. The log below, however, is where all roads converge: every node eventually reported this seemingly irrelevant file error.

2013-05-08 20:58:47,568 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException: EEXIST: File exists
2013-05-08 20:58:47,569 WARN org.apache.hadoop.mapred.Child: Error running child
org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException: EEXIST: File exists
    at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:167)
    at org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:312)
    at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:385)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:257)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: EEXIST: File exists
    at org.apache.hadoop.io.nativeio.NativeIO.open(Native Method)
    at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:161)
    ... 7 more


That last log really is where all roads converge. After a map attempt OOMs, the TaskTracker restarts it, and the new attempt tries to create the very same TaskLog index file again. The previous attempt had already created that file and then died of the OOM without closing the handle cleanly, so the file is still there, and because the create is done with O_EXCL the retry fails with a "file exists" exception. The relevant source is as follows:


/**
 * Open the specified File for write access, ensuring that it does not exist.
 * @param f the file that we want to create
 * @param permissions we want to have on the file (if security is enabled)
 *
 * @throws AlreadyExistsException if the file already exists
 * @throws IOException if any other error occurred
 */
public static FileOutputStream createForWrite(File f, int permissions)
    throws IOException {
  if (skipSecurity) {
    return insecureCreateForWrite(f, permissions);
  } else {
    // Use the native wrapper around open(2)
    try {
      FileDescriptor fd = NativeIO.open(f.getAbsolutePath(),
        NativeIO.O_WRONLY | NativeIO.O_CREAT | NativeIO.O_EXCL,
        permissions);
      return new FileOutputStream(fd);
    } catch (NativeIOException nioe) {
      if (nioe.getErrno() == Errno.EEXIST) {
        throw new AlreadyExistsException(nioe);
      }
      throw nioe;
    }
  }
}




Debugging this one took some work. The map phase doesn't normally run out of memory; when it does, the most likely place is the sort/spill stage, i.e. the quicksort done inside the circular buffer controlled by io.sort.mb. But that buffer was already set to a generous 600 MB, and mapred.map.child.java.opts was set to 1.5 GB, which should also be plenty. And although each node's OOM looked different, they had one thing in common: on nodes one and two the heap blew up while allocating buffers to read a stream.
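For reference, those two settings were roughly of this shape. They are shown here as Hive session overrides, which is just one way to apply them; the values are the ones quoted above, nothing else is implied about the cluster's actual configuration.

-- Per-job memory settings expressed as Hive session overrides (one possible way to apply them).
-- io.sort.mb controls the circular in-memory sort buffer used during the map-side sort/spill.
SET io.sort.mb=600;
-- mapred.map.child.java.opts sets the heap of each map task JVM (MRv1 parameter name).
SET mapred.map.child.java.opts=-Xmx1536m;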


Going back to his HiveQL, it was very simple: a count(distinct uid) plus a group by on a few columns, nothing else. So the problem most likely had to do with uid. His query targeted one specific day in April. Picking a day in March worked fine; picking a day in May also worked fine, and May's logs are certainly bigger than April's. That ruled out log size as the cause: this day was only about 100 GB of logs, and plenty of days are bigger. The map-phase OOM definitely wasn't triggered by the logs being too large.
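The failing query was roughly of the following shape. Only count(distinct uid) and the group by structure come from the description above; access_log, the grouped columns and the partition value are placeholders, not the team's real schema.

-- Hypothetical reconstruction of the failing HQL's shape (all names are placeholders).
SELECT channel, platform, count(DISTINCT uid) AS uv
FROM access_log
WHERE dt = '2013-04-XX'          -- the problematic April day (exact date not given)
GROUP BY channel, platform;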


Continuing to narrow it down, here comes the climax: the day before that day worked, and so did the day after, so the problem had to be in that one day's logs. A distinct length(uid) query showed one uid that was 71 bytes long, while a normal uid should always be 32 bytes. After excluding the 71-byte uid, i.e. adding where length(uid) != 71 and running the same group by as the failing HQL, the job completed normally at once, while the original failing HQL still failed. That was the culprit.
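The two checks looked roughly like this, again with the placeholder names from the sketch above:

-- Step 1: look at the distribution of uid lengths for the problematic day.
SELECT DISTINCT length(uid)
FROM access_log
WHERE dt = '2013-04-XX';

-- Step 2: re-run the same aggregation with the 71-byte uid excluded; this version finished normally.
SELECT channel, platform, count(DISTINCT uid) AS uv
FROM access_log
WHERE dt = '2013-04-XX'
  AND length(uid) != 71
GROUP BY channel, platform;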


I wrote a separate select * from xxx where length(uid) = 71 to pull out that uid and found it contained special characters; this single uid was the troublemaker. Most likely a TCP error occurred while nginx was receiving the log, and the data-cleaning step failed to scrub it out. Because of those special characters, when Hadoop ran the local map task the data stream contained what may have been EOF or NOP-like bytes, which BufferedReader could not read and StreamDecoder could not decode, and which did not match what the InputFormat declared in the job.xml (copied into mapred.local.dir for local task execution) expected. In plain terms, the task simply could not process the record normally, and it blew up with an OOM.
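If you want to see the offending bytes without leaving Hive, something like the following works; it's a sketch with the same placeholder names as above, using Hive's hex() UDF to dump the string's bytes in hexadecimal.

-- Pull out the malformed uid and show its raw bytes in hex for inspection.
SELECT uid, hex(uid), length(uid)
FROM access_log
WHERE dt = '2013-04-XX'
  AND length(uid) = 71;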


This kind of problem is infuriating, but it's probably also quite common; log cleaning can never be 100% guaranteed. A single log line with special characters cost me a whole afternoon. With 100+ GB of logs per day you can't eyeball them, and running regexes with awk is even more hopeless, so you have to find a workaround. The problem showed up through Hive, and in the end it was Hive that dug it out. Operating a Hadoop cluster isn't just about keeping the Hadoop processes stable; you also have to understand how Hadoop works internally, and understand the business. Fortunately I was writing map/reduce jobs before taking over cluster operations, so I have a bit of a programming background and some knowledge of the business, which makes bugs like this easier to analyze. Still, I have to gripe: special characters are a huge trap. I'll have to ask the team to add some filtering conditions to the cleaning program.
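One possible shape for that filter, assuming a well-formed uid is a 32-character hex string; the table and column names below are placeholders, not the actual cleaning pipeline.

-- Sketch of a cleaning-step guard: keep only rows whose uid matches the expected 32-char hex format.
-- access_log_raw / access_log_clean and the column list are placeholder names.
INSERT OVERWRITE TABLE access_log_clean
SELECT uid, channel, platform, dt
FROM access_log_raw
WHERE uid RLIKE '^[0-9a-fA-F]{32}$';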


One bad log line out of hundreds of millions: the textbook case of a single rat dropping fouling an entire sea.


On another note, the blockquote tag in 51cto's new editor doesn't seem to work properly: I put the logs inside blockquotes, but it doesn't appear to have done any formatting at all. Hoping that gets fixed.


