Hive Exception Notes

1. HIVE MapJoin Exception

Abstract: Hive is widely used, and all kinds of strange problems come up in practice. This post discusses the MapJoin local-task out-of-memory problem: the problem description, how MapJoin works, why the problem occurs, and the available solutions, followed by some further thoughts on the issue. Hopefully it helps anyone troubleshooting this class of problem.

Problem Description

When running Hive jobs, you may occasionally hit the following exception: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask. The logs show that this is a map join problem; you will see lines such as "Starting to launch local task to process map join; maximum memory = xxx" and "Execution failed with exit status: 3". Searching online turns up some discussion of the problem, for example this Stack Overflow question: http://stackoverflow.com/questions/22977790/hive-query-execution-error-return-code-3-from-mapredlocaltask

Solutions Suggested by Search Results

    1. set hive.auto.convert.join = false; — disable map join entirely
    2. Lower hive.smalltable.filesize; the default is 25000000 (in version 2.0.0)
    3. Raise hive.mapjoin.localtask.max.memory.usage to 0.999
    4. set hive.ignore.mapjoin.hint=false; — stop ignoring mapjoin hints

Principle and Problem Analysis

The MapJoin principle is explained well elsewhere. The failure occurs in MapredLocalTask: a Driver process is started locally on the client, which scans the small table and converts it into a HashTable data structure. During this process, the memory check (checkMemoryStatus) throws the exception. Here is the check:

    double percentage = (double) usedMemory / (double) maxHeapSize;
    String msg = Utilities.now() + "\tProcessing rows:\t" + numRows + "\tHashtable size:\t"
        + tableContainerSize + "\tMemory usage:\t" + usedMemory + "\tpercentage:\t" + percentageNumberFormat.format(percentage);
    console.printInfo(msg);
    if(percentage > maxMemoryUsage) {
      throw new MapJoinMemoryExhaustionException(msg);
    }

The check depends on the current process's max heap (maxHeapSize), its used memory (usedMemory), and the maxMemoryUsage parameter (hive.mapjoin.localtask.max.memory.usage). Comparing these against the suggested solutions: options 1 and 4 disable map join entirely, so MapredLocalTask never starts, the check never runs, and the problem disappears. Option 2 shrinks the joined table, and hence usedMemory, which also avoids the problem. Option 3 raises maxMemoryUsage so the heap is used more fully, which also works. Note, however, that none of these addresses the maxHeapSize parameter.
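The effect of each knob on the check can be made concrete with a small standalone sketch (an assumption-laden reimplementation of just the comparison, not Hive's actual MapJoinMemoryExhaustionHandler class; the numbers are illustrative):

```java
// Minimal sketch of the checkMemoryStatus comparison.
public class MapJoinMemoryCheck {
    // Mirrors: throw MapJoinMemoryExhaustionException when
    // usedMemory / maxHeapSize > hive.mapjoin.localtask.max.memory.usage
    static boolean wouldFail(long usedMemory, long maxHeapSize, double maxMemoryUsage) {
        double percentage = (double) usedMemory / (double) maxHeapSize;
        return percentage > maxMemoryUsage;
    }

    public static void main(String[] args) {
        long usedMb = 920, heapMb = 1000;
        System.out.println(wouldFail(usedMb, heapMb, 0.90));   // 0.92 > 0.90: check trips
        System.out.println(wouldFail(usedMb, heapMb, 0.999));  // option 3: raise the threshold
        System.out.println(wouldFail(usedMb, 1408, 0.90));     // larger heap: same usage now passes
    }
}
```

The last line previews the extra solution discussed below: with the same usedMemory, a larger maxHeapSize lowers the ratio below the threshold.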

An Additional Solution: Raising the MapredLocalTask JVM Heap

A solution should also avoid hurting performance. Raising the JVM startup parameters of MapredLocalTask increases maxHeapSize, which likewise solves the problem. How do we raise this parameter? Looking at the MapredLocalTask code:

      jarCmd = hiveJar + " " + ExecDriver.class.getName();
      String hiveConfArgs = ExecDriver.generateCmdLine(conf, ctx);
      String cmdLine = hadoopExec + " jar " + jarCmd + " -localtask -plan " + planPath.toString()
          + " " + isSilent + " " + hiveConfArgs;
      ...
      Map<String, String> variables = new HashMap<String, String>(System.getenv());
      ...
      // Run ExecDriver in another JVM
      executor = Runtime.getRuntime().exec(cmdLine, env, new File(workDir));

The new ExecDriver is launched via hadoop jar, and the child process inherits the parent's environment variables (with some of them overridden inside this logic). So where does a JVM started by hadoop jar get its memory settings? If nothing is set explicitly, Hadoop's own scripts decide: if HADOOP_HEAPSIZE is set, the JVM gets -Xmx${HADOOP_HEAPSIZE}m; otherwise the JAVA_HEAP_MAX=-Xmx1000m configured in /usr/lib/hadoop-current/libexec/hadoop-config.sh applies. Sound familiar? Processes started with hadoop jar typically run with -Xmx1000m, and if you look closely, the ExecDriver process uses this same value. Knowing this, you can raise the value in /usr/lib/hadoop-current/libexec/hadoop-config.sh, for example setting JAVA_HEAP_MAX=-Xmx1408m, to fix the problem.
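The heap-selection rule above can be sketched as follows (an assumption: this is a simplified Java reimplementation of the shell logic in hadoop-config.sh for illustration, not the script itself):

```java
// Sketch of how the local task's -Xmx is chosen when launched via `hadoop jar`.
public class HeapSelection {
    static String javaHeapMax(String hadoopHeapsizeMb) {
        if (hadoopHeapsizeMb != null && !hadoopHeapsizeMb.isEmpty()) {
            return "-Xmx" + hadoopHeapsizeMb + "m";  // HADOOP_HEAPSIZE takes precedence
        }
        return "-Xmx1000m";  // default JAVA_HEAP_MAX in hadoop-config.sh
    }

    public static void main(String[] args) {
        System.out.println(javaHeapMax(null));    // -Xmx1000m
        System.out.println(javaHeapMax("1408"));  // -Xmx1408m
    }
}
```

So either exporting HADOOP_HEAPSIZE or editing JAVA_HEAP_MAX in the script raises the local task's heap.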

Research and Further Thoughts

Looking at the checkMemoryStatus code, the comparison logic is questionable: hitting the usage threshold does not necessarily mean memory is exhausted, because GC can still reclaim space. The exception should only be thrown if usage still exceeds the threshold after a GC. Based on this analysis, an issue with some ideas and a patch was filed on the Hive JIRA. If you are interested in discussing it, see HIVE-15221.
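The idea can be sketched like this (an assumption: a simplified illustration of the GC-aware re-check, not the actual HIVE-15221 patch):

```java
// Fail only if usage still exceeds the threshold after giving GC a chance to run.
public class GcAwareCheck {
    static double usedFraction(Runtime rt) {
        return (double) (rt.totalMemory() - rt.freeMemory()) / rt.maxMemory();
    }

    static boolean memoryExhausted(Runtime rt, double maxMemoryUsage) {
        if (usedFraction(rt) <= maxMemoryUsage) {
            return false;  // under the threshold: nothing to do
        }
        rt.gc();           // request a collection, then re-check
        return usedFraction(rt) > maxMemoryUsage;
    }

    public static void main(String[] args) {
        // On a freshly started JVM this should report false.
        System.out.println(memoryExhausted(Runtime.getRuntime(), 0.999));
    }
}
```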


2. Error during job, obtaining debugging information, and beyond physical memory limits

MapReduce Total cumulative CPU time: 0 days 1 hours 41 minutes 34 seconds 890 msec

Ended Job = job_1519262673035_0490 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1519262673035_0490_m_000064 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000104 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000040 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000098 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000066 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000127 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000024 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000111 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000116 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000058 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000008 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000014 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000125 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000156 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000132 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000169 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000167 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_m_000168 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000067 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000087 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000013 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000032 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000055 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000045 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000051 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000155 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000111 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000117 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000124 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000131 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000132 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000173 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000166 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000182 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000002 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000149 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000004 (and more) from job job_1519262673035_0490
Examining task ID: task_1519262673035_0490_r_000100 (and more) from job job_1519262673035_0490


Task with the most failures(4):
-----
Task ID:
  task_1519262673035_0490_r_000042


URL:
  http://ICC-FIS-HADOOP-S1:8088/taskdetails.jsp?jobid=job_1519262673035_0490&tipid=task_1519262673035_0490_r_000042
-----
Diagnostic Messages for this Task:
Container [pid=118755,containerID=container_1519262673035_0490_01_000412] is running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical memory used; 2.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1519262673035_0490_01_000412 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 118755 118753 118755 118755 (bash) 0 0 108650496 312 /bin/bash -c /usr/java/jdk1.8.0_111/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.net.preferIPv4Stack=true -Xmx820m -Djava.io.tmpdir=/data/cdh/yarn/nm/usercache/root/appcache/application_1519262673035_0490/container_1519262673035_0490_01_000412/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data/yarn/container-logs/application_1519262673035_0490/container_1519262673035_0490_01_000412 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 192.168.0.37 53235 attempt_1519262673035_0490_r_000042_3 412 1>/data/yarn/container-logs/application_1519262673035_0490/container_1519262673035_0490_01_000412/stdout 2>/data/yarn/container-logs/application_1519262673035_0490/container_1519262673035_0490_01_000412/stderr
        |- 118789 118755 118755 118755 (java) 1924 93 2860195840 275812 /usr/java/jdk1.8.0_111/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx820m -Djava.io.tmpdir=/data/cdh/yarn/nm/usercache/root/appcache/application_1519262673035_0490/container_1519262673035_0490_01_000412/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data/yarn/container-logs/application_1519262673035_0490/container_1519262673035_0490_01_000412 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 192.168.0.37 53235 attempt_1519262673035_0490_r_000042_3 412


Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143




FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 8  Reduce: 27   Cumulative CPU: 183.28 sec   HDFS Read: 1790840004 HDFS Write: 492703 SUCCESS
Stage-Stage-2: Map: 8  Reduce: 15   Cumulative CPU: 309.33 sec   HDFS Read: 950007223 HDFS Write: 675072676 SUCCESS
Stage-Stage-3: Map: 16  Reduce: 11   Cumulative CPU: 573.67 sec   HDFS Read: 675224947 HDFS Write: 685779920 SUCCESS
Stage-Stage-4: Map: 31  Reduce: 124   Cumulative CPU: 2929.47 sec   HDFS Read: 8289071173 HDFS Write: 9167219076 SUCCESS
Stage-Stage-5: Map: 135  Reduce: 137   Cumulative CPU: 5936.56 sec   HDFS Read: 9168741806 HDFS Write: 9834645116 SUCCESS
Stage-Stage-6: Map: 42  Reduce: 170   Cumulative CPU: 4886.68 sec   HDFS Read: 11355167413 HDFS Write: 13183387340 SUCCESS
Stage-Stage-7: Map: 171  Reduce: 197   Cumulative CPU: 6094.89 sec   HDFS Read: 13185924915 HDFS Write: 1261746521 FAIL
Total MapReduce CPU Time Spent: 0 days 5 hours 48 minutes 33 seconds 880 msec


Cause: insufficient memory — the container exceeded its physical memory limit.

Solution:

Add the following before running the Hive statement:

set mapreduce.map.memory.mb=1025;    -- any value above 1024 makes the allocated container twice the default, i.e. 2048 MB
set mapreduce.reduce.memory.mb=1025;
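The reason a request of 1025 MB yields a 2048 MB container is (an assumption about the scheduler's behavior, sketched here for illustration) that YARN rounds each container request up to a multiple of yarn.scheduler.minimum-allocation-mb, commonly 1024 MB:

```java
// Sketch of YARN's container-size rounding (not Hadoop source).
public class ContainerRounding {
    static int roundUp(int requestMb, int minAllocationMb) {
        return ((requestMb + minAllocationMb - 1) / minAllocationMb) * minAllocationMb;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(1025, 1024));  // 2048
        System.out.println(roundUp(1024, 1024));  // 1024
    }
}
```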
