Two situations where a Hive MapReduce job gets stuck

Situation 1:

In order to change the average load for a reducer (in bytes):

  set hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

  set hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

  set mapreduce.job.reduces=<number>


The job hangs at this point with no progress. Likely cause: insufficient memory. Fix: stop other running jobs to free resources; in this case the problem was resolved after restarting the cluster.
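To free memory without restarting the whole cluster, one option is to find and kill other running YARN applications first. A minimal sketch, assuming the yarn CLI from the Hadoop installation is on the PATH (the application ID below is a placeholder):

  # List the applications currently holding cluster resources
  yarn application -list

  # Kill a specific application to release its containers
  yarn application -kill <application_id>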


Situation 2:

Stuck here:

Starting Job = job_1604227043139_0001, Tracking URL = http://hadoop103:8088/proxy/application_1604227043139_0001/

Kill Command = /opt/module/hadoop-3.1.3/bin/mapred job  -kill job_1604227043139_0001


Cause:

The NodeManager on one of the nodes had gone down.
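To identify which node lost its NodeManager, a quick diagnostic sketch (assuming the yarn and jps commands are available):

  # List all nodes known to the ResourceManager; a node whose
  # NodeManager died is missing from RUNNING or shows as LOST
  yarn node -list -all

  # On the suspect node, check whether the NodeManager JVM is alive
  jps | grep NodeManager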


Run the following command on the affected node:

  yarn --daemon start nodemanager

Starting the NodeManager back up resolves the hang.
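As a follow-up check (again assuming the yarn CLI is on the PATH), the node should reappear in the RUNNING state:

  # The restarted node should now be listed as RUNNING
  yarn node -list

  # On that node, jps should show a NodeManager process again
  jps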
