Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":2446708,"order_id":162153513,"reason_from":"P","reason_id":39,"reason":"发错单了,我想要叫车","blame":"P","gmt_create":"2016-07-04 06:00:41","gmt_modify":"2016-07-04 06:00:41","pt":"2016-08-23"}
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:172)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":2446708,"order_id":162153513,"reason_from":"P","reason_id":39,"reason":"发错单了,我想要叫车","blame":"P","gmt_create":"2016-07-04 06:00:41","gmt_modify":"2016-07-04 06:00:41","pt":"2016-08-23"}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:545)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
    ... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to: 100
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:933)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:709)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:535)
    ... 9 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 58.62 sec   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 58 seconds 620 msec
The [Error 20004] above comes from Hive's dynamic partitioning limits; the rest of this section explains the mechanism and the parameters that control it.
In a relational database such as Oracle, when you insert rows into a partitioned table, the database automatically routes each row to the proper partition based on the value of its partition column. Hive offers a similar mechanism, dynamic partitioning (Dynamic Partition), but using it requires some configuration.
First, an example scenario. The source table t_lxw1234 contains the following data:
SELECT day,url FROM t_lxw1234;
2015-05-10 url1
2015-05-10 url2
2015-06-14 url1
2015-06-14 url2
2015-06-15 url1
2015-06-15 url2
……
The target table is:
CREATE TABLE t_lxw1234_partitioned (
    url STRING
) PARTITIONED BY (month STRING, day STRING)
STORED AS TEXTFILE;
Requirement: insert the rows of t_lxw1234 into the corresponding partitions of the target table t_lxw1234_partitioned according to their time (day).
With the approach introduced earlier, inserting into one explicitly named partition at a time, this requirement is awkward to implement, for example:
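A static-partition load would look like the sketch below (the partition values are taken from the sample data above); every partition has to be spelled out by hand, one INSERT per distinct day:

-- Static partitioning: each target partition is hard-coded,
-- so a separate INSERT is needed for every distinct day.
INSERT OVERWRITE TABLE t_lxw1234_partitioned PARTITION (month='2015-05', day='2015-05-10')
SELECT url FROM t_lxw1234 WHERE day='2015-05-10';

INSERT OVERWRITE TABLE t_lxw1234_partitioned PARTITION (month='2015-06', day='2015-06-14')
SELECT url FROM t_lxw1234 WHERE day='2015-06-14';
-- ... one more statement for every remaining day.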
This is where dynamic partitioning helps. When using it, pay attention to the following parameters:
hive.exec.dynamic.partition
Default value: false
Whether the dynamic partition feature is enabled; it is off by default.
This parameter must be set to true when using dynamic partitions.
hive.exec.dynamic.partition.mode
Default value: strict
The dynamic partition mode. The default, strict, requires at least one partition column to be specified statically; nonstrict allows every partition column to be dynamic.
It usually needs to be set to nonstrict.
hive.exec.max.dynamic.partitions.pernode
Default value: 100
The maximum number of dynamic partitions that may be created on each node executing the MR job.
This parameter must be sized from the actual data.
For example, if the source data spans a whole year, i.e. the day column has 365 distinct values, the parameter must be set to more than 365; with the default of 100 the job fails with exactly the [Error 20004] shown at the top of this page (a sizing sketch follows this list).
hive.exec.max.dynamic.partitions
Default value: 1000
The maximum total number of dynamic partitions that may be created across all nodes executing the MR job.
Sized the same way as the previous parameter.
hive.exec.max.created.files
Default value: 100000
The maximum number of HDFS files the whole MR job may create.
The default is usually sufficient; raise it only if your data volume is so large that the job has to create more than 100000 files.
hive.error.on.empty.partition
Default value: false
Whether to throw an exception when an empty partition is generated.
Usually there is no need to set it.
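To make the sizing rule concrete, here is a minimal sketch for the one-year scenario; the exact limits are assumptions, and anything comfortably above the true partition count (365 day values, each paired with one of 12 months) will do:

-- Assumed sizing for ~365 (month, day) partition pairs;
-- leave headroom above the true partition count.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=500;   -- must exceed 365
SET hive.exec.max.dynamic.partitions=1000;          -- job-wide total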
With those parameters in place, the requirement above can be implemented with the following statements:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=1000;
SET hive.exec.max.dynamic.partitions=1000;
INSERT OVERWRITE TABLE t_lxw1234_partitioned PARTITION (month, day)
SELECT url, substr(day,1,7) AS month, day
FROM t_lxw1234;
Note: in PARTITION (month, day) only the partition column names are given;
the last two columns of the SELECT clause must correspond to the partition columns listed in PARTITION (month, day), in the same order. A mixed static/dynamic variant is sketched below.
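For completeness, a hedged sketch of the mixed form that even strict mode permits, assuming the same tables: a partition column given a literal value is static and is omitted from the SELECT list, while the dynamic column is still supplied last:

-- month is static (hard-coded), day is dynamic;
-- only the dynamic partition column appears at the end of the SELECT list.
INSERT OVERWRITE TABLE t_lxw1234_partitioned PARTITION (month='2015-06', day)
SELECT url, day
FROM t_lxw1234
WHERE substr(day,1,7) = '2015-06';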
The execution log looks like this:
Loading data to table liuxiaowen.t_lxw1234_partitioned partition (month=null, day=null)
Loading partition {month=2015-05, day=2015-05-10}
Loading partition {month=2015-06, day=2015-06-14}
Loading partition {month=2015-06, day=2015-06-15}
Partition liuxiaowen.t_lxw1234_partitioned{month=2015-05, day=2015-05-10} stats: [numFiles=1, numRows=2, totalSize=10, rawDataSize=8]
Partition liuxiaowen.t_lxw1234_partitioned{month=2015-06, day=2015-06-14} stats: [numFiles=1, numRows=2, totalSize=10, rawDataSize=8]
Partition liuxiaowen.t_lxw1234_partitioned{month=2015-06, day=2015-06-15} stats: [numFiles=1, numRows=2, totalSize=10, rawDataSize=8]
Use show partitions t_lxw1234_partitioned; to see which partitions the target table now contains:
hive> show partitions t_lxw1234_partitioned;
OK
month=2015-05/day=2015-05-10
month=2015-06/day=2015-06-14
month=2015-06/day=2015-06-15
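Once the partitions exist, a query that filters on the partition columns reads only the matching HDFS directories (partition pruning). A minimal check against the example table:

-- Scans only the month=2015-06/day=2015-06-14 directory.
SELECT url
FROM t_lxw1234_partitioned
WHERE month='2015-06' AND day='2015-06-14';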
Back to the runtime error at the top of this page, caused by creating too many dynamic partitions:
Fix: restrict pt to a window of one week or one month per run (the table is partitioned by time), so that loading too long a time range cannot create more partitions than the limits allow. For example:
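A hedged sketch of that fix; src_order_reason and dst_order_reason are hypothetical names (the real tables do not appear in the log), while the column list matches the failing row shown in the error message:

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Load at most one month of pt values per run, keeping the number of
-- dynamic partitions well below hive.exec.max.dynamic.partitions.pernode.
INSERT OVERWRITE TABLE dst_order_reason PARTITION (pt)   -- hypothetical target
SELECT id, order_id, reason_from, reason_id, reason, blame,
       gmt_create, gmt_modify, pt
FROM src_order_reason                                    -- hypothetical source
WHERE pt >= '2016-08-01' AND pt <= '2016-08-31';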