When you call the hive command from a shell script, you will notice that Hive prints its intermediate progress messages to the error output stream, so when you inspect the error logs you have to filter out this useless information. The following configuration parameter can be used to suppress it.
set hive.session.silent=true; (the default is false)
For example:
hive> select from_original,pv from tableName where rpt_date='2014-12-08' order by pv desc limit 4;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_1417682027300_928652, Tracking URL = http://l-hdpm4.data.cn6.qunar.com:9981/proxy/application_1417682027300_928652/
Kill Command = /home/q/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1417682027300_928652
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-01-09 11:28:07,561 Stage-1 map = 0%, reduce = 0%
2015-01-09 11:28:12,735 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:13,766 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:14,796 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:15,826 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:16,859 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:17,892 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2015-01-09 11:28:18,925 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.73 sec
2015-01-09 11:28:19,958 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.73 sec
MapReduce Total cumulative CPU time: 2 seconds 730 msec
Ended Job = job_1417682027300_928652
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1  Cumulative CPU: 2.73 sec  HDFS Read: 11815 HDFS Write: 83 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 730 msec
OK
suggest 6
ts_hotcity 5
suggest2 4
mps_remdd 3
Time taken: 18.502 seconds, Fetched: 4 row(s)
However, the information we actually need is only those last few lines, so we can set it like this:
hive> set hive.session.silent=true;
hive> select from_original,pv from tableName where rpt_date='2014-12-08' order by pv desc limit 4;
suggest 6
ts_hotcity 5
suggest2 4
mps_remdd 3
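Coming back to the shell-script scenario from the beginning, here is a minimal sketch of how this might be used in a script. The query and table name are taken from the example above; the file names result.txt and hive_err.log are placeholders chosen for illustration. The hive CLI also has a -S (silent) flag that achieves the same effect as the session parameter.

#!/bin/bash
# Run the query with silent mode enabled so progress/info messages are suppressed.
# Only the query results land on stdout; anything that still reaches stderr
# (real errors) is collected in a separate log for inspection.
hive -e "
set hive.session.silent=true;
select from_original, pv from tableName where rpt_date='2014-12-08' order by pv desc limit 4;
" > result.txt 2> hive_err.log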
You may have two questions at this point:
1. Where and how does Hive print the info and warning messages to the standard error stream?
Let's look at a piece of code (org.apache.hadoop.hive.cli.CliDriver):
try {
  ss.out = new PrintStream(System.out, true, "UTF-8");
  ss.info = new PrintStream(System.err, true, "UTF-8");
  ss.err = new CachingPrintStream(System.err, true, "UTF-8");
} catch (UnsupportedEncodingException e) {
  return 3;
}
As you can see, the info stream is built on System.err, while query results go to System.out.
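You can confirm this from the shell without reading the source: redirect the error stream away and only the result rows remain. A quick check, using a placeholder query:

# Progress/info goes to stderr, results go to stdout,
# so discarding stderr leaves only the result rows on the terminal.
hive -e "select from_original, pv from tableName limit 4" 2> /dev/null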
2. Why do these messages need to go to the error output stream?
My understanding is that this makes it easier to capture query results from the command line: the progress messages still have to be printed somewhere, and if they went to standard output they would get mixed in with the results.
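In other words, a calling script can capture stdout and get nothing but the result rows, while the progress output stays on stderr and can be redirected wherever is convenient. A small sketch of that pattern, again with a placeholder query and a placeholder log file name (progress.log):

#!/bin/bash
# Capture only the query results; progress messages keep going to stderr
# and are appended to a separate log instead of polluting $result.
result=$(hive -e "select from_original, pv from tableName limit 4" 2>> progress.log)
echo "$result" | while IFS=$'\t' read -r from_original pv; do
  echo "source=$from_original pv=$pv"
done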