黑猴子的家: Hive Table Optimization - Count(distinct)

With small data volumes this hardly matters, but with large volumes COUNT DISTINCT is executed by a single Reduce task. That one reducer has to process the entire dataset, which can make the whole job painfully slow or impossible to finish. The usual fix is to replace COUNT DISTINCT with a GROUP BY followed by a COUNT, which lets the deduplication step fan out across many reducers.
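In general the rewrite looks like this (a minimal sketch; t and col are placeholder names, not objects from this walkthrough):

-- one job, but the final aggregation funnels into a single reducer
select count(distinct col) from t;

-- two jobs: the inner GROUP BY spreads deduplication across many reducers,
-- and the outer COUNT then runs over the much smaller deduplicated set
select count(col) from (select col from t group by col) tmp;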

Hands-on example

1. Create a large table

hive (default)> create table bigtable(id bigint, time bigint, uid string, 
keyword string, url_rank int, click_num int, click_url string) 
row format delimited fields terminated by '\t';

2. Load the data

hive (default)> load data local inpath '/opt/module/datas/bigtable' 
into table bigtable;
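Before comparing the two queries, a quick row count is an easy way to confirm the load succeeded (output omitted here; it depends on your copy of the dataset):

hive (default)> select count(*) from bigtable;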

3. Set the number of reducers to 5

hive (default)> set mapreduce.job.reduces = 5;
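Running set with just the property name echoes its current value in the Hive CLI, so you can confirm the override took effect:

hive (default)> set mapreduce.job.reduces;
mapreduce.job.reduces=5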

4. Run the distinct-id query

hive (default)> select count(distinct id) from bigtable;
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 7.12 sec   
HDFS Read: 120741990 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 120 msec
OK
_c0
100001
Time taken: 23.607 seconds, Fetched: 1 row(s)
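Note that the five reducers we configured above are ignored here: the output shows Reduce: 1, because the final aggregation for count(distinct) with no grouping key is compiled into a single Reduce task. You can confirm this by inspecting the query plan:

hive (default)> explain select count(distinct id) from bigtable;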

5. Deduplicate the ids with GROUP BY instead

hive (default)> select count(id) from (select id from bigtable group by id) a;
Stage-Stage-1: Map: 1  Reduce: 5   Cumulative CPU: 17.53 sec   
HDFS Read: 120752703 HDFS Write: 580 SUCCESS
Stage-Stage-2: Map: 3  Reduce: 1   Cumulative CPU: 4.29 sec   
HDFS Read: 9409 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 21 seconds 820 msec
OK
_c0
100001
Time taken: 50.795 seconds, Fetched: 1 row(s)
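The same rewrite generalizes to distinct counts over several columns at once (a sketch reusing columns from bigtable; adapt it to your own schema):

hive (default)> select count(*) from
(select uid, keyword from bigtable group by uid, keyword) a;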

This costs one extra job, and in this small test it is actually slower (50.795 s versus 23.607 s) because one reducer can still handle 100,001 distinct ids comfortably. With genuinely large data volumes, however, the single reducer behind COUNT DISTINCT becomes the bottleneck, and the extra job is absolutely worth it.
