Some SQL Questions from a Kuaishou Interview

Question 1:
tb1: url, ts
For each url, find the second with the most clicks, and what that maximum count is.
Note: ts is a millisecond-precision integer timestamp.

Question 2:
tb2: uid, page
Find the number of devices that visited both page=A and page=B.

Question 1

Create the table and load the data:
tb1: url, ts
For each url, find the second with the most clicks, and what that maximum count is.
Note: ts is a millisecond-precision integer timestamp.



create table kuaishou(
url string, 
ts string) 
row format delimited fields terminated by ",";


vi kuaishou

url1,1234567890123
url1,1234567890113
url1,1234567891103
url1,1234527893123
url2,1234527892123
url2,1234527892123
url2,1234527890123
url2,1234527890123
url2,1234527890113
url2,1234527891103
url2,1234527893123
url2,1234527892123
url2,1234527892123
url2,1234527890123
url3,1234527892123
url3,1234527890123
url3,1234527890123
url3,1234527890113
url2,1234527891103
url2,1234527893123
url2,1234527892123
url2,1234567892123
url2,1234567890123

load data local inpath '/home/hadoop/tmp/kuaishou' into table kuaishou;
Tentative approach:

1. Truncate the millisecond timestamp to seconds.
2. Group by (url, sec) and count.
3. Rank the counts from step 2 in descending order within each url.
4. Take the row ranked 1.

select url,sec,cnt 
from
(
select url,sec,cnt,row_number() over(partition by url order by cnt desc) rn   -- step 3
from
(
from (select url,substr(ts,1,10) sec from kuaishou)t1   -- step 1
select url,sec,count(1) cnt   
group by url,sec       -- step 2
)t3
)t4
where rn=1   -- step 4
;
hive (default)> 
              > 
              > select url,sec,cnt 
              > from
              > (
              > select url,sec,cnt,row_number() over(partition by url order by cnt desc) rn   -- step 3
              > from
              > (
              > from (select url,substr(ts,1,10) sec from kuaishou)t1   -- step 1
              > select url,sec,count(1) cnt   
              > group by url,sec       -- step 2
              > )t3
              > )t4
              > where rn=1   -- step 4
              > ;
Automatically selecting local only mode for query
Query ID = hadoop_20200707072001_cad817b1-ab4e-4752-a47f-bbf591973f7f
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Job running in-process (local Hadoop)
2020-07-07 07:20:06,252 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_local1962055191_0030
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Job running in-process (local Hadoop)
2020-07-07 07:20:11,369 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local1720680700_0031
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 46950 HDFS Write: 19172 SUCCESS
Stage-Stage-2:  HDFS Read: 48422 HDFS Write: 19866 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
url sec cnt
url1    1234567890  4
url2    1234567892  7
url3    1234527890  3
Time taken: 9.94 seconds, Fetched: 3 row(s)
hive (default)> 
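As a sanity check, the same query shape (truncate to seconds with substr(ts,1,10), count per group, rank with row_number()) can be rerun in SQLite, which has supported window functions since 3.25. Note the counts below come from the 23 sample rows alone; the hive transcript above reflects whatever the author's table already contained at the time (load data ... into table appends rather than overwrites), so its numbers differ.

```python
import sqlite3

# The 23 sample rows from the kuaishou data file above.
rows = [
    ("url1", "1234567890123"), ("url1", "1234567890113"),
    ("url1", "1234567891103"), ("url1", "1234527893123"),
    ("url2", "1234527892123"), ("url2", "1234527892123"),
    ("url2", "1234527890123"), ("url2", "1234527890123"),
    ("url2", "1234527890113"), ("url2", "1234527891103"),
    ("url2", "1234527893123"), ("url2", "1234527892123"),
    ("url2", "1234527892123"), ("url2", "1234527890123"),
    ("url3", "1234527892123"), ("url3", "1234527890123"),
    ("url3", "1234527890123"), ("url3", "1234527890113"),
    ("url2", "1234527891103"), ("url2", "1234527893123"),
    ("url2", "1234527892123"), ("url2", "1234567892123"),
    ("url2", "1234567890123"),
]

con = sqlite3.connect(":memory:")
con.execute("create table kuaishou(url text, ts text)")
con.executemany("insert into kuaishou values (?, ?)", rows)

# Same shape as the HQL above: truncate to seconds, count per (url, sec),
# rank per url by count descending, keep rank 1.
query = """
select url, sec, cnt
from (
    select url, sec, cnt,
           row_number() over (partition by url order by cnt desc) rn
    from (
        select url, substr(ts, 1, 10) sec, count(*) cnt
        from kuaishou
        group by url, sec
    )
)
where rn = 1
"""
best = {url: (sec, cnt) for url, sec, cnt in con.execute(query)}
print(best)
```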

Question 2

tb2: uid, page
Find the number of devices that visited both page=A and page=B.

Create the table and load the data:

create table kuaishou2(
uid string, 
page string) 
row format delimited fields terminated by ",";


vi kuaishou2

u1,A
u1,A
u3,A
u1,B
u2,A
u1,B
u2,A
u4,B
u1,A
u3,A
u1,C
u2,B
u1,B
u2,A
u4,B
u1,A
u3,D
u1,B
u2,A
u1,W
u2,A
u4,B

load data local inpath '/home/hadoop/tmp/kuaishou2' into table kuaishou2;

A first, unoptimized self-join simply lists the matching uids (with duplicates):

from kuaishou2 a,kuaishou2 b
select a.uid
where a.uid=b.uid and a.page='A' and b.page='B'
;

Approach: one method uses a self-join; another uses size() as a shortcut.

Two small optimizations:
1. Filter the rows first, then self-join;
2. The result must be deduplicated; prefer group by over distinct.
Additions welcome!

Self-join HQL, optimized:

1. Filter out the devices that visited A or B;
2. Self-join (on t1.uid=t2.uid);
3. Deduplicate with count() + group by (better than distinct()).

2 jobs; Time taken: 26.834 seconds, Fetched: 1 row(s)


select count(uid) uid_cnt   -- finally, count the devices
from (
    select 
        t1.uid
    from(
        select uid from kuaishou2 where page='A'  
        )t1
    join(
        select uid from kuaishou2 where page='B'
        )t2
    on 
        t1.uid=t2.uid
    group by 
        t1.uid 
)t3;

1. Do the filtering and self-join in the where clause;
2. count(distinct a.uid) gives the final device count.
1 job; Time taken: 20.426 seconds, Fetched: 1 row(s)


from kuaishou2 a,kuaishou2 b
select count(distinct a.uid) uid_cnt
where 
    a.uid=b.uid and 
    a.page='A' and 
    b.page='B'
;
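Both self-join variants can be checked against the 22 sample rows in SQLite (with the HQL `from ... select` form rewritten as standard `select ... from`); each reports 2 devices, u1 and u2:

```python
import sqlite3

# The 22 sample rows from the kuaishou2 data file above.
rows = [
    ("u1", "A"), ("u1", "A"), ("u3", "A"), ("u1", "B"), ("u2", "A"),
    ("u1", "B"), ("u2", "A"), ("u4", "B"), ("u1", "A"), ("u3", "A"),
    ("u1", "C"), ("u2", "B"), ("u1", "B"), ("u2", "A"), ("u4", "B"),
    ("u1", "A"), ("u3", "D"), ("u1", "B"), ("u2", "A"), ("u1", "W"),
    ("u2", "A"), ("u4", "B"),
]

con = sqlite3.connect(":memory:")
con.execute("create table kuaishou2(uid text, page text)")
con.executemany("insert into kuaishou2 values (?, ?)", rows)

# Variant 1: filter first, join, dedup with group by, then count.
grouped = con.execute("""
    select count(uid) from (
        select t1.uid
        from (select uid from kuaishou2 where page = 'A') t1
        join (select uid from kuaishou2 where page = 'B') t2
          on t1.uid = t2.uid
        group by t1.uid
    )
""").fetchone()[0]

# Variant 2: plain self-join with count(distinct ...).
distinct = con.execute("""
    select count(distinct a.uid)
    from kuaishou2 a, kuaishou2 b
    where a.uid = b.uid and a.page = 'A' and b.page = 'B'
""").fetchone()[0]

print(grouped, distinct)
```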

Second method: see the comments in the code.

2 jobs; Time taken: 9.919 seconds, Fetched: 1 row(s)
The fastest of the three.

--1. Filter for visits to page A or B;
--2. Group by uid;
--3. collect_set() deduplicates automatically, producing: uid, {pages visited}
--4. Keep the rows whose set size is 2;
--5. count() gives the final device count.

select count(uid) uid_cnt           -- 5. count() gives the final device count
from 
(
select uid,collect_set(page) ct     -- 3. collect_set() deduplicates automatically,
                                    --    producing: uid, {pages visited}
from (
    from kuaishou2 a
    select a.uid,a.page
    where a.page='A' or a.page='B'  -- 1. filter visits to page A or B
)t1
group by uid                        -- 2. group by uid
having size(ct)=2                   -- 4. keep rows whose set size is 2
)t3
;
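SQLite has no collect_set()/size(), but the same filter, group, dedup, check-size pipeline is a few lines of plain Python on the sample rows (again giving 2, for u1 and u2):

```python
from collections import defaultdict

# Sample rows from the kuaishou2 data file above (uid, page).
rows = [
    ("u1", "A"), ("u1", "A"), ("u3", "A"), ("u1", "B"), ("u2", "A"),
    ("u1", "B"), ("u2", "A"), ("u4", "B"), ("u1", "A"), ("u3", "A"),
    ("u1", "C"), ("u2", "B"), ("u1", "B"), ("u2", "A"), ("u4", "B"),
    ("u1", "A"), ("u3", "D"), ("u1", "B"), ("u2", "A"), ("u1", "W"),
    ("u2", "A"), ("u4", "B"),
]

visits = defaultdict(set)
for uid, page in rows:
    if page in ("A", "B"):      # step 1: keep only A/B visits
        visits[uid].add(page)   # steps 2-3: group by uid; the set dedups like collect_set()

# Steps 4-5: keep uids whose page set has size 2, then count them.
uid_cnt = sum(1 for pages in visits.values() if len(pages) == 2)
print(uid_cnt)
```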
