Using ORDER BY in Hive queries

1. Sorting by a column
hive> select empno,ename,sal from emp order by sal;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201708302015_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201708302015_0002
Kill Command = /opt/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201708302015_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-30 20:30:16,554 Stage-1 map = 0%, reduce = 0%
2017-08-30 20:30:35,866 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.41 sec
2017-08-30 20:30:51,257 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 1.41 sec
2017-08-30 20:30:54,287 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.32 sec
MapReduce Total cumulative CPU time: 3 seconds 320 msec
Ended Job = job_201708302015_0002
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 3.32 sec HDFS Read: 635 HDFS Write: 319 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 320 msec
OK
7369.0 SMITH 4800.0
7900.0 JAMES 4950.0
1001.0 Tome 5000.0
1002.0 Tome 5000.0
7876.0 ADAMS 5100.0
7521.0 WARD 5250.0
7654.0 MARTIN 5250.0
7934.0 MILLER 5300.0
7844.0 TURNER 5500.0
7499.0 ALLEN 5600.0
7788.0 SCOTT 7000.0
7902.0 FORD 7000.0
7782.0 CLARK 8450.0
7698.0 BLAKE 8850.0
7566.0 JONES 9075.0
7839.0 KING 12111.0
Time taken: 61.037 seconds, Fetched: 16 row(s)
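
Note in the log above that ORDER BY runs with a single reducer (number of reducers: 1): producing a total order requires every row to pass through one reducer, which is why ORDER BY can be slow on large tables. The default sort direction is ascending; appending desc reverses it. A minimal sketch (output omitted):
hive> select empno,ename,sal from emp order by sal desc;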

2. Sorting by an expression
hive> select empno,ename,sal,sal*12 from emp order by sal*12;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201708302015_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201708302015_0003
Kill Command = /opt/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201708302015_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-30 20:34:15,655 Stage-1 map = 0%, reduce = 0%
2017-08-30 20:34:25,747 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.07 sec
2017-08-30 20:34:42,236 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 2.07 sec
2017-08-30 20:34:45,276 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 2.07 sec
2017-08-30 20:34:46,288 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.24 sec
MapReduce Total cumulative CPU time: 4 seconds 240 msec
Ended Job = job_201708302015_0003
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.24 sec HDFS Read: 635 HDFS Write: 451 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 240 msec
OK
7369.0 SMITH 4800.0 57600.0
7900.0 JAMES 4950.0 59400.0
1001.0 Tome 5000.0 60000.0
1002.0 Tome 5000.0 60000.0
7876.0 ADAMS 5100.0 61200.0
7521.0 WARD 5250.0 63000.0
7654.0 MARTIN 5250.0 63000.0
7934.0 MILLER 5300.0 63600.0
7844.0 TURNER 5500.0 66000.0
7499.0 ALLEN 5600.0 67200.0
7788.0 SCOTT 7000.0 84000.0
7902.0 FORD 7000.0 84000.0
7782.0 CLARK 8450.0 101400.0
7698.0 BLAKE 8850.0 106200.0
7566.0 JONES 9075.0 108900.0
7839.0 KING 12111.0 145332.0
Time taken: 46.63 seconds, Fetched: 16 row(s)
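
Multiple sort expressions can be combined, each with its own direction. As a sketch, assuming emp also carries the deptno column of the classic emp schema (not shown in the output above), this would sort by department ascending and, within each department, by annual salary descending:
hive> select empno,ename,deptno,sal*12 from emp order by deptno asc, sal*12 desc;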

3. Sorting by an alias
hive> select empno,ename,sal,sal*12 annsal from emp order by annsal;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201708302015_0004, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201708302015_0004
Kill Command = /opt/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201708302015_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-30 20:36:11,063 Stage-1 map = 0%, reduce = 0%
2017-08-30 20:36:24,233 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.24 sec
2017-08-30 20:36:41,527 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 2.24 sec
2017-08-30 20:36:44,554 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 2.24 sec
2017-08-30 20:36:45,561 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.2 sec
MapReduce Total cumulative CPU time: 4 seconds 200 msec
Ended Job = job_201708302015_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.2 sec HDFS Read: 635 HDFS Write: 451 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 200 msec
OK
7369.0 SMITH 4800.0 57600.0
7900.0 JAMES 4950.0 59400.0
1001.0 Tome 5000.0 60000.0
1002.0 Tome 5000.0 60000.0
7876.0 ADAMS 5100.0 61200.0
7521.0 WARD 5250.0 63000.0
7654.0 MARTIN 5250.0 63000.0
7934.0 MILLER 5300.0 63600.0
7844.0 TURNER 5500.0 66000.0
7499.0 ALLEN 5600.0 67200.0
7788.0 SCOTT 7000.0 84000.0
7902.0 FORD 7000.0 84000.0
7782.0 CLARK 8450.0 101400.0
7698.0 BLAKE 8850.0 106200.0
7566.0 JONES 9075.0 108900.0
7839.0 KING 12111.0 145332.0
Time taken: 50.667 seconds, Fetched: 16 row(s)
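
Hive resolves aliases from the SELECT list inside ORDER BY, so the sort expression does not have to be repeated. The descending variant looks like this (output omitted):
hive> select empno,ename,sal,sal*12 annsal from emp order by annsal desc;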

4. Sorting by column position
This requires the following setting:
set hive.groupby.orderby.position.alias=true;
hive> select empno,ename,sal,sal*12 annsal from emp order by 4;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201708302015_0005, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201708302015_0005
Kill Command = /opt/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201708302015_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-30 20:37:59,324 Stage-1 map = 0%, reduce = 0%
2017-08-30 20:38:07,379 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.08 sec
2017-08-30 20:38:22,512 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 2.08 sec
2017-08-30 20:38:24,535 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.74 sec
MapReduce Total cumulative CPU time: 3 seconds 740 msec
Ended Job = job_201708302015_0005
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 3.74 sec HDFS Read: 635 HDFS Write: 451 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 740 msec
OK
1001.0 Tome 5000.0 60000.0
1002.0 Tome 5000.0 60000.0
7369.0 SMITH 4800.0 57600.0
7499.0 ALLEN 5600.0 67200.0
7521.0 WARD 5250.0 63000.0
7566.0 JONES 9075.0 108900.0
7654.0 MARTIN 5250.0 63000.0
7698.0 BLAKE 8850.0 106200.0
7782.0 CLARK 8450.0 101400.0
7788.0 SCOTT 7000.0 84000.0
7839.0 KING 12111.0 145332.0
7844.0 TURNER 5500.0 66000.0
7876.0 ADAMS 5100.0 61200.0
7900.0 JAMES 4950.0 59400.0
7902.0 FORD 7000.0 84000.0
7934.0 MILLER 5300.0 63600.0
Time taken: 42.186 seconds, Fetched: 16 row(s)
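
Note that the rows above came back in empno order (the table's natural order) rather than sorted by the fourth column, which suggests hive.groupby.orderby.position.alias was not actually in effect for this run: when the flag is off, order by 4 is treated as ordering by the constant 4, which leaves the input order unchanged under a single reducer. The intended sequence, with the flag set in the same session, is:
hive> set hive.groupby.orderby.position.alias=true;
hive> select empno,ename,sal,sal*12 annsal from emp order by 4;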

5. NULL handling
If the sort column contains NULL values:
In ascending order, NULLs are placed first.
In descending order, NULLs are placed last.
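
For example, in the classic emp schema the comm column is NULL for most employees; assuming that column exists here as well, these sketches show the default placement (output omitted):
hive> select empno,ename,comm from emp order by comm;       -- NULLs come first
hive> select empno,ename,comm from emp order by comm desc;  -- NULLs come last
From Hive 2.1.0 onward the default can be overridden explicitly with NULLS FIRST / NULLS LAST, e.g. order by comm asc nulls last.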
