(7-12) Exporting data from Hive


---------------------------------------1. Export Hive data via output redirection:-------------------------
For example, to export the data in table t5:

[root@baozi hive]# bin/hive -e "select * from t5" > t5

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/hive-jdbc-0.14.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
OK
Time taken: 3.394 seconds, Fetched: 7 row(s)
[root@baozi hive]#


The file exported to local disk:
[root@baozi hive]# more t5
hadoop  23      job1
hive    24      job1
hbase   25      job1
hdfs    26      job1
mapreduce       27      job1
spark   28      job1
        NULL    job1
[root@baozi hive]#
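
The SLF4J warnings, "OK", and "Time taken" lines go to stderr, so only the query result rows (tab-separated) land in the t5 file. For a quieter run, the CLI's -S (silent mode) flag can be added, and -f reads the query from a script file instead of -e. A minimal sketch, assuming the same t5 table; the paths /tmp/t5.sql and /tmp/t5_export are just example names:

# -S (silent mode) suppresses the informational messages
bin/hive -S -e "select * from t5" > /tmp/t5_export

# the same export, reading the query from a file with -f
echo 'select * from t5;' > /tmp/t5.sql
bin/hive -S -f /tmp/t5.sql > /tmp/t5_export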



---------------------------2. Export data with INSERT ... DIRECTORY:----------------------------------

Run the statement:
hive> insert overwrite local directory '/usr/local/hehe' select name,age from t1 where class="job1";
Query ID = root_20150429214343_fc600c2a-4b19-4559-b493-e1c793bcdb25
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1430296708269_0005, Tracking URL = http://baozi:8088/proxy/application_1430296708269_0005/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1430296708269_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-04-29 21:44:18,002 Stage-1 map = 0%,  reduce = 0%
2015-04-29 21:44:59,046 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.63 sec
MapReduce Total cumulative CPU time: 2 seconds 630 msec
Ended Job = job_1430296708269_0005
Copying data to local directory /usr/local/hehe
Copying data to local directory /usr/local/hehe
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 2.63 sec   HDFS Read: 274 HDFS Write: 61 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 630 msec
OK
Time taken: 77.196 seconds
hive>


// Inspect the exported data
[root@baozi local]# pwd
/usr/local
[root@baozi local]# ll hehe
total 4
-rw-r--r--. 1 root root 61 Apr 29 21:45 000000_0
[root@baozi local]# more hehe/000000_0
hadoop23
hive24
hbase25
hdfs26
mapreduce27
spark28
\N
[root@baozi local]#
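
The columns in 000000_0 run together because this export writes Hive's default field delimiter, the non-printing \001 (Ctrl-A) character, which more does not render; NULL values are serialized as \N. As of Hive 0.11 the export statement itself can specify a row format. A minimal sketch, where the target directory /usr/local/hehe2 and the tab delimiter are just example choices:

hive> insert overwrite local directory '/usr/local/hehe2'
    > row format delimited fields terminated by '\t'
    > select name,age from t1 where class="job1";

Dropping the LOCAL keyword writes the result files to an HDFS directory instead of the local filesystem.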


