Install the required packages. I download them directly over the network using Linux commands:
wget http://mirror.bit.edu.cn/apache/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
tar -xzvf hive-0.11.0-bin.tar.gz
cd hive-0.11.0-bin
sudo vi /etc/profile
Open the profile file and add the following lines:
export HIVE_HOME=/home/ysc/hive-0.11.0-bin
export PATH=$PATH:$HIVE_HOME/bin
source /etc/profile
hadoop fs -mkdir /tmp
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse
cp conf/hive-log4j.properties.template conf/hive-log4j.properties
Note: set the appropriate execution mode.
To use local mode: SET mapred.job.tracker=local;
To use the Hadoop cluster (the default): SET mapred.job.tracker=host001:9001;
Using the Hive service locally:
Run HiveQL commands from the command line: create tables, prepare text data, load it, and query it.
Create a table:
hive> CREATE TABLE demo(foo INT, bar STRING);
Creates a table called demo with two columns, the first being an integer and the other a string
Create a new table with the same structure as an existing one:
hive> create table new_table like records;
Create a partitioned table:
hive> create table logs(ts bigint,line string) partitioned by (dt String,country String);
Load data into the partitioned table:
hive> load data local inpath '/home/hadoop/input/hive/partitions/file1' into table logs partition (dt='2013-11-14',country='GB');
Show a table's partitions:
hive> show partitions logs;
List all tables:
hive> SHOW TABLES;
lists all the tables
hive> SHOW TABLES '.*s';
lists all the tables that end with 's'. The pattern matching follows Java regular
expressions. See the documentation: http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html
Show a table's structure:
hive> DESCRIBE invites;
shows the list of columns
Rename a table:
hive> ALTER TABLE source RENAME TO target;
Add a new column:
hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');
Drop a table:
hive> DROP TABLE records;
Delete a table's data while keeping the table definition:
hive> dfs -rmr /user/hive/warehouse/records;
Load data from a local file:
hive> LOAD DATA LOCAL INPATH '/home/hadoop/input/ncdc/micro-tab/sample.txt' OVERWRITE INTO TABLE records;
List all functions:
hive> show functions;
Show how a function is used:
hive> describe function substr;
Query array, map, and struct columns:
hive> select col1[0],col2['b'],col3.c from complex;
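For reference, the complex table used above could have been defined with collection types along these lines (a hypothetical schema; the actual definition is not shown in these notes):
hive> CREATE TABLE complex (col1 ARRAY<INT>, col2 MAP<STRING,INT>, col3 STRUCT<a:STRING,b:INT,c:DOUBLE>);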
Inner join:
hive> SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
See how many MapReduce jobs Hive uses for a query:
hive> Explain SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
Outer joins:
hive> SELECT sales.*, things.* FROM sales LEFT OUTER JOIN things ON (sales.id = things.id);
hive> SELECT sales.*, things.* FROM sales RIGHT OUTER JOIN things ON (sales.id = things.id);
hive> SELECT sales.*, things.* FROM sales FULL OUTER JOIN things ON (sales.id = things.id);
IN queries: Hive does not support them, but LEFT SEMI JOIN can be used instead:
hive> SELECT * FROM things LEFT SEMI JOIN sales ON (sales.id = things.id);
Map join: Hive can put the smaller table into each mapper's memory to perform the join:
hive> SELECT /*+ MAPJOIN(things) */ sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
INSERT OVERWRITE TABLE ... SELECT: the target tables must already exist:
hive> FROM records2
> INSERT OVERWRITE TABLE stations_by_year SELECT year, COUNT(DISTINCT station) GROUP BY year
> INSERT OVERWRITE TABLE records_by_year SELECT year, COUNT(1) GROUP BY year
> INSERT OVERWRITE TABLE good_records_by_year SELECT year, COUNT(1) WHERE temperature != 9999 AND (quality = 0 OR quality = 1 OR quality = 4 OR quality = 5 OR quality = 9) GROUP BY year;
CREATE TABLE ... AS SELECT: the target table must not already exist:
hive> CREATE TABLE target AS SELECT col1,col2 FROM source;
Create a view:
hive> CREATE VIEW valid_records AS SELECT * FROM records2 WHERE temperature !=9999;
Show a view's details:
hive> DESCRIBE EXTENDED valid_records;
Create a Hive table:
create table demo (key int, value string) row format delimited fields terminated by '=' stored as textfile;
Load data into the demo table (see the sample file contents after the queries below):
load data local inpath '/home/ysc/hive-0.11.0-bin/data.txt' into table demo;
Query:
select * from demo;
select * from demo where key>=100 and key<=120;
select *,count(*) as fre from demo group by value order by fre desc;
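To match the '=' delimiter declared above, data.txt should hold one key=value pair per line; hypothetical sample contents:
100=hadoop
101=hive
102=hadoop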
Once these tests succeed, configure MySQL as the metastore. By default Hive does not use MySQL, but MySQL is currently the most common backend for the metastore:
sudo apt-get install mysql-server mysql-client
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
sudo vi /etc/mysql/my.cnf
Comment out bind-address = 127.0.0.1
sudo service mysql restart
mysql -uroot -pysc
vi conf/hive-site.xml
with the following content:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- use MySQL -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://host001:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>ysc</value>
  </property>
  <!-- use HWI -->
  <property>
    <name>hive.hwi.listen.host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
  </property>
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-0.11.0.war</value>
  </property>
  <!-- use the metastore service -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://host001:9083</value>
  </property>
</configuration>
Place mysql-connector-java-5.1.18.jar into the hive-0.11.0-bin/lib directory.
hive --service metastore &
hive --service hiveserver &
hive -h host001 -p 10000
hive --service hwi &
http://host001:9999/hwi/
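Once the services are up, a quick sanity check that the metastore schema was created in MySQL (the database name hive comes from the JDBC URL above):
mysql -uroot -pysc -e "use hive; show tables;"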
1. Generally we work with Hive through the CLI, i.e. the Linux console, but each such connection essentially keeps its own metadata, independent of every other connection. I therefore recommend this mode only for testing; it is not suitable for developing products or applications.
2. This is why the JDBC connection method exists; there are of course other connection methods as well, such as ODBC.
Add hadoop-core-1.1.2.jar and HIVE_HOME/lib/*.jar to the build path (a command-line compile/run sketch follows the examples below).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Wrapper class (name arbitrary) added so the snippet compiles on its own.
public class HiveJdbcQuery {
    public static void main(String[] args) throws Exception {
        // Register the Hive JDBC driver and connect to HiveServer on port 10000
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection("jdbc:hive://host001:10000/default");
        String sql = "select * from person";
        PreparedStatement pst = con.prepareStatement(sql);
        ResultSet rs = pst.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
    }
}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveTestCase {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        // String dropSQL = "drop table javabloger";
        // String createSQL = "create table javabloger (key int, value string)";
        // Hive supports two ways of inserting data: loading a file, or inserting the result of a query on another table.
        // Hive does not support insert into ... values (...).
        // String insterSQL = "LOAD DATA LOCAL INPATH '/work/hive/examples/files/kv1.txt' OVERWRITE INTO TABLE javabloger";
        String querySQL = "SELECT name,address FROM people_test a";
        Connection con = DriverManager.getConnection(
                "jdbc:hive://host001:10000/default", "root", "****");
        Statement stmt = con.createStatement();
        // stmt.executeQuery(dropSQL);   // run the drop statement
        // stmt.executeQuery(createSQL); // run the create statement
        // stmt.executeQuery(insterSQL); // run the load statement
        ResultSet res = stmt.executeQuery(querySQL); // run the query
        while (res.next()) {
            System.out.println("Result: key:" + res.getString(1) + " -> value:" + res.getString(2));
        }
    }
}
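To compile and run the HiveTestCase example outside an IDE, put the same jars on the classpath (a rough sketch; adjust jar names and locations to your installation):
javac -cp "hadoop-core-1.1.2.jar:$HIVE_HOME/lib/*" HiveTestCase.java
java -cp ".:hadoop-core-1.1.2.jar:$HIVE_HOME/lib/*" HiveTestCase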
HCatalog is Apache's open-source unified table and storage management service. The latest release is 0.5, which requires Hive 0.10.
Because all of HCatalog's underlying metadata is kept in the Hive metastore, schema or API changes from a Hive upgrade affect HCatalog.
For this reason HCatalog has been bundled into Hive 0.11 and will become part of Hive going forward rather than remaining a separate project.
sudo vi /etc/profile
Add:
export HADOOP_HOME=/home/szy/hadoop-1.2.1
export HCAT_HOME=/home/szy/hive-0.11.0-bin/hcatalog
export HCAT_PREFIX=$HCAT_HOME
export METASTORE_PORT=9083
export HCAT_LOG_DIR=/home/szy/hive-0.11.0-bin/hcatalog/logs
export PATH=$PATH:$HCAT_HOME/bin:$HCAT_HOME/sbin
source /etc/profile
mkdir /home/szy/hive-0.11.0-bin/hcatalog/logs
chmod +x /home/szy/hive-0.11.0-bin/hcatalog/bin/hcat
chmod +x /home/szy/hive-0.11.0-bin/hcatalog/sbin/*.sh
hcat -e "create table test(id int, value string)"
hcat -e "drop table test"
hcat -e "show tables"
hcat -e "desc test"
hcat_server.sh start & (note: do not also run the earlier command hive --service metastore &)
hcat_server.sh stop
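Because HCatalog keeps its metadata in the Hive metastore, a table created through hcat is also visible from Hive; for example, after the create above:
hive -e "desc test"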