Hive Installation and Usage

Hive only needs to be installed on a single node.

1. Upload the tar package.
2. Extract it:

tar -zxvf hive-0.9.0.tar.gz -C /cloud/

Configure the HIVE_HOME environment variable.
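A minimal sketch of the HIVE_HOME setup, assuming the tarball above unpacked to /cloud/hive-0.9.0 (adjust the path to your actual extract location); these lines are typically appended to /etc/profile and activated with source /etc/profile:

```shell
# Point HIVE_HOME at the extracted Hive directory (assumed path, adjust as needed)
export HIVE_HOME=/cloud/hive-0.9.0
# Put the hive command on the PATH
export PATH=$PATH:$HIVE_HOME/bin
```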

3. Switch Hive's metastore to MySQL (switch to the root user).
Configure Hive:
cp hive-default.xml.template hive-site.xml
Edit hive-site.xml: delete everything inside it, keeping only an empty <configuration></configuration> element, then add the following properties inside it:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop00:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123</value>
  <description>password to use against metastore database</description>
</property>

4. After installing Hive and MySQL, copy the MySQL JDBC connector jar into HIVE_HOME/lib.
If you run into a permissions error, grant privileges in MySQL:

GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY '123' WITH GRANT OPTION;

FLUSH PRIVILEGES;

Usage:

1. Create a table (internal/managed by default):
create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';
Create a partitioned table:
create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';
Create an external table:
create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location '/td_ext';
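To illustrate the internal vs. external distinction, here is a hedged sketch using the tables defined above (behavior as in classic Hive; paths are the ones from the statements above):

```sql
-- Dropping an internal (managed) table removes both the metadata
-- and the table's data under the Hive warehouse directory.
DROP TABLE trade_detail;

-- Dropping an external table removes only the metastore entry;
-- the files at the specified location ('/td_ext') stay in HDFS.
DROP TABLE td_ext;
```

Use an external table when other tools also read or write the underlying files and Hive should not own their lifecycle.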


2. Create a partitioned table.
Difference between regular and partitioned tables: when large volumes of data are added continually, use a partitioned table.
create table book (id bigint, name string) partitioned by (pubdate string) row format delimited fields terminated by '\t'; 

3. Load data into the partitioned table:
load data local inpath './book.txt'  into table book partition (pubdate='2010-08-22');
load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
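After loading, the partitions can be inspected and queried; a small sketch (partition value taken from the load statements above):

```sql
-- List the partitions of the book table
SHOW PARTITIONS book;

-- Filtering on the partition column lets Hive scan only the
-- matching partition directory instead of the whole table.
SELECT id, name FROM book WHERE pubdate = '2010-08-22';
```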
