The Hive website (http://hive.apache.org/) released Hive 2.1.0 on June 20, billing it as a powerful new tool for big-data engineers and data analysts. Apache Hive 2.1 introduces six major improvements:
(1) LLAP. Apache Hive 2.0 introduced LLAP (Live Long And Process), and 2.1 optimizes it substantially; compared with Apache Hive 1, performance improves by roughly 25x.
(2) More robust SQL ACID support.
(3) 2x ETL performance, from a smarter CBO (Cost Based Optimizer), faster type conversion, and dynamic partition optimizations.
(4) Stored procedure support, which greatly simplifies migration from an EDW to Hive. This comes from the open-source HPL/SQL project (Apache License, http://www.hplsql.org/), whose goal is to add stored procedures to Apache Hive, SparkSQL, Impala and other SQL-on-Hadoop engines, as well as to NoSQL stores and RDBMSs.
(5) Vectorized execution support for text-format data.
(6) New diagnostic and monitoring tools, including a new HiveServer2 UI, an LLAP UI, and an improved Tez UI.
In terms of performance, Hive 2.1 is reported to be roughly 26x faster.
Next, let's take a closer look at the optimization most critical to Apache Hive 2.1's performance gains: LLAP.
LLAP stands for "Live Long and Process". It introduces a distributed, persistent query service combined with an optimized in-memory data cache, so query jobs start quickly and unnecessary disk I/O is avoided. In short, LLAP is a next-generation distributed execution architecture: it intelligently caches data in memory across multiple machines, lets all clients share that cached data, and still preserves elastic scalability.
Compared with Hive 1 + Tez, Hive 2 + Tez + LLAP is roughly 26x faster; the benchmark results (obtained with https://github.com/hortonworks/hive-testbench) are shown in the figure below.
The introduction of LLAP marks Apache Hive's entry into the in-memory computing era. Broadly, in-memory computing can be grouped into three types.
Among them, Type 1 has been shown in the Apache Hadoop ecosystem to deliver limited performance, so Hive went straight to Type 2, and it now supports Type 2's features well, including distributed memory management and optimization and in-memory data sharing. Apache Hive is also continuing to push performance further, for example by supporting flash as a storage medium and by extending LLAP to process compressed data directly, without decompressing it first.
Here my Hive install directory is /opt/bigdata/hive; the unpacked folder is renamed to hive:
tar -zxvf /home/thinkgamer/下载/apache-hive-2.1.0-bin.tar.gz -C /opt/bigdata/
cd /opt/bigdata/
mv apache-hive-2.1.0-bin/ hive
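Optionally, you can also export HIVE_HOME so later commands can be run from any directory (a minimal sketch assuming a bash shell; the steps below simply run everything from /opt/bigdata/hive instead):
export HIVE_HOME=/opt/bigdata/hive    # assumed install path from above
export PATH=$PATH:$HIVE_HOME/bin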
Create a hive21 user in MySQL for the metastore, grant it privileges, and flush the privilege cache:
CREATE USER 'hive21' IDENTIFIED BY 'hive21';
grant all privileges on *.* to 'hive21' with grant option;
flush privileges;
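To confirm the new account works before wiring it into Hive, you can log in with it directly (a quick check, assuming the mysql client is on the PATH):
mysql -u hive21 -phive21 -e "show databases;"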
cp /path/to/mysql-connector-java-5.1.38-bin.jar hive/lib
Then configure the metastore connection in conf/hive-site.xml:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive21?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive21</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive21</value>
</property>
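With the connection configured, the metastore schema can be initialized up front (this is the same schematool command that resolves the first error covered in the troubleshooting section below):
bin/schematool -initSchema -dbType mysql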
Start the Hive CLI and verify that it works:
bin/hive
hive> show databases;
OK
default
Time taken: 1.123 seconds, Fetched: 1 row(s)
hive> create table table_name (
> id int,
> dtDontQuery string,
> name string
> );
OK
Time taken: 0.983 seconds
hive> show tables;
OK
table_name
Time taken: 0.094 seconds, Fetched: 1 row(s)
At this point, if you go into MySQL, you will see that a hive21 database has been created:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| hive |
| hive21 |
| mysql |
| performance_schema |
+--------------------+
If the following error is reported:
Caused by: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema.
If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql))
Solution:
bin/schematool -initSchema -dbType mysql --verbose
While troubleshooting this, I found posts online suggesting that a Derby database should be initialized here; in my view that is incorrect, because we have already configured MySQL as the metastore database.
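To double-check that the schema really landed in MySQL rather than Derby, schematool's -info option prints the metastore connection details and schema version (assuming the same -dbType as above):
bin/schematool -dbType mysql -info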
Another error you may hit:
Logging initialized using configuration in file:/opt/bigdata/hive/conf/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:631)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:550)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:518)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 12 more
Solution:
Edit hive-site.xml and replace ${system:java.io.tmpdir} and ${system:user.name} with /opt/bigdata/hive/tmp.
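For example, the replacement can be done in one pass with sed (a sketch that follows the instruction above literally and assumes GNU sed; back up the file first and adjust the paths to your install):
cd /opt/bigdata/hive
cp conf/hive-site.xml conf/hive-site.xml.bak
sed -i -e 's#${system:java.io.tmpdir}#/opt/bigdata/hive/tmp#g' -e 's#${system:user.name}#/opt/bigdata/hive/tmp#g' conf/hive-site.xml
mkdir -p /opt/bigdata/hive/tmp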
Comparing the service lists of the two versions with bin/hive --service help:
Hive 1.2:
[master@master1 hive]$ bin/hive --service help
Usage ./hive --service serviceName
Service List: beeline cli help hiveburninclient hiveserver2 hiveserver hwi jar lineage metastore metatool orcfiledump rcfilecat schemaTool version
Parameters parsed:
--auxpath : Auxillary jars
--config : Hive configuration directory
--service : Starts specific service/component. cli is default
Parameters used:
HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory
HIVE_OPT : Hive options
For help on a particular service:
./hive --service serviceName --help
Debug help: ./hive --debug --help
Hive 2.1:
root@thinkgamer-pc:/opt/bigdata/hive# bin/hive --service help
Usage ./hive --service serviceName
Service List: beeline cleardanglingscratchdir cli hbaseimport hbaseschematool help hiveburninclient hiveserver2 hplsql hwi jar lineage llapdump llap llapstatus metastore metatool orcfiledump rcfilecat schemaTool version
Parameters parsed:
--auxpath : Auxillary jars
--config : Hive configuration directory
--service : Starts specific service/component. cli is default
Parameters used:
HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory
HIVE_OPT : Hive options
For help on a particular service:
./hive --service serviceName --help
Debug help: ./hive --debug --help
We can see that Hive 2.1 adds services for working with HBase, as well as hplsql and several others; these are among Hive 2.1's new features. A few commonly used services are described here:
beeline: used the same way as beeline in Hive 1.2 (with the expected performance improvements); for beeline usage, see:
http://blog.csdn.net/gamer_gyt/article/details/52062460
cleardanglingscratchdir: cleans up dangling scratch directories left behind by old sessions.
Usage: bin/hive --service cleardanglingscratchdir
hbaseimport/hbaseschematool: tools for interacting with HBase.
hiveserver2: provides a JDBC interface so that external programs can operate Hive; see the sketch below.
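A typical flow is to start the server and then connect to it with beeline (a sketch, not from the original post; jdbc:hive2://localhost:10000 is HiveServer2's default JDBC URL and port):
bin/hive --service hiveserver2 &
bin/beeline -u jdbc:hive2://localhost:10000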
hplsql: a tool that implements procedural SQL for Apache Hive, SparkSQL and other SQL-on-Hadoop engines, as well as NoSQL stores and relational databases; see the sketch after the official description below.
Official description:
HPL/SQL (previously known as PL/HQL) is an open source tool (Apache License 2.0) that implements procedural SQL language for Apache Hive, SparkSQL as well as any other SQL-on-Hadoop implementations, NoSQL and RDBMS.
HPL/SQL language is compatible to a large extent with Oracle PL/SQL, ANSI/ISO SQL/PSM (IBM DB2, MySQL, Teradata i.e), PostgreSQL PL/pgSQL (Netezza), Transact-SQL (Microsoft SQL Server and Sybase) that allows you leveraging existing SQL/DWH skills and familiar approach to implement data warehouse solutions on Hadoop. It also facilitates migration of existing business logic to Hadoop.
HPL/SQL is an efficient way to implement ETL processes in Hadoop.
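As a quick illustration, an HPL/SQL statement can be run through the hive launcher (a sketch assuming the -e option described in the HPL/SQL documentation at hplsql.org):
bin/hive --service hplsql -e "PRINT 'hello from hplsql';"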
Here is a diagram found online:
http://blog.csdn.net/Gamer_gyt/article/details/53002446
http://mp.weixin.qq.com/s?__biz=MjM5NzAyNTE0Ng==&mid=2649517151&idx=1&sn=4df5b43cdc9290b28150c4e8a5dc2320&chksm=bef8884a898f015cf8366afd3fb094e12306efdcf7e5c547743361bc7d8750f6f46ea583c573&mpshare=1&scene=1&srcid=1122P4mtzZ2Dyb08OngXOSU7#rd