Hadoop Single-Node Setup: Installing Phoenix

Phoenix is a SQL layer built on top of HBase that lets us create tables, insert data, and query HBase data through standard JDBC APIs instead of the HBase client APIs.
Phoenix is written entirely in Java and ships as an embedded JDBC driver for HBase. The Phoenix query engine translates a SQL query into one or more HBase scans and orchestrates their execution to produce a standard JDBC result set. By working directly with the HBase API, coprocessors, and custom filters, it delivers millisecond-level latency for simple queries and second-level latency for queries over millions of rows.
Phoenix lets us write less code, and often perform better than hand-written HBase client code, by:

Compiling SQL into native HBase scans.
Determining the optimal start and stop keys for each scan.
Executing the scans in parallel.
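The second point is worth unpacking: because HBase stores rows sorted by row key, a range predicate on the primary key can be answered with one bounded scan instead of a full-table scan. The sketch below is a conceptual illustration in Python, not Phoenix's actual code, and assumes a simple 8-byte big-endian integer row key so byte-wise ordering matches numeric ordering:

```python
# Conceptual sketch (NOT Phoenix internals): turn a SQL range predicate
# on the row key into explicit HBase scan start/stop keys.

def predicate_to_scan_range(lower_id: int, upper_id: int) -> tuple[bytes, bytes]:
    """Map WHERE id BETWEEN lower_id AND upper_id to (start_row, stop_row).

    Assumes the row key is the id encoded as an 8-byte big-endian integer.
    """
    start_row = lower_id.to_bytes(8, "big")       # scan start (inclusive)
    stop_row = (upper_id + 1).to_bytes(8, "big")  # scan stop (exclusive)
    return start_row, stop_row

start, stop = predicate_to_scan_range(100, 200)
# HBase would only read rows whose keys fall in [start, stop),
# rather than scanning the whole table.
```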

References:
https://www.cnblogs.com/yfb918/p/10643190.html
https://www.cnblogs.com/funyoung/p/10249115.html
https://www.cnblogs.com/wumingcong/p/6044038.html
https://blog.csdn.net/programmeryu/article/details/91605682
https://mirror.bit.edu.cn/apache/hbase/

I. Download: http://archive.apache.org/dist/phoenix/

Choose the Phoenix version that matches your HBase release.


HBase versions:   hbase-2.1.9-bin.tar.gz
                  hbase-1.4.13-bin.tar.gz

Phoenix versions: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
                  apache-phoenix-4.14.3-HBase-1.4-bin.tar.gz

II. Install Phoenix

1. Extract

tar -zxf apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz -C /opt/modules/
tar -zxf apache-phoenix-4.14.3-HBase-1.4-bin.tar.gz -C /opt/modules/

2. Integrate with HBase


Copy phoenix-5.0.0-HBase-2.0-server.jar into HBase's lib directory (on a cluster, copy it to every node).

Note: copying these two jars causes HBase startup errors — phoenix-5.0.0-HBase-2.0-client.jar and phoenix-core-5.0.0-HBase-2.0.jar:
[hadoop@centos04 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp ./phoenix-5.0.0-HBase-2.0-client.jar /opt/modules/hbase-2.1.9/lib
[hadoop@centos04 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp ./phoenix-core-5.0.0-HBase-2.0.jar /opt/modules/hbase-2.1.9/lib
Copy this one instead:
[hadoop@centos04 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp ./phoenix-5.0.0-HBase-2.0-server.jar /opt/modules/hbase-2.1.9/lib

[hadoop@centos04 apache-phoenix-4.14.3-HBase-1.4-bin]$ cp ./phoenix-4.14.3-HBase-1.4-server.jar /opt/modules/hbase-1.4.13/lib
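Since copying the wrong jars breaks HBase startup, it can help to sanity-check the lib directory before restarting. The helper below is a hypothetical convenience script (not part of Phoenix or HBase), assuming the 5.0.0-HBase-2.0 jar names used above:

```python
# Hypothetical sanity check: confirm the server jar is present in HBase's
# lib directory and that the client/core jars (which break HBase startup,
# per the note above) are NOT there.
from pathlib import Path

def check_phoenix_jars(hbase_lib: str, version: str = "5.0.0-HBase-2.0") -> list[str]:
    """Return a list of problems found in the HBase lib directory."""
    lib = Path(hbase_lib)
    problems = []
    if not (lib / f"phoenix-{version}-server.jar").exists():
        problems.append(f"missing phoenix-{version}-server.jar")
    for bad in (f"phoenix-{version}-client.jar", f"phoenix-core-{version}.jar"):
        if (lib / bad).exists():
            problems.append(f"remove {bad} (causes HBase startup errors)")
    return problems
```

Running `check_phoenix_jars("/opt/modules/hbase-2.1.9/lib")` before `start-hbase.sh` would flag a bad copy early instead of letting HBase fail at startup.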

Copy hbase-site.xml from hbase/conf into Phoenix's bin directory:

[hadoop@centos04 bin]$ pwd
/opt/modules/apache-phoenix-5.0.0-HBase-2.0-bin/bin
[hadoop@centos04 bin]$ cp /opt/modules/hbase-2.1.9/conf/hbase-site.xml ./

III. Start

1. Start HBase

Start:
sh /opt/modules/hbase-2.1.9/bin/start-hbase.sh
sh /opt/modules/hbase-1.4.13/bin/start-hbase.sh

Stop:
sh /opt/modules/hbase-2.1.9/bin/stop-hbase.sh
sh /opt/modules/hbase-1.4.13/bin/stop-hbase.sh

Connect:
sh /opt/modules/hbase-2.1.9/bin/hbase shell
sh /opt/modules/hbase-1.4.13/bin/hbase shell
http://192.168.0.181:16010


2. Use a GUI client

Download: http://squirrel-sql.sourceforge.net


(Not tested.)

3. Start Phoenix

Run the following from the Phoenix directory, passing the ZooKeeper address as the entry point to HBase:

bin/sqlline.py [hostname]:2181
Note: the first startup can take over a minute because Phoenix performs one-time initialization in HBase, creating system tables such as:
        SYSTEM.CATALOG
        SYSTEM.SEQUENCE
        SYSTEM.STATS
[hadoop@centos04 bin]$ ./sqlline.py
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix: none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/apache-phoenix-4.14.3-HBase-1.4-bin/phoenix-4.14.3-HBase-1.4-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.8.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
20/03/25 07:27:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:>
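Under the hood, the `[hostname]:2181` argument to sqlline.py becomes a Phoenix JDBC URL of the form `jdbc:phoenix:<zookeeper quorum>:<port>`. A small helper (hypothetical, for illustration only) that builds such a URL from a list of ZooKeeper hosts:

```python
# Hypothetical helper: build the Phoenix JDBC URL from the ZooKeeper quorum.
# Phoenix connects through ZooKeeper, and 2181 is ZooKeeper's default
# client port; multiple quorum hosts are comma-separated in the URL.

def phoenix_jdbc_url(zk_hosts: list[str], port: int = 2181) -> str:
    return f"jdbc:phoenix:{','.join(zk_hosts)}:{port}"

print(phoenix_jdbc_url(["centos04"]))  # jdbc:phoenix:centos04:2181
```

The same URL string is what a Java client would pass to `DriverManager.getConnection(...)` when using Phoenix as a plain JDBC driver.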
In the hbase shell you can see the tables that were created:
[hadoop@centos04 ~]$ sh /opt/modules/hbase-1.4.13/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/hbase-1.4.13/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.8.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 1.4.13, r38bf65a22b7e9320f07aeb27677e4533b9a77ef4, Sun Feb 23 02:06:36 PST 2020

hbase(main):001:0> list
TABLE                                                                                                                   
SYSTEM.CATALOG                                                                                                          
SYSTEM.FUNCTION                                                                                                         
SYSTEM.LOG                                                                                                              
SYSTEM.MUTEX                                                                                                            
SYSTEM.SEQUENCE                                                                                                         
SYSTEM.STATS                                                                                                            
t1                                                                                                                      
7 row(s) in 0.4440 seconds

=> ["SYSTEM.CATALOG", "SYSTEM.FUNCTION", "SYSTEM.LOG", "SYSTEM.MUTEX", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "t1"]
hbase(main):002:0> 

 

1. Create a table
create table if not exists test(id bigint not null,equip_id integer,create_time date,status tinyint constraint my_pk primary key(id));

2. Create an auto-increment sequence
CREATE SEQUENCE testip_sequence START WITH 10000 INCREMENT BY 1 CACHE 1;

3. Insert test data
upsert into test values (1,1, TO_DATE('2019-06-05 09:00:02'),0);

UPSERT INTO test(id, equip_id, create_time,status) VALUES( NEXT VALUE FOR testip_sequence,1, TO_DATE('2019-06-05 09:00:03'),0);

4. Create a secondary index
CREATE INDEX time_index ON test(create_time desc);
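The id of the sequence-generated row follows from the sequence definition: `START WITH 10000 INCREMENT BY 1` means the first `NEXT VALUE FOR testip_sequence` returns 10000, the next 10001, and so on. A plain Python model of these semantics (ignoring CACHE; this is not Phoenix internals):

```python
# Plain model of CREATE SEQUENCE ... START WITH 10000 INCREMENT BY 1.
# CACHE (how many values a client pre-allocates) is ignored here; it
# affects performance and gaps, not the basic progression shown below.

class SequenceModel:
    def __init__(self, start: int = 10000, increment: int = 1):
        self._next = start
        self._increment = increment

    def next_value(self) -> int:
        """Model of NEXT VALUE FOR: return the current value, then advance."""
        value = self._next
        self._next += self._increment
        return value

seq = SequenceModel(start=10000, increment=1)
print(seq.next_value())  # 10000  <- first NEXT VALUE FOR, hence id 10000
print(seq.next_value())  # 10001
```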

 

0: jdbc:phoenix:> create table if not exists test(id bigint not null,equip_id integer,create_time date,status tinyint constraint my_pk primary key(id));
No rows affected (1.27 seconds)
0: jdbc:phoenix:> select * from test;
+-----+-----------+--------------+---------+
| ID  | EQUIP_ID  | CREATE_TIME  | STATUS  |
+-----+-----------+--------------+---------+
+-----+-----------+--------------+---------+
No rows selected (0.081 seconds)
0: jdbc:phoenix:> CREATE SEQUENCE testip_sequence START WITH 10000 INCREMENT BY 1 CACHE 1;
No rows affected (0.046 seconds)
0: jdbc:phoenix:> upsert into test values (1,1, TO_DATE('2019-06-05 09:00:02'),0);
1 row affected (0.023 seconds)
0: jdbc:phoenix:> UPSERT INTO test(id, equip_id, create_time,status) VALUES( NEXT VALUE FOR testip_sequence,1, TO_DATE('2019-06-05 09:00:03'),0);
1 row affected (0.026 seconds)
0: jdbc:phoenix:> select * from test;
+--------+-----------+--------------------------+---------+
|   ID   | EQUIP_ID  |       CREATE_TIME        | STATUS  |
+--------+-----------+--------------------------+---------+
| 1      | 1         | 2019-06-05 09:00:02.000  | 0       |
| 10000  | 1         | 2019-06-05 09:00:03.000  | 0       |
+--------+-----------+--------------------------+---------+
2 rows selected (0.044 seconds)
0: jdbc:phoenix:> 

⚠️

Running the CREATE INDEX command fails with:
ERROR 1029 (42Y88): Mutable secondary indexes must have the hbase.regionserver.wal.codec property set to org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the hbase-sites.xml of every region server. tableName=TIME_INDEX

Add the following property to hbase-site.xml (on every region server) and restart the service:

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

Done.