hbase-2.0.5
phoenix-5.0.0-HBase-2.0
Host | Hadoop | ZooKeeper | HBase
---|---|---|---
master | NameNode | No | HMaster
slave1 | DataNode | Yes | HRegionServer
slave2 | DataNode | Yes | HRegionServer
slave3 | DataNode | Yes | HRegionServer
/home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin
phoenix-core-5.0.0-HBase-2.0.jar
phoenix-5.0.0-HBase-2.0-client.jar
PS: after switching to the server jar, running sqlline.py master:2181 still gets a connection refused, but sqlline.py master,slave1,slave2,slave3:2181 and sqlline.py slave1:2181 both work. The likely reason is that master does not run a ZooKeeper server (see the table above), so nothing is listening on master:2181.
# Do not copy the client jar phoenix-5.0.0-HBase-2.0-client.jar; copy the server jar instead
# cp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-client.jar /home/hadoop/hbase-2.0.5/lib/
cp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-server.jar /home/hadoop/hbase-2.0.5/lib/
# Likewise, do not send the client jar phoenix-5.0.0-HBase-2.0-client.jar to the slaves
# scp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-client.jar hadoop@slave1:/home/hadoop/hbase-2.0.5/lib/
scp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-server.jar hadoop@slave1:/home/hadoop/hbase-2.0.5/lib/
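Copy the same two jars to slave2 and slave3 as well: the Phoenix server jar needs to be in the lib directory of every HRegionServer (and of the HMaster, which the first cp above already covers).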
Copy hbase-site.xml, together with Hadoop's core-site.xml and hdfs-site.xml, into the bin directory of the Phoenix installation:
cp hbase-site.xml /home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin/bin/
cp core-site.xml hdfs-site.xml /home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin/bin/
vi /etc/profile
#phoenix
export PHOENIX_HOME=/home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin
export PHOENIX_CLASSPATH=$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin
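Run source /etc/profile (or log in again) so that PATH picks up $PHOENIX_HOME/bin.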
chmod 777 psql.py
chmod 777 sqlline.py
sqlline.py master,slave1,slave2,slave3:2181
[hadoop@master bin]$ sqlline.py master,slave1,slave2,slave3:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:master,slave1,slave2,slave3:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:master,slave1,slave2,slave3:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/08/26 22:31:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 5.0)
Driver: PhoenixEmbeddedDriver (version 5.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:master,slave1,slave2,slave3:2>
The first session seemed rather slow and would not exit; only after quite a while did re-running the command exit successfully.
0: jdbc:phoenix:master,slave1,slave2,slave3:2> !exit
Newer Phoenix versions encode column names by default, so a table created to map an existing HBase table cannot see the original columns. Add column_encoded_bytes=0 when creating the mapping table, otherwise the column mapping cannot be established. For example:
create table "DEMO"(pk varchar primary key, "f1"."name" varchar) column_encoded_bytes=0
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix</artifactId>
    <version>5.0.0-HBase-2.0</version>
    <type>pom</type>
</dependency>
Test source code for connecting to HBase:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixConnectTest {
    public static void main(String[] args) throws Throwable {
        try {
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            // ZooKeeper address: one host or several (comma-separated), hostnames or IPs
            String url = "jdbc:phoenix:master,slave1,slave2,slave3:2181";
            Connection conn = DriverManager.getConnection(url);
            Statement statement = conn.createStatement();
            long time = System.currentTimeMillis();
            ResultSet rs = statement.executeQuery("select * from test");
            while (rs.next()) {
                String myName = rs.getString("name"); // column name in the table
                System.out.println("myName=" + myName);
            }
            long timeUsed = System.currentTimeMillis() - time;
            System.out.println("time " + timeUsed + "ms");
            // close the connection
            rs.close();
            statement.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
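As a follow-up usage sketch (row values are made up; it writes to the "DEMO" mapping table from the column-encoding example above): Phoenix uses UPSERT for writes, and a Phoenix JDBC connection does not auto-commit by default, so the write is committed explicitly before reading it back.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixUpsertTest {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:master,slave1,slave2,slave3:2181";
        try (Connection conn = DriverManager.getConnection(url)) {
            // UPSERT inserts the row, or updates it if the primary key already exists
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO \"DEMO\" (pk, \"f1\".\"name\") VALUES (?, ?)")) {
                ps.setString(1, "row1");
                ps.setString(2, "hello phoenix");
                ps.executeUpdate();
            }
            conn.commit(); // make the write visible (no auto-commit by default)
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT \"f1\".\"name\" FROM \"DEMO\" WHERE pk = ?")) {
                ps.setString(1, "row1");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}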