Hadoop + HBase Deployment, Installing and Using Phoenix (Part 3)

1. Installation

1.1 Download Phoenix

Download Phoenix from Apache (download link); every version from 4.8.0 through 4.10.0 is available. The HBase version installed locally is 1.1.9, so choose the build that targets HBase 1.1. Here I downloaded version 4.9.0, package apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz.

(HBase 1.1.9 download link)

Note: the Phoenix version you use must match your HBase version, otherwise errors will occur.
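As a quick sanity check, the HBase line a Phoenix build targets is encoded in the tarball name itself. A small, hypothetical helper to pull it out for comparison against your installed HBase might look like this:

```shell
# Hypothetical helper: extract the targeted HBase line from the tarball name
# so it can be compared against the installed HBase version.
tarball="apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz"   # the package from above
target=$(echo "$tarball" | sed -n 's/.*HBase-\([0-9.]*\)-bin.*/\1/p')
echo "This Phoenix build targets HBase $target"   # prints: ... HBase 1.1
```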

1.2 Configuration

First, extract the downloaded package locally, then copy the two jars phoenix-4.9.0-HBase-1.1-client.jar and phoenix-core-4.9.0-HBase-1.1.jar into HBase's lib/ directory. In addition, copy HBase's configuration file hbase-site.xml into the bin/ directory under the Phoenix root. That completes the configuration.
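The copy steps can be sketched as a few shell commands. The /usr/local paths below are assumptions based on the layout used later in this post; adjust them to your installation:

```shell
# Sketch of the section 1.2 copy steps; the /usr/local paths are assumptions,
# adjust PHOENIX_HOME and HBASE_HOME to your actual install locations.
PHOENIX_HOME=${PHOENIX_HOME:-/usr/local/phoenix}
HBASE_HOME=${HBASE_HOME:-/usr/local/hbase}

# The Phoenix jars go into HBase's lib/ directory.
for jar in "$PHOENIX_HOME"/phoenix-4.9.0-HBase-1.1-client.jar \
           "$PHOENIX_HOME"/phoenix-core-4.9.0-HBase-1.1.jar; do
  if [ -f "$jar" ]; then
    cp "$jar" "$HBASE_HOME/lib/"
  fi
done

# HBase's config goes into Phoenix's bin/ so sqlline can find the cluster.
if [ -f "$HBASE_HOME/conf/hbase-site.xml" ]; then
  cp "$HBASE_HOME/conf/hbase-site.xml" "$PHOENIX_HOME/bin/"
fi
```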

1.3 Startup

First start Hadoop, then start HBase; then, from the Phoenix directory, run:

bin/sqlline.py master

With that, you can start issuing SQL against the non-relational database.

2. Using Hadoop, HBase, and Phoenix Together

Startup and shutdown order for the three:

Start Hadoop first, then HBase, and finally Phoenix. Shut down in the reverse order: Phoenix first, then HBase, and finally Hadoop.
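That ordering can be captured in a small sketch script. The /usr/local paths follow this post's layout and are otherwise assumptions; the split into start-dfs.sh/start-yarn.sh follows the deprecation notice that start-all.sh itself prints:

```shell
# Sketch: helper functions that encode the required start/stop ordering.
# Paths assume the /usr/local layout used in this post; adjust as needed.
HADOOP_SBIN=${HADOOP_SBIN:-/usr/local/hadoop/sbin}
HBASE_BIN=${HBASE_BIN:-/usr/local/hbase/bin}

start_stack() {
  "$HADOOP_SBIN/start-dfs.sh"    # 1. Hadoop first (HDFS, then YARN)
  "$HADOOP_SBIN/start-yarn.sh"
  "$HBASE_BIN/start-hbase.sh"    # 2. HBase next
  # 3. Phoenix last: run sqlline.py / queryserver.py once HBase is up
}

stop_stack() {
  # Reverse order: exit sqlline / stop the query server first, then:
  "$HBASE_BIN/stop-hbase.sh"
  "$HADOOP_SBIN/stop-yarn.sh"
  "$HADOOP_SBIN/stop-dfs.sh"
}
```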

Startup

cc@cc-fibric:~$ sudo -i
[sudo] password for cc: 
root@cc-fibric:~# ssh localhost
root@localhost's password: 
Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.13.0-36-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

170 packages can be updated.
11 updates are security updates.

Last login: Fri Sep 28 11:20:52 2018 from 127.0.0.1
root@cc-fibric:~# 
root@cc-fibric:~# cd /usr/local/hadoop/
root@cc-fibric:/usr/local/hadoop# cd sbin/
Start Hadoop:
root@cc-fibric:/usr/local/hadoop/sbin# ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
root@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-cc-fibric.out
root@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-cc-fibric.out
Starting secondary namenodes [account.jetbrains.com]
[email protected]'s password: 
account.jetbrains.com: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-cc-fibric.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-cc-fibric.out
root@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-cc-fibric.out
root@cc-fibric:/usr/local/hadoop/sbin# jps
12721 NodeManager
12850 Jps
12035 DataNode
11866 NameNode
12399 ResourceManager
12239 SecondaryNameNode
root@cc-fibric:/usr/local/hadoop/sbin# cd /usr/local/hbase/bin/

Start HBase:

root@cc-fibric:/usr/local/hbase/bin# start-hbase.sh 
starting master, logging to /usr/local/hbase/logs/hbase-root-master-cc-fibric.out
root@cc-fibric:/usr/local/hbase/bin# 
root@cc-fibric:/usr/local/hbase/bin# cd /usr/local/phoenix/bin/

Start Phoenix:

root@cc-fibric:/usr/local/phoenix/bin# ./sqlline.py localhost
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
18/10/06 16:10:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
137/137 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:localhost> 

Using Phoenix:

Phoenix is a SQL layer built on top of HBase that lets us create tables, insert data, and query HBase data through standard JDBC APIs rather than the HBase client APIs.

Start the query server:

root@cc-fibric:/usr/local/phoenix/bin# ./queryserver.py start
starting Query Server, logging to /tmp/phoenix/phoenix-root-queryserver.log
root@cc-fibric:/usr/local/phoenix/bin# ./sqlline.py localhost
Create a table:
0: jdbc:phoenix:localhost> create table person(id varchar primary key, name varchar);
No rows affected (1.328 seconds)
Insert a row:
0: jdbc:phoenix:localhost> upsert into person values ('111','jone');
1 row affected (0.073 seconds)
Query the table:
0: jdbc:phoenix:localhost> select * from person;
+------+-------+
|  ID  | NAME  |
+------+-------+
| 111  | jone  |
+------+-------+
1 row selected (0.074 seconds)
List all tables:
0: jdbc:phoenix:localhost> !tables 
+------------+--------------+-------------+---------------+----------+------------+--------------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | REMARKS  | TYPE_NAME  | SELF_REFEREN |
+------------+--------------+-------------+---------------+----------+------------+--------------+
|            | SYSTEM       | CATALOG     | SYSTEM TABLE  |          |            |              |
|            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |          |            |              |
|            | SYSTEM       | LOG         | SYSTEM TABLE  |          |            |              |
|            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |          |            |              |
|            | SYSTEM       | STATS       | SYSTEM TABLE  |          |            |              |
|            |              | PERSON      | TABLE         |          |            |              |
+------------+--------------+-------------+---------------+----------+------------+--------------+
0: jdbc:phoenix:localhost> 
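The interactive session above can also be scripted: sqlline.py accepts a SQL file as a second argument, so the DDL and queries can be kept in a file. The install path below is an assumption matching this post's layout:

```shell
# Sketch: run Phoenix SQL non-interactively by passing a file to sqlline.py.
cat > /tmp/person.sql <<'EOF'
CREATE TABLE IF NOT EXISTS person (id VARCHAR PRIMARY KEY, name VARCHAR);
UPSERT INTO person VALUES ('111', 'jone');
SELECT * FROM person;
EOF

SQLLINE=/usr/local/phoenix/bin/sqlline.py   # assumed install path
if [ -x "$SQLLINE" ]; then
  "$SQLLINE" localhost /tmp/person.sql      # runs the statements and exits
fi
```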

Note one difference here: in a relational database such as MySQL, you are not required to designate a primary key, but Phoenix requires every table to declare one, otherwise it reports an error:

Below is an example of an invalid statement, illustrating the difference between MySQL and Phoenix:

Incorrect:
0: jdbc:phoenix:localhost> create table "test11"(id varchar,names varchar);
Error: ERROR 509 (42888): The table does not have a primary key. tableName=test11 (state=42888,code=509)
java.sql.SQLException: ERROR 509 (42888): The table does not have a primary key. tableName=test11
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
...........
(stack trace omitted)
...........
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:291)
    
Correct:
0: jdbc:phoenix:localhost> create table "test11"(id varchar primary key,names varchar);
No rows affected (1.309 seconds)
A primary key must always be specified here.

Shutdown order:

Stop the Phoenix query server:
root@cc-fibric:/usr/local/phoenix/bin# ./queryserver.py stop

root@cc-fibric:/usr/local/phoenix/bin# cd /usr/local/hbase/
Stop HBase:
root@cc-fibric:/usr/local/hbase# ls
bin          conf      hbase-webapps  lib          logs        README.txt
CHANGES.txt  data-tmp  LEGAL          LICENSE.txt  NOTICE.txt
root@cc-fibric:/usr/local/hbase# cd bin/
root@cc-fibric:/usr/local/hbase/bin# stop-hbase.sh 
stopping hbase....................
Stop Hadoop:
root@cc-fibric:/usr/local/hbase/bin# cd ../..
root@cc-fibric:/usr/local# cd hadoop/
root@cc-fibric:/usr/local/hadoop# ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
root@cc-fibric:/usr/local/hadoop# cd sbin/
Check the processes:
root@cc-fibric:/usr/local/hadoop/sbin# jps
6160 NodeManager
9730 Jps
5845 ResourceManager
5318 NameNode
5691 SecondaryNameNode
5486 DataNode
Stop:
root@cc-fibric:/usr/local/hadoop/sbin# ./stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
root@localhost's password: 
localhost: stopping namenode
root@localhost's password: 
localhost: stopping datanode
Stopping secondary namenodes [account.jetbrains.com]
[email protected]'s password: 
account.jetbrains.com: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
root@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop 
With that, everything has been shut down successfully.
root@cc-fibric:/usr/local/hadoop/sbin# jps
10479 Jps
