Apache Phoenix

Introduction

HBase is one of the most popular NoSQL databases. It is available in all major Hadoop distributions and is also part of AWS Elastic MapReduce as an additional application. Out of the box it offers its own data model operations such as Get, Put, Scan and Delete, but it does not offer SQL-like capabilities, as opposed to, for instance, Cassandra's query language, CQL.
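
To make these native operations concrete, here is a minimal Java sketch, assuming the HBase 0.94-era client API and an existing table with a column family named 'contents' (the table, rowkey and column names are placeholders for illustration only):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class NativeHBaseOps {
  public static void main(String[] args) throws Exception {
    // Picks up hbase-site.xml from the classpath
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "webpages");

    // Put: write a cell addressed by rowkey, column family and qualifier
    Put put = new Put(Bytes.toBytes("com.example/index"));
    put.add(Bytes.toBytes("contents"), Bytes.toBytes("html"), Bytes.toBytes("<html>...</html>"));
    table.put(put);

    // Get: read a single row by its rowkey
    Result row = table.get(new Get(Bytes.toBytes("com.example/index")));

    // Scan: iterate over a (sorted) range of rowkeys
    ResultScanner scanner = table.getScanner(new Scan());
    for (Result r : scanner) { /* process each row */ }
    scanner.close();

    // Delete: remove a row by its rowkey
    table.delete(new Delete(Bytes.toBytes("com.example/index")));
    table.close();
  }
}
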
Apache Phoenix is a SQL layer on top of HBase that supports the most common SQL operations such as CREATE TABLE, SELECT, UPSERT and DELETE. It was originally developed by Salesforce.com engineers for internal use and later open sourced; in 2013 it became an Apache incubator project.

Architecture

We have covered HBase in more detail in this article. Just a quick recap: the HBase architecture is based on three key components: the HBase Master server, the HBase Region Servers and Zookeeper.

[Figure 1: HBase architecture (HBase Master, Region Servers and Zookeeper)]

The client needs to find the Region Servers in order to work with the data stored in HBase. In essence, regions are the basic elements for distributing tables across the cluster. In order to find the Region Servers, the client first has to talk to Zookeeper.
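
As a short sketch of what this looks like from the client side (again assuming the HBase 0.94-era Java client and a local single-node setup), the client is configured with the Zookeeper quorum only, and the region server locations are resolved from there:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class RegionLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Only Zookeeper is addressed directly; the host and port below are
    // assumptions for a local setup.
    conf.set("hbase.zookeeper.quorum", "localhost");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    // The region servers holding the table are discovered via Zookeeper.
    HTable table = new HTable(conf, "webpages");
    table.close();
  }
}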

[Figure 2: The client locating Region Servers through Zookeeper]

The key elements of the HBase data model are tables, column families, columns and rowkeys. Tables are made of columns and rows. The individual elements at the column and row intersections (cells, in HBase terms) are versioned based on timestamps. The rows are identified by sorted rowkeys; these rowkeys can be considered primary keys, and all the data in the table can be accessed via them.

The columns are grouped into column families; at table creation time you only have to specify the column families, not all the columns. A column name is made of its column family prefix followed by its own qualifier, for example: 'contents:html'.

As we have seen, the classic HBase data model is not designed with SQL in mind. Under the hood it is a sorted multidimensional map. That is where Phoenix comes to the rescue: it offers a SQL skin on HBase. Phoenix is implemented as a JDBC driver. From an architecture perspective, a Java client using JDBC can be configured to work with the Phoenix driver and connect to HBase using SQL-like statements. We will demonstrate how to use SQuirreL, a popular Java-based graphical SQL client, together with Phoenix.
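
To illustrate, here is a minimal sketch of such a Java client; it assumes the Phoenix client jar is on the classpath, HBase runs locally and the WEB_STAT sample table (created in the next section) already exists:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixJdbcClient {
  public static void main(String[] args) throws Exception {
    // Register the Phoenix driver explicitly; newer client jars also
    // self-register through the JDBC service loader.
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
    // The connect string format is jdbc:phoenix:<zookeeper quorum>
    Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery(
        "SELECT DOMAIN, AVG(CORE) FROM WEB_STAT GROUP BY DOMAIN");
    while (rs.next()) {
      System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
    }
    rs.close();
    stmt.close();
    conn.close();
  }
}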

Getting Started with Phoenix

You can download Phoenix from the Apache download site. Different Phoenix versions are compatible with different HBase versions, so please read the Phoenix documentation to ensure you have the correct setup. In our tests we used Phoenix 3.0.0 with HBase 0.94; the Hadoop distribution was Cloudera CDH4.4 with Hadoop v1. The Phoenix package contains client drivers for both Hadoop version 1 and version 2, so we had to use the appropriate Hadoop-1 files; see the details later on when we discuss the SQuirreL client.

Once you have unzipped the downloaded Phoenix package, you need to copy the relevant Phoenix jar files to the HBase region servers to ensure that the Phoenix client can communicate with them; otherwise you may get an error message saying that the client and server jars are not compatible.

$ cd ~/phoenix/phoenix-3.0.0-incubating/common
$ cp phoenix-3.0.0-incubating-client-minimal.jar  /usr/lib/hbase/lib
$ cp phoenix-core-3.0.0-incubating.jar /usr/lib/hbase/lib

After copying the jar files to the region servers, restart them.

Phoenix provides a command line tool called sqlline, a utility written in Python. Its functionality is similar to Oracle SQL*Plus or the MySQL command line tool: not too sophisticated, but it does the job for simple use cases.

Before you start using sqlline, you can create a sample database table, populate it and run some simple queries as follows:

$ cd ~/phoenix/phoenix-3.0.0-incubating/bin
$ ./psql.py localhost ../examples/web_stat.sql ../examples/web_stat.csv ../examples/web_stat_queries.sql

This will run a CREATE TABLE statement:

CREATE TABLE IF NOT EXISTS WEB_STAT (
  HOST CHAR(2) NOT NULL,
  DOMAIN VARCHAR NOT NULL,
  FEATURE VARCHAR NOT NULL,
  DATE DATE NOT NULL,
  USAGE.CORE BIGINT,
  USAGE.DB BIGINT,
  STATS.ACTIVE_VISITOR INTEGER
  CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, FEATURE, DATE)
);

Then it loads the data stored in the web_stat CSV file (note that USAGE and STATS in the schema above are column families, while the columns in the PRIMARY KEY constraint together make up the rowkey):

NA,Salesforce.com,Login,2013-01-01 01:01:01,35,42,10
EU,Salesforce.com,Reports,2013-01-02 12:02:01,25,11,2
EU,Salesforce.com,Reports,2013-01-02 14:32:01,125,131,42
NA,Apple.com,Login,2013-01-01 01:01:01,35,22,40
NA,Salesforce.com,Dashboard,2013-01-03 11:01:01,88,66,44
...

And then it runs a few sample queries on the table, e.g.:

-- Average CPU and DB usage by Domain
SELECT DOMAIN, AVG(CORE) Average_CPU_Usage, AVG(DB) Average_DB_Usage 
FROM WEB_STAT 
GROUP BY DOMAIN 
ORDER BY DOMAIN DESC;

Now you can connect to HBase using sqlline:

$ ./sqlline.py localhost
..
Connecting to jdbc:phoenix:localhost
Driver: org.apache.phoenix.jdbc.PhoenixDriver (version 3.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
..
Done
sqlline version 1.1.2
0: jdbc:phoenix:localhost> select count(*) from web_stat;
+------------+
|  COUNT(1)  |
+------------+
| 39         |
+------------+
1 row selected (0.112 seconds)
0: jdbc:phoenix:localhost> select host, sum(active_visitor) from web_stat group by host;
+------+---------------------------+
| HOST | SUM(STATS.ACTIVE_VISITOR) |
+------+---------------------------+
| EU   | 698                       |
| NA   | 1639                      |
+------+---------------------------+
2 rows selected (0.294 seconds)
0: jdbc:phoenix:localhost>

Using SQuirreL with Phoenix

If you prefer to use a graphical SQL client with Phoenix, you can download one, e.g. SQuirreL, from here. The first step is then to copy the appropriate Phoenix driver jar file to the SQuirreL lib directory:

$ cd ~/phoenix
$ cp phoenix-3.0.0-incubating/hadoop-1/phoenix-3.0.0-incubating-client.jar ~/squirrel/lib

Now you are ready to configure the JDBC driver in the SQuirreL client, as shown in the picture below:

[Figure 3: Configuring the Phoenix JDBC driver in SQuirreL]

Then you can connect to Phoenix using the appropriate connect string (jdbc:phoenix:localhost in our test scenario):

[Figure 4: Connecting to Phoenix from SQuirreL]

Once connected, you can start executing your SQL queries: 
[Figure 5: Executing SQL queries against Phoenix in SQuirreL]

Phoenix on Amazon Web Services – AWS Elastic MapReduce with Phoenix

You can also use Phoenix with AWS Elastic MapReduce. When you create a cluster, you need to specify the Apache Hadoop version, then configure HBase as an additional application and define the bootstrap action that loads Phoenix onto your AWS EMR cluster. See the details in the pictures below:

[Figure 6: Configuring HBase as an additional application on AWS Elastic MapReduce]

[Figure 7: Defining the Phoenix bootstrap action on AWS Elastic MapReduce]

Once the cluster is running, you can log in to the master node using ssh and check your Phoenix configuration.
[Figure 8: Phoenix configuration on the EMR master node]

Conclusion

SQL is one of the most popular languages used by data scientists and it is likely to remain so. With the advent of Big Data and NoSQL databases, the volume, variety and velocity of data have increased significantly, but the demand for traditional, well-known languages to process that data has not changed much. SQL-on-Hadoop solutions are gaining momentum. Apache Phoenix is an interesting open source player offering a SQL layer on top of HBase.




Recently, Salesforce.com open sourced Phoenix, a Java middle layer that lets developers run SQL queries on Apache HBase. InfoQ had the chance to interview James Taylor, the lead developer at Salesforce.com, to learn more about Phoenix.

On top of the countless SQL, NoSQL and NewSQL databases, Salesforce.com has announced the Phoenix project, a SQL layer built over Apache HBase, a columnar Big Data store. Phoenix is written entirely in Java, the code is available on GitHub, and it provides a client-embeddable JDBC driver.

According to the project, Phoenix is used internally at Salesforce.com and delivers performance on the order of milliseconds for simple, low-latency queries and on the order of seconds for queries over millions of rows. Phoenix is not meant for map-reduce jobs, as HBase often is, but for accessing HBase data through a standardized language.

According to the project's creators, Phoenix beats Hive for simple queries spanning 10M to 100M rows. Compared to Impala and OpenTSDB, which use the HBase API, coprocessors and custom filters, Phoenix is also somewhat faster for similar queries.

The Phoenix query engine transforms SQL queries into one or more HBase scans and orchestrates their execution to produce standard JDBC result sets. By using the HBase API directly, along with coprocessors and custom filters, it achieves performance on the order of milliseconds for simple queries and on the order of seconds for queries over millions of rows.

Some of Phoenix's most noteworthy features are:

  • An embedded JDBC driver that implements the majority of the java.sql interfaces, including the metadata APIs
  • Columns can be modeled as a multi-part row key or as key/value cells
  • Full query support, with multiple predicates and optimized scan keys
  • DDL support: CREATE TABLE, DROP TABLE and ALTER TABLE for adding/removing columns
  • A versioned schema repository: snapshot queries use the schema that was in place when the data was written
  • DML support: UPSERT VALUES for row-by-row insertion, UPSERT SELECT for mass data transfer between the same or different tables, and DELETE for deleting rows (see the JDBC sketch after this list)
  • Limited transaction support through client-side batching
  • Single table only: there are no joins yet, and secondary indexes are under development
  • Close adherence to the ANSI SQL standard
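
As a brief, hedged sketch of the DML and client-side batching features above, the following JDBC snippet reuses the WEB_STAT schema from the first part of this article (the table, columns and connect string are assumptions for the example, not part of the project's documentation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixUpsert {
  public static void main(String[] args) throws Exception {
    Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
    // Limited transactions: mutations are batched on the client and
    // flushed to HBase together when commit() is called.
    conn.setAutoCommit(false);
    PreparedStatement ps = conn.prepareStatement(
        "UPSERT INTO WEB_STAT (HOST, DOMAIN, FEATURE, DATE, STATS.ACTIVE_VISITOR) " +
        "VALUES (?, ?, ?, CURRENT_DATE(), ?)");
    ps.setString(1, "EU");
    ps.setString(2, "Apple.com");
    ps.setString(3, "Login");
    ps.setInt(4, 10);
    ps.executeUpdate();
    conn.commit(); // the batched UPSERT is sent to the server here
    ps.close();
    conn.close();
  }
}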

The Phoenix code is open sourced under a BSD license.

Below is InfoQ's interview with James Taylor, the lead developer of Phoenix.

InfoQ: Why offer a SQL interface on a non-SQL data store? There are plenty of other SQL solutions already.

JT: Existing SQL solutions are usually not horizontally scalable, so they hit a wall when the data gets big. As to why we offer a SQL interface on HBase, a NoSQL data store, there are several reasons:

  1. Using an easily understood language such as SQL makes it easier for people to use HBase. Instead of learning another proprietary API, people can use a familiar language to read and write their data.
  2. Writing in a higher-level language such as SQL reduces the amount of code you have to write. For example, with Phoenix you can write a query like the one below to get web usage statistics (I won't guess how many lines of code the same thing would take against the native HBase API, but it would not be few):

     SELECT TRUNC(DATE,'DAY') DAY,
            SUM(CORE) TOTAL_CPU_Usage,
            MIN(CORE) MIN_CPU_Usage,
            MAX(CORE) MAX_CPU_Usage
     FROM WEB_STAT
     WHERE DOMAIN LIKE 'Salesforce%'
     GROUP BY TRUNC(DATE,'DAY');

  3. Putting an abstraction layer like SQL between data access and runtime execution opens the door to a lot of optimization when queries are executed. For example, for a GROUP BY query we can take advantage of the coprocessor feature of HBase, which lets Phoenix code run on the HBase servers. The aggregation can therefore be performed on the server side rather than the client side, which dramatically reduces the amount of data transferred between client and server. In addition, Phoenix parallelizes the GROUP BY on the client side by chunking up the scan based on ranges of the row key; running in parallel means results come back faster. All of these optimizations require no user involvement; the user simply issues the query.
  4. By using an industry-standard API such as JDBC, we can leverage the existing tools built on those APIs. For example, you can use an off-the-shelf SQL client such as SQuirrel (http://squirrel-sql.sourceforge.net/) to connect to an HBase server and execute SQL. Interested readers can see the getting started guide for more information: https://github.com/forcedotcom/phoenix/blob/master/README.md.

InfoQ: Are there any performance evaluations? Are response times faster? Does it scale better?

JT: Performance comparisons between Phoenix and other NoSQL products/projects are available at https://github.com/forcedotcom/phoenix/wiki/Performance. We have not published benchmarks comparing Phoenix with existing relational technologies (comparisons between HBase and those are already out there), but as the number and width of rows grows, a NoSQL solution will shine. It also depends on "how" you use your relational database: in a multi-tenant fashion like Salesforce.com, or single-tenant. Depending on how the row key is composed, HBase is very good at co-locating related data, so for certain multi-tenant scenarios its advantage is clear.

InfoQ: When will join support be added?

JT: Join support is on our roadmap; see https://github.com/forcedotcom/phoenix/wiki#wiki-roadmap. We have already laid some of the groundwork. We cannot give an exact date yet, since there is so much to do, but we will get there as soon as we can.

