For SQL-on-HBase solutions, the options currently getting the most attention in the community are Cloudera's Impala, Apache Drill, and Hive. Based on how they operate against HBase, they fall into three categories:
- MapReduce-centric: each task reads HBase through the native hbase-client API;
- Google Dremel-centric: each task likewise reads HBase through the native hbase-client API;
- HBase-coprocessor-centric, combined with ideas from Google Dremel: the client merges the partial results returned by the individual nodes.
Installing Phoenix:
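A quick way to confirm that Phoenix is installed is to open a JDBC connection against the cluster. The snippet below is only an illustrative sketch: it assumes the salesforce-era driver class com.salesforce.phoenix.jdbc.PhoenixDriver, a ZooKeeper quorum reachable as zk-host, and that the WEB_STAT example table already exists.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // Driver class name from the salesforce-era Phoenix releases (assumption).
        Class.forName("com.salesforce.phoenix.jdbc.PhoenixDriver");
        // "zk-host" is a placeholder for the HBase cluster's ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
             Statement stmt = conn.createStatement();
             // WEB_STAT is the example table used throughout this post.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM WEB_STAT")) {
            while (rs.next()) {
                System.out.println("rows in WEB_STAT: " + rs.getLong(1));
            }
        }
    }
}
```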
Phoenix compiles its SQL grammar (PhoenixSQL.g) with ANTLR; the build generates PhoenixSQLLexer.java, PhoenixSQLParser.java, and PhoenixSQL.tokens. The running example in this post groups the WEB_STAT table by DOMAIN and returns the groups in descending order:

FROM WEB_STAT
GROUP BY DOMAIN
ORDER BY DOMAIN DESC;

The tokens section of the grammar enumerates the SQL keywords Phoenix understands:
tokens
{
SELECT='select';
FROM='from';
USING='using';
WHERE='where';
NOT='not';
AND='and';
OR='or';
NULL='null';
TRUE='true';
FALSE='false';
LIKE='like';
AS='as';
OUTER='outer';
ON='on';
IN='in';
GROUP='group';
HAVING='having';
ORDER='order';
BY='by';
ASC='asc';
DESC='desc';
NULLS='nulls';
LIMIT='limit';
FIRST='first';
LAST='last';
DATA='data';
CASE='case';
WHEN='when';
THEN='then';
ELSE='else';
END='end';
EXISTS='exists';
IS='is';
DISTINCT='distinct';
JOIN='join';
INNER='inner';
LEFT='left';
RIGHT='right';
FULL='full';
BETWEEN='between';
UPSERT='upsert';
INTO='into';
VALUES='values';
DELETE='delete';
CREATE='create';
DROP='drop';
PRIMARY='primary';
KEY='key';
ALTER='alter';
COLUMN='column';
TABLE='table';
ADD='add';
SPLIT='split';
EXPLAIN='explain';
VIEW='view';
IF='if';
CONSTRAINT='constraint';
}
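The generated lexer and parser are not normally driven directly; Phoenix wraps them behind a small parsing facade. The snippet below is a minimal sketch assuming the com.salesforce.phoenix.parse.SQLParser API of the 2.x code base (a constructor taking the SQL string and a parseStatement() method); the SELECT list is purely illustrative.

```java
import com.salesforce.phoenix.parse.SQLParser;

public class ParseExample {
    public static void main(String[] args) throws Exception {
        // The SELECT list is illustrative; the trailing clauses match the running example.
        String sql = "SELECT DOMAIN FROM WEB_STAT GROUP BY DOMAIN ORDER BY DOMAIN DESC";
        // SQLParser drives the ANTLR-generated PhoenixSQLLexer/PhoenixSQLParser
        // and turns the token stream into a parse-node tree.
        SQLParser parser = new SQLParser(sql);
        Object statement = parser.parseStatement();
        // For a query this is the parse-time select statement node.
        System.out.println("parsed as: " + statement.getClass().getName());
    }
}
```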
The parse-node tree is then bound to one of Phoenix's executable statement classes, one per statement type:
- Add/drop columns: ExecutableAddColumnStatement, ExecutableDropColumnStatement
- Create/drop tables: ExecutableCreateTableStatement, ExecutableDropTableStatement
- Queries: ExecutableSelectStatement
- Loading data: ExecutableUpsertStatement
- Plan explanation: ExecutableExplainStatement
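These classes are internal; from the JDBC surface every statement is simply issued through java.sql.Statement and Phoenix picks the matching Executable* wrapper. A minimal sketch, assuming an already-open Phoenix connection (see the smoke test above) and a made-up WEB_STAT schema:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class StatementRoutingSketch {
    // conn is an already-open Phoenix JDBC connection.
    static void run(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            // DDL -> ExecutableCreateTableStatement
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS WEB_STAT ("
                    + "HOST VARCHAR NOT NULL PRIMARY KEY, DOMAIN VARCHAR)");
            // Write path -> ExecutableUpsertStatement
            stmt.executeUpdate("UPSERT INTO WEB_STAT(HOST, DOMAIN) VALUES ('h1', 'example.org')");
            conn.commit(); // Phoenix connections default to autoCommit=false
            // Read path -> ExecutableSelectStatement
            try (ResultSet rs = stmt.executeQuery("SELECT DOMAIN FROM WEB_STAT GROUP BY DOMAIN")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```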
Executing the example SELECT then goes through the following steps:
- Create a QueryCompiler.
- Run the compile step, which recognizes the LIMIT, HAVING, WHERE, ORDER BY and projection parts of the statement and produces a ScanPlan.
- Wrap the scan results: according to the clauses recognized above, the raw scanner is decorated by chaining the various ResultIterator implementations found under the com.salesforce.phoenix.iterate package.
- For this SQL the resulting wrapper is OrderedAggregatingResultIterator. (How does it organize the data so that rows come back in DESC or ASC order?)
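The outcome of the compile step can be inspected from the client with EXPLAIN, which returns the chosen plan one line per row. A small sketch, again with an illustrative SELECT list and an already-open connection:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainSketch {
    // conn is an already-open Phoenix JDBC connection.
    static void printPlan(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "EXPLAIN SELECT DOMAIN FROM WEB_STAT GROUP BY DOMAIN ORDER BY DOMAIN DESC")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // one plan line per row
            }
        }
    }
}
```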
The aggregators instantiated for the example query, as seen in the debugger:
- instance of com.salesforce.phoenix.expression.function.CountAggregateFunction$1 (id=2409)
- instance of com.salesforce.phoenix.expression.function.CountAggregateFunction$1 (id=2410)
- instance of com.salesforce.phoenix.expression.aggregator.LongSumAggregator (id=2411)
- instance of com.salesforce.phoenix.expression.aggregator.LongSumAggregator (id=2412)
On the server side, Phoenix pushes work into HBase coprocessors; when a Phoenix table is created, the following observers are registered on its HTableDescriptor:
// server-side aggregation when the query has no GROUP BY
descriptor.addCoprocessor(UngroupedAggregateRegionObserver.class.getName(), phoenixJarPath, 1, null);
// server-side GROUP BY aggregation
descriptor.addCoprocessor(GroupedAggregateRegionObserver.class.getName(), phoenixJarPath, 1, null);
// server-side hash-join support
descriptor.addCoprocessor(HashJoiningRegionObserver.class.getName(), phoenixJarPath, 1, null);
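One way to confirm that a Phoenix-created table carries these observers is to read its descriptor back through the plain HBase admin API. The sketch below assumes HBase 0.94-era client classes and the WEB_STAT example table:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class ListTableCoprocessors {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        try {
            // WEB_STAT is the example table used in this post.
            HTableDescriptor descriptor = admin.getTableDescriptor(Bytes.toBytes("WEB_STAT"));
            // Prints the coprocessor class names attached to the table.
            for (String coprocessor : descriptor.getCoprocessors()) {
                System.out.println(coprocessor);
            }
        } finally {
            admin.close();
        }
    }
}
```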
This series is original content by Binos_ICT on the Binospace personal tech blog; the original post is at http://www.binospace.com/index.php/in-depth-analysis-hbase-phoenix. Do not reproduce without permission.