HBase Secondary Indexes

HBase introduced coprocessors in 0.92, providing a set of hooks that make it easy to implement features such as access control and secondary indexes. There are two kinds of coprocessors: Observers, which behave much like triggers, and Endpoints, which are similar to stored procedures. Since only Observers are used here, only they are described; for a more detailed introduction see https://blogs.apache.org/hbase/entry/coprocessor_introduction. Observers come in three kinds:

RegionObserver: provides hooks for data-manipulation events;

WALObserver: provides hooks for WAL (write-ahead log) events;

MasterObserver: provides hooks for DDL events.

See the HBase API documentation for the relevant interfaces.

The following example uses a RegionObserver to write index data to a separate table before each write to the main table:

package com.dengchuanhua.testhbase;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class TestCoprocessor extends BaseRegionObserver {

	private HTable table = null;

	@Override
	public void start(CoprocessorEnvironment env) throws IOException {
		// Open a handle to the index table once, when the coprocessor is loaded.
		table = new HTable(env.getConfiguration(), "test_index");
	}

	@Override
	public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e,
			final Put put, final WALEdit edit, final boolean writeToWAL)
			throws IOException {
		// For every cell written to family:cog, write an inverted entry to the
		// index table: index row key = cell value, index cell value = main row key.
		List<KeyValue> kvs = put.get(Bytes.toBytes("family"), Bytes.toBytes("cog"));
		for (KeyValue kv : kvs) {
			Put indexPut = new Put(kv.getValue());
			indexPut.add(Bytes.toBytes("family"), Bytes.toBytes("cog"), kv.getRow());
			table.put(indexPut);
		}
		// Flush once per Put rather than once per cell.
		table.flushCommits();
	}

	@Override
	public void stop(CoprocessorEnvironment env) throws IOException {
		table.close();
	}
}
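The inversion that prePut performs can be illustrated without a cluster. The sketch below uses plain HashMaps as stand-ins for testTable and test_index; the row key "user001" and value "beijing" are hypothetical, chosen only to show that a lookup by value becomes two point reads:

```java
import java.util.HashMap;
import java.util.Map;

public class IndexSketch {
    public static void main(String[] args) {
        // Stand-ins for the main table and the index table.
        Map<String, String> mainTable = new HashMap<>();   // row key -> family:cog value
        Map<String, String> indexTable = new HashMap<>();  // value   -> row key

        // What a Put on testTable does once the coprocessor is attached:
        String rowKey = "user001";   // hypothetical row key
        String value = "beijing";    // hypothetical family:cog value
        indexTable.put(value, rowKey);  // prePut: index row key = cell value
        mainTable.put(rowKey, value);   // the original write proceeds

        // A query by value is now two point lookups instead of a full table scan.
        String foundRow = indexTable.get("beijing");
        System.out.println(foundRow);                 // -> user001
        System.out.println(mainTable.get(foundRow));  // -> beijing
    }
}
```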

Once written, the coprocessor must be loaded into the table. Package the class as test.jar, upload it to the /demo path on HDFS, and then run the following in the HBase shell:


1. disable 'testTable'

2. alter 'testTable', METHOD => 'table_att', 'coprocessor' => 'hdfs:///demo/test.jar|com.dengchuanhua.testhbase.TestCoprocessor|1001'

3. enable 'testTable'


After that, every Put into testTable automatically writes the corresponding index entry into test_index.
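With the index in place, a point lookup by value becomes two reads in the HBase shell. The value 'beijing' and row key 'user001' below are hypothetical, standing in for whatever was actually written:

```
# 1. Look up the main-table row key in the index table; the index row key
#    is the column value, and the cell holds the main-table row key.
get 'test_index', 'beijing', 'family:cog'

# 2. Fetch the full row from the main table using the returned row key.
get 'testTable', 'user001'
```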
