In HBase 2.0 and later, a coprocessor loaded through `hbase.coprocessor.region.classes` must implement both the `org.apache.hadoop.hbase.coprocessor.RegionCoprocessor` and `org.apache.hadoop.hbase.coprocessor.RegionObserver` interfaces, and must override the `org.apache.hadoop.hbase.coprocessor.RegionCoprocessor#getRegionObserver` method. See the example in the official documentation: https://hbase.apache.org/book.html#cp_example
```java
import java.io.IOException;
import java.util.List;
import java.util.Optional;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionObserverExample implements RegionCoprocessor, RegionObserver {

  private static final byte[] ADMIN = Bytes.toBytes("admin");
  private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
  private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
  private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
      final List<Cell> results) throws IOException {
    if (Bytes.equals(get.getRow(), ADMIN)) {
      // 4 is the type code for KeyValue.Type.Put
      Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
          System.currentTimeMillis(), (byte) 4, VALUE);
      results.add(c);
      e.bypass();
    }
  }
}
```
It is worth noting that, because Java 8 interfaces can carry default implementations via the `default` keyword, HBase 2.0+ replaced the old `BaseRegionObserver` base class with the two interfaces above. The official documentation, however, only describes this change under "Coprocessor APIs have changed in HBase 2.0+" in section 13.4.1, Changes of Note!.
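The discovery pattern behind this can be sketched in plain Java (the names `Plugin`, `Observer`, and `Loaded` are hypothetical stand-ins, not HBase classes): the marker interface ships a default getter returning `Optional.empty()`, and the host only dispatches to instances that override that getter.

```java
import java.util.Optional;

// Hypothetical stand-in for RegionObserver: every hook has a default body.
interface Observer {
    default String onEvent() { return "default"; }
}

// Hypothetical stand-in for RegionCoprocessor: by default, no observer is exposed.
interface Plugin {
    default Optional<Observer> getObserver() { return Optional.empty(); }
}

// Implements both interfaces and overrides the getter, like the example above.
class Loaded implements Plugin, Observer {
    @Override
    public Optional<Observer> getObserver() { return Optional.of(this); }

    @Override
    public String onEvent() { return "custom"; }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Plugin p = new Loaded();
        // The host asks the plugin for its observer and dispatches if present.
        System.out.println(p.getObserver().map(Observer::onEvent).orElse("no observer")); // prints "custom"

        // A plugin that never overrides the getter is silently skipped.
        Plugin bare = new Plugin() { };
        System.out.println(bare.getObserver().map(Observer::onEvent).orElse("no observer")); // prints "no observer"
    }
}
```

Because every method has a default body, neither interface forces implementors to override anything, which is what allowed HBase to drop the `BaseRegionObserver` adapter class.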
Interestingly, what happens if, in real development, we are unaware of this rule, never land on that part of the documentation, and instead follow the clues in the source code, implement only the RegionObserver interface, and deploy it directly?
Create the table
| Item | Value |
|---|---|
| Namespace | clgns |
| Table | test_coprocessor |
| Column family | f1 |
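Assuming a running cluster, the table above can be created from the HBase shell (names taken from the table):

```
create_namespace 'clgns'
create 'clgns:test_coprocessor', 'f1'
```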
Create a new project and add the Maven dependencies:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>xyz.xiblade</groupId>
  <artifactId>test_coprocessor</artifactId>
  <version>1.0</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>2.1.10</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
      <version>2.1.10</version>
    </dependency>
  </dependencies>
</project>
```
Implement the following three classes in the same package:
```java
package xyz.xiblade.test_coprocessor;

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

class Common {

  private static final String CALL_LOG_TABLE_NAME = "clgns:test_coprocessor";
  private static final String FAMILY = "f1";
  private static final String COLUMN = "dummy";

  static void insertTestData(ObserverContext<RegionCoprocessorEnvironment> c, String data)
      throws IOException {
    // Only act when the hook fires on the test table
    String targetTableName = TableName.valueOf(CALL_LOG_TABLE_NAME).getNameAsString();
    String currentTableName =
        c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString();
    if (!targetTableName.equals(currentTableName)) {
      return;
    }
    // Build a Put and write it back into the test table
    // (the Connection and Table are deliberately left unclosed in this throwaway demo)
    Put nput = new Put(Bytes.toBytes(data));
    nput.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(COLUMN), Bytes.toBytes(data));
    TableName tn = TableName.valueOf(CALL_LOG_TABLE_NAME);
    Table t = c.getEnvironment()
        .createConnection(c.getEnvironment().getConfiguration())
        .getTable(tn);
    t.put(nput);
  }
}
```
```java
package xyz.xiblade.test_coprocessor;

import java.io.IOException;
import java.util.Optional;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.wal.WALEdit;

public class NiceOne implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put, WALEdit edit,
      Durability durability) throws IOException {
    Common.insertTestData(c, "nice");
    c.bypass();
  }
}
```
```java
package xyz.xiblade.test_coprocessor;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.wal.WALEdit;

public class AwfulOne implements RegionObserver {

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put, WALEdit edit,
      Durability durability) throws IOException {
    Common.insertTestData(c, "awful");
  }
}
```
Add the following configuration to `hbase-site.xml` on the cluster:
```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>xyz.xiblade.test_coprocessor.AwfulOne,xyz.xiblade.test_coprocessor.NiceOne</value>
</property>
```
Package the project, copy the jar into the `lib` directory under the HBase installation root, and start the cluster.
Insert data:

```
put 'clgns:test_coprocessor', 'this_is_the_rowkey', 'f1', 'this_is_the_data'
```
You will see the put call block and never return (most likely because NiceOne's postPut writes back into the same table, which fires postPut again and recurses).
In another terminal, connect with the HBase shell and inspect the data:

```
scan 'clgns:test_coprocessor'
```
The result:

```
hbase(main):001:0> scan 'clgns:test_coprocessor'
ROW                   COLUMN+CELL
 nice                 column=f1:dummy, timestamp=1590328963702, value=nice
 this_is_the_rowkey   column=f1:, timestamp=1590328941153, value=this_is_the_data
2 row(s)
Took 2.3676 seconds
```
This shows that the coprocessor AwfulOne is never actually executed, and yet its presence does not make the RegionServer fail fast with an error, which makes the mistake considerably harder to track down.
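For completeness, the fix for AwfulOne follows the same pattern as NiceOne: additionally implement RegionCoprocessor and override getRegionObserver. A minimal sketch (compiled against the same hbase-server dependency and redeployed; the hook body is unchanged and omitted here):

```java
package xyz.xiblade.test_coprocessor;

import java.util.Optional;

import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

// Adding RegionCoprocessor lets the framework discover the observer
// through getRegionObserver(); the postPut hook stays exactly as before.
public class AwfulOne implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  // postPut(...) unchanged from the version above.
}
```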