Writing, Loading, and Unloading Coprocessors in HBase 2.0

Writing the Coprocessor Code

Before version 2.0, writing your own coprocessor meant extending BaseRegionObserver.

Since 2.0 this has changed: the coprocessor class implements both RegionObserver and RegionCoprocessor, and it must also override the following method:

@Override
public Optional<RegionObserver> getRegionObserver() {
    // Expose this class as the RegionObserver so its hooks are actually invoked
    return Optional.of(this);
}

If this method is missing, errors like the following are reported:

2020-07-05 18:46:50,740 WARN  [HBase-Metrics2-1] util.MBeans: Error creating MBean object name: Hadoop:service=HBase,name=RegionServer,sub=IPC
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:127)
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:102)
        at org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:92)
        at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:55)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:269)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:321)
        at com.sun.proxy.$Proxy7.postStart(Unknown Source)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
        at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:109)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.metrics2.MetricsException: Hadoop:service=HBase,name=RegionServer,sub=IPC already exists!
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:123)
        ... 21 more
2020-07-05 18:46:50,835 WARN  [HBase-Metrics2-1] impl.MetricsSystemImpl: Caught exception in callback postStart
java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:321)
        at com.sun.proxy.$Proxy7.postStart(Unknown Source)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:194)
        at org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:109)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hbase.ipc.RpcScheduler.getMetaPriorityQueueLength()I
        at org.apache.hadoop.hbase.ipc.MetricsHBaseServerWrapperImpl.getMetaPriorityQueueLength(MetricsHBaseServerWrapperImpl.java:74)
        at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.getMetrics(MetricsHBaseServerSourceImpl.java:156)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
        at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:269)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:240)
        ... 14 more

Then implement the hooks you need before or after the relevant operations. In this example, after each PUT we write the same data into another table:

import java.io.IOException;
import java.util.Optional;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WALEdit;

public class FruitTableCoprocessor1 implements RegionObserver, RegionCoprocessor {

    @Override
    public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
    }

    /**
     * Runs after every put on the table this coprocessor is attached to
     * and writes the same Put into table B.
     */
    @Override
    public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
                        WALEdit edit, Durability durability) throws IOException {

        // Creating a new Connection on every put is expensive; kept simple here for clarity
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop102:2181,hadoop103:2181,hadoop104:2181");
        Connection connection = ConnectionFactory.createConnection(conf);

        // 1. Get an Admin
        Admin admin = connection.getAdmin();
        // 2. Check whether table B exists
        if (!admin.tableExists(TableName.valueOf("B"))) {
            System.out.println("Table B does not exist.......");
            // 3. If it does not exist, create it before writing
            TableDescriptor tableDescriptor = TableDescriptorBuilder.newBuilder(TableName.valueOf("B"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info")).build())
                    .build();
            admin.createTable(tableDescriptor);
        }

        // 4. Write the same Put into table B
        Table table = connection.getTable(TableName.valueOf("B"));
        System.out.println("Inserted rowkey: " + Bytes.toString(put.getRow()));
        table.put(put);

        table.close();
        admin.close();
        connection.close();
    }
}

Dynamically Loading the Coprocessor onto the Table

1. Package the code. When packaging, note that the HBase and Hadoop dependencies in the Maven project must have their scope set to provided (a pom sketch follows the stack trace below); otherwise the classes bundled into the jar conflict with those already on the region server and you get the following error:

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: Coprocessor: 'org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment@3632c57c' threw: 'java.lang.LinkageError: loader constraint violation: loader (instance of org/apache/hadoop/hbase/util/CoprocessorClassLoader) previously initiated loading for a different type with name "org/apache/hadoop/conf/Configuration"' and has been removed from the active coprocessor set.
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.handleCoprocessorThrowable(CoprocessorHost.java:455)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:616)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postPut(RegionCoprocessorHost.java:955)
	at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.lambda$doPostOpCleanupForMiniBatch$1(HRegion.java:3592)
	at org.apache.hadoop.hbase.regionserver.HRegion$BatchOperation.visitBatchOperations(HRegion.java:3087)
	at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3587)
	at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4002)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3910)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3841)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3832)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3846)
	at org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4173)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3030)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2807)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42000)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
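
For reference, a minimal sketch of how such a dependency might be declared; the hbase-server artifact and the 2.0.5 version are placeholders, so match them to what your cluster actually runs:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>2.0.5</version>  <!-- example version; use the one your cluster runs -->
    <scope>provided</scope>   <!-- keep HBase/Hadoop classes out of the coprocessor jar -->
</dependency>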

2. Disable the table

disable 'user'

3. Load the coprocessor

alter 'user', METHOD => 'table_att', 'coprocessor' => 'hdfs://hadoop102:8020/flume_07.jar|com.atguigu.hbase.FruitTableCoprocessor1|1005|'

hdfs://hadoop102:8020/flume_07.jar : the jar containing the coprocessor code
com.atguigu.hbase.FruitTableCoprocessor1 : the fully qualified class name of the coprocessor
1005 : the coprocessor priority
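
4. Re-enable the table. describe should then list the coprocessor under TABLE_ATTRIBUTES, and a test write confirms the hook fires (the put below assumes the 'user' table has an 'info' column family):

enable 'user'
describe 'user'

put 'user', '1001', 'info:name', 'zhangsan'
scan 'B'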

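Unloading the Coprocessor from the Table

To remove the coprocessor, disable the table and unset the corresponding table attribute. A minimal sketch, assuming this is the first (and only) coprocessor on the table so its attribute name is coprocessor$1 (check the exact name with describe 'user'):

disable 'user'
alter 'user', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
enable 'user'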