Building a custom HBase endpoint coprocessor: still no success

HBase threw an error while installing the coprocessor, which left the table in an unusable state; trying to re-enable the table did not help either.

So I turned to HBase's repair tools.

1. hbase hbck

 Status: OK means no inconsistencies were found.

 Status: INCONSISTENT means there are inconsistencies.

 

 2. Repair the hbase:meta table

 hbase hbck -fixMeta

 

 3. Reassign the regions to the RegionServers

 hbase hbck -fixAssignments
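The three steps above can be sketched as one session (a sketch using hbck1 options as in HBase 1.x; `-details` is a useful addition for seeing exactly which regions are inconsistent):

```shell
# Report-only pass; prints "Status: OK" or "Status: INCONSISTENT",
# and -details lists the individual inconsistencies found
hbase hbck -details

# Rebuild bad or missing rows in hbase:meta from the region info on HDFS
hbase hbck -fixMeta

# Re-assign regions that exist in meta but are not deployed on any RegionServer
hbase hbck -fixAssignments
```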

But none of this helped; the error persisted:

Exception in thread "main" java.io.IOException: Region {ENCODED => 06e7d41913ae20f5626a514c64d7aecc, NAME => 'twits,0ed8,1479894291482.06e7d41913ae20f5626a514c64d7aecc.', STARTKEY => '0ed8', ENDKEY => '0f3c'} failed to move out of transition within timeout 120000ms

 

Out of options, I fell back on a workaround from the web: deleting the table's data directly on HDFS.

I deleted the table's directory under /hbase/default/,

and after that, `list` and every other command in the HBase shell failed with this error:

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1933)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableDescriptors(MasterRpcServices.java:790)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44205)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
	at java.lang.Thread.run(Thread.java:745)

 

I kept running the HBase repairs, with the results below (at first I suspected ZooKeeper was the problem):

16/12/01 17:28:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=datanode02:2181,datanode03:2181,datanode01:2181 sessionTimeout=180000 watcher=hconnection-0x3e07e2990x0, quorum=datanode02:2181,datanode03:2181,datanode01:2181, baseZNode=/hbase
16/12/01 17:28:17 INFO zookeeper.ClientCnxn: Opening socket connection to server datanode01/192.168.1.41:2181. Will not attempt to authenticate using SASL (unknown error)
16/12/01 17:28:17 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.1.44:39775, server: datanode01/192.168.1.41:2181
16/12/01 17:28:17 INFO zookeeper.ClientCnxn: Session establishment complete on server datanode01/192.168.1.41:2181, sessionid = 0x358b985c9c20073, negotiated timeout = 60000
16/12/01 17:29:06 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=48503 ms ago, cancelled=false, msg=
16/12/01 17:29:26 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=68648 ms ago, cancelled=false, msg=
16/12/01 17:29:46 INFO client.RpcRetryingCaller: Call exception, tries=12, retries=35, started=88688 ms ago, cancelled=false, msg=
16/12/01 17:30:06 INFO client.RpcRetryingCaller: Call exception, tries=13, retries=35, started=108757 ms ago, cancelled=false, msg=
16/12/01 17:30:26 INFO client.RpcRetryingCaller: Call exception, tries=14, retries=35, started=128845 ms ago, cancelled=false, msg=

 

Then an expert stepped in and pointed out that, in CDH, HDFS was showing missing blocks.

So the next step was repairing HDFS:

hadoop fsck /

hdfs fsck / -delete    # this deletes every file with missing blocks

Watch the status reported after the command finishes: it should show "Healthy".
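A caveat on `fsck` (a sketch; exact flag behavior depends on the Hadoop version): `-delete` removes the corrupt files outright, so any data in them is gone for good. A less destructive sequence is to list the corrupt files first and quarantine them with `-move` before resorting to deletion:

```shell
# List files that have missing or corrupt blocks before deciding what to do
hdfs fsck / -list-corruptfileblocks

# Move corrupt files to /lost+found instead of deleting them
hdfs fsck / -move

# Only as a last resort: delete every file with missing blocks
hdfs fsck / -delete
```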

 

After that, re-running the HBase repair worked. The HBase shell now showed no tables at all:

hbase(main):001:0> list

TABLE                                                                                                                                        

0 row(s) in 0.1670 seconds

 

But creating the table from Java still produced an error!

18:30:19,493 [INFO ] connecting to datanode01 2181 (FourLetterWordMain.java:46)
18:30:21,103 [ERROR] create table twits error: org.apache.hadoop.hbase.TableExistsException: twits
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.checkAndSetEnablingTable(CreateTableHandler.java:160)
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:133)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1317)
	at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:407)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44239)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
	at java.lang.Thread.run(Thread.java:745)
(CreateHTable.java:75)

 

So I looked at the corresponding table info in ZooKeeper:

[zk: localhost:2181(CONNECTED) 7] ls /hbase/table

[hbase:meta, twits, hbase:namespace]

[zk: localhost:2181(CONNECTED) 8] get /hbase/table/twits

�master:60000�V`��za�PBUF

cZxid = 0x8000008d6

ctime = Thu Dec 01 18:17:57 CST 2016

mZxid = 0x8000008d7

mtime = Thu Dec 01 18:17:57 CST 2016

pZxid = 0x8000008d6

cversion = 0

dataVersion = 1

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 31

numChildren = 0

[zk: localhost:2181(CONNECTED) 9]

So I deleted this table's znode:

[zk: localhost:2181(CONNECTED) 10] delete /hbase/table/twits 1

 

Then I created the table from Java again and, happily, it finally succeeded!

Still, this hardly counts as a win, because I only got there by throwing data away. Hopefully next time I can find a better approach!

 

2016-12-2

I rebuilt the coprocessor, this time changing the HDFS port in the jar path to 8020. The Java side reported success, but checking the HBase logs revealed an error!

Failed to load coprocessor com.sanshi.hbase.coprocessor.CoprocessorServerTest
java.lang.LinkageError: loader constraint violation in interface itable initialization: when resolving method "com.sanshi.hbase.coprocessor.CoprocessorServerTest.getService()Lcom/google/protobuf/Service;" the class loader (instance of org/apache/hadoop/hbase/util/CoprocessorClassLoader) of the current class, com/sanshi/hbase/coprocessor/CoprocessorServerTest, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for interface org/apache/hadoop/hbase/coprocessor/CoprocessorService have different Class objects for the type getService()Lcom/google/protobuf/Service; used in the signature
	at java.lang.Class.getDeclaredConstructors0(Native Method)
	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2585)
	at java.lang.Class.getConstructor0(Class.java:2885)
	at java.lang.Class.newInstance(Class.java:350)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:232)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:195)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:326)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:225)
	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:773)
	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:681)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5684)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5994)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5966)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5922)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5873)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:356)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:126)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
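The LinkageError above typically means the coprocessor jar bundles its own copy of `com.google.protobuf.Service`, so HBase's `CoprocessorClassLoader` and the system classloader each end up with a different `Service` class. A common fix (a sketch, assuming a Maven build; version numbers here are placeholders to be matched to the cluster) is to mark the HBase and protobuf dependencies as `provided` so they are not packaged into the coprocessor jar and the cluster's own copies are used instead:

```xml
<!-- Sketch: use the cluster's HBase/protobuf classes; do not bundle them -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>1.2.0</version>   <!-- placeholder: match the cluster's HBase version -->
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>   <!-- HBase 1.x ships protobuf 2.5 -->
  <scope>provided</scope>
</dependency>
```

With `provided` scope, the classes still compile against these APIs, but the built jar contains only the coprocessor's own classes, avoiding the duplicate-`Service` loader constraint.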
