Common HBase Errors

 

    1. HBase startup errors
      1.1 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 4818@localhost85

[root@localhost86 local]# hbase shell

HBase Shell; enter 'help' for list of supported commands.

Type "exit" to leave the HBase Shell

Version 0.94.7, r1471806, Wed Apr 24 18:44:36 PDT 2013

 

hbase(main):001:0> list

TABLE                                                                                          

17/04/01 19:15:06 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries

17/04/01 19:15:06 WARN zookeeper.ZKUtil: hconnection Unable to set watcher on znode (/hbase/hbaseid)

org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid

    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)

    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)

    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)

    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172)

    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:450)

    at org.apache.hadoop.hbase.zookeeper.ClusterId.readClusterIdZNode(ClusterId.java:61)

    at org.apache.hadoop.hbase.zookeeper.ClusterId.getId(ClusterId.java:50)

    at org.apache.hadoop.hbase.zookeeper.ClusterId.hasId(ClusterId.java:44)

    at

 

Analysis based on the master log file:

 vim hbase-root-master-localhost85.log

INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 4818@localhost85

Cause: the value of hbase.zookeeper.quorum in hbase-site.xml is misconfigured.

Change the property to the following:

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.1.85,192.168.1.86</value>
    </property>
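
After correcting the quorum, a quick way to confirm that each ZooKeeper node is reachable from the HBase host is ZooKeeper's four-letter-word check (a sketch; assumes nc is installed and ZooKeeper listens on the default client port 2181):

    echo ruok | nc 192.168.1.85 2181    # a healthy ZooKeeper answers "imok"
    echo ruok | nc 192.168.1.86 2181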

 

 

 

      1.2 org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet

hbase(main):003:0* list

TABLE                                                                                                

 

ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet

    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2445)

    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:946)

    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58521)

    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

 

Here is some help for this command:

List all tables in hbase. Optional regular expression parameter could

be used to filter the output. Examples:

 

  hbase> list

  hbase> list 'abc.*'

  hbase> list 'ns:abc.*'

  hbase> list 'ns:.*'

 

Cause:

    HDFS is in safe mode, which blocks the writes HBase needs; take HDFS out of safe mode.

Solution:

From the Hadoop installation directory, run: bin/hdfs dfsadmin -safemode leave
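
Before forcing the NameNode out of safe mode, it is worth confirming that safe mode really is the problem (a sketch; run from the Hadoop installation directory):

    bin/hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
    bin/hdfs dfsadmin -safemode leave   # only needed if it stays ON and does not clear on its own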

 

 

      1.3 localhosti65: ssh: Could not resolve hostname localhosti65: Name or service not known

[root@localhost65 hbase-1.3.1]# bin/start-hbase.sh

localhost65: starting zookeeper, logging to /usr/local/hbase-1.3.1/bin/../logs/hbase-root-zookeeper-localhost65.out

starting master, logging to /usr/local/hbase-1.3.1/logs/hbase-root-master-localhost65.out

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

localhosti65: ssh: Could not resolve hostname localhosti65: Name or service not known

[root@localhost65 hbase-1.3.1]#

Cause:

    A hostname in the HBase configuration is misspelled (localhosti65 instead of localhost65), so ssh cannot resolve it when the start script connects to the region servers.

Solution:

    1. Check that the ZooKeeper host and client port settings in hbase-site.xml are correct, for example:

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost65</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>

    2. Check that the hostnames configured in conf/regionservers are correct (a name-resolution check is sketched after this example). For example:

    [root@localhost65 hbase-1.3.1]# vim conf/regionservers

localhost65    # hostname

[root@localhost65 hbase-1.3.1]#
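
If the misspelled hostname is not obvious, grepping the HBase configuration and testing name resolution directly usually narrows it down (a sketch; assumes standard Linux tools):

    grep -r "localhosti65" conf/    # locate the misspelled name in the HBase config files
    getent hosts localhost65        # confirm the correct name actually resolves
    cat /etc/hosts                  # check the static mapping if DNS is not used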

 

      1.4 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException): `/hbase/WALs/localhost65,16201,1503546750714-splitting is non empty': Directory is not empty

2017-08-29 09:57:42,398 WARN  [ProcedureExecutor-0] master.SplitLogManager: Returning success without actually splitting and deleting all the log files in path hdfs://192.168.3.65:9000/hbase/WALs/localhost65,16201,1503546750714-splitting: [FileStatus{path=hdfs://192.168.3.65:9000/hbase/WALs/localhost65,16201,1503546750714-splitting/localhost65%2C16201%2C1503546750714.meta.1503568364024.meta; isDirectory=false; length=83; replication=1; blocksize=134217728; modification_time=1503568364030; access_time=1503568364030; owner=root; group=supergroup; permission=rw-r--r--; isSymlink=false}]

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException): `/hbase/WALs/localhost65,16201,1503546750714-splitting is non empty': Directory is not empty

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4012)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3968)

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3952)

    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:825)

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:589)

    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)

    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)

    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)

    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:422)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)

    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

 

This error can have several causes; check the following (the sketch after this list shows the corresponding commands):

1. Whether the local filesystem and HDFS have free space
2. Whether the expected number of DataNodes is alive
3. Whether HDFS is in safe mode
4. Whether the firewall is disabled
5. Configuration problems
6. As a last resort, clear the NameNode tmp directory and reformat the NameNode (note that reformatting wipes the data in HDFS)
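
The first few checks can be run with a handful of commands (a sketch; run from the Hadoop installation directory):

    bin/hdfs dfsadmin -report        # live DataNodes plus configured and remaining capacity (checks 1 and 2)
    bin/hdfs dfsadmin -safemode get  # whether the NameNode is in safe mode (check 3)
    df -h                            # free space on the local filesystem (check 1)
    bin/hdfs dfs -ls /hbase/WALs     # any leftover *-splitting directories from the warning above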

 

 

      1.5 ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

ERROR:org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2452)

    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:792)

    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58519)

    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

can't get master address from ZooKeeper

 

Solution:

Analyzing the log output, the HBase shell failures are most likely caused by clocks that are out of sync across the cluster nodes. Install ntpdate (sudo apt-get install ntpdate) and run ntpdate 0.cn.pool.ntp.org; the argument can be any NTP time server address. Then restart HBase with bin/stop-hbase.sh followed by bin/start-hbase.sh. The "can't get master address from ZooKeeper" error may still appear afterwards; it is probably caused by ZooKeeper being unstable, and restarting HBase once more cleared it for me. The commands are collected in the sketch below.
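
Put together, the fix looks roughly like this (a sketch; assumes a Debian/Ubuntu host, use yum install ntpdate on RHEL/CentOS, and that the chosen NTP server is reachable):

    sudo apt-get install ntpdate      # install the NTP client
    ntpdate 0.cn.pool.ntp.org         # sync the clock; run on every node of the cluster
    bin/stop-hbase.sh                 # then restart HBase from its installation directory
    bin/start-hbase.sh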

 

 

      1.6 Can't get master address from ZooKeeper; znode data == null

hbase(main):001:0> status

 

ERROR: Can't get master address from ZooKeeper; znode data == null

 

Here is some help for this command:

Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The

default is 'summary'. Examples:

 

 

      1.7 java.net.SocketTimeoutException: callTimeout=60000, callDuration=68489

hbase(main):029:0>

hbase(main):029:0> [root@node3 ~]# hive

 

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-common-1.1.0-cdh5.10.0.jar!/hive-log4j.properties

WARNING: Hive CLI is deprecated and migration to Beeline is recommended.

hive> select *  FROM TempLogTerminal WHERE RESOURCETYPE=32 AND  AreaCode='610103' AND KEY LIKE 'T%';

Query ID = root_20180227182020_72cd9a43-95b7-4e89-82d1-15feea3af10e

Total jobs = 1

Launching Job 1 out of 1

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1519724327083_0007, Tracking URL = http://node1:8088/proxy/application_1519724327083_0007/

Kill Command = /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/bin/hadoop job  -kill job_1519724327083_0007

Hadoop job information for Stage-1: number of mappers: 22; number of reducers: 0

2018-02-27 18:21:08,970 Stage-1 map = 0%,  reduce = 0%

2018-02-27 18:21:31,265 Stage-1 map = 9%,  reduce = 0%, Cumulative CPU 8.69 sec

2018-02-27 18:21:32,334 Stage-1 map = 23%,  reduce = 0%, Cumulative CPU 17.66 sec

2018-02-27 18:21:34,470 Stage-1 map = 27%,  reduce = 0%, Cumulative CPU 21.59 sec

2018-02-27 18:21:36,574 Stage-1 map = 32%,  reduce = 0%, Cumulative CPU 25.94 sec

2018-02-27 18:21:38,662 Stage-1 map = 36%,  reduce = 0%, Cumulative CPU 33.28 sec

2018-02-27 18:21:39,704 Stage-1 map = 41%,  reduce = 0%, Cumulative CPU 33.44 sec

2018-02-27 18:21:40,730 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 37.89 sec

2018-02-27 18:21:48,923 Stage-1 map = 55%,  reduce = 0%, Cumulative CPU 39.94 sec

2018-02-27 18:21:50,987 Stage-1 map = 59%,  reduce = 0%, Cumulative CPU 43.99 sec

2018-02-27 18:21:52,008 Stage-1 map = 73%,  reduce = 0%, Cumulative CPU 55.82 sec

2018-02-27 18:21:53,032 Stage-1 map = 77%,  reduce = 0%, Cumulative CPU 60.51 sec

2018-02-27 18:21:54,067 Stage-1 map = 86%,  reduce = 0%, Cumulative CPU 69.67 sec

2018-02-27 18:21:56,109 Stage-1 map = 91%,  reduce = 0%, Cumulative CPU 73.65 sec

2018-02-27 18:22:56,560 Stage-1 map = 91%,  reduce = 0%, Cumulative CPU 73.65 sec

2018-02-27 18:23:57,063 Stage-1 map = 91%,  reduce = 0%, Cumulative CPU 73.65 sec

2018-02-27 18:24:55,426 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 73.65 sec

MapReduce Total cumulative CPU time: 1 minutes 13 seconds 650 msec

Ended Job = job_1519724327083_0007 with errors

Error during job, obtaining debugging information...

Examining task ID: task_1519724327083_0007_m_000002 (and more) from job job_1519724327083_0007

Examining task ID: task_1519724327083_0007_m_000016 (and more) from job job_1519724327083_0007

Examining task ID: task_1519724327083_0007_m_000020 (and more) from job job_1519724327083_0007

 

Task with the most failures(4):

-----

Task ID:

  task_1519724327083_0007_m_000020

 

URL:

  http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1519724327083_0007&tipid=task_1519724327083_0007_m_000020

-----

Diagnostic Messages for this Task:

Error: java.io.IOException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

Tue Feb 27 18:24:45 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68489: row 'TC09F0534B145' on table 'LOG20180108' at region=LOG20180108,TC09F0534B145,1515254515969.dbd4f4e281c3c7d6d765352f6f990af7., hostname=node2,60020,1519721631359, seqNum=4326

 

    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)

    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)

    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:252)

    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:703)

    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)

    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)

    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)

    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:415)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)

    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

Tue Feb 27 18:24:45 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68489: row 'TC09F0534B145' on table 'LOG20180108' at region=LOG20180108,TC09F0534B145,1515254515969.dbd4f4e281c3c7d6d765352f6f990af7., hostname=node2,60020,1519721631359, seqNum=4326

 

    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)

    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)

    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)

   at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)

    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)

    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)

    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)

    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)

    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)

    at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.restart(TableRecordReaderImpl.java:91)

    at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.initialize(TableRecordReaderImpl.java:169)

    at org.apache.hadoop.hbase.mapreduce.TableRecordReader.initialize(TableRecordReader.java:134)

    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.initialize(TableInputFormatBase.java:211)

    at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getRecordReader(HiveHBaseTableInputFormat.java:118)

    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:250)

    ... 9 more

Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68489: row 'TC09F0534B145' on table 'LOG20180108' at region=LOG20180108,TC09F0534B145,1515254515969.dbd4f4e281c3c7d6d765352f6f990af7., hostname=node2,60020,1519721631359, seqNum=4326

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)

    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)

    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

    at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region LOG20180108,TC09F0534B145,1515254515969.dbd4f4e281c3c7d6d765352f6f990af7. is not online on node2,60020,1519721631359

    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2921)

    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)

    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2384)

    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)

    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)

    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

 

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)

    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:327)

    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:402)

    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)

    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)

   at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)

    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)

    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)

    ... 4 more

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException): org.apache.hadoop.hbase.NotServingRegionException: Region LOG20180108,TC09F0534B145,1515254515969.dbd4f4e281c3c7d6d765352f6f990af7. is not online on node2,60020,1519721631359

    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2921)

    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)

    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2384)

    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)

    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)

    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)

    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

 

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1269)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)

    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)

    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)

    ... 10 more

 

 

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

MapReduce Jobs Launched:

Stage-Stage-1: Map: 22   Cumulative CPU: 73.65 sec   HDFS Read: 445399 HDFS Write: 2667943 FAIL

Total MapReduce CPU Time Spent: 1 minutes 13 seconds 650 msec

hive>

Solution:

Set the client connection timeouts (the sketch below shows one way to apply them):

    hbase.rpc.timeout                     20000
    hbase.client.operation.timeout        30000
    hbase.client.scanner.timeout.period   200000
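
One way to apply these timeouts is on the client side in hbase-site.xml (a sketch; the values are the ones quoted above, not tuned recommendations, and they can also be set programmatically on the client Configuration):

    <property>
        <name>hbase.rpc.timeout</name>
        <value>20000</value>
    </property>
    <property>
        <name>hbase.client.operation.timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>hbase.client.scanner.timeout.period</name>
        <value>200000</value>
    </property>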

 
