Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException

The problem is as follows:

Because I wanted to use MapReduce to operate on HBase, I imported all the .jar files under the HBase installation directory into my MapReduce project in Eclipse. When the job wrote to HBase, I hit the following error, and for a long time I could not figure out the cause:
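For context, a map-only job that writes to HBase via `TableOutputFormat` is typically wired up roughly as below. This is only a sketch: the driver class name is taken from the log (`AnalyserLogDataRunner`), the mapper class is hypothetical, and it requires the HBase client jars on the classpath and a running cluster, so it will not compile or run stand-alone.

```java
// Sketch of a driver writing to HBase via TableOutputFormat.
// AnalyserLogDataMapper is a hypothetical mapper that emits Put objects.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class AnalyserLogDataRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "analyser-log-data");
        job.setJarByClass(AnalyserLogDataRunner.class);
        job.setMapperClass(AnalyserLogDataMapper.class); // hypothetical mapper
        job.setNumReduceTasks(0); // map-only: the mapper writes Puts directly

        // The table named here must already exist in HBase. If it does not
        // (e.g. 'even_logs' was created instead of 'event_logs'), every Put
        // is retried until the client gives up with
        // RetriesExhaustedWithDetailsException, as in the log below.
        job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "event_logs");
        job.setOutputFormatClass(TableOutputFormat.class);
        TableMapReduceUtil.addDependencyJars(job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```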

Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 176 actions: event_logs: 176 times, 

at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:192)
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:176)
at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:913)
at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:984)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1252)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1289)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:112)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:667)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)


17/06/29 20:59:13 INFO mapreduce.Job:  map 0% reduce 0%
17/06/29 20:59:32 INFO mapreduce.Job:  map 68% reduce 0%
17/06/29 20:59:35 INFO mapreduce.Job:  map 100% reduce 0%
17/06/29 20:59:35 INFO mapreduce.Job: Task Id : attempt_1498732164664_0003_m_000000_1, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 176 actions: event_logs: 176 times, 
	(stack trace identical to the one above, omitted)


17/06/29 20:59:36 INFO mapreduce.Job:  map 0% reduce 0%
17/06/29 20:59:55 INFO mapreduce.Job:  map 43% reduce 0%
17/06/29 20:59:58 INFO mapreduce.Job:  map 100% reduce 0%
17/06/29 20:59:59 INFO mapreduce.Job: Task Id : attempt_1498732164664_0003_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 176 actions: event_logs: 176 times, 
	(stack trace identical to the one above, omitted)


17/06/29 21:00:00 INFO mapreduce.Job:  map 0% reduce 0%
17/06/29 21:00:21 INFO mapreduce.Job:  map 100% reduce 0%
17/06/29 21:00:26 INFO mapreduce.Job: Job job_1498732164664_0003 failed with state FAILED due to: Task failed task_1498732164664_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0


17/06/29 21:00:27 INFO mapreduce.Job: Counters: 9
Job Counters 
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=91969
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=91969
Total vcore-seconds taken by all map tasks=91969
Total megabyte-seconds taken by all map tasks=94176256

17/06/29 21:00:27 INFO etl.AnalyserLogDataRunner: Job execution failed!

The problem turned out to be the table I had created. In the hbase shell I had originally run `create 'even_logs','info'`; after recreating the table as `create 'event_logs','info'`, the error went away. So the table name (and likewise the column family name) that the job writes to must match exactly what was created in HBase. Viewing the mrtable data in the hbase shell:
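In other words, the fix amounts to one character in the hbase shell `create` statement; the `'info'` column family was already consistent with what the code expected:

```
# wrong: typo in the table name; the job's Puts target 'event_logs'
create 'even_logs','info'

# right: matches the output table configured in the MapReduce job
create 'event_logs','info'
```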

