Fixing an HBase RegionServer that shuts down automatically after startup

Checking the RegionServer log revealed the following error:

2019-06-24 20:16:29,432 ERROR [regionserver/slave2:16020] regionserver.HRegionServer: ***** ABORTING region server slave2,16020,1561378582102: Unhandled: cannot get log writer *****
java.io.IOException: cannot get log writer
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:95)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:273)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:66)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:756)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:486)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:216)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWAL(FSHLogProvider.java:104)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWAL(FSHLogProvider.java:39)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:152)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:60)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:276)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2091)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1318)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1200)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1001)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:99)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:76)
	... 15 more

`hbase.unsafe.stream.capability.enforce` controls whether HBase enforces that the WAL output stream supports `hflush`/`hsync`: set it to `false` when running on the local filesystem, and leave it `true` when running on HDFS. Note that, according to the HBase reference guide, HBase uses the `asyncfs` WAL provider by default since 2.0.0.
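As an alternative to disabling the capability check, the WAL provider can also be set explicitly. A sketch of pinning it back to the classic `filesystem` provider in `hbase-site.xml` (the property name `hbase.wal.provider` and its values come from the HBase reference guide; whether this alone resolves the `hflush` error depends on the Hadoop client jars on the classpath):

```xml
<!-- Sketch: force the classic FSHLog WAL provider instead of the
     default asyncfs one; valid values include "asyncfs" (default),
     "filesystem", and "multiwal". -->
<property>
    <name>hbase.wal.provider</name>
    <value>filesystem</value>
</property>
```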

Fix: on every node, edit the configuration file (`vim hbase-site.xml`) and add:

    <property>
         <name>hbase.unsafe.stream.capability.enforce</name>
         <value>false</value>
    </property>

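Since the property has to be set on every node, the edited file can be pushed out from the master before restarting. A sketch assuming the slave hostnames (`slave1`, `slave2`, `slave3`) and the `HBASE_HOME` path, which are illustrative here:

```shell
#!/bin/sh
# Sketch: distribute the updated hbase-site.xml to each slave node,
# then restart the cluster from the master. Hostnames and HBASE_HOME
# are assumptions; adjust them to your environment.
HBASE_HOME=/opt/hbase

for host in slave1 slave2 slave3; do
    scp "$HBASE_HOME/conf/hbase-site.xml" "$host:$HBASE_HOME/conf/"
done

"$HBASE_HOME/bin/stop-hbase.sh"
"$HBASE_HOME/bin/start-hbase.sh"
```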
Run `start-hbase.sh` on the master again, then check the status:

hbase(main):027:0> status
1 active master, 0 backup masters, 3 servers, 29 dead, 0.3333 average load
Took 0.0158 seconds 

There are many dead servers listed here: HBase does not clean up these entries after a node restarts (until a maximum limit is reached), but they can be cleared manually.
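A sketch of clearing them from the HBase shell, assuming HBase 2.x where the `clear_deadservers` command is available (pass the full `host,port,startcode` server name exactly as shown by `status`; the server name below is taken from the log above and is only an example):

```
hbase(main):001:0> clear_deadservers 'slave2,16020,1561378582102'
```

Restarting the whole cluster also resets the dead-server list, since it is kept in the master's in-memory state.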

Reference: https://www.cndba.cn/dave/article/3321
