Resolving a NameNode startup failure

After the HA cluster was configured and started, the NameNode would not stay up: right after startup, jps showed the NameNode process, but a minute or two later it was gone. With the JournalNodes left unstarted, the NameNode ran normally; as soon as the JournalNodes were started, the NameNode died after a short while. The NameNode log showed the following errors:

2019-10-18 15:32:36,835 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: host-xxx/xxx:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS)
2019-10-18 15:32:37,263 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [xxx1:8485, xxx2:8485, xxx3:8485]. Skipping.
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
	at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1463)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1487)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:212)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:324)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2019-10-18 15:32:37,267 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-10-18 15:32:37,267 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:337)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2019-10-18 15:32:46,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: host-xxx/xxx:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS)
2019-10-18 15:32:46,838 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: host-xxx/xxx:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS)
2019-10-18 15:32:56,841 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: host-xxx/xxx:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS)

Root cause:

When start-dfs.sh runs, the default startup order is namenode > datanode > journalnode > zkfc. If the JournalNodes and the NameNodes are started on different machines, network latency can easily prevent the NameNode from reaching the JournalNodes, the election cannot complete, and the freshly started NameNode dies. The NameNode does have a retry mechanism at startup to wait for the JournalNodes to come up, but the retry count is limited; on a poor network the retries can be exhausted before a connection ever succeeds.
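Before changing any configuration, it is worth verifying that each JournalNode's RPC port is actually reachable from the NameNode hosts. A minimal check, assuming nc (netcat) is installed; <journalnode-host> is a placeholder for each real JournalNode hostname:

# Run from each NameNode host; 8485 is the JournalNode RPC port seen in the log.
nc -zv <journalnode-host> 8485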

 

Attempted fixes:

1. Start the NameNodes manually, skipping the scripted wait on the JournalNodes that is vulnerable to network latency; once both NameNodes have connected to the JournalNodes and the election has completed, the failure should not occur. A sketch follows. (Tried; no effect.)
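A minimal sketch of the manual start, assuming the Hadoop 2.x daemon scripts (on Hadoop 3.x the equivalent command is hdfs --daemon start namenode):

# Run on each NameNode host instead of relying on start-dfs.sh:
hadoop-daemon.sh start namenode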

2. Start the JournalNodes first, then run start-dfs.sh, as sketched below. (Tried; no effect.)
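A minimal sketch of this ordering, under the same Hadoop 2.x assumption:

# On each JournalNode host, bring up the JournalNode first:
hadoop-daemon.sh start journalnode

# Then, from a NameNode host, start the rest of HDFS:
start-dfs.sh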

3. Raise the NameNode's retry count and retry interval toward the JournalNodes so that normal startup delays and network delays can be tolerated. This is done by overriding the IPC client parameters (they are defined in core-default.xml, so the override belongs in core-site.xml rather than hdfs-site.xml): by default the NameNode retries the JournalNode connection 10 times, at 1000 ms per retry, which is too little on a poor network. (Tried; no effect.)

The specific changes, placed inside the usual <configuration> element, are:

 
<property>
    <name>ipc.client.connect.max.retries</name>
    <value>100</value>
    <description>Indicates the number of retries a client will make to establish
      a server connection.
    </description>
</property>

<property>
    <name>ipc.client.connect.retry.interval</name>
    <value>10000</value>
    <description>Indicates the number of milliseconds a client will wait for
      before retrying to establish a server connection.
    </description>
</property>
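Note that the two values multiply: 100 retries at 10000 ms each means a client keeps trying a JournalNode for up to about 1000 seconds (roughly 17 minutes) before giving up. The retry policy RetryUpToMaximumCountWithFixedSleep(maxRetries=100, sleepTime=10000 MILLISECONDS) in the log above confirms that these overrides were picked up.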
4. Some sources suggest the problem can come from the /etc/hosts file. A check showed that some machines were indeed missing entries, so the files were fixed and completed; a sketch of a complete file follows.
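For reference, every node must be able to resolve every other node's hostname. A minimal /etc/hosts sketch: 192.168.80.4 and 192.168.80.5 appear in the recovery log below, while the third address and all hostnames are made up for illustration:

192.168.80.3    node1
192.168.80.4    node2
192.168.80.5    node3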

After all of the above, the connection errors were gone, but the following error appeared instead:

2019-10-18 16:41:41,800 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on active node at /xxx:9000 every 120 seconds.
2019-10-18 16:41:41,813 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
Checkpointing active NN at http://xxx:50070
Serving checkpoints at http://xxx:50070
2019-10-18 16:41:41,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-10-18 16:41:41,965 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:337)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2019-10-18 16:41:42,072 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2019-10-18 16:41:42,097 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2019-10-18 16:41:42,161 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Successfully started new epoch 20
2019-10-18 16:41:42,161 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Beginning recovery of unclosed segment starting at txid 1
2019-10-18 16:41:42,225 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Recovery prepare phase complete. Responses:
192.168.80.5:8485: segmentState { startTxId: 1 endTxId: 1 isInProgress: true } lastWriterEpoch: 3 lastCommittedTxId: 8
192.168.80.4:8485: segmentState { startTxId: 1 endTxId: 1 isInProgress: true } lastWriterEpoch: 3 lastCommittedTxId: 8
2019-10-18 16:41:42,229 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Using longest log: xxx:8485=segmentState {
  startTxId: 1
  endTxId: 1
  isInProgress: true
}
lastWriterEpoch: 3
lastCommittedTxId: 8

2019-10-18 16:41:42,230 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [xxx1:8485, xxx2:8485, xxx3:8485], stream=null))
java.lang.AssertionError: Decided to synchronize log to startTxId: 1
endTxId: 1
isInProgress: true
 but logger 192.168.80.5:8485 had seen txid 8 committed
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnclosedSegment(QuorumJournalManager.java:336)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:455)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1394)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1149)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1655)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1533)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1246)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2019-10-18 16:41:42,233 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2019-10-18 16:41:42,235 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Solution:

Run the following command on the NameNode:

hdfs namenode -initializeSharedEdits

Then restart the cluster; after that the NameNode ran normally.
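For context: hdfs namenode -initializeSharedEdits re-initializes the shared edits directory on the JournalNodes from the local NameNode's edit logs, which discards the inconsistent, unclosed journal segments that triggered the AssertionError above (so whatever was in the shared edits directory is thrown away). A sketch of the full sequence, again assuming the Hadoop 2.x scripts; on an already-formatted shared edits directory the command may prompt for confirmation:

# Stop HDFS, then bring the JournalNodes up so the shared edits directory is reachable:
stop-dfs.sh
hadoop-daemon.sh start journalnode    # on each JournalNode host

# On the NameNode, rebuild the shared edits directory from the local edit logs:
hdfs namenode -initializeSharedEdits

# Restart the cluster:
start-dfs.sh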

 

References:

https://my.oschina.net/tearsky/blog/631038

https://blog.csdn.net/u012425536/article/details/79216300
