Hadoop HA: standby NameNode fails to start

Today, after rearranging the existing Hadoop HA environment, running start-dfs.sh produced the following error:
2013-09-23 16:39:33,248 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: slave3:50070
2013-09-23 16:39:33,248 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-09-23 16:39:33,282 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
2013-09-23 16:33:59,012 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: master
2013-09-23 16:33:59,012 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2013-09-23 16:33:59,033 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-09-23 16:33:59,672 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-23 16:33:59,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-09-23 16:33:59,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-09-23 16:33:59,675 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2013-09-23 16:33:59,677 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/hadoop/dw/data/hadoop/dfs/name does not exist
2013-09-23 16:33:59,678 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-09-23 16:33:59,679 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-09-23 16:33:59,679 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-09-23 16:33:59,680 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot start an HA namenode with name dirs that need recovery. Dir: Storage Directory /dw/data/hadoop/dfs/name state: NON_EXISTENT
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:289)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:400)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:610)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1162)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1226)
2013-09-23 16:33:59,683 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-09-23 16:33:59,684 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at slave3/10.95.3.64
************************************************************/
2013-09-23 16:39:30,752 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

This error prevents the standby NameNode process from coming up. As the WARN line above shows, the standby's metadata directory does not exist, so its state is inconsistent with the active NameNode's. The fix is to copy the active NameNode's metadata directory to the same path on the standby, after which the standby starts normally.
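A minimal sketch of the fix, run on the standby node; the hostnames (master, slave3), the metadata path, and the hadoop user are taken from the log above and would need adjusting for another cluster:

```shell
# On the standby NN (slave3): create the missing name dir and pull
# the active NN's metadata over. Paths/hosts are from this cluster's
# config and are assumptions for any other setup.
mkdir -p /home/hadoop/dw/data/hadoop/dfs/name
scp -r hadoop@master:/home/hadoop/dw/data/hadoop/dfs/name/* \
    /home/hadoop/dw/data/hadoop/dfs/name/

# Then start just this NameNode again.
hadoop-daemon.sh start namenode
```

Alternatively, on HDFS versions that support it, `hdfs namenode -bootstrapStandby` run on the standby performs the same initial metadata copy from the active NameNode without a manual scp.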
