Hadoop SecondaryNameNode error: Inconsistent checkpoint fields

java.io.IOException: Inconsistent checkpoint fields.
LV = -47 namespaceID = 524164388 cTime = 0 ; clusterId = CID-5c38c719-cc2c-47d9-a44b-d231b2de0375 ; blockpoolId = BP-623213881-xx.xx.xxx.115-1387942327732.
Expecting respectively: -47; 1014895574; 0; CID-14eeb766-f6db-46c7-8676-108f8fb9b1c4; BP-2013924350-xx.xx.xxx.115-1386898055930.
    at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:133)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:519)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:380)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:346)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:342)
    at java.lang.Thread.run(Thread.java:662)

This exception can have several causes. One of them: the edit log and checkpoint data in the SecondaryNameNode's data directory are inconsistent with the NameNode's current metadata version — note how the namespaceID, clusterId, and blockpoolId in the error differ from the "Expecting" values.
Fix:
Manually delete the files under the SecondaryNameNode's checkpoint directory, then restart Hadoop:
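The IDs in the error message come from each role's VERSION file (on a real cluster, under dfs.namenode.name.dir/current/VERSION for the NameNode and dfs.namenode.checkpoint.dir/current/VERSION for the SecondaryNameNode). A minimal sketch of how the mismatch looks — the file contents and names below are fabricated stand-ins, not real cluster output:

```shell
# Fake VERSION files standing in for NameNode and SecondaryNameNode metadata
printf 'namespaceID=1014895574\nclusterID=CID-14eeb766\n' > nn_VERSION
printf 'namespaceID=524164388\nclusterID=CID-5c38c719\n'  > snn_VERSION

# A checkpoint is rejected when these fields differ between the two roles
if cmp -s nn_VERSION snn_VERSION; then
  echo "match"
else
  echo "mismatch"   # this is the situation behind the exception above
fi
```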

$HADOOP_HOME/sbin/stop-all.sh

$HADOOP_HOME/sbin/start-all.sh
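The deletion step between the stop and start commands can be sketched as follows. This is a simulation against a temporary directory; on a real cluster the path would be whatever dfs.namenode.checkpoint.dir points to in hdfs-site.xml (default ${hadoop.tmp.dir}/dfs/namesecondary), cleared while HDFS is stopped:

```shell
# Simulated cleanup of a stale SecondaryNameNode checkpoint directory.
# CHECKPOINT_DIR is a stand-in for the real dfs.namenode.checkpoint.dir path.
CHECKPOINT_DIR="$(mktemp -d)"
mkdir -p "$CHECKPOINT_DIR/current"
touch "$CHECKPOINT_DIR/current/VERSION" "$CHECKPOINT_DIR/current/fsimage"

# The actual fix: wipe the stale checkpoint data (including the VERSION file
# holding the old namespaceID/clusterID); the SecondaryNameNode rebuilds it
# from the NameNode on its next checkpoint.
rm -rf "$CHECKPOINT_DIR"/*

ls -A "$CHECKPOINT_DIR"   # directory is now empty
```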

Problem solved.
