After starting Hadoop, jps does not show the NameNode process: reformatting HDFS

Sometimes Hadoop acts up: after starting the services, jps fails to show the expected processes. The fix below reformats HDFS on top of the existing installation (tested and confirmed to work).
How to reformat the HDFS filesystem:


(1) Check hdfs-site.xml:

<property>
          <name>dfs.namenode.name.dir</name>
          <value>/usr/local/hadoop/hadoop-2.4.0/hdfs/name</value>
          <description>Directory on the NameNode that stores the HDFS namespace metadata</description>
    </property>
    <property>
          <name>dfs.datanode.data.dir</name>
          <value>/usr/local/hadoop/hadoop-2.4.0/hdfs/data</value>
          <description>Physical location on the DataNode where data blocks are stored</description>
    </property>

Delete everything under the dfs.namenode.name.dir and dfs.datanode.data.dir directories.
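For example, a minimal sketch assuming the paths from the hdfs-site.xml above (adjust them if your configuration differs), run on each node that hosts these directories:

rm -rf /usr/local/hadoop/hadoop-2.4.0/hdfs/name/*    # NameNode metadata directory
rm -rf /usr/local/hadoop/hadoop-2.4.0/hdfs/data/*    # DataNode block storage directory

On a multi-node cluster the data directory has to be cleared on every DataNode, otherwise the old blocks will carry a stale clusterID after the reformat and the DataNodes will fail to register.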


(2) Check core-site.xml:

<property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/local/hadoop/hadoop-2.4.0/hadoop_tmp</value>
          <description>Local Hadoop temporary directory on the NameNode</description>
</property>

Delete everything under the hadoop.tmp.dir directory.
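For example (again a sketch, with the path taken from the core-site.xml above):

rm -rf /usr/local/hadoop/hadoop-2.4.0/hadoop_tmp/*    # local Hadoop temporary directory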


(3) With Hadoop running:
Re-run the command: hadoop namenode -format
When the output ends with the following lines, the format has completed successfully:

Re-format filesystem in QJM to [192.168.2.113:8485, 192.168.2.114:8485, 192.168.2.115:8485] ? (Y or N) Y
15/07/31 10:02:05 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.4.0/hdfs/name has been successfully formatted.
15/07/31 10:02:06 INFO namenode.FSImage: Saving image file /usr/local/hadoop/hadoop-2.4.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
15/07/31 10:02:06 INFO namenode.FSImage: Image file /usr/local/hadoop/hadoop-2.4.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
15/07/31 10:02:06 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/07/31 10:02:06 INFO util.ExitUtil: Exiting with status 0
15/07/31 10:02:06 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.2.111
************************************************************/
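
After a successful format, a typical next step (not covered in the original log; a sketch assuming the standard Hadoop 2.x sbin scripts) is to restart HDFS and confirm with jps that the NameNode process now shows up:

start-dfs.sh    # restart HDFS from the NameNode host (hadoop01 here)
jps             # the process list should now include NameNode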

Note:
1. All of the original data is wiped out; the result is a brand-new HDFS.
2. Before formatting, confirm that the contents of the directories above have been deleted and that HDFS is running.
3. If you run hadoop namenode -format while Hadoop is not running, you will get an error like the one below (a sketch of the fix follows the log):

15/07/31 09:53:58 INFO ipc.Client: Retrying connect to server: hadoop03/192.168.2.113:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/31 09:53:58 INFO ipc.Client: Retrying connect to server: hadoop05/192.168.2.115:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/31 09:53:58 FATAL namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.2.115:8485: Call From hadoop01/192.168.2.111 to hadoop05:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.2.113:8485: Call From hadoop01/192.168.2.111 to hadoop03:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
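
The ConnectException messages come from the JournalNodes on port 8485 being unreachable. A sketch of the usual remedy for a QJM setup like the one in the log (JournalNodes on hadoop03, hadoop05 and, presumably, hadoop04):

hadoop-daemon.sh start journalnode    # run on each JournalNode host
hadoop namenode -format               # then re-run the format on the NameNode host (hadoop01)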
