Name node is in safe mode

Hadoop users often find safe mode a headache, yet it is hard to avoid. Safe mode is usually triggered when the data integrity of the HDFS filesystem is compromised, so the core of any fix is to bring the HDFS data back to a complete, relatively undamaged state.

Error message:

Name node is in safe mode. The reported blocks 356 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 358. Safe mode will be turned off automatically.

That is, only 356/358 ≈ 0.9944 of the blocks have been reported, which is below the 0.9990 threshold; the NameNode needs ceil(0.9990 × 358) = 358 reported blocks (2 more) before it will leave safe mode on its own.

Solutions:

Option 1: brute force: reformat the entire filesystem with hdfs namenode -format. This wipes all HDFS metadata (and effectively all data), so it is only acceptable on a test cluster; see the sketch below.
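
A destructive sketch, assuming the standard start/stop scripts are on the PATH; the -force flag and the DataNode cleanup comment are assumptions to verify against your version:

[hadoop@hadoop007 hadoop]$ stop-dfs.sh                     # stop the NameNode and DataNodes first
[hadoop@hadoop007 hadoop]$ hdfs namenode -format -force    # reformat; -force skips the Y/N confirmation
[hadoop@hadoop007 hadoop]$ start-dfs.sh
# If DataNodes later fail with a "clusterID mismatch" error, clear each
# DataNode's dfs.datanode.data.dir and restart them.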

Option 2: lower the threshold that controls safe mode: dfs.safemode.threshold.pct (named dfs.namenode.safemode.threshold-pct in Hadoop 2.x and later). Set it below the fraction of blocks actually reported; see the sketch below.
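
A minimal hdfs-site.xml sketch; 0.99 is just an example value below the 0.9944 reported ratio from the error above, and a value of 0 or less makes the NameNode skip the block-threshold wait entirely:

<property>
    <name>dfs.namenode.safemode.threshold-pct</name>
    <value>0.99</value>
</property>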

Option 3: force the NameNode out of safe mode, then check for corrupted blocks and delete them:

Step 1. hdfs dfsadmin -safemode leave (force the NameNode to leave safe mode)

Step 2. hdfs fsck / -delete (delete the files containing corrupted blocks). Note that -delete removes the corrupted files themselves, so their data is lost; the full sequence is sketched below.
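
A sketch of the whole sequence; the -list-corruptfileblocks pass is an extra inspection step added here so you can see what would be deleted before anything is removed:

[hadoop@hadoop007 hadoop]$ hdfs dfsadmin -safemode get                  # confirm the current state
[hadoop@hadoop007 hadoop]$ hdfs dfsadmin -safemode leave                # force out of safe mode
[hadoop@hadoop007 hadoop]$ hdfs fsck / -list-corruptfileblocks          # list affected files first
[hadoop@hadoop007 hadoop]$ hdfs fsck / -delete                          # then delete the corrupted files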




The basic options of hdfs fsck are as follows:

[hadoop@hadoop007 hadoop]$ hdfs fsck
Usage: DFSck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
        <path>                   start checking from this path
        -move                    move corrupted files to /lost+found
        -delete                  delete corrupted files
        -files                   print out files being checked
        -openforwrite            print out files opened for write
        -includeSnapshots        include snapshot data if the given path indicates a snapshottable directory or there are snapshottable directories under it
        -list-corruptfileblocks  print out list of missing blocks and files they belong to
        -blocks                  print out block report
        -locations               print out locations for every block
        -racks                   print out network topology for data-node locations

Please Note:
1. By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually tagged CORRUPT or HEALTHY depending on their block allocation status
2. Option -includeSnapshots should not be used for comparing stats, should be used only for HEALTH check, as this may contain duplicates if the same file present in both original fs tree and inside snapshots.
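
A usage example combining the reporting flags; /user/hadoop is a hypothetical path:

[hadoop@hadoop007 hadoop]$ hdfs fsck /user/hadoop -files -blocks -locations    # per-file block report with DataNode locations
[hadoop@hadoop007 hadoop]$ hdfs fsck / -list-corruptfileblocks                 # only missing blocks and the files they belong to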
