Locating and Repairing Corrupted HDFS Blocks

HDFS provides the fsck command for checking the health of files and directories and for retrieving a file's block information and locations.
The fsck command must be run by the HDFS superuser; ordinary users do not have permission.
Running hdfs fsck with no arguments prints the command's usage.
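
For reference, a condensed form of that usage (quoted from memory; the exact options vary across Hadoop versions):

hdfs fsck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]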

1. Manual repair with hdfs debug

1) Create a test file and upload it to HDFS

 [hadoop@hadoop001 data]$ hadoop fs -put test.txt /blockrecover
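
For reference, the 2-byte test file (its size matches the fsck output below) and the target directory can be created beforehand like this (a sketch):

[hadoop@hadoop001 data]$ echo h > test.txt
[hadoop@hadoop001 data]$ hadoop fs -mkdir -p /blockrecover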

2) Use hdfs fsck to locate the file's blocks

[hadoop@hadoop001 data]$ hdfs fsck /blockrecover/test.txt -files -blocks -locations

19/04/04 12:38:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop001:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.0.108 for path /blockrecover/test.txt at Thu Apr 04 12:38:48 PDT 2019
/blockrecover/test.txt 2 bytes, 1 block(s):  Under replicated BP-1279338148-192.168.199.200-1491145566665:blk_1073742362_1538. Target Replicas is 3 but found 1 replica(s).
0. BP-1279338148-192.168.199.200-1491145566665:blk_1073742362_1538 len=2 Live_repl=1 [DatanodeInfoWithStorage[192.168.0.108:50010,DS-9841e6c3-9a74-4bc8-869e-3543dcf7de90,DISK]]

Status: HEALTHY
 Total size:	2 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	1 (avg. block size 2 B)
 Minimally replicated blocks:	1 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	1 (100.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	1.0
 Corrupt blocks:		0
 Missing replicas:		2 (66.666664 %)
 Number of data-nodes:		1
 Number of racks:		1
FSCK ended at Thu Apr 04 12:38:48 PDT 2019 in 1 milliseconds

(As an aside: the commands above were run on a pseudo-distributed machine with the replication factor initially set to 1. After using BP-1279338148-192.168.199.200-1491145566665:blk_1073742362_1538 to find the block file and its meta file and deleting them by hand, hdfs debug could not recover the file: with only one replica, there is nothing left to recover from. So you need at least two DataNodes and a replication factor of at least 2 for hdfs debug recovery to work. This may not be exactly right; it is just my personal understanding.)
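
If you want to repeat the experiment with enough replicas, a file's replication factor can be raised like this (a sketch; -w blocks until the target replication is reached, so it needs enough live DataNodes):

[hadoop@hadoop001 data]$ hadoop fs -setrep -w 2 /blockrecover/test.txt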


At the same time I ran into something puzzling: after manually deleting the block and meta files, hdfs fsck did not report any missing blocks, and after restarting HDFS the block and meta files came back on their own. I still find this a bit puzzling... (The fsck part, at least, makes sense if fsck only reads block state from the NameNode's metadata, which does not learn that a replica is gone until the DataNode's next directory scan and block report.)


3) Using the block ID printed above, BP-1279338148-192.168.199.200-1491145566665:blk_1073742362_1538, find the corresponding block file and its meta file on the DataNode and delete them
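A minimal sketch of finding the replica on the DataNode's local disk (the data directory below is an example; check dfs.datanode.data.dir in hdfs-site.xml for the real path):

[hadoop@hadoop001 data]$ find /home/hadoop/tmp/dfs/data -name 'blk_1073742362*'
# expect two matches under .../current/BP-1279338148-.../current/finalized/...:
# the block file blk_1073742362 and its checksum file blk_1073742362_1538.meta
[hadoop@hadoop001 data]$ rm -f $(find /home/hadoop/tmp/dfs/data -name 'blk_1073742362*')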
4) Run the following command:

[hadoop@hadoop001 subdir2]$ hdfs debug recoverLease -path /blockrecover/test.txt -retries 10
recoverLease SUCCEEDED on /blockrecover/test.txt

That recovers the file.
Note: the precondition for manual recovery is that the corrupted block has been deleted by hand, meaning the corrupted block file and meta file on the DataNode's local disk, not the HDFS file itself.
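
To confirm the repair, re-run fsck against the file and read it back (a sketch):

[hadoop@hadoop001 subdir2]$ hdfs fsck /blockrecover/test.txt -files -blocks -locations
[hadoop@hadoop001 subdir2]$ hadoop fs -cat /blockrecover/test.txt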

The following is excerpted from the 若泽大数据 (RuoZe Big Data) documentation.

2. Automatic repair

After a data block becomes corrupted, the corruption is not detected until the DataNode runs its directory scan operation;
that directory scan runs at a 6-hour interval:
dfs.datanode.directoryscan.interval : 21600 (seconds)

The data block is not recovered until the DataNode sends its block report to the NameNode;
that block report also runs at a 6-hour interval:
dfs.blockreport.intervalMsec : 21600000 (milliseconds)

Only when the NameNode receives the block report does it carry out the recovery.
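
For reference, both intervals are set in hdfs-site.xml; the sketch below shows the default values (shortening them makes detection and repair faster, at the cost of more disk scanning and RPC traffic):

<property>
  <name>dfs.datanode.directoryscan.interval</name>
  <value>21600</value>      <!-- seconds: 6 hours -->
</property>
<property>
  <name>dfs.blockreport.intervalMsec</name>
  <value>21600000</value>   <!-- milliseconds: 6 hours -->
</property>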

3. Recovering HDFS blocks corrupted by a power failure

1. Symptom:
A power failure caused the HDFS service to misbehave or report corrupted blocks.

2. Check the health of the HDFS filesystem:
hdfs fsck /

3. List the corrupted files with hdfs fsck -list-corruptfileblocks:
Connecting to namenode via http://hadoop36:50070/fsck?ugi=hdfs&listcorruptfileblocks=1&path=%2F
The list of corrupt files under path '/' are:
blk_1075229920 /hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd
blk_1075229921 /hbase/data/JYDW/WMS_PO_ITEMS/c96cb6bfef12795181c966a8fc4ef91d/0/cf44ae0411824708bf6a894554e19780
The filesystem under path '/' has 2 CORRUPT files

4. Analysis:
The data flows MySQL -> big data platform,
so we only need to reload this table's data from MySQL onto the HDFS platform.
(Provided you know the data source and how to reload it onto HDFS, you can simply delete the corrupted HDFS files.)
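
As a hypothetical illustration of such a reload (the tool choice, connection string, credentials, and directory names are all invented here, not taken from the cluster above), a Sqoop import might look like:

sqoop import \
  --connect jdbc:mysql://mysqlhost:3306/jydw \
  --username etl \
  --password-file /user/etl/.mysql.pw \
  --table WMS_PO_ITEMS \
  --target-dir /data/jydw/wms_po_items \
  --delete-target-dir \
  -m 4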

5. Which machines hold which blocks of the file? (So that the block files under /dfs/dn/… could be deleted from Linux by hand.)
hadoop36:hdfs:/var/lib/hadoop-hdfs:>

-files      show information about the files being checked
-blocks     only effective together with -files; shows block information
-locations  only effective together with -blocks; shows the specific DataNode IP addresses holding each block
-racks      together with -files; shows rack locations

For the corrupted file, the block locations cannot be displayed, so there is no block file to delete by hand:
hdfs fsck /hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd -files -locations -blocks -racks
Connecting to namenode via http://hadoop36:50070/fsck?ugi=hdfs&locations=1&blocks=1&files=1&path=%2Fhbase%2Fdata%2FJYDW%2FWMS_PO_ITEMS%2Fc71f5f49535e0728ca72fd1ad0166597%2F0%2Ff4d3d97bb3f64820b24cd9b4a1af5cdd
FSCK started by hdfs (auth:SIMPLE) from /192.168.1.100 for path /hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd at Sat Jan 20 15:46:55 CST 2018
/hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd 2934 bytes, 1 block(s):
/hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd: CORRUPT blockpool BP-1437036909-192.168.1.100-1509097205664 block blk_1075229920
MISSING 1 blocks of total size 2934 B
0. BP-1437036909-192.168.1.100-1509097205664:blk_1075229920_1492007 len=2934 MISSING!

Status: CORRUPT
Total size: 2934 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 2934 B)


UNDER MIN REPL'D BLOCKS: 1 (100.0 %)
dfs.namenode.replication.min: 1
CORRUPT FILES: 1
MISSING BLOCKS: 1
MISSING SIZE: 2934 B
CORRUPT BLOCKS: 1


Minimally replicated blocks: 0 (0.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 0.0
Corrupt blocks: 1
Missing replicas: 0
Number of data-nodes: 12
Number of racks: 1
FSCK ended at Sat Jan 20 15:46:55 CST 2018 in 0 milliseconds

The filesystem under path '/hbase/data/JYDW/WMS_PO_ITEMS/c71f5f49535e0728ca72fd1ad0166597/0/f4d3d97bb3f64820b24cd9b4a1af5cdd' is CORRUPT
hadoop36:hdfs:/var/lib/hadoop-hdfs:>

A healthy file, on the other hand, does show its block distribution:
hadoop36:hdfs:/var/lib/hadoop-hdfs:>hdfs fsck /hbase/data/JYDW/WMS_TO/011dea9ae46dae6c1f1f3a24a75af100/0/1d60f56773984e4cac614a8b5f7e93a6 -files -locations -blocks -racks
Connecting to namenode via http://hadoop36:50070/fsck?ugi=hdfs&files=1&locations=1&blocks=1&racks=1&path=%2Fhbase%2Fdata%2FJYDW%2FWMS_TO%2F011dea9ae46dae6c1f1f3a24a75af100%2F0%2F1d60f56773984e4cac614a8b5f7e93a6
FSCK started by hdfs (auth:SIMPLE) from /192.168.1.100 for path /hbase/data/JYDW/WMS_TO/011dea9ae46dae6c1f1f3a24a75af100/0/1d60f56773984e4cac614a8b5f7e93a6 at Sat Jan 20 15:58:25 CST 2018
/hbase/data/JYDW/WMS_TO/011dea9ae46dae6c1f1f3a24a75af100/0/1d60f56773984e4cac614a8b5f7e93a6 1697 bytes, 1 block(s): OK
0. BP-1437036909-192.168.1.100-1509097205664:blk_1075227504_1489591 len=1697 Live_repl=3 [/default/192.168.1.150:50010, /default/192.168.1.153:50010, /default/192.168.1.145:50010]


6. In the end I chose to settle it once and for all: delete the corrupted files and re-flush the data from the business systems:
hadoop36:hdfs:/var/lib/hadoop-hdfs:>hdfs fsck / -delete
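
If you are not sure the data can be reloaded, a less destructive alternative is fsck's -move option, which moves the corrupted files to /lost+found instead of deleting them:

hadoop36:hdfs:/var/lib/hadoop-hdfs:>hdfs fsck / -move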

7. Suppose the data exists only on HDFS (no upstream source to reload from); then salvage what is readable, remove the corrupted copy, and re-upload:
7.1 hdfs dfs -ls /xxxx
hdfs dfs -get /xxxx ./
hdfs dfs -rm /xxx
hdfs dfs -put xxx /
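
After re-uploading, it is worth re-running fsck on the path to confirm it reports HEALTHY again, e.g.:

hdfs fsck /xxxx -files -blocks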

Losing a tiny bit of log data does not matter;
but if the lost files are business data, such as order data, the loss must be reported.
