Recovering Missing or Corrupt HDFS Block Replicas in Production

1. Prepare a Test File

Upload:

[hadoop@hadoop001 ~]$ hdfs dfs -mkdir /blockrecover
[hadoop@hadoop001 ~]$ hdfs dfs -put data/genome-scores.csv /blockrecover
[hadoop@hadoop001 ~]$ hdfs dfs -ls /blockrecover                                                            
Found 1 items
-rw-r--r--   3 hadoop hadoop  323544381 2019-08-21 18:48 /blockrecover/genome-scores.csv
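
As a quick sanity check before running fsck, hdfs dfs -stat can print the file's replication factor, block size, and length. A minimal sketch (%r, %o, and %b are standard stat format specifiers):

[hadoop@hadoop001 ~]$ hdfs dfs -stat "%r %o %b" /blockrecover/genome-scores.csv
# expected here: 3 134217728 323544381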

Verify the health status:

[hadoop@hadoop001 ~]$ hdfs fsck /
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Thu Aug 22 15:44:41 CST 2019
.Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Aug 22 15:44:41 CST 2019 in 4 milliseconds


The filesystem under path '/' is HEALTHY

2. Delete One Replica of a Block Directly on a DataNode (Replication Factor 3)

Delete the block file and its meta file (for how these files were located on the DN, see the sketch after this transcript):

[hadoop@hadoop002 subdir0]$ ll
total 318444
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 15:27 blk_1073741825
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 15:27 blk_1073741825_1001.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta
[hadoop@hadoop002 subdir0]$ rm -rf blk_1073741825 blk_1073741825_1001.meta
[hadoop@hadoop002 subdir0]$ ll
total 186344
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta
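
As referenced above, a hedged sketch of how to find these replica files: they live under the directory configured by dfs.datanode.data.dir in hdfs-site.xml (the root path below is illustrative; check your own config), inside current/BP-<blockpool-id>/current/finalized/subdirX/subdirY:

[hadoop@hadoop002 ~]$ find /home/hadoop/data/dfs/data -name "blk_1073741825*"
# e.g. .../current/BP-1685056456-192.168.174.121-1566207286072/current/finalized/subdir0/subdir0/blk_1073741825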

Restart HDFS to simulate the damage taking effect, then check with fsck:

[hadoop@hadoop001 ~]$ hdfs fsck /
Connecting to namenode via http://hadoop002:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Thu Aug 22 15:49:54 CST 2019
.
/blockrecover/genome-scores.csv:  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741825_1001. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Aug 22 15:49:54 CST 2019 in 26 milliseconds


The filesystem under path '/' is HEALTHY

3. Manual Repair

[hadoop@hadoop001 ~]$ hdfs | grep debug

Nothing about a debug option is printed! So debug does not appear in the hdfs command help, but the hdfs debug subcommand combination does exist; keep that in mind.
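
Running hdfs debug with no arguments does print its own usage. On Hadoop 2.7+ it provides the verifyMeta and recoverLease subcommands (computeMeta was added in a later release):

[hadoop@hadoop001 ~]$ hdfs debug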

# Repair command
[hadoop@hadoop001 ~]$ hdfs debug  recoverLease  -path /blockrecover/genome-scores.csv -retries 10

Check directly on the DN node: the block file and meta file have been restored:

[hadoop@hadoop002 subdir0]$ ll
total 318444
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 15:50 blk_1073741825
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 15:50 blk_1073741825_1001.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta
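
To double-check that a restored replica is intact, hdfs debug also offers verifyMeta (Hadoop 2.7+), which validates a local block file against the checksums in its meta file. A hedged sketch, run on the DN itself:

[hadoop@hadoop002 subdir0]$ hdfs debug verifyMeta -meta blk_1073741825_1001.meta -block blk_1073741825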

4. Automatic Repair

After a replica is damaged, the DataNode does not notice the damage until it runs its next directory scan;
the directory scan runs every 6 hours by default:
dfs.datanode.directoryscan.interval : 21600 (seconds)

The block is not re-replicated until the DN sends its next block report to the NameNode;
the block report is also sent every 6 hours by default:
dfs.blockreport.intervalMsec : 21600000 (milliseconds)

Only when the NN receives the block report does it schedule recovery of the missing replica.
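
With these defaults, automatic repair can therefore lag by up to 6 hours. On Hadoop 2.7+ you can avoid the wait by forcing a DataNode to send a full block report immediately (50020 is the default DN IPC port; adjust for your cluster):

[hadoop@hadoop001 ~]$ hdfs dfsadmin -triggerBlockReport hadoop002:50020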

5. Write Dirty Data into One Replica of a Block to Corrupt It

[hadoop@hadoop002 subdir0]$ ll
total 318448
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 17:34 blk_1073741828
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 17:34 blk_1073741828_1004.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 17:35 blk_1073741829
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 17:35 blk_1073741829_1005.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 22 17:35 blk_1073741830
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 22 17:35 blk_1073741830_1006.meta
[hadoop@hadoop002 subdir0]$ echo "gfdgfdgdf" >> blk_1073741830

Restart HDFS to simulate the corruption taking effect, then check with fsck:

[hadoop@hadoop001 logs]$ hdfs fsck /
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Fri Aug 23 09:05:39 CST 2019
.
/blockrecover/genome-scores.csv:  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Fri Aug 23 09:05:39 CST 2019 in 59 milliseconds


The filesystem under path '/' is HEALTHY

Locate the damaged replica of the block:

[hadoop@hadoop001 ~]$ hdfs fsck /blockrecover/genome-scores.csv -files -locations -blocks -racks 
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&files=1&locations=1&blocks=1&racks=1&path=%2Fblockrecover%2Fgenome-scores.csv
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path /blockrecover/genome-scores.csv at Fri Aug 23 09:57:52 CST 2019
/blockrecover/genome-scores.csv 323544381 bytes, 3 block(s):  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
0. BP-1685056456-192.168.174.121-1566207286072:blk_1073741828_1004 len=134217728 Live_repl=3 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010, /default-rack/192.168.174.122:50010]
1. BP-1685056456-192.168.174.121-1566207286072:blk_1073741829_1005 len=134217728 Live_repl=3 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010, /default-rack/192.168.174.122:50010]
2. BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006 len=55108925 Live_repl=2 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010]

Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    0
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Fri Aug 23 09:57:52 CST 2019 in 2 milliseconds


The filesystem under path '/blockrecover/genome-scores.csv' is HEALTHY

Result: fsck does not directly flag where the damaged replica lives. (Here you can infer it, though: block 2 lists live replicas only on 192.168.174.123 and 192.168.174.121, so the corrupt copy was on 192.168.174.122.)
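
On Hadoop 2.7+, hdfs fsck also accepts a -blockId option that reports the state of each replica of a single block, which can help narrow down the bad copy. A hedged sketch:

[hadoop@hadoop001 ~]$ hdfs fsck -blockId blk_1073741830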

Workarounds:

  • Manually locate the damaged replica on the DN, delete the corresponding block file and meta file, then run
    hdfs debug recoverLease -path /blockrecover/genome-scores.csv -retries 10
  • Download the file with get, delete it from HDFS, then re-upload it (see the sketch after this list).
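
A minimal sketch of the second workaround (the local path under /tmp is illustrative):

[hadoop@hadoop001 ~]$ hdfs dfs -get /blockrecover/genome-scores.csv /tmp/genome-scores.csv
[hadoop@hadoop001 ~]$ hdfs dfs -rm -skipTrash /blockrecover/genome-scores.csv
[hadoop@hadoop001 ~]$ hdfs dfs -put /tmp/genome-scores.csv /blockrecover/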

6. Summary

In production, I generally prefer the manual repair approach.

Of course, you can also download the file with get, delete it from HDFS, and then re-upload it.
