(Repost) Common Hadoop Errors and Issues (Part 1)

Reposted from: http://www.freeoa.net/osuport/db/my-hbase-usage-problem-sets_2979.html

This article collects problems I ran into while using HBase, together with the corresponding fixes. I am summarizing and analyzing them here so they are not forgotten later.

Adding new Hadoop nodes

Three nodes were to be added to the Hadoop cluster. On each of the three hosts we configured /etc/hosts, passwordless SSH trust with every machine in the cluster, JDK 1.7, and so on, keeping all configuration and directory layouts identical to the rest of the cluster. After copying the installation files to the three hosts, the modified configuration files were distributed to the corresponding directories on each of them. The slaves file on the namenode was updated with the three new hostnames and then pushed out to all datanodes.

Run the following on each of the three hosts:
yarn-daemon.sh start nodemanager
hadoop-daemon.sh start datanode

These two commands start the NodeManager and the DataNode respectively.

Then run the balancer to redistribute data:
start-balancer.sh -threshold 5

This keeps HDFS data spread evenly across the nodes.

HBase troubleshooting

INFO client.HConnectionManager$HConnectionImplementation: This client just lost it's session with ZooKeeper, will automatically reconnect when needed.

Tune the region server GC and timeouts as follows, to prevent a full GC from causing ZooKeeper session timeouts.
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=70
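These JVM flags normally go into hbase-env.sh on each region server; the snippet below is a hedged sketch (using HBASE_REGIONSERVER_OPTS so the flags apply only to region servers is an assumption about this installation):

# Append the CMS flags to the region server JVM options in hbase-env.sh.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=70"

The two timeouts below belong in hbase-site.xml.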


<property>
    <name>hbase.regionserver.lease.period</name>
    <value>240000</value>
</property>
<property>
    <name>hbase.rpc.timeout</name>
    <value>280000</value>
</property>


When the cluster appears to hang and the logs show ZooKeeper communication timeouts, ZooKeeper itself is not always at fault. The two issues below look like ZooKeeper timeouts but are actually problems in HBase or Hadoop.

1. The heap allocated to the region server is too small
When the HBase data volume takes up a lot of space, raise this value in the system profile:
export HBASE_HEAPSIZE=30720

The default in HBase's hbase-env.sh is:
# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

2. The ZooKeeper maximum client connection limit is too low
The default is 300. On clusters with very heavy query volume and very large record counts, the nodes can open so many connections to one another that new connections are refused.

<property>
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>15000</value>
</property>


3. HRegionServer fails to start properly
Running jps on the namenode shows whether HBase started correctly. The processes were:
[root@master bin]# jps
26341 HMaster
26642 Jps
7840 ResourceManager
7524 NameNode
7699 SecondaryNameNode

Hadoop itself started fine, but HBase is missing a process; presumably a region server on one of the nodes failed to start.

Log in to node slave1 and run jps to check its processes:
[root@master bin]# ssh slave1
Last login: Thu Jul 17 17:29:11 2014 from master
[root@slave1 ~]# jps
4296 DataNode
11261 HRegionServer
11512 Jps
11184 QuorumPeerMain

So slave1 is fine. Log in to slave2 and run jps:
[root@slave2 ~]# jps
3795 DataNode
11339 Jps
11080 QuorumPeerMain

There it is: HRegionServer did not start. Checking the HBase log:

2014-07-17 09:28:19,392 INFO  [regionserver60020] regionserver.HRegionServer: STOPPED: Unhandled: org.apache.hadoop.hbase.ClockOutOfSyncException: Server slave2,60020,1405560498057 has been rejected; Reported time is too far out of sync with master.  Time difference of 28804194ms > max allowed of 30000ms
at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)

The error log shows that the clocks on slave2 and the master differ by far too much. Checking the system time on each machine confirmed it; synchronizing the clocks fixes the problem. An alternative is to raise the tolerance in the HBase configuration:

Set hbase.master.maxclockskew:

<property>
    <name>hbase.master.maxclockskew</name>
    <value>200000</value>
    <description>Time difference of regionserver from master</description>
</property>


(This approach is not recommended.)
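In practice the simpler fix is to synchronize the clocks. A minimal sketch, run from the master, assuming ntpdate is installed (the NTP server address and the HBase path are assumptions, substitute your own):

# Sync slave2's clock against an NTP server, write it to the hardware clock,
# then start the region server on slave2 again.
ssh slave2 'ntpdate pool.ntp.org && hwclock -w'
ssh slave2 '/usr/local/hbase/bin/hbase-daemon.sh start regionserver'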

4. ZooKeeper fails to start properly
Starting HBase kept failing with errors about not being able to connect to ZooKeeper. The ZooKeeper log showed:
ClientCnxn$SendThread@966] - Opening socket connection to server slave1. Will not attempt to authenticate using SASL (unable to locate a login configuration). After some searching:

The cause was the hosts file: opening /etc/hosts showed that the IP configured for slave1 was wrong. Fortunately both HBase and ZooKeeper keep logs. After fixing the entry and restarting ZooKeeper and HBase, the problem was gone.
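A quick way to catch this kind of hosts-file mistake (a sketch; run it on every node):

# Confirm slave1 resolves to the expected address and is reachable.
getent hosts slave1
ping -c 1 slave1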

5. The list command fails in the HBase shell
Running list in the HBase shell failed with a very long error (output omitted here).

The key line is: client.HConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase, which means the client cannot reach ZooKeeper. jps showed all ZooKeeper processes running, and the ZooKeeper entries in hbase-site.xml were correct. Experience suggested the firewall had not been disabled, so port 2181 was unreachable. After running service iptables stop to turn off the firewall and restarting HBase, list works again:
hbase(main):001:0> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/hadoop/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2014-07-17 14:06:26,013 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 1.0070 seconds

=> []

Everything is back to normal; problem solved.
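A hedged sketch of the check and of making the fix permanent (the commands assume a CentOS 6 style init system, matching the service iptables stop used above):

# Verify the ZooKeeper client port is reachable from a client machine,
# then keep the firewall off across reboots.
nc -z -w 3 slave1 2181 && echo "port 2181 reachable"
service iptables stop
chkconfig iptables off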

6. Exceptions on inserts, deletes and updates in the HBase shell
Any insert, delete or update in the HBase shell threw:
zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
This turned out to be an incompatibility between the Hadoop jars bundled with HBase and the Hadoop version actually running. Fix: copy the hadoop-2.2.0 jars from the Hadoop installation into ${HBASE_HOME}/lib, replacing the bundled ones (a sketch follows).
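A minimal sketch of the jar replacement, using the paths visible in the SLF4J output above (adjust them to your layout):

# Remove the hadoop jars bundled with HBase and copy in the ones from the
# running hadoop-2.2.0 installation.
cd /root/hadoop/hbase/lib
rm -f hadoop-*.jar
find /root/hadoop/hadoop-2.2.0/share/hadoop -name 'hadoop-*-2.2.0.jar' \
  -not -name '*test*' -exec cp {} . \;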

7. All region servers drop offline

This was not a ZooKeeper problem but an HDFS one. The logs showed "does not have any open files."; the fix is to raise the maximum number of transfer threads between datanodes, dfs.datanode.max.transfer.threads (see the sketch below).
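A sketch of applying the change on a datanode (the Hadoop path is an assumption; the property itself lives in hdfs-site.xml):

# Check whether the property is already set, then restart the datanode after
# raising it in hdfs-site.xml.
grep -A1 'dfs.datanode.max.transfer.threads' /usr/local/hadoop/etc/hadoop/hdfs-site.xml
/usr/local/hadoop/sbin/hadoop-daemon.sh stop datanode
/usr/local/hadoop/sbin/hadoop-daemon.sh start datanode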

EndOfStreamException: Unable to read additional data from client sessionid 0x4f6ce1baef1cc5, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:662)
2015-09-09 12:00:04,636 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /172.16.0.175:39889 which had sessionid 0x4f6ce1baef1cc5
2015-09-09 12:00:07,570 WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x4f6ce1baef1d4f, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:662)
2015-09-09 13:19:20,232 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /172.16.0.161:55772
2015-09-09 13:19:20,274 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /172.16.0.161:55772
2015-09-09 13:19:20,276 INFO  [CommitProcessor:0] server.ZooKeeperServer: Established session 0x4f6ce1baef1f96 with negotiated timeout 90000 for client /172.16.0.161:55772
2015-09-09 13:20:21,000 INFO  [SessionTracker] server.ZooKeeperServer: Expiring session 0x24f6ce1bd0f207c, timeout of 90000ms exceeded
2015-09-09 13:20:21,000 INFO  [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x24f6ce1bd0f207c
2015-09-09 13:24:21,037 WARN [QuorumPeer[myid=0]/0.0.0.0:2181] quorum.LearnerHandler: Closing connection to peer due to transaction timeout.
2015-09-09 13:24:21,237 WARN [LearnerHandler-/192.168.40.35:56545] quorum.LearnerHandler: ****** GOODBYE /192.168.40.35:56545 ******
2015-09-09 13:24:21,237 WARN [LearnerHandler-/192.168.40.35:56545] quorum.LearnerHandler: Ignoring unexpected exception

A second possible cause

After the Hadoop namenode was reformatted and HBase was restarted, the HMaster process died right after starting. After digging through a pile of logs, the ZooKeeper log showed:
Unable to read additional data from client sessionid 0x14e86607c850007, likely client has closed socket

Fix: delete the directory configured by the following property in HBase's hbase-site.xml, restart the ZooKeeper cluster, then restart HBase so the directory is recreated.


<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/freeoa/zookeeper/data</value>
</property>


hbase.zookeeper.property.dataDir corresponds to dataDir in ZooKeeper's zoo.cfg: the directory where ZooKeeper stores its database snapshots.
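A sketch of the recovery sequence described above (it assumes a ZooKeeper quorum run outside HBase and that clearing the snapshot directory under the configured dataDir is sufficient; adjust to how your quorum is managed):

# Stop HBase, clear the stale ZooKeeper data, restart the quorum, restart HBase.
stop-hbase.sh
zkServer.sh stop                 # run on every ZooKeeper node
rm -rf /home/freeoa/zookeeper/data/version-2
zkServer.sh start
start-hbase.sh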

8. HDFS writes fail on datanode failure when the replication factor is less than 3
With a heavy workload and a small cluster (4 datanodes), the cluster kept running into trouble; data collection started failing and data was lost. When data goes missing, data collection is the first suspect, so let's look at the error log:

Caused by: java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[10.0.2.163:50010, 10.0.2.164:50010], original=[10.0.2.163:50010, 10.0.2.164:50010])

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:817)

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:877)

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:983)

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:780)

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
    
The error:
Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT

The log says that adding a datanode failed and suggests turning off the dfs.client.block.write.replace-datanode-on-failure.policy feature. But I was not adding any node, so the problem is clearly not that simple.

The official configuration documentation describes these parameters as follows:

dfs.client.block.write.replace-datanode-on-failure.enable (default: true)
If there is a datanode/network failure in the write pipeline, DFSClient will try to remove the failed datanode from the pipeline and then continue writing with the remaining datanodes. As a result, the number of datanodes in the pipeline is decreased. The feature is to add new datanodes to the pipeline. This is a site-wide property to enable/disable the feature. When the cluster size is extremely small, e.g. 3 nodes or less, cluster administrators may want to set the policy to NEVER in the default configuration file or disable this feature. Otherwise, users may experience an unusually high rate of pipeline failures since it is impossible to find new datanodes for replacement. See also dfs.client.block.write.replace-datanode-on-failure.policy.

dfs.client.block.write.replace-datanode-on-failure.policy (default: DEFAULT)
This property is used only if the value of dfs.client.block.write.replace-datanode-on-failure.enable is true. ALWAYS: always add a new datanode when an existing datanode is removed. NEVER: never add a new datanode. DEFAULT: Let r be the replication number. Let n be the number of existing datanodes. Add a new datanode only if r is greater than or equal to 3 and either (1) floor(r/2) is greater than or equal to n; or (2) r is greater than n and the block is hflushed/appended.

From: https://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

Tracing the source code into DFSClient shows the failure happens while the client is writing blocks through the pipeline, and the two relevant parameters are:
dfs.client.block.write.replace-datanode-on-failure.enable
dfs.client.block.write.replace-datanode-on-failure.policy

The first controls whether the client applies a replacement policy at all when a write fails; its default of true is fine. The second selects the replacement policy itself, and defaults to DEFAULT.

With the DEFAULT policy, when there are 3 or more replicas the client tries to replace the failed datanode with a new one; with only two replicas it does not replace the datanode and simply keeps writing. Since my cluster had only 4 datanodes, whenever the load got high and two or more datanodes stopped responding at the same time, HDFS writes failed. On a small cluster it is reasonable to turn this feature off.
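As a quick way to confirm the behaviour without editing hdfs-site.xml, the client-side property can also be passed per invocation; this is an illustrative assumption, not something the original procedure did:

# Write a test file with the replacement policy disabled for this command only.
hdfs dfs -Ddfs.client.block.write.replace-datanode-on-failure.policy=NEVER \
         -Ddfs.client.block.write.replace-datanode-on-failure.enable=true \
         -put ./localfile /tmp/replace-dn-test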

9. Troubleshooting the HBase startup process
During a disaster test, one of the machines running HDFS and a region server was simply rebooted. How do you bring its previous services back up afterwards?
Start HDFS:
sbin/hadoop-daemon.sh start datanode

Start the HBase region server:
bin/hbase-daemon.sh  start regionserver

However, jps showed no HQuorumPeer process. Searching online suggested that the server clock might have drifted significantly; after re-checking, the clock turned out to be 30 seconds fast. Even after re-synchronizing and restarting the HBase service, the process still did not come up.

It seemed the whole HBase cluster had to be restarted.
hadoop@xdm:/usr/local/hbase$ bin/stop-hbase.sh
stopping hbase...................
xd1: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
xd0: stopping zookeeper.
xd2: stopping zookeeper.

So that process belongs to ZooKeeper.

bash bin/hbase-daemon.sh start zookeeper

The HQuorumPeer process does not start.
Cause: the directory configured for hbase.zookeeper.property.dataDir in hbase/conf/hbase-site.xml does not exist.
Solution: the value of hbase.zookeeper.property.dataDir is /home/grid/zookeeper, so create the /home/grid/zookeeper directory locally (a short sketch follows).
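A minimal sketch of that fix (both the directory and the start command are taken from this section):

# Create the missing dataDir, then start the HBase-managed ZooKeeper again.
mkdir -p /home/grid/zookeeper
bin/hbase-daemon.sh start zookeeper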

Hadoop generally uses hostnames to talk to the nodes in the cluster, so the IP-to-hostname mapping of every node must be written into /etc/hosts for communication to work. If you used IPs in the configuration files and things misbehave, switch them to the corresponding hostnames. There is also the Hadoop/HBase kernel version incompatibility problem: the Hadoop jars under HBase's lib directory were newer than the 0.20.2 I had installed and had to be replaced with the 0.20.2 jars, as the official documentation explains.

If you use Cloudera's packaged Hadoop and HBase you can skip the jar replacement, because Cloudera has already resolved all of the compatibility issues.

10. Problems encountered when adding a new datanode to Hadoop
Add the new hostname to /etc/hosts on every machine.

Add the master node's SSH key to the new machine:
ssh-copy-id 192.168.0.83

Add the new node's hostname to the slaves file (a new line):
vim /usr/local/hadoop/etc/hadoop/slaves

Re-sync the configuration files to all nodes:
for i in {80..83};do rsync -av /usr/local/hadoop/etc/ 192.168.0.$i:/usr/local/hadoop/etc/;done

Note: none of the existing nodes need to be restarted; just start the datanode on the new node. Once it is up, run the balancer script (sbin/start-balancer.sh) so that data is pulled over from the other nodes, whose disk usage drops accordingly.

Also note that the balancing bandwidth between datanodes defaults to only 1 MB/s, which is far too little for a cluster holding a lot of data; on the new node it can be raised to 10 MB/s:

<property>
    <name>dfs.balance.bandwidthPerSec</name>
    <value>10485760</value>
</property>

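If you would rather not edit the configuration and restart, the bandwidth can also be raised at runtime; this alternative is an addition here, not part of the original procedure:

# Set the balancer bandwidth (bytes per second) cluster-wide until the next restart.
hdfs dfsadmin -setBalancerBandwidth 10485760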

After starting the datanode, the master's log showed:
2015-10-19 15:26:31,738 WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock request received for blk_1073742014_1202 on node 192.168.0.83:50010 size 60
2015-10-19 15:26:31,738 WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock request received for blk_1073742015_1203 on node 192.168.0.83:50010 size 60
2015-10-19 15:26:31,739 INFO BlockStateChange: BLOCK* processReport: from storage DS-14d45702-1346-4f53-8178-4c9db668729d node DatanodeRegistration(192.168.0.83, datanodeUuid=c82b8d07-5ffb-46d5-81f7-47c06e673384, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-c06612c4-aa83-4145-b2b5-e4faffa08074;nsid=531546365;c=0), blocks: 48, hasStaleStorage: false, processing time: 5 msecs

Lots of warnings appeared, and when refreshing the web page on port 50070 the nodes xd0 and xd3 kept alternating. It turned out I had cloned machine xd0 to create xd3 without cleaning out the datanode directories, so the only option was to stop it, wipe the directories and start it again. The namenode log recorded:
2015-10-19 15:38:51,415 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 170 Total time for transactions(ms): 13 Number of transactions batched in Syncs: 2 Number of syncs: 131 SyncTimes(ms): 2621
2015-10-19 15:38:51,433 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742974_2164 192.168.0.81:50010 192.168.0.82:50010 192.168.0.80:50010
2015-10-19 15:38:51,446 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742975_2165 192.168.0.82:50010 192.168.0.81:50010 192.168.0.80:50010
2015-10-19 15:38:53,844 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.82:50010 to delete [blk_1073742974_2164, blk_1073742975_2165]
2015-10-19 15:38:56,844 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.81:50010 to delete [blk_1073742974_2164, blk_1073742975_2165]
2015-10-19 15:38:56,848 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-10-19 15:38:56,848 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2015-10-19 15:38:57,220 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.0.83, datanodeUuid=b99253b7-6535-4d0e-85ad-7ca6332181af, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-c06612c4-aa83-4145-b2b5-e4faffa08074;nsid=531546365;c=0) storage b99253b7-6535-4d0e-85ad-7ca6332181af
2015-10-19 15:38:57,220 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-10-19 15:38:57,220 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.0.83:50010
2015-10-19 15:38:57,273 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-10-19 15:38:57,273 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a for DN 192.168.0.83:50010
2015-10-19 15:38:57,291 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Processing first storage report for DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a from datanode b99253b7-6535-4d0e-85ad-7ca6332181af
2015-10-19 15:38:57,291 INFO BlockStateChange: BLOCK* processReport: from storage DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a node DatanodeRegistration(192.168.0.83, datanodeUuid=b99253b7-6535-4d0e-85ad-7ca6332181af, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-c06612c4-aa83-4145-b2b5-e4faffa08074;nsid=531546365;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
2015-10-19 15:38:59,844 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.80:50010 to delete [blk_1073742974_2164, blk_1073742975_2165]
2015-10-19 15:39:26,848 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-10-19 15:39:26,848 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).

The new node's own log showed no errors, but this affected the data on the original machine xd0: its data later turned out to be corrupted, so its datanode directories also had to be wiped before starting the datanode again (a sketch of that clean-up follows).
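A hedged sketch of the clean-up on the affected node (the data directory path is an assumption; use whatever dfs.datanode.data.dir points to):

# Stop the datanode, wipe the copied block data so the node registers with a
# fresh storage ID, then start it again.
sbin/hadoop-daemon.sh stop datanode
rm -rf /usr/local/hadoop/dfs/data/*
sbin/hadoop-daemon.sh start datanode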

Start the balancing script:
hadoop@xd3:/usr/local/hadoop$ sbin/start-balancer.sh

The log recorded:
hadoop@xd3:/usr/local/hadoop$ more logs/hadoop-hadoop-balancer-xd3.log
2015-10-19 15:43:44,875 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 172 Total time for transactions(ms): 13 Number of transactions batched in Syncs: 2 Number of syncs: 133 SyncTimes(ms): 2650
2015-10-19 15:43:44,950 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /system/balancer.id. BP-668197744-192.168.0.9-1444641099117 blk_1073742981_2173{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a:NORMAL:192.168.0.83:50010|RBW], ReplicaUnderConstruction[[DISK]DS-f7b1834a-9174-445a-8802-8c6f908d0a94:NORMAL:192.168.0.81:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6b35073c-4673-4fce-8d46-fe0377a17739:NORMAL:192.168.0.82:50010|RBW]]}
2015-10-19 15:43:45,226 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /system/balancer.id for DFSClient_NONMAPREDUCE_912969379_1
2015-10-19 15:43:45,285 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.0.82:50010 is added to blk_1073742981_2173{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a:NORMAL:192.168.0.83:50010|RBW], ReplicaUnderConstruction[[DISK]DS-f7b1834a-9174-445a-8802-8c6f908d0a94:NORMAL:192.168.0.81:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6b35073c-4673-4fce-8d46-fe0377a17739:NORMAL:192.168.0.82:50010|RBW]]} size 3
2015-10-19 15:43:45,287 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.0.81:50010 is added to blk_1073742981_2173{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-40fd35ff-cb26-47e8-aa50-654dbdfbbd4a:NORMAL:192.168.0.83:50010|RBW], ReplicaUnderConstruction[[DISK]DS-f7b1834a-9174-445a-8802-8c6f908d0a94:NORMAL:192.168.0.81:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6b35073c-4673-4fce-8d46-fe0377a17739:NORMAL:192.168.0.82:50010|RBW]]} size 3
2015-10-19 15:43:45,308 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.0.83:50010 is added to blk_1073742981_2173 size 3
2015-10-19 15:43:45,309 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /system/balancer.id is closed by DFSClient_NONMAPREDUCE_912969379_1
2015-10-19 15:43:45,331 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742981_2173 192.168.0.81:50010 192.168.0.82:50010 192.168.0.83:50010
2015-10-19 15:43:47,861 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.81:50010 to delete [blk_1073742981_2173]
2015-10-19 15:43:47,862 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.82:50010 to delete [blk_1073742981_2173]
2015-10-19 15:43:50,862 INFO BlockStateChange: BLOCK* BlockManager: ask 192.168.0.83:50010 to delete [blk_1073742981_2173]

11. A decommissioning datanode never retires because blocks lack replicas

While decommissioning a datanode, tens of thousands of blocks were copied off it, but exactly 2 blocks remained stuck, so the node could not go offline even after several hours. The Hadoop UI kept showing 2 blocks under Under Replicated Blocks that never went away:


Under Replicated Blocks 2
Under Replicated Blocks In Files Under Construction 2

The namenode log kept scrolling entries like this:
2015-01-20 15:04:47,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Block: blk_8859027644264991843_26141120, Expected Replicas: 3, live replicas: 2, corrupt replicas: 0, decommissioned replicas: 1, excess replicas: 0, Is Open File: true, Datanodes having this block: 10.11.12.13:50010 10.11.12.14:50010 10.11.12.15:50010 , Current Datanode: 10.11.12.13:50010, Is current datanode decommissioning: true


After a lot of googling, this appears to be a Hadoop bug: https://issues.apache.org/jira/browse/HDFS-5579
The NameNode sees that the block has too few replicas (3 expected, only 2 live), apparently considers the data incomplete, and stubbornly refuses to let the DataNode retire.

The eventual workaround was to set the replication factor to 2:
hadoop fs -setrep -R 2 /

Shortly after running this, the node went offline. The magic of replication factors.

Removing any datanode would leave some files unable to meet their minimum replication factor. When you try to remove such a datanode, it stays in the Decommissioning state forever because it cannot find another machine to migrate its data to. This problem typically shows up on small clusters.

One workaround is to temporarily lower the replication factor of the affected files. Use the following command to view the replication factor of every file in HDFS:
hdfs fsck / -files -blocks

In the output, repl=1 means the replication factor of that block of that file is 1; with this command you can find the files whose replication factor is relatively high.

Adjusting a file's replication factor

Note that the replication factor is a property of the file, not of the cluster; files in the same cluster can have different replication factors, so it has to be changed per file. The command is:
hdfs dfs -setrep [-R] [-w] <numReplicas> <path>

where:
-R means recursive, so the replication factor can be set for a directory and everything under it
<numReplicas> is the replication factor to set
<path> is the file or directory whose replication factor should be changed
-w waits for the replication to complete, which may take a long time
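For example (the path is illustrative):

# Lower the replication factor of everything under /user/freeoa to 2 and wait
# for the change to finish.
hdfs dfs -setrep -R -w 2 /user/freeoa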


An article found via a search engine that collects problems met while using these systems: Hadoop常见错误及解决办法汇总【时常更新】 (a frequently updated list of common Hadoop errors and fixes).

Thanks to the original author!


12. HBase fails to list tables because Hadoop is in safe mode

Listing tables fails:
hbase(main):001:0> list
TABLE                                                                                                                                
ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet

The log file contains the following error:

2016-07-13 10:33:14,576 INFO  [master:hadoop1:60000] catalog.CatalogTracker: Failed verification of hbase:meta,,1 at address=hadoop3,60020,1453280257576, exception=org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 1468377187016

Investigation showed that the Hadoop cluster was in safe mode. Run:
hdfs dfsadmin -safemode leave

Restart HBase and the problem is resolved. The related subcommands are:
hdfs dfsadmin -safemode enter|leave|get|wait

This is the safe-mode maintenance command. Safe mode is a namenode state in which the namenode:
1. accepts no changes to the namespace (read-only)
2. neither replicates nor deletes blocks

The namenode enters safe mode automatically at startup and leaves it automatically once the configured minimum percentage of blocks meets the minimum replication condition. Safe mode can also be entered manually, but then it must be left manually as well.
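A short sketch of the related checks (same dfsadmin tool as above):

# Show the current safe mode state, leave it explicitly, or block until the
# namenode leaves it on its own.
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave
hdfs dfsadmin -safemode wait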

13. Testing block recovery on a datanode

Test: empty one of the data storage directories on a datanode machine (the node has at least two storage directories).

hadoop@htcom:/usr/local/hadoop$ bin/hdfs fsck /
17/03/17 15:57:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://htcom:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.0.9 for path / at Fri Mar 17 15:57:24 CST 2017
.....
/hbase/data/default/wz/7c8b18620febde82c7ec60de1d26e362/.regioninfo:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742108_1284. Target Replicas is 3 but found 2 replica(s).
.
/hbase/data/default/wz/7c8b18620febde82c7ec60de1d26e362/cf/efe459c273c24635b4a571c6edbb907d:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742112_1288. Target Replicas is 3 but found 2 replica(s).
....
/hbase/data/default/wz/9b3289559bb9e511a6dcafcf95b32419/cf/22b36ccb56124f51a0537ccb6825d965:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742056_1232. Target Replicas is 3 but found 2 replica(s).
.....
/hbase/data/default/wz/d6791ab507e752b01de90b3c07efdb72/.regioninfo:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742022_1198. Target Replicas is 3 but found 2 replica(s).
.
/hbase/data/default/wz/d6791ab507e752b01de90b3c07efdb72/cf/5267d4e39ac84a14942e26d4f97d3ed2:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742026_1202. Target Replicas is 3 but found 2 replica(s).
.....
/hbase/data/default/wz/f639a4fddc8be893b8489f85b9f4bb67/.regioninfo:  Under replicated BP-1176962362-192.168.0.9-1489729250270:blk_1073742136_1312. Target Replicas is 3 but found 2 replica(s).
............Status: HEALTHY
 Total size:    2126046745 B
 Total dirs:    85
 Total files:    57
 Total symlinks:        0 (Files currently being written: 3)
 Total blocks (validated):    58 (avg. block size 36655978 B) (Total open file blocks (not validated): 3)
 Minimally replicated blocks:    58 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:    11 (18.965517 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:    2
 Average block replication:    2.6724138
 Corrupt blocks:        0
 Missing replicas:        11 (6.626506 %)
 Number of data-nodes:        3
 Number of racks:        1
FSCK ended at Fri Mar 17 15:57:24 CST 2017 in 21 milliseconds


The filesystem under path '/' is HEALTHY

The namenode log keeps reporting errors:
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2017-03-17 16:02:40,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

The datanode log reports:
2017-03-17 15:44:16,301 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: xd1:50010:DataXceiver error processing READ_BLOCK operation  src: /192.168.0.81:47294 dst: /192.168.0.81:50010
java.io.IOException: Block BP-1176962362-192.168.0.9-1489729250270:blk_1073742112_1288 is not valid. Expected block file at /opt/edfs/current/BP-1176962362-192.168.0.9-1489729250270/current/finalized/subdir0/subdir1/blk_1073742112 does not exist.
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:585)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:375)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:514)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:237)
    at java.lang.Thread.run(Thread.java:745)


bin/hdfs dfsadmin -report
shows the datanode still in a normal state. 'fsck /' reports no errors at first, then gradually starts showing the errors above, yet the filesystem remains "HEALTHY". The emptied directory slowly fills with blocks newly assigned to it. After a restart the datanode synchronizes the missing blocks quickly; it is unclear whether, given enough time, the node would fully re-sync its blocks without being restarted.

Next, half of the storage directories (/opt/edfs) on another datanode, xd0, were emptied to see what would happen after a few hours.

A few hours later the logs still showed errors, but by noon the next day the emptied directories had grown back to their previous size and neither the datanode nor the namenode logs contained any more missing-block warnings. The web UI (port 50070) showed the Blocks and Block pool used figures fairly even across nodes, and the row count of the test table matched the previous check.

Restarting the affected datanode
After a successful restart, the datanode quickly pulls the blocks missing from that directory over from the other machines:
2017-03-17 16:02:48,598 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741876_1052 src: /192.168.0.83:51588 dest: /192.168.0.81:50010 of size 73
2017-03-17 16:02:48,643 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073741864_1040 src: /192.168.0.83:51589 dest: /192.168.0.81:50010
2017-03-17 16:02:51,373 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073741833_1009 src: /192.168.0.80:52879 dest: /192.168.0.81:50010
2017-03-17 16:02:51,383 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741833_1009 src: /192.168.0.80:52879 dest: /192.168.0.81:50010 of size 1745
2017-03-17 16:02:51,483 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073741846_1022 src: /192.168.0.80:52878 dest: /192.168.0.81:50010
2017-03-17 16:02:51,527 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741846_1022 src: /192.168.0.80:52878 dest: /192.168.0.81:50010 of size 1045
2017-03-17 16:02:51,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073741860_1036 src: /192.168.0.83:51590 dest: /192.168.0.81:50010
2017-03-17 16:02:51,751 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741860_1036 src: /192.168.0.83:51590 dest: /192.168.0.81:50010 of size 54
2017-03-17 16:02:53,841 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073742130_1306 src: /192.168.0.83:51586 dest: /192.168.0.81:50010 of size 79136111
2017-03-17 16:02:55,065 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741894_1070 src: /192.168.0.80:52875 dest: /192.168.0.81:50010 of size 95193917
2017-03-17 16:02:55,289 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-1176962362-192.168.0.9-1489729250270:blk_1073741880_1056 src: /192.168.0.80:52877 dest: /192.168.0.81:50010 of size 76640469


2017-03-17 16:07:40,683 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073742151_1329 src: /192.168.0.81:50162 dest: /192.168.0.81:50010
2017-03-17 16:07:40,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.81:50162, dest: /192.168.0.81:50010, bytes: 27729, op: HDFS_WRITE, cliID: DFSClient_hb_rs_xd1,60020,1489730648396_-196772784_32, offset: 0, srvID: 06fc9536-5e47-4c43-884e-7be0d65a6aed, blockid: BP-1176962362-192.168.0.9-1489729250270:blk_1073742151_1329, duration: 17845556
2017-03-17 16:07:40,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1176962362-192.168.0.9-1489729250270:blk_1073742151_1329, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-03-17 16:09:04,905 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1176962362-192.168.0.9-1489729250270:blk_1073742152_1330 src: /192.168.0.81:50168 dest: /192.168.0.81:50010
2017-03-17 16:09:16,552 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1176962362-192.168.0.9-1489729250270:blk_1073741994_1170
2017-03-17 16:10:32,560 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1176962362-192.168.0.9-1489729250270:blk_1073742130_1306


The namenode log:
2017-03-17 16:03:16,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-03-17 16:03:46,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-03-17 16:03:46,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-03-17 16:04:14,715 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 102
2017-03-17 16:04:14,734 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /hbase/WALs/xd1,60020,1489730648396/xd1%2C60020%2C1489730648396.1489737854594.meta. BP-1176962362-192.168.0.9-1489729250270 blk_1073742150_1328{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-032e61b4-2a55-4b0f-b497-0856d1aef4ee:NORMAL:192.168.0.81:50010|RBW], ReplicaUnderConstruction[[DISK]DS-68cfba4b-8112-43b1-864d-fc9622c36ab8:NORMAL:192.168.0.83:50010|RBW], ReplicaUnderConstruction[[DISK]DS-79a15dc0-0c3e-4cf3-8898-7ad1e6f25788:NORMAL:192.168.0.80:50010|RBW]]}
2017-03-17 16:04:14,974 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /hbase/WALs/xd1,60020,1489730648396/xd1%2C60020%2C1489730648396.1489737854594.meta for DFSClient_hb_rs_xd1,60020,1489730648396_-196772784_32
2017-03-17 16:04:16,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-03-17 16:04:16,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).


2017-03-17 16:09:04,998 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 19 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 10 SyncTimes(ms): 199
2017-03-17 16:09:05,022 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /hbase/WALs/xd1,60020,1489730648396/xd1%2C60020%2C1489730648396.1489738144857. BP-1176962362-192.168.0.9-1489729250270 blk_1073742152_1330{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-493b36f7-1dbf-4d2f-83d4-288360686852:NORMAL:192.168.0.81:50010|RBW], ReplicaUnderConstruction[[DISK]DS-79a15dc0-0c3e-4cf3-8898-7ad1e6f25788:NORMAL:192.168.0.80:50010|RBW], ReplicaUnderConstruction[[DISK]DS-60259b9b-c57d-46d8-841e-100fcbf9ff18:NORMAL:192.168.0.83:50010|RBW]]}
2017-03-17 16:09:05,052 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /hbase/WALs/xd1,60020,1489730648396/xd1%2C60020%2C1489730648396.1489738144857 for DFSClient_hb_rs_xd1,60020,1489730648396_-196772784_32

On the web UI's "Datanode Information" page, "Block pool used" again shows this node's usage close to that of the other nodes.


If you do not want to see these warnings in the log files, this can be handled in the configuration. The data directories are defined like this:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>/freeoa/pdb,/freeoa/pdc,/freeoa/pdd</value>
</property>


Each of these directories is the mount point of a separate disk; when one of those disks develops I/O errors, the errors above appear. If you do not want Hadoop to fail on them, set "dfs.datanode.failed.volumes.tolerated" to 1 (the default is 0).
The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown.


<property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
</property>


14. Repairing HDFS corruption after a forced datanode shutdown followed by balancing

A colleague forcibly shut down one of the Hadoop v2.2.0 datanodes (it had been stuck decommissioning for a long time) and then started balancer.sh on another, newly added datanode, which led to the following error report.

...................................................................................................
Status: CORRUPT
 Total size:    17877963862236 B (Total open files size: 1601620148224 B)
 Total dirs:    18291
 Total files:    16499
 Total symlinks:        0 (Files currently being written: 9)
 Total blocks (validated):    146222 (avg. block size 122265896 B) (Total open file blocks (not validated): 11939)
  ********************************
  CORRUPT FILES:    2
  MISSING BLOCKS:    70
  MISSING SIZE:        9395240960 B
  CORRUPT BLOCKS:     70
  ********************************
 Minimally replicated blocks:    146152 (99.952126 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:    0 (0.0 %)
 Mis-replicated blocks:        0 (0.0 %)
 Default replication factor:    2
 Average block replication:    1.9990425
 Corrupt blocks:        70
 Missing replicas:        0 (0.0 %)
 Number of data-nodes:        13
 Number of racks:        1
FSCK ended at Mon Apr 17 10:31:04 CST 2017 in 1357 milliseconds

The filesystem under path '/' is CORRUPT

According to the official FAQ, this roughly means that for a few blocks no copy is stored on any of the existing datanodes, yet the blocks still exist in the namenode's metadata. The HBase master web UI home page also shows a large number of warnings.

How do we fix this? First, a look at Hadoop's health-check command, fsck.

The fsck tool verifies whether files in HDFS are intact and usable: it detects blocks that are missing from the datanodes and files that are under- or over-replicated. Its usage is as follows.
Note: you must run it as the account that started HDFS.
Usage: DFSck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 <path>  the directory where the check starts
 -move   move corrupted files to /lost+found
 -delete delete corrupted files
 -files  print every file being checked
 -openforwrite   print files currently open for writing
 -list-corruptfileblocks print out list of missing blocks and files they belong to
 -blocks print the block report
 -locations      print the location of every block
 -racks  print the network topology of the data-nodes

By default fsck ignores files that are being written; use -openforwrite to have them reported as well.

Finally, note that when the namenode holds a lot of file metadata this command noticeably affects Hadoop performance, so use it sparingly; typically you run it once during an idle period to review the overall replica health of HDFS.

Restarting HBase or Hadoop does not fix this problem. You can bring the forcibly removed datanode back into the cluster, stop the balancer script and let Hadoop recover on its own, then remove that datanode again once fsck reports no errors. If the machine cannot be recovered, consider the approach below (data will be lost, but the Hadoop system as a whole will come back up).


Attempts to repair with hbase hbck -fix and hbase hbck -repair failed.

Using hdfs fsck / -delete bad_region_file to simply remove the corrupt HBase blocks and then restarting the HBase cluster brought all regions back online; problem solved.

Check the block files again:
hdfs fsck / -files -blocks

hdfs fsck / | egrep -v '^\.+$' | grep -i "corrupt blockpool"| awk '{print $1}' |sort |uniq |sed -e 's/://g' >corrupted.flst
hdfs dfs -rm /path/to/corrupted.flst
hdfs dfs -rm -skipTrash /path/to/corrupted.flst
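Reading the three commands above together: corrupted.flst is a local file listing the corrupt HDFS paths, and each listed path is then removed (this interpretation, and the loop below, are assumptions):

# Delete every corrupt file collected in corrupted.flst; -skipTrash frees the
# blocks immediately instead of moving them to the trash.
while read -r f; do
  hdfs dfs -rm -skipTrash "$f"
done < corrupted.flst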

How would I repair a corrupted file if it was not easy to replace?

This might or might not be possible, but the first step would be to gather information on the file's location, and blocks.

hdfs fsck /path/to/filename/fileextension -locations -blocks -files
hdfs fsck hdfs://ip.or.hostname.of.namenode:50070/path/to/filename/fileextension -locations -blocks -files

Note: deleting the broken HDFS blocks with hdfs fsck / -delete causes data loss.

15. Handling exceptions when creating an HBase snapshot

Creating a snapshot in HBase ran into the following error:
> snapshot 'freeoa', 'freeoa-snapshot-20140912'

ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot
...
Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via timer-java.util.Timer@69db0cb4:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: org.apache.hadoop.hbase.errorhandling.TimeoutException: Timeout elapsed! Source:Timeout caused Foreign Exception Start:1410453067992, End:1410453127992, diff:60000, max:60000 ms  
    at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)  
    at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:320)  
    at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:332)  
    … 10 more  
Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: org.apache.hadoop.hbase.errorhandling.TimeoutException: Timeout elapsed! Source:Timeout caused Foreign Exception Start:1410453067992, End:1410453127992, diff:60000, max:60000 ms  
    at org.apache.hadoop.hbase.errorhandling.TimeoutExceptionInjector$1.run(TimeoutExceptionInjector.java:70)  
    at java.util.TimerThread.mainLoop(Timer.java:555)  
    at java.util.TimerThread.run(Timer.java:505)

This happens because communication with the servers timed out, so the defaults of the following two parameters need to be raised:
1. hbase.snapshot.region.timeout
2. hbase.snapshot.master.timeoutMillis

Both default to 60000 (milliseconds, i.e. 1 minute); if the operation takes longer than that, the error above is raised.

A snapshot is a set of metadata info that lets an admin restore a table to an earlier state.
Operations:
Take a snapshot: create a snapshot of a given table; it may fail while the table is being balanced, split or compacted.
Clone a snapshot: create a new table from a snapshot, with the same schema and data as the original; operations on the new table do not affect the original.
Restore a snapshot: roll a table back to the state captured in a snapshot.
Delete a snapshot: remove a snapshot and free its space, without affecting cloned tables or other snapshots.
Export a snapshot: copy a snapshot's metadata and data to another cluster; this is an HDFS-level operation and does not touch the region servers.

Commands:
hbase> snapshot 'tableName', 'snapshotName'
hbase> clone_snapshot 'snapshotName', 'newTableName'
hbase> delete_snapshot 'snapshotName'
hbase> restore_snapshot 'snapshotName'
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SnapshotName -copy-to hdfs:///srv2:8082/hbase

Some limitations:
When regions covered by a snapshot are merged, data can be lost in the snapshot and in tables cloned from it.
When a table with replication enabled is restored to a snapshot, its replica in the other cluster is not restored.

Below is a home-grown HBase snapshot script, invoked from crontab, which takes the snapshots and at the same time deletes the snapshots from 7 days earlier.

use v5.12;
use utf8;
use Encode;
use Mojo::Log;
use Time::Piece;
use Data::Dumper;
use Time::Seconds;
use HBase::JSONRest;
use Cwd qw(abs_path realpath);
use File::Basename qw(dirname);

binmode(STDIN, ":encoding(utf8)");
binmode(STDOUT, ":encoding(utf8)");

my $mydir=dirname(abs_path($0));
chdir($mydir);

my $cdt=localtime;
my $pdt7=$cdt - 7 * ONE_DAY;

my $log = Mojo::Log->new(path => 'log/hbase.snapshot.log');

$log = $log->format(sub {
  my ($time, $level, @lines) = @_;
  #my $idt=strftime("%Y-%m-%d %H:%M:%S",localtime);
  my $idt=localtime->datetime;
  return qq{[$idt] [$level] @lines \n};
});

#my $ymd=$cdt->ymd;# current date as YYYY-MM-DD
my $ymd=$cdt->strftime("%Y%m%d");# same, but without the '-'
my $ymd7=$pdt7->strftime("%Y%m%d");# same, for 7 days ago

#Hbase
my ($hostname,$hbdir)=('192.168.1.120:8080','/usr/local/hbase');
my $hbase=HBase::JSONRest->new(host=>$hostname);
# get the names of all tables
my $hbtabs=$hbase->list;

foreach my $tab (@$hbtabs){
    #say 'Take SnapShot for table:'.$tab->{name};
    make_snap_shot($tab->{name});
    purge_snap_shot($tab->{name});
}
# take a snapshot
sub make_snap_shot{
    my $tab=shift;
    my $cmd=qq[echo "snapshot '$tab', 'snapshot_$tab\_$ymd'" | $hbdir/bin/hbase shell];
    my ($rs,$henv)=(system($cmd));
    #$henv.="$_:$ENV{$_}\n" foreach (keys %ENV);
    $log->info("Take snapshot on $tab,return code is:$rs.");
}
# delete the snapshot taken 7 days ago
sub purge_snap_shot{
    my $tab=shift;
    my $cmd=qq[echo "delete_snapshot 'snapshot_$tab\_$ymd7'" | $hbdir/bin/hbase shell];
    my $rs=system($cmd);
    $log->info("Delete snapshot for table:$tab by snapshot name:snapshot_$tab\_$ymd7,return code is:$rs.");
}


Further reading: HBase Snapshot 相关操作原理 (the principles behind HBase snapshot operations).

16. Snapshots cannot complete because the timeout is too short, with many missing-temporary-file errors in the logs

hbase(main):004:0> snapshot 'table','snapshot_freeoa'

ERROR: Snapshot 'snapshot_freeoa' wasn't completed in expectedTime:60000 ms

Here is some help for this command:
Take a snapshot of specified table. Examples:

  hbase> snapshot 'sourceTable', 'snapshotName'
  hbase> snapshot 'namespace:sourceTable', 'snapshotName', {SKIP_FLUSH => true}

The default of 60 s is not enough for tables with many regions and needs to be increased.

In hbase-site.xml:


<property>
    <name>hbase.snapshot.enabled</name>
    <value>true</value>
</property>
<property>
    <name>hbase.snapshot.master.timeoutMillis</name>
    <value>1800000</value>
</property>
<property>
    <name>hbase.snapshot.region.timeout</name>
    <value>1800000</value>
</property>


17. Heavy writes slow down the whole system

too many store files; delaying flush up to 90000ms

2017-08-22 12:35:21,506 WARN  [B.DefaultRpcServer.handler=2969,queue=469,port=60020] hdfs.DFSClient: Failed to connect to /192.168.20.50:50010 for file /hbase/data/default/wz_content/aa010d3b1f1d325063edb83d9c971057/content/bf6454cd2bcd49c890cea6afaa5fe99b for block BP-1591618693-192.168.20.125-1431793334085:blk_1184188910_110455556:java.io.IOException: Connection reset by peer
2017-08-22 12:35:21,506 WARN  [B.DefaultRpcServer.handler=2969,queue=469,port=60020] hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 2665.0594974049063 msec.
2017-08-22 12:35:21,521 WARN  [B.DefaultRpcServer.handler=1790,queue=290,port=60020] hdfs.DFSClient: Failed to connect to /192.168.20.50:50010 for file /hbase/data/default/wz_content/aa010d3b1f1d325063edb83d9c971057/content/bf6454cd2bcd49c890cea6afaa5fe99b for block BP-1591618693-192.168.20.125-1431793334085:blk_1184188990_110455636:java.net.ConnectException: Connection timed out
2017-08-22 12:35:21,981 WARN  [B.DefaultRpcServer.handler=3370,queue=370,port=60020] hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 2089.5811774229255 msec.
2017-08-22 12:35:22,091 WARN  [B.DefaultRpcServer.handler=2525,queue=25,port=60020] hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 1630.3766567661871 msec.
2017-08-22 12:39:59,789 WARN  [regionserver60020.logRoller] regionserver.ReplicationSource: Queue size: 1394 exceeds value of replication.source.log.queue.warn: 2
2017-08-22 12:47:58,802 WARN  [regionserver60020.logRoller] regionserver.ReplicationSource: Queue size: 1395 exceeds value of replication.source.log.queue.warn: 2
2017-08-22 12:48:34,586 WARN  [MemStoreFlusher.1] regionserver.MemStoreFlusher: Region filter_content,c0007ac84,1502808497104.198fe43f10a1c7d980f38a4ff661957c. has too many store files; delaying flush up to 90000ms
2017-08-22 12:48:49,947 WARN  [B.DefaultRpcServer.handler=1031,queue=31,port=60020] hdfs.DFSClient: Failed to connect to /192.168.20.50:50010 for file /hbase/data/default/filter_content/cf69a9be117f48f8c084d61bf9c71290/content/c20b398c292d4adebc410cda249c29c1 for block BP-1591618693-192.168.20.125-1431793334085:blk_1184568976_110835622:java.net.ConnectException: Connection timed out
2017-08-22 12:48:49,947 WARN  [B.DefaultRpcServer.handler=1031,queue=31,port=60020] hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 658.8251327374326 msec.

References:

Hbase万亿级存储性能优化总结

HBase最佳实践--写性能优化策略

实时系统HBase读写优化--大量写入无障碍

Hbase写入量大导致region过大无法split问题

18. Compression cannot be enabled on some region servers

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
[regionserver60020-smallCompactions-1507591166085] compress.CodecPool: Got brand-new decompressor [.gz]
[B.DefaultRpcServer.handler=3900,queue=400,port=60020] compress.CodecPool: Got brand-new decompressor [.gz]

Symptoms:
In the logs of the two newly added region servers, 'compress.CodecPool' messages like the above scrolled by non-stop. Preliminary analysis pointed to the compression feature, since quite a few of my tables have compression enabled. Could native compression support be broken on these two region servers? Test it:
$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2017-10-13 15:11:38,717 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib:   false
snappy: false
lz4:    false
bzip2:  false
2017-10-13 15:11:38,863 INFO  [main] util.ExitUtil: Exiting with status 1

The older region servers support at least zlib and lz4, so something was clearly not installed properly here. Under /usr/local there was only an hbase directory and no sibling hadoop directory, whereas the three older region servers had one. Problem found: copying the hadoop directory from an old region server to /usr/local and setting the Hadoop path variables (/etc/profile.d/hadoop.sh) fixed it.
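A minimal sketch of what that profile script might contain (the contents are an assumption; only the file path comes from the text above):

# /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin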

$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2017-10-13 15:16:59,739 WARN  [main] bzip2.Bzip2Factory (Bzip2Factory.java:isNativeBzip2Loaded(73)) - Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
2017-10-13 15:16:29,744 INFO  [main] zlib.ZlibFactory (ZlibFactory.java:(48)) - Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
zlib:   true /lib64/libz.so.1
snappy: false
lz4:    true revision:43
bzip2:  false

Root cause: HBase is built on top of Hadoop, and some of its more advanced features still call into Hadoop's interfaces, so a compatible Hadoop environment must be available wherever it runs.

References:
Appendix A: Compression and Data Block Encoding In HBase
Hadoop Unable to load native-hadoop library for your platform warning
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

19. A datanode exits unexpectedly or refuses to start when one of its data disks fails

The log:
...
2017-11-02 10:42:03,361 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1500672018;bpid=BP-1591618693-192.168.20.125-1431793334085;lv=-47;nsInfo=lv=-47;cid=CID-4227a144-0f0a-4034-a71d-e7c7ebcb13bf;nsid=1500672018;c=0;bpid=BP-1591618693-192.168.20.125-1431793334085
2017-11-02 10:42:03,380 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1591618693-192.168.20.125-1431793334085 (storage id DS-1034782909-192.168.20.51-50010-1489925973516) service to nameNode.hadoop1/192.168.20.125:9000
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 5, volumes configured: 6, volumes failed: 1, volume failures tolerated: 0
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.(FsDatasetImpl.java:201)
...
2017-11-02 10:42:05,482 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-11-02 10:42:05,485 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-11-02 10:42:05,488 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dataNode11.hadoop1/192.168.20.51
************************************************************/

This datanode has 6 disks, one of which failed, causing the process to abort and refuse to start. After editing hdfs-site.xml to remove that disk's mount point from the configuration, the node could be started again, minus that capacity. Why wouldn't it start?

dfs.datanode.failed.volumes.tolerated - By default this is set to 0 in hdfs-site.xml

Its meaning: "The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown." In other words, the number of failed disks a datanode will tolerate, 0 by default.

Disk failures are common in a Hadoop cluster. At startup the datanode opens the directories configured under dfs.datanode.data.dir (used to store blocks); if some of them are unusable and their number exceeds the configured tolerance, the datanode fails to start. Here the tolerated value was 0 while one volume had failed (as the log above shows), so the check in the code kicked in and startup was refused.


Reposted from: https://www.cnblogs.com/yjt1993/p/10557348.html
