Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.2.1:50010], original=[192.168.2.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Fix: add the following to Hadoop's hdfs-site.xml:


<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
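
These dfs.client.block.write.replace-datanode-on-failure.* properties are client-side settings, so they must end up in the configuration that the writing client actually loads. With the DEFAULT policy, the client tries to swap a replacement datanode into the write pipeline when one fails; on a small cluster (roughly three or fewer datanodes) there is usually no spare node to swap in, so the write aborts with the error above. Setting the policy to NEVER tells the client to continue on the remaining pipeline instead. As a minimal sketch (the NameNode URI hdfs://namenode:9000 and the output path below are placeholders, not from the original post), the same two properties can also be set programmatically on the client's Configuration:

import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplaceDatanodePolicyExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same settings as the hdfs-site.xml snippet above, applied in client code.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Placeholder NameNode address and output path for this sketch.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/pipeline-write-test.txt"))) {
            out.write("pipeline write test".getBytes(StandardCharsets.UTF_8));
        }
    }
}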
