java.nio.channels.SocketChannel[connection-pending remote=/xx.xx.xx.xx:9866]

Table of Contents

Background

Problem Description

Solution


Background

The CDH cluster is deployed on an internal network. An external client needs to submit jobs to the cluster's YARN, but the external and internal networks are not directly routable. To bridge them, each internal host was bound to a floating IP, and network connectivity was opened between the external client and those floating IPs.

Problem Description

The external client submits jobs through the floating IPs. Once a job reaches YARN, the cluster's responses reference nodes by their internal IPs, which the client cannot reach, so the job fails. The specific error is as follows:

[INFO] 2023-09-20 16:44:50.515  - [taskAppId=TASK-12637-0-7787]:[138] -  -> 2023-09-20 16:44:49,952 INFO  org.apache.hadoop.hdfs.DataStreamer                          [] - Exception in createBlockOutputStream
	org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.17.0.8:9866]
		at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:259) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1692) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1648) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
	2023-09-20 16:44:49,964 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Abandoning BP-1309512692-172.17.0.6-1691719706686:blk_1073803089_62280
	2023-09-20 16:44:49,980 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Excluding datanode DatanodeInfoWithStorage[172.17.0.8:9866,DS-961a5b2e-c2a1-46a3-bfdd-3910d2570bb3,DISK]
[INFO] 2023-09-20 16:45:50.524  - [taskAppId=TASK-12637-0-7787]:[138] -  -> 2023-09-20 16:45:50,043 INFO  org.apache.hadoop.hdfs.DataStreamer                          [] - Exception in createBlockOutputStream
	org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.17.0.6:9866]
		at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:259) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1692) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1648) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
	2023-09-20 16:45:50,044 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Abandoning BP-1309512692-172.17.0.6-1691719706686:blk_1073803091_62282
	2023-09-20 16:45:50,053 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Excluding datanode DatanodeInfoWithStorage[172.17.0.6:9866,DS-3a03d2ae-c218-44f6-80b6-253cb6ada508,DISK]
[INFO] 2023-09-20 16:46:50.415  - [taskAppId=TASK-12637-0-7787]:[127] - shell exit status code:1
[ERROR] 2023-09-20 16:46:50.415  - [taskAppId=TASK-12637-0-7787]:[137] - process has failure , exitStatusCode : 1 , ready to kill ...
[INFO] 2023-09-20 16:46:50.534  - [taskAppId=TASK-12637-0-7787]:[138] -  -> 2023-09-20 16:46:50,083 INFO  org.apache.hadoop.hdfs.DataStreamer                          [] - Exception in createBlockOutputStream
	org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.17.0.4:9866]
		at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:259) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1692) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1648) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
	2023-09-20 16:46:50,084 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Abandoning BP-1309512692-172.17.0.6-1691719706686:blk_1073803093_62284
	2023-09-20 16:46:50,091 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - Excluding datanode DatanodeInfoWithStorage[172.17.0.4:9866,DS-5363866a-d143-42f7-85bb-a8236e0bbc41,DISK]
	2023-09-20 16:46:50,105 WARN  org.apache.hadoop.hdfs.DataStreamer                          [] - DataStreamer Exception
	org.apache.hadoop.ipc.RemoteException: File /user/hdfs/.flink/application_1691720545069_0007/chunjun/bin/chunjun-docker.sh could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
		at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2102)
		at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
		at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2673)
		at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
		at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
		at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
		at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
		at java.security.AccessController.doPrivileged(Native Method)
		at javax.security.auth.Subject.doAs(Subject.java:422)
		at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
		at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
	
		at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.ipc.Client.call(Client.java:1435) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.ipc.Client.call(Client.java:1345) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at com.sun.proxy.$Proxy30.addBlock(Unknown Source) ~[?:?]
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_211]
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_211]
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_211]
		at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_211]
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at com.sun.proxy.$Proxy31.addBlock(Unknown Source) ~[?:?]
		at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1838) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1638) ~[flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
		at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704) [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:2.8.3-10.0]
	2023-09-20 16:46:50,112 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Error while running the Flink session.

Solution

  • Approach 1

Configure host mappings on the external client, mapping the internal IPs to the floating IPs. After testing, this approach turned out not to be feasible.

  • Approach 2

Modify the HDFS configuration so that clients connect to DataNodes by hostname instead of by IP:

  
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
  
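With dfs.client.use.datanode.hostname set to true on the client side, the HDFS client asks the NameNode for DataNode hostnames instead of IP addresses and resolves those names itself. For this to work, the external client must be able to resolve each DataNode hostname to its floating IP, for example via /etc/hosts. A minimal sketch, assuming hypothetical DataNode hostnames (the hostnames and floating IPs below are illustrative placeholders, not values from this cluster):

```text
# /etc/hosts on the external client
# cdh-node1..3 and xx.xx.xx.* are placeholders for the real
# DataNode hostnames and their floating IPs
xx.xx.xx.6   cdh-node1
xx.xx.xx.8   cdh-node2
xx.xx.xx.4   cdh-node3
```

This keeps the change entirely on the client side: the cluster still advertises its DataNodes normally, and only the client's name resolution decides which address is dialed.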
