【Overview】
For an RM or NN deployed in HA mode, a client request sent to the standby node fails because the node is either unreachable or refuses to serve the request, and the client then turns to the active node instead. This switch happens automatically inside the Hadoop client, without the upper-layer business code having to be aware of it (in essence, the client sends the request to one of the nodes and, on failure, simply tries the other one).
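For reference, here is a minimal sketch of what such an HA-aware client looks like. The rm ids and host names follow the ones that appear in the logs later in this article; everything else is purely illustrative:
// Hypothetical demo class: the caller issues a single getApplicationReport() call,
// and the failover between rm1 and rm2 is handled entirely inside the Hadoop client.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class HaYarnClientDemo {
  public static void main(String[] args) throws Exception {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true);  // yarn.resourcemanager.ha.enabled
    conf.set(YarnConfiguration.RM_HA_IDS, "rm1,rm2");        // yarn.resourcemanager.ha.rm-ids
    conf.set("yarn.resourcemanager.hostname.rm1", "resourcemanager-0");
    conf.set("yarn.resourcemanager.hostname.rm2", "resourcemanager-1");

    YarnClient client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();
    try {
      // whichever RM is currently active ends up serving this request
      ApplicationId appId = ApplicationId.newInstance(1655720645233L, 1);
      ApplicationReport report = client.getApplicationReport(appId);
      System.out.println(report.getYarnApplicationState());
    } finally {
      client.stop();
    }
  }
}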
Last week I debugged a related problem: with the cluster perfectly healthy, requests to both nodes failed, kept failing, and the client ended up stuck in an endless loop. The root cause turned out to be Hadoop's internal RPC mechanism, and the problem exists across the 2.x releases. This article takes a close look at it.
【Problem Symptoms】
One day, a colleague from the upper-layer business team reported an issue: the yarn client failed to fetch the status of a certain application.
After learning the symptom, I first checked the logs of both RMs and found no error messages; I then tried to fetch the status of the "problematic" application via the command line and via a yarn client, and both returned it correctly.
After another round with the colleague, it turned out that only this application misbehaved; all other applications could be queried correctly. He also provided the error output produced when querying this application:
22/06/20 20:48:06 DEBUG ipc.Client: IPC Client (1291113768) connection to resourcemanager-0/172.16.55.7:8032 from 28573: closed
22/06/20 20:48:06 TRACE ipc.ProtobufRpcEngine: 1: Exception <- resourcemanager-0/172.16.55.7:8032: getApplicationReport {java.net.ConnectException: Call From pvs285731713/10.33.72.132 to resourcemanager-0:8032 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused}
22/06/20 20:48:06 TRACE retry.RetryInvocationHandler: Call#-2: ApplicationBaseProtocol.getApplicationReport([application_id { id: 1 cluster_timestamp: 1655720645233 }])
java.net.ConnectException: Call From pvs285731713/10.33.72.132 to resourcemanager-0:8032 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
at org.apache.hadoop.ipc.Client.call(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1394)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy7.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:236)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy8.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:509)
at Test.main(Test.java:27)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:814)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:423)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1610)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
... 16 more
22/06/20 20:48:06 INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From pvs285731713/10.33.72.132 to resourcemanager-0:8032 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1 after 4 failover attempts. Trying to failover after sleeping for 1017ms.
22/06/20 20:48:06 TRACE retry.RetryInvocationHandler: #-2 processRetryInfo: retryInfo=RetryInfo{retryTime=5131870305, delay=1017, action=RetryAction(action=FAILOVER_AND_RETRY, delayMillis=1017, reason=null), expectedFailoverCount=12, failException=null}, waitTime=1016
22/06/20 20:48:08 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
22/06/20 20:48:08 TRACE ipc.ProtobufRpcEngine: 1: Call -> resourcemanager-1:8032: getApplicationReport {application_id { id: 1 cluster_timestamp: 1655720645233 }}
22/06/20 20:48:08 TRACE ipc.ProtobufRpcEngine: 1: Exception <- resourcemanager-1:8032: getApplicationReport {java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "resourcemanager-1":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost}
22/06/20 20:48:08 TRACE retry.RetryInvocationHandler: Call#-2: ApplicationBaseProtocol.getApplicationReport([application_id { id: 1 cluster_timestamp: 1655720645233 }])
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "resourcemanager-1":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:770)
at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:461)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1590)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.Client.call(Client.java:1394)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy7.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:236)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy8.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:509)
at Test.main(Test.java:27)
Caused by: java.net.UnknownHostException
... 19 more
22/06/20 20:48:08 INFO retry.RetryInvocationHandler: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "resourcemanager-1":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost, while invoking ApplicationClientProtocolPBClientImpl.getApplicationReport over rm2 after 5 failover attempts. Trying to failover after sleeping for 2978ms.
【Problem Analysis】
Putting the above together: the RMs logged no errors and the application's status could be fetched correctly from the command line, so a server-side problem was essentially ruled out; on the business side only this one application could not be queried while all the others were fine; and the business code queries each application with its own dedicated yarn client object. At this point it was fairly certain the problem sat on the client side.
Looking again at the error log above: RM1 was standby and not listening on port 8032, so the failed connection attempt to RM1 was perfectly normal; the client then tried to connect to RM2, but that attempt threw an UnknownHostException, which pushed it back to RM1, and so on, round and round, producing the observed behavior. The UnknownHostException was therefore the prime suspect behind the failing requests.
As usual, let's walk through the source code and the interaction flow to analyze the problem further.
First, when the client constructs a connection object, it checks whether the server address has already been resolved and, if not, throws an exception straight away (this is exactly where the exception in the log above came from):
public Connection(ConnectionId remoteId, int serviceClass) throws IOException {
  this.remoteId = remoteId;
  this.server = remoteId.getAddress();
  if (server.isUnresolved()) {
    throw NetUtils.wrapException(
        server.getHostName(),
        server.getPort(),
        null,
        0,
        new UnknownHostException());
  }
  ...
}
Second, when the server side is deployed in HA mode, the client's RPC proxy layer adds a retry mechanism: if a single RPC call fails with an exception, a callback switches over to the other RM, obtains the corresponding proxy object, and continues the request through it.
When obtaining the proxy object, the provider actually creates one proxy per RM and caches it in a map; subsequent calls simply take it from the map.
// ConfiguredRMFailoverProxyProvider.java
public synchronized ProxyInfo<T> getProxy() {
  String rmId = rmServiceIds[currentProxyIndex];
  T current = proxies.get(rmId);
  if (current == null) {
    current = getProxyInternal();
    proxies.put(rmId, current);
  }
  return new ProxyInfo<T>(current, rmId);
}
When a proxy object is created for the first time, the server address is resolved; if it cannot be resolved, an unresolved socket address is created and stored in the proxy object (note: this is exactly the socket address later used to establish the connection).
// ConfiguredRMFailoverProxyProvider.java
// create the proxy object
protected T getProxyInternal() {
  try {
    // resolve the RM address
    final InetSocketAddress rmAddress = rmProxy.getRMAddress(conf, protocol);
    return rmProxy.getProxy(conf, protocol, rmAddress);
  } catch (IOException ioe) {
    LOG.error(
        "Unable to create proxy to the ResourceManager " +
        rmServiceIds[currentProxyIndex], ioe);
    return null;
  }
}

// ClientRMProxy.java
public InetSocketAddress getRMAddress(YarnConfiguration conf, Class<?> protocol)
    throws IOException {
  if (protocol == ApplicationClientProtocol.class) {
    return conf.getSocketAddr(
        YarnConfiguration.RM_ADDRESS,
        YarnConfiguration.DEFAULT_RM_ADDRESS,
        YarnConfiguration.DEFAULT_RM_PORT);
  }
  ...
}

// Configuration.java
public InetSocketAddress getSocketAddr(
    String name, String defaultAddress, int defaultPort) {
  final String address = getTrimmed(name, defaultAddress);
  return NetUtils.createSocketAddr(address, defaultPort, name);
}

// NetUtils.java
public static InetSocketAddress createSocketAddr(String target, int defaultPort, String configName) {
  ...
  return createSocketAddrForHost(host, port);
}

public static InetSocketAddress createSocketAddrForHost(String host, int port) {
  String staticHost = getStaticResolution(host);
  String resolveHost = (staticHost != null) ? staticHost : host;
  InetSocketAddress addr;
  try {
    InetAddress iaddr = SecurityUtil.getByName(resolveHost);
    // if there is a static entry for the host, make the returned
    // address look like the original given host
    if (staticHost != null) {
      iaddr = InetAddress.getByAddress(host, iaddr.getAddress());
    }
    addr = new InetSocketAddress(iaddr, port);
  } catch (UnknownHostException e) {
    // swallow the exception and create an unresolved socket address
    addr = InetSocketAddress.createUnresolved(host, port);
  }
  return addr;
}
At this point the cause can be pinned down: the server address is resolved and stored only when the proxy object is first created, and that proxy object is then cached in a map and reused from then on; the actual connection setup, however, checks whether the address is resolved and throws immediately if it is not. If the RM whose address failed to resolve happens to be the active one, you get exactly this problem.
Note also that the problem is confined to a single client (yarn client) instance and does not spill over to other clients. That explains why, on the business side, only one application could not be fetched while all the others were fine, and why fetching it again from the command line or with a fresh client succeeded.
Similarly, if the business code reacted to the failure by creating a new client rather than keeping on reusing the same client object, the problem would not appear either (see the sketch below).
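A rough sketch of that kind of defensive handling, assuming a hypothetical wrapper class (the name ReportFetcher and the single-retry policy are made up for illustration): on failure, throw the old client away and retry once with a fresh one, so that the failover proxy provider rebuilds its proxies and re-resolves the RM addresses.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class ReportFetcher {
  private final Configuration conf;
  private YarnClient client;

  public ReportFetcher(Configuration conf) {
    this.conf = conf;
    this.client = newClient();
  }

  private YarnClient newClient() {
    YarnClient c = YarnClient.createYarnClient();
    c.init(conf);
    c.start();
    return c;
  }

  public ApplicationReport getReport(ApplicationId appId) throws IOException, YarnException {
    try {
      return client.getApplicationReport(appId);
    } catch (IOException e) {
      // Drop the client that may be stuck on a cached, unresolved RM address and
      // retry once with a brand-new client (and therefore brand-new proxies).
      client.stop();
      client = newClient();
      return client.getApplicationReport(appId);
    }
  }
}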
【Problem Resolution】
The fix itself is fairly simple. The problem has already been spotted in the community and a patch submitted. The change: drop the resolved-address check when the connection object is created and, at the point where the connection is actually established, throw an exception for an unresolved address and catch it to trigger a fresh resolution.
So backporting that patch is all it takes.
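To make the "resolve again at connect time" idea more tangible, here is a standalone sketch of the concept; it is only an illustration, not the actual patch, which lives inside org.apache.hadoop.ipc.Client, and the helper name below is made up:
import java.net.InetSocketAddress;

public final class AddressResolver {
  private AddressResolver() {}

  // If the address is still unresolved (DNS lookup failed when it was created),
  // attempt to resolve it again; otherwise return it unchanged. The real fix does
  // the equivalent of this right before the socket connect, instead of rejecting
  // unresolved addresses when the Connection object is constructed.
  public static InetSocketAddress resolveIfNeeded(InetSocketAddress addr) {
    if (!addr.isUnresolved()) {
      return addr;
    }
    // new InetSocketAddress(host, port) performs a fresh DNS lookup; if the host
    // still cannot be resolved, the returned address stays unresolved and the
    // caller can surface UnknownHostException as before.
    return new InetSocketAddress(addr.getHostString(), addr.getPort());
  }
}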
【Summary】
To wrap up: through one concrete case, this article walked through a problem caused by caching inside Hadoop's RPC client. There are quite a few more subtleties in Hadoop RPC, and we have hit our share of pitfalls there; we'll cover those in future posts.
That's all for this article. If you found it helpful, don't hesitate to like, share, and forward it, and feel free to add me on WeChat to chat.