OkHttp Source Code: The Socket Connection Pool

Of everything OkHttp manages, socket connections are arguably the most expensive resource, so to save the cost of establishing and releasing TCP connections and of the TLS handshake, a socket connection pool is essential. When studying the pool, we will focus on two questions:

  • What are the criteria for reusing a socket?
  • When does a socket get closed?

OkHttp's connection pool lives in ConnectionPool. Let's start with its overall structure:

public final class ConnectionPool {
  /**
   * Background threads are used to cleanup expired connections. There will be at most a single
   * thread running per connection pool. The thread pool executor permits the pool itself to be
   * garbage collected.
   */
  private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
      Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
      new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));

  /** The maximum number of idle connections for each address. */
  private final int maxIdleConnections;
  /** How long an idle socket is kept alive before it becomes eligible for eviction. */
  private final long keepAliveDurationNs;
  /** The task that cleans up idle sockets. */
  private final Runnable cleanupRunnable; // implementation omitted here
  /** The core structure that stores the pooled connections. */
  private final Deque<RealConnection> connections = new ArrayDeque<>();
  final RouteDatabase routeDatabase = new RouteDatabase();
  boolean cleanupRunning;
}

As you can see, the pooled connections are stored in a Deque<RealConnection>; a RealConnection can be understood as a wrapper around a socket. Note that lookups against the connection pool are never a hot path, so a List would work just as well here without any meaningful impact.
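
As a point of reference (a usage sketch, not part of the source being analyzed), maxIdleConnections and keepAliveDurationNs come from the public ConnectionPool constructor, and an application can supply its own pool through OkHttpClient.Builder:

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class PoolConfig {
  public static void main(String[] args) {
    // At most 5 idle connections, each kept alive for 5 minutes while idle
    // (these happen to be OkHttp's defaults).
    ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);

    OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();

    System.out.println("idle connections: " + client.connectionPool().idleConnectionCount());
  }
}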

The get operation (reusing a pooled connection)

@Nullable RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection, true);
        return connection;
      }
    }
    return null;
  }

As you can see, get simply iterates over all the connections and checks whether one of them can be reused. The reuse check, isEligible, lives in RealConnection:

public boolean isEligible(Address address, @Nullable Route route) {
    // If this connection is not accepting new streams, we're done.
    if (allocations.size() >= allocationLimit || noNewStreams) {
      return false;
    }

    // If the non-host fields of the address don't overlap, we're done.
    if (!Internal.instance.equalsNonHost(this.route.address(), address)) {
      return false;
    }

    // If the host exactly matches, we're done: this connection can carry the address.
    if (address.url().host().equals(this.route().address().url().host())) {
      return true; // This connection is a perfect match.
    }

    // At this point we don't have a hostname match. But we may still be able to carry the request
    // if our connection coalescing requirements are met. See also:
    // https://hpbn.co/optimizing-application-delivery/#eliminate-domain-sharding
    // https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/

    // 1. This connection must be HTTP/2.
    if (http2Connection == null) return false;

    // 2. The routes must share an IP address. This requires us to have a DNS address for both
    // hosts, which only happens after route planning. We can't coalesce connections that use a
    // proxy, since proxies don't tell us the origin server's IP address.
    if (route == null) return false;
    if (route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (this.route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (!this.route.socketAddress().equals(route.socketAddress())) return false;

    // 3. This connection's server certificate must cover the new host.
    if (route.address().hostnameVerifier() != OkHostnameVerifier.INSTANCE) return false;
    if (!supportsUrl(address.url())) return false;

    // 4. Certificate pinning must match the host.
    try {
      address.certificatePinner().check(address.url().host(), handshake().peerCertificates());
    } catch (SSLPeerUnverifiedException e) {
      return false;
    }

    return true; // The caller's address can be carried by this connection.
  }

This method checks several reuse rules; let's analyze each of them in turn.

Rule 1: the number of streams must be within the limit

If a socket is reused while multiple requests run concurrently, several threads will inevitably end up writing to the same socket at the same time. Is that allowed? It depends on the protocol:

HTTP/1.x

Under HTTP/1.x all requests are sequential. Even with pipelining (requests can be sent back to back, but responses still come back in the order the requests were sent) this remains true. So at any moment a socket can carry only one stream, which means a socket that is currently writing cannot be reused by another request.

HTTP/2

HTTP/2 uses multiplexing, which allows the same socket to carry multiple streams at the same time; each stream has an id and its frames are reassembled on the other side. So under HTTP/2 a socket that is currently transferring data can still be reused.

To distinguish the two cases, OkHttp records which streams are using each socket (the allocations list) and caps how many streams a socket may carry at once (allocationLimit). For HTTP/1.x the cap is a single stream at a time; for HTTP/2 it comes from the peer's max-concurrent-streams setting, so in practice it is effectively unbounded:

if (allocations.size() >= allocationLimit || noNewStreams) {
  return false;
}

With that in mind, the if check above makes sense. As for noNewStreams, it is set in certain special situations to stop a connection from ever being reused, for example when the server asks for the connection to be closed; such a connection obviously can't be reused either.

Summary

Under HTTP/1.x a socket can be reused only when no other stream is currently reading from or writing to it; HTTP/2 does not impose that one-stream-at-a-time restriction.
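
To make the rule concrete, here is a minimal, self-contained sketch. It does not use OkHttp's real classes (PooledConnection, activeStreams and so on are made up for illustration); it only models how a per-connection stream limit decides eligibility:

// Illustrative sketch only: models the stream-count rule, not OkHttp's real types.
public class StreamLimitSketch {
  static class PooledConnection {
    final int allocationLimit;   // 1 for HTTP/1.x; the peer's max concurrent streams for HTTP/2
    int activeStreams;           // streams currently allocated to this connection
    boolean noNewStreams;        // set when the connection must not carry new streams

    PooledConnection(int allocationLimit) { this.allocationLimit = allocationLimit; }

    boolean canAcceptNewStream() {
      return !noNewStreams && activeStreams < allocationLimit;
    }
  }

  public static void main(String[] args) {
    PooledConnection http1 = new PooledConnection(1);
    http1.activeStreams = 1;                        // one request already in flight
    System.out.println(http1.canAcceptNewStream()); // false: HTTP/1.x carries one stream at a time

    PooledConnection http2 = new PooledConnection(100);
    http2.activeStreams = 1;
    System.out.println(http2.canAcceptNewStream()); // true: HTTP/2 multiplexes streams
  }
}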

Rule 2: the HTTP and SSL configuration must match

To reuse an HTTP connection, the two requests must share exactly the same HTTP configuration and SSL configuration:

// If the non-host fields of the address don't overlap, we're done.
if (!Internal.instance.equalsNonHost(this.route.address(), address)) {
  return false;
}

The equalsNonHost method is implemented as follows:

 boolean equalsNonHost(Address that) {
    return this.dns.equals(that.dns)
        && this.proxyAuthenticator.equals(that.proxyAuthenticator)
        && this.protocols.equals(that.protocols)
        && this.connectionSpecs.equals(that.connectionSpecs)
        && this.proxySelector.equals(that.proxySelector)
        && equal(this.proxy, that.proxy)
        && equal(this.sslSocketFactory, that.sslSocketFactory)
        && equal(this.hostnameVerifier, that.hostnameVerifier)
        && equal(this.certificatePinner, that.certificatePinner)
        && this.url().port() == that.url().port();
  }

Each of the individual fields compared above is worth studying on its own; for brevity we won't expand on them here.
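
As a practical aside (a sketch, not taken from the source being analyzed): clients created via newBuilder() share the same ConnectionPool instance, but if any of the fields compared above differs, such as the proxy in this hypothetical example (the 127.0.0.1:8888 address is a placeholder), equalsNonHost returns false and the two clients' requests never share a pooled connection:

import java.net.InetSocketAddress;
import java.net.Proxy;
import okhttp3.OkHttpClient;

public class EqualsNonHostDemo {
  public static void main(String[] args) {
    OkHttpClient direct = new OkHttpClient();

    // newBuilder() copies the client, including a reference to the same ConnectionPool.
    // Because the proxy differs, equalsNonHost() fails for addresses built by this client,
    // so its requests never reuse connections created through `direct`.
    OkHttpClient proxied = direct.newBuilder()
        .proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress("127.0.0.1", 8888)))
        .build();

    System.out.println("same pool: " + (direct.connectionPool() == proxied.connectionPool()));
  }
}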

Rule 3: the host must match

// If the host exactly matches, we're done: this connection can carry the address.
if (address.url().host().equals(this.route().address().url().host())) {
  return true; // This connection is a perfect match.
}

This one is self-evident. Once the three rules above are satisfied, we can safely reuse the connection.

Rule 4: a special case

If all three rules above are met, the connection is of course a perfect candidate for reuse. But there is one more situation where reuse is possible: multiple hosts resolving to the same IP address.
In the HTTP/1.x era, some sites worked around the browser limit of roughly 6-8 connections per domain by mapping several different domain names to the same IP address (domain sharding), so the browser could open many more connections and load the page faster. With HTTP/2 and multiplexing, a single connection can satisfy the same needs, so in this special case we should coalesce onto the existing connection instead of opening a new one. The conditions for doing so are fairly strict, though.

Coalescing is only considered for HTTP/2

// 1. This connection must be HTTP/2.
if (http2Connection == null) return false;

Coalescing is only possible without a proxy

// 2. The routes must share an IP address. This requires us to have a DNS address for both
// hosts, which only happens after route planning. We can't coalesce connections that use a
// proxy, since proxies don't tell us the origin server's IP address.
if (route == null) return false;
if (route.proxy().type() != Proxy.Type.DIRECT) return false;
if (this.route.proxy().type() != Proxy.Type.DIRECT) return false;

If a proxy is configured, we can't know the origin server's IP address, so we can't tell whether this host shares an IP address with the existing connection, and therefore we can't coalesce.

Coalescing requires the same IP address

   if (!this.route.socketAddress().equals(route.socketAddress())) return false;

This one needs no explanation: if the IP addresses differ, the socket connection obviously cannot be shared.

Certificate verification must be the default and cover the new host

// 3. This connection's server certificate must cover the new host.
if (route.address().hostnameVerifier() != OkHostnameVerifier.INSTANCE) return false;
if (!supportsUrl(address.url())) return false;

The first check requires the hostnameVerifier to be the default OkHostnameVerifier. If the app supplied its own implementation of that interface, we couldn't guarantee it behaves the same way it did for the existing connection, so we can't coalesce.
The second check requires the new host to actually pass certificate verification against the existing connection's handshake:

public boolean supportsUrl(HttpUrl url) {
    if (url.port() != route.address().url().port()) {
      return false; // Port mismatch.
    }

    if (!url.host().equals(route.address().url().host())) {
      // We have a host mismatch. But if the certificate matches, we're still good.
      return handshake != null && OkHostnameVerifier.INSTANCE.verify(
          url.host(), (X509Certificate) handshake.peerCertificates().get(0));
    }

    return true; // Success. The URL is supported.
  }

This is because reusing the socket effectively skips the HTTPS handshake for the new host, which would be far too dangerous without passing certificate verification.

Certificate pinning must pass before coalescing

To defend against man-in-the-middle attacks over HTTPS, a client can pin the certificate the server presented on a successful connection. If a man in the middle later impersonates the server, the certificate it presents won't match the pinned one and verification fails. The last rule makes sure this pinning check passes:

// 4. Certificate pinning must match the host.
try {
  address.certificatePinner().check(address.url().host(), handshake().peerCertificates());
} catch (SSLPeerUnverifiedException e) {
  return false;
}

Summary

Once all of the checks above pass, a connection can also be reused in this special case: different hosts that resolve to the same IP address.
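
If you want to see coalescing (or plain reuse) in action, one way is to register an EventListener (available in recent OkHttp 3.x versions) and watch connectionAcquired. The sketch below assumes two hypothetical hostnames that resolve to the same HTTP/2 server; substitute your own:

import java.io.IOException;
import okhttp3.Call;
import okhttp3.Connection;
import okhttp3.EventListener;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class ReuseObserver {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient.Builder()
        .eventListener(new EventListener() {
          @Override public void connectionAcquired(Call call, Connection connection) {
            // The same Connection object appearing for different URLs means the pool
            // reused (or coalesced onto) a single socket.
            System.out.println(call.request().url() + " -> " + connection);
          }
        })
        .build();

    // Hypothetical hosts: for coalescing they must share an IP and certificate over HTTP/2.
    String[] urls = { "https://example.com/", "https://www.example.com/" };
    for (String url : urls) {
      try (Response response = client.newCall(new Request.Builder().url(url).build()).execute()) {
        System.out.println(response.code());
      }
    }
  }
}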

Cleaning up the connection pool

Any socket connection pool needs its own cleanup mechanism; otherwise, if no requests are made for a long time, sockets would stay held for nothing. OkHttp performs this cleanup on a dedicated thread.

When does the cleanup work start:

 void put(RealConnection connection) {
    assert (Thread.holdsLock(this));
    if (!cleanupRunning) {
      cleanupRunning = true;
      executor.execute(cleanupRunnable);
    }
    connections.add(connection);
  }

Every time a new connection is put into the pool, put() checks whether the cleanup task needs to be started; in other words, the task is not launched on every put, only when it is not already running. Now let's look at the actual cleanup logic:

private final Runnable cleanupRunnable = new Runnable() {
    @Override public void run() {
      while (true) {
        long waitNanos = cleanup(System.nanoTime());
        if (waitNanos == -1) return;
        if (waitNanos > 0) {
          long waitMillis = waitNanos / 1000000L;
          waitNanos -= (waitMillis * 1000000L);
          synchronized (ConnectionPool.this) {
            try {
              ConnectionPool.this.wait(waitMillis, (int) waitNanos);
            } catch (InterruptedException ignored) {
            }
          }
        }
      }
    }
  };

This alone doesn't reveal much, so let's move on to the cleanup() method:

long cleanup(long now) {
    int inUseConnectionCount = 0;
    int idleConnectionCount = 0;
    RealConnection longestIdleConnection = null;
    long longestIdleDurationNs = Long.MIN_VALUE;

    // Find either a connection to evict, or the time that the next eviction is due.
    synchronized (this) {
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();

        // If the connection is in use, keep searching.
        if (pruneAndGetAllocationCount(connection, now) > 0) {
          inUseConnectionCount++;
          continue;
        }

        idleConnectionCount++;

        // If the connection is ready to be evicted, we're done.
        long idleDurationNs = now - connection.idleAtNanos;
        if (idleDurationNs > longestIdleDurationNs) {
          longestIdleDurationNs = idleDurationNs;
          longestIdleConnection = connection;
        }
      }

      if (longestIdleDurationNs >= this.keepAliveDurationNs
          || idleConnectionCount > this.maxIdleConnections) {
        // We've found a connection to evict. Remove it from the list, then close it below (outside
        // of the synchronized block).
        connections.remove(longestIdleConnection);
      } else if (idleConnectionCount > 0) {
        // A connection will be ready to evict soon.
        return keepAliveDurationNs - longestIdleDurationNs;
      } else if (inUseConnectionCount > 0) {
        // All connections are in use. It'll be at least the keep alive duration 'til we run again.
        return keepAliveDurationNs;
      } else {
        // No connections, idle or in use.
        cleanupRunning = false;
        return -1;
      }
    }
    closeQuietly(longestIdleConnection.socket());

    // Cleanup again immediately.
    return 0;
  }

The code looks long, but the idea is simple: iterate over all current connections, skip the ones in use, and among the idle ones find the connection that has been idle the longest. If it has exceeded the keep-alive time (or if there are more idle connections than maxIdleConnections allows), close its socket. Otherwise, return the time remaining until the longest-idle connection reaches its keep-alive deadline.
Back in cleanupRunnable, the thread waits for that amount of time and then wakes up to run the cleanup again.

An example: suppose the keep-alive time is 30 minutes and there are 5 connections; c1 and c2 are in use, c3 has been idle for 40 minutes, c4 for 20 minutes and c5 for 25 minutes.
One pass of cleanup() closes c3 and returns 0, so cleanupRunnable immediately starts the next pass. This time the connection closest to its keep-alive deadline is c5, so cleanup() returns a wait of 5 minutes; cleanupRunnable waits for 5 minutes and then runs the next cleanup pass.
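The returned wait is simply keepAliveDurationNs minus the longest idle duration. A tiny sketch of the arithmetic for the second pass of this example (the durations are the made-up ones above):

import java.util.concurrent.TimeUnit;

public class CleanupWaitExample {
  public static void main(String[] args) {
    long keepAliveNs = TimeUnit.MINUTES.toNanos(30);

    // Second pass of the example: c4 idle 20 min, c5 idle 25 min (c3 was already evicted).
    long longestIdleNs = TimeUnit.MINUTES.toNanos(25); // c5 is the longest-idle connection
    long waitNanos = keepAliveNs - longestIdleNs;

    System.out.println("wait " + TimeUnit.NANOSECONDS.toMinutes(waitNanos) + " minutes"); // 5
  }
}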

Summary

In theory, as long as the pool contains at least one connection the cleanup thread keeps running; it only stops once every connection has been released.
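
If you want to inspect or drain the pool from application code rather than waiting for the cleanup thread, ConnectionPool also exposes a few public methods; a brief usage sketch:

import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class PoolMaintenance {
  public static void main(String[] args) {
    OkHttpClient client = new OkHttpClient();
    ConnectionPool pool = client.connectionPool();

    System.out.println("total connections: " + pool.connectionCount());
    System.out.println("idle connections:  " + pool.idleConnectionCount());

    // Close all idle connections right away instead of waiting for cleanupRunnable to evict them.
    pool.evictAll();
  }
}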
