Android Architecture: Reading the OkHttp Source Code (Part 2)

Preface

The previous article covered OkHttp's connection-pool reuse, high-concurrency dispatching, and the interceptor design, but it did not explain what each interceptor actually does inside the framework. This article focuses on the execution flow of every interceptor and how they relate to one another.

In the next article, I will hand-write a stripped-down version of OkHttp to consolidate this understanding. Without further ado, let's get started.

  Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    // Developer-defined application interceptors
    interceptors.addAll(client.interceptors());
    // RetryAndFollowUpInterceptor (retry/redirect interceptor)
    interceptors.add(retryAndFollowUpInterceptor);
    // BridgeInterceptor (bridge interceptor)
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    // CacheInterceptor (cache interceptor)
    interceptors.add(new CacheInterceptor(client.internalCache()));
    // ConnectInterceptor (connect interceptor)
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      // Developer-defined network interceptors
      interceptors.addAll(client.networkInterceptors());
    }
    // CallServerInterceptor (call-server / read-write interceptor)
    interceptors.add(new CallServerInterceptor(forWebSocket));

    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
        originalRequest, this, eventListener, client.connectTimeoutMillis(),
        client.readTimeoutMillis(), client.writeTimeoutMillis());

    return chain.proceed(originalRequest);
  }

Source Code Walkthrough

In the previous article, our walk through the source stopped right here, so this is where we pick up. From this snippet we can see that the interceptors (excluding the developer-defined ones; a quick sketch of how those are registered follows the list below) are:

  1. RetryAndFollowUpInterceptor (retry/redirect interceptor)
  2. BridgeInterceptor (bridge interceptor)
  3. CacheInterceptor (cache interceptor)
  4. ConnectInterceptor (connect interceptor)
  5. CallServerInterceptor (call-server / read-write interceptor)
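Before diving into the built-in interceptors, here is a minimal, hedged sketch (standard OkHttp 3.x builder API; the header name X-Trace-Id is just an illustrative value) of where the developer-defined interceptors in the code above come from: addInterceptor() entries run before RetryAndFollowUpInterceptor, while addNetworkInterceptor() entries are inserted right before CallServerInterceptor.

  OkHttpClient client = new OkHttpClient.Builder()
      .addInterceptor(chain -> {
        // Application interceptor: sees the original request and runs exactly once per call.
        Request traced = chain.request().newBuilder()
            .header("X-Trace-Id", "demo") // illustrative header, not part of OkHttp
            .build();
        return chain.proceed(traced);
      })
      .addNetworkInterceptor(chain -> {
        // Network interceptor: sees the request after BridgeInterceptor/CacheInterceptor,
        // and may run more than once when retries or redirects happen.
        return chain.proceed(chain.request());
      })
      .build();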

1. RetryAndFollowUpInterceptor (retry/redirect interceptor)

The first interceptor, RetryAndFollowUpInterceptor, does exactly two things: retry and redirect.

1.1 Retry


public final class RetryAndFollowUpInterceptor implements Interceptor {

    ... omitted
    @Override public Response intercept(Chain chain) throws IOException {
    ... omitted
    // StreamAllocation is important; it is covered in detail in the connect interceptor section
    StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
        createAddress(request.url()), call, eventListener, callStackTrace);
    this.streamAllocation = streamAllocation;

    int followUpCount = 0;
    Response priorResponse = null;
    while (true) {
      if (canceled) {
        streamAllocation.release();
        throw new IOException("Canceled");
      }

      Response response;
      boolean releaseConnection = true;
      try {
        response = realChain.proceed(request, streamAllocation, null, null);
        releaseConnection = false;
      } catch (RouteException e) {
        // The attempt to connect via a route failed. The request will not have been sent.
        if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
          throw e.getLastConnectException();
        }
        releaseConnection = false;
        continue;
      } catch (IOException e) {
        // An attempt to communicate with a server failed. The request may have been sent.
        boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
        if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
        releaseConnection = false;
        continue;
      } finally {
        // We're throwing an unchecked exception. Release any resources.
        if (releaseConnection) {
          streamAllocation.streamFailed(null);
          streamAllocation.release();
        }
      }
    ... omitted
     
  }
  ... omitted
}

Source analysis

When a RouteException or an IOException occurs during the request phase, the interceptor decides whether the request should be re-issued.

      catch (RouteException e) {
        // The attempt to connect via a route failed. The request will not have been sent.
        if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
          throw e.getLastConnectException();
        }
        releaseConnection = false;
        continue;
      } catch (IOException e) {
        // An attempt to communicate with a server failed. The request may have been sent.
        boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
        if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
        releaseConnection = false;
        continue;
      }

Source analysis

Both exception branches call the same method, recover(), to decide whether to retry. Let's step inside and take a look.

  private boolean recover(IOException e, StreamAllocation streamAllocation,
      boolean requestSendStarted, Request userRequest) {
    streamAllocation.streamFailed(e);

    // The application layer has forbidden retries.
    // If retries were disabled when configuring the OkHttpClient (they are enabled by default), never retry once a request fails
    if (!client.retryOnConnectionFailure()) return false;

    // We can't send the request body again.
    if (requestSendStarted && userRequest.body() instanceof UnrepeatableRequestBody) return false;

    // This exception is fatal.
    // Check whether this exception type is allowed to be retried
    if (!isRecoverable(e, requestSendStarted)) return false;

    // No more routes to attempt.
    if (!streamAllocation.hasMoreRoutes()) return false;

    // For failure recovery, use the same route selector with a new connection.
    return true;
  }

Source analysis

As long as retries have not been disabled, when one of these recoverable exceptions occurs and more routes are available, OkHttp switches to another route and retries the request. Which exceptions count as recoverable is decided in isRecoverable(). (The one-line client setting that disables this whole behavior is sketched right below.)
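As a quick aside, a minimal sketch of the client-side switch that recover() checks first (standard OkHttpClient.Builder API): turning it off means any RouteException/IOException is surfaced to the caller instead of being retried on another route.

  OkHttpClient client = new OkHttpClient.Builder()
      .retryOnConnectionFailure(false) // the default is true
      .build();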

  private boolean isRecoverable(IOException e, boolean requestSendStarted) {
    // If there was a protocol problem, don't recover.
    // A protocol exception cannot be retried
    if (e instanceof ProtocolException) {
      return false;
    }

    // If there was an interruption don't recover, but if there was a timeout connecting to a route
    // we should try the next route (if there is one).
    if (e instanceof InterruptedIOException) {
      return e instanceof SocketTimeoutException && !requestSendStarted;
    }

    // Look for known client-side or negotiation errors that are unlikely to be fixed by trying
    // again with a different route.
    // In an SSL handshake failure, a certificate problem cannot be retried
    if (e instanceof SSLHandshakeException) {
      // If the problem was a CertificateException from the X509TrustManager,
      // do not retry.
      if (e.getCause() instanceof CertificateException) {
        return false;
      }
    }
    // An SSL peer-unverified exception cannot be retried
    if (e instanceof SSLPeerUnverifiedException) {
      // e.g. a certificate pinning error.
      return false;
    }

    return true;
  }

Source analysis

From this code we can see:

  • A ProtocolException is never retried
  • An InterruptedIOException is retried only when it is a SocketTimeoutException that happened before the request started being sent (a connect timeout, so the next route can be tried); other interruptions are not retried
  • An SSLHandshakeException caused by a CertificateException is not retried
  • An SSLPeerUnverifiedException (e.g. a certificate pinning failure) is not retried
  • Everything else may be retried

1.2 Redirect

Before digging into redirects, let's first be clear about what a redirect is.

As shown in the figure below:

Take an API endpoint: when a browser accesses it, the first response comes back with status code 302, but its response headers contain a Location field whose value is the URL we actually want. The browser automatically redirects to that URL and returns the data from the new endpoint.

So the browser silently handles the redirect for us. Now let's hit the same address from Android in the most primitive way possible and see what the result looks like without any redirect handling.

 private void testHttp(){
        String path = "http://jz.yxg12.cn/old.php";
        try {
            HttpUrl httpUrl = new HttpUrl(path);
            // Build the request message: request line plus headers
            StringBuffer request = new StringBuffer();
            request.append("GET ");
            request.append(httpUrl.getFile());
            request.append(" HTTP/1.1\r\n");
            request.append("Host: "+httpUrl.getHost());
            request.append("\r\n");
            request.append("\r\n");
            // If there were a request body, it would be appended here
            // Wrap the socket
            Socket socket = new Socket();
            socket.connect(new InetSocketAddress(httpUrl.getHost(),httpUrl.getPort()));
            OutputStream os = socket.getOutputStream();
            InputStream is = socket.getInputStream();
            new Thread(){
                @Override
                public void run() {
                    HttpCodec httpCodec = new HttpCodec();
                    try {
                        // Read one line: the status line
                        String responseLine = httpCodec.readLine(is);
                        System.out.println("Response line: " + responseLine);
                        // Read the response headers
                        Map<String, String> headers = httpCodec.readHeaders(is);
                        for (Map.Entry<String, String> entry : headers.entrySet()) {
                            System.out.println(entry.getKey() + ": " + entry.getValue());
                        }
                        // Read the response body: distinguish Content-Length from chunked transfer encoding
                        if (headers.containsKey("Content-Length")) {
                            int length = Integer.valueOf(headers.get("Content-Length"));
                            byte[] bytes = httpCodec.readBytes(is, length);
                            content = new String(bytes);
                            System.out.println("Response body: " + content);
                        } else {
                            // Chunked encoding
                            String response = httpCodec.readChunked(is);
                            content = response;
                            System.out.println("Response body: " + content);
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }.start();
            // Send the request
            os.write(request.toString().getBytes());
            os.flush();
            Thread.sleep(3000);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

Output

 I/System.out: Response line: HTTP/1.1 302 Found
 I/System.out: Date: Sun, 03 Oct 2021 10:16:10 GMT
 I/System.out: Location: http://jz.yxg12.cn/newInfo.php?page=1&size=100
 I/System.out: Server: nginx
 I/System.out: Transfer-Encoding: chunked
 I/System.out: Content-Type: text/html; charset=UTF-8
 I/System.out: Connection: keep-alive
 I/System.out: Response body:

As you can see, without redirect handling the first response is simply handed back as-is; no follow-up request is made at all. Since the non-redirecting case returns the "wrong" data together with a Location header, can we assume that doing the redirect just means re-requesting the URL in that Location header? Let's read the OkHttp source with that question in mind.

public final class RetryAndFollowUpInterceptor implements Interceptor {

  ... omitted
 
 @Override public Response intercept(Chain chain) throws IOException {

    ... omitted
    // StreamAllocation is covered in detail in the connect interceptor section
    StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
    createAddress(request.url()), call, eventListener, callStackTrace);
    this.streamAllocation = streamAllocation;
    while (true) {
      if (canceled) {
        streamAllocation.release();
        throw new IOException("Canceled");
      }
      Response response;
      boolean releaseConnection = true; 
      response = realChain.proceed(request, streamAllocation, null, null);

    ... omitted
      Request followUp = followUpRequest(response, streamAllocation.route());

      if (followUp == null) {
        if (!forWebSocket) {
          streamAllocation.release();
        }
        return response;
      }
    ... omitted
      request = followUp;
      priorResponse = response;
    }
  }
... omitted
}

Source analysis

This code shows that the redirect logic calls followUpRequest(), and the Request it returns is fed back into the while loop for another round of the request. Let's see how followUpRequest() handles it.

private Request followUpRequest(Response userResponse, Route route) throws IOException {
    if (userResponse == null) throw new IllegalStateException();
    int responseCode = userResponse.code();

    final String method = userResponse.request().method();
    switch (responseCode) {
      ... omitted
      // 300 301 302 303 
      case HTTP_MULT_CHOICE:
      case HTTP_MOVED_PERM:
      case HTTP_MOVED_TEMP:
      case HTTP_SEE_OTHER:
        // Does the client allow redirects?
        if (!client.followRedirects()) return null;

        String location = userResponse.header("Location");
        if (location == null) return null;
        HttpUrl url = userResponse.request().url().resolve(location);

        // Don't follow redirects to unsupported protocols.
        if (url == null) return null;

        ... omitted

        // Most redirects don't include a request body.
        Request.Builder requestBuilder = userResponse.request().newBuilder();
        ... omitted

        return requestBuilder.url(url).build();

   ... omitted

      default:
        return null;
    }
  }

Source analysis

By now it should be clear: exactly as we guessed, when one of the redirect status codes comes back, OkHttp reads the Location header from the response and wraps its value as the URL of a new request. (If you would rather see the 3xx response yourself, redirect following can be turned off, as sketched below.)
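A minimal, hedged sketch (standard OkHttpClient.Builder API, run inside a method that may throw IOException): with followRedirects(false), followUpRequest() returns null, so the interceptor hands the 302 back to the caller and you can read the Location header yourself. The URL is the demo endpoint used earlier in this article.

  OkHttpClient client = new OkHttpClient.Builder()
      .followRedirects(false)
      .build();
  Request request = new Request.Builder()
      .url("http://jz.yxg12.cn/old.php")
      .build();
  try (Response response = client.newCall(request).execute()) {
    System.out.println(response.code());             // 302
    System.out.println(response.header("Location")); // the redirect target
  }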

2. BridgeInterceptor (bridge interceptor)

Note: you can skip the source of this class and jump straight to the analysis below it. Of course, if you want to read it, I won't stop you; that's why it is pasted here.

public final class BridgeInterceptor implements Interceptor {
  private final CookieJar cookieJar;

  public BridgeInterceptor(CookieJar cookieJar) {
    this.cookieJar = cookieJar;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    Request.Builder requestBuilder = userRequest.newBuilder();

    RequestBody body = userRequest.body();
    if (body != null) {
      MediaType contentType = body.contentType();
      if (contentType != null) {
        requestBuilder.header("Content-Type", contentType.toString());
      }

      long contentLength = body.contentLength();
      if (contentLength != -1) {
        requestBuilder.header("Content-Length", Long.toString(contentLength));
        requestBuilder.removeHeader("Transfer-Encoding");
      } else {
        requestBuilder.header("Transfer-Encoding", "chunked");
        requestBuilder.removeHeader("Content-Length");
      }
    }

    if (userRequest.header("Host") == null) {
      requestBuilder.header("Host", hostHeader(userRequest.url(), false));
    }

    if (userRequest.header("Connection") == null) {
      requestBuilder.header("Connection", "Keep-Alive");
    }

    // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
    // the transfer stream.
    boolean transparentGzip = false;
    if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
      transparentGzip = true;
      requestBuilder.header("Accept-Encoding", "gzip");
    }

    List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
    if (!cookies.isEmpty()) {
      requestBuilder.header("Cookie", cookieHeader(cookies));
    }

    if (userRequest.header("User-Agent") == null) {
      requestBuilder.header("User-Agent", Version.userAgent());
    }

    Response networkResponse = chain.proceed(requestBuilder.build());

    HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());

    Response.Builder responseBuilder = networkResponse.newBuilder()
        .request(userRequest);

    if (transparentGzip
        && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
        && HttpHeaders.hasBody(networkResponse)) {
      GzipSource responseBody = new GzipSource(networkResponse.body().source());
      Headers strippedHeaders = networkResponse.headers().newBuilder()
          .removeAll("Content-Encoding")
          .removeAll("Content-Length")
          .build();
      responseBuilder.headers(strippedHeaders);
      String contentType = networkResponse.header("Content-Type");
      responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
    }

    return responseBuilder.build();
  }

  /** Returns a 'Cookie' HTTP request header with all cookies, like {@code a=b; c=d}. */
  private String cookieHeader(List<Cookie> cookies) {
    StringBuilder cookieHeader = new StringBuilder();
    for (int i = 0, size = cookies.size(); i < size; i++) {
      if (i > 0) {
        cookieHeader.append("; ");
      }
      Cookie cookie = cookies.get(i);
      cookieHeader.append(cookie.name()).append('=').append(cookie.value());
    }
    return cookieHeader.toString();
  }
}


Source analysis

Think of this interceptor as the place where the HTTP request headers are filled in and the response headers are unwrapped. The blob of header-string concatenation shown in the figure below is done entirely by this interceptor.

It is the simplest of the interceptors, so there is not much more to say; if you want to see the headers it adds with your own eyes, the small sketch below prints them.
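A hedged sketch: a network interceptor sits after BridgeInterceptor in the chain, so chain.request() inside it already carries the headers the bridge filled in (Host, Connection, Accept-Encoding, Content-Type/Content-Length, Cookie, User-Agent). Logging them makes the header assembly described above visible.

  OkHttpClient client = new OkHttpClient.Builder()
      .addNetworkInterceptor(chain -> {
        Request wireRequest = chain.request();
        System.out.println(wireRequest.method() + " " + wireRequest.url());
        System.out.println(wireRequest.headers()); // prints one "Name: value" line per header
        return chain.proceed(wireRequest);
      })
      .build();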

3. CacheInterceptor (cache interceptor)

Brace yourself: here comes a big, hard bone to chew.

public final class CacheInterceptor implements Interceptor {
... omitted
  @Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;

    long now = System.currentTimeMillis();
    
    // Analysis point 1: CacheStrategy.Factory(...).get()
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;

    if (cache != null) {
      cache.trackResponse(strategy);
    }

    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }

    // If we're forbidden from using the network and the cache is insufficient, fail.
    // Analysis point 2: networkRequest == null && cacheResponse == null
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }

    // If we don't need the network, we're done.
    // Analysis point 3: networkRequest == null, cacheResponse != null
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }

    Response networkResponse = null;
    try {
      networkResponse = chain.proceed(networkRequest);
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }

    // If we have a cache response too, then we're doing a conditional get.
    // Analysis point 4: cacheResponse != null, networkRequest != null
    if (cacheResponse != null) {
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();

        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }
    // Analysis point 5: networkRequest != null, cacheResponse == null
    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();

    // Analysis point 6
    if (cache != null) {
      // Nothing cached yet: decide whether this response should be written to the cache
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // Offer this request to the cache.
        CacheRequest cacheRequest = cache.put(response);
        // Write to the cache
        return cacheWritingResponse(cacheRequest, response);
      }

      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }

    return response;
  }
}

Source analysis

  • Analysis point 1: for now, just think of this as getting a cached response out of a cache strategy factory (how the cache is read is covered shortly)
  • networkRequest: null if this call will not use the network, non-null if it will
  • cacheResponse: null if this call will not use the cache, non-null if it will
  • Analysis point 2: if neither the network nor the cache may be used, the call returns a 504 error response
  • Analysis point 3: if the network is not used, read the cache and return the previously cached data directly
  • Analysis point 4: if the network is used and a cached response also exists, make the network request first, then update the cache, and finally return the fresh data
  • Analysis point 5: if the network is used and there is no local cache, simply fetch the data from the network
  • Analysis point 6: once the network data arrives, decide whether it should be cached and, if so, which responses are allowed to be cached (covered shortly); note that none of this happens unless a cache has been configured on the client, as sketched right after this list
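A minimal sketch (standard okhttp3.Cache API; the directory name and the 10 MB size are arbitrary example values): the cache field in CacheInterceptor is non-null only when the caller configures one like this.

  import java.io.File;
  import okhttp3.Cache;
  import okhttp3.OkHttpClient;

  public class CacheConfig {
    public static OkHttpClient build(File cacheDir) {
      // Example: a 10 MB cache in a subdirectory of cacheDir.
      // Without .cache(...), CacheInterceptor's cache is null and every cache branch above is skipped.
      Cache cache = new Cache(new File(cacheDir, "okhttp-cache"), 10L * 1024 * 1024);
      return new OkHttpClient.Builder()
          .cache(cache)
          .build();
    }
  }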

3.1 How is the cache written?

Let's start from analysis point 6. The method that writes the cache is, unsurprisingly, cacheWritingResponse(), but before studying it, let's look at the conditions required to get into the if blocks.

The first if, if (cache != null), means the caller asked for this client to cache responses. The second, if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)), calls two methods, hasBody and isCacheable; only when both return true is the response written to the cache.

HttpHeaders.hasBody

  /** Returns true if the response must have a (possibly 0-length) body. See RFC 7231. */
  public static boolean hasBody(Response response) {
    // HEAD requests never yield a body regardless of the response headers.
    if (response.request().method().equals("HEAD")) {
      return false;
    }

    int responseCode = response.code();
    // HTTP_CONTINUE:100
    if ((responseCode < HTTP_CONTINUE || responseCode >= 200)
        && responseCode != HTTP_NO_CONTENT // HTTP_NO_CONTENT 204
                                            //HTTP_NOT_MODIFIED  304
        && responseCode != HTTP_NOT_MODIFIED ) {
      return true;
    }

    // If the Content-Length or Transfer-Encoding headers disagree with the response code, the
    // response is malformed. For best compatibility, we honor the headers.
    if (contentLength(response) != -1
        || "chunked".equalsIgnoreCase(response.header("Transfer-Encoding"))) {
      return true;
    }

    return false;
  }

Source analysis

From this method we can see:

  • A HEAD request never has a response body, so it cannot be cached
  • The status code must be outside the 1xx range (below 100 or at least 200) and must not be 204 or 304
  • Even when the status-code check fails, the response is still treated as having a body if Content-Length is set or Transfer-Encoding is chunked; the headers are honored over the status code

Now for the second method.

CacheStrategy.isCacheable


  /** Returns true if {@code response} can be stored to later serve another request. */
  public static boolean isCacheable(Response response, Request request) {
    // Always go to network for uncacheable response codes (RFC 7231 section 6.1),
    // This implementation doesn't support caching partial content.
    switch (response.code()) {
      case HTTP_OK: //200
      case HTTP_NOT_AUTHORITATIVE: //203
      case HTTP_NO_CONTENT: //204
      case HTTP_MULT_CHOICE: //300
      case HTTP_MOVED_PERM: //301
      case HTTP_NOT_FOUND: //404
      case HTTP_BAD_METHOD: //405
      case HTTP_GONE: //410
      case HTTP_REQ_TOO_LONG: //414
      case HTTP_NOT_IMPLEMENTED: //501
      case StatusLine.HTTP_PERM_REDIRECT: //308
        // These codes can be cached unless headers forbid it.
        break;

      case HTTP_MOVED_TEMP: //302
      case StatusLine.HTTP_TEMP_REDIRECT: //307
        // These codes can only be cached with the right response headers.
        // http://tools.ietf.org/html/rfc7234#section-3
        // s-maxage is not checked because OkHttp is a private cache that should ignore s-maxage.
        if (response.header("Expires") != null
            || response.cacheControl().maxAgeSeconds() != -1
            || response.cacheControl().isPublic()
            || response.cacheControl().isPrivate()) {
          break;
        }
        // Fall-through.

      default:
        // All other codes cannot be cached.
        return false;
    }

    // A 'no-store' directive on request or response prevents the response from being cached.
    return !response.cacheControl().noStore() && !request.cacheControl().noStore();
  }

Source analysis

  1. Look at the last line first: if the Cache-Control of either the request or the response contains no-store, the response cannot be cached (a request-level example of opting out this way is sketched below)
  2. Given 1 is satisfied, the redirect codes 302 and 307 may only be cached when certain response headers allow it (an Expires header, max-age, public, or private)
  3. Given 1 is satisfied, responses with the other codes listed in the switch (200, 203, 204, 300, 301, 404, 405, 410, 414, 501, 308) can always be cached; every remaining code cannot
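A minimal sketch (standard okhttp3.CacheControl API; the URL is a placeholder): attaching Cache-Control: no-store to a request makes the last line of isCacheable() return false, so the response is never written to the cache.

  Request request = new Request.Builder()
      .url("https://example.com/api") // placeholder URL
      .cacheControl(new CacheControl.Builder().noStore().build())
      .build();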

Now that all of the pre-write checks have been analyzed, we can look at how the cache is actually written.

Writing the cache (cacheWritingResponse)

  /**
   * Returns a new source that writes bytes to {@code cacheRequest} as they are read by the source
   * consumer. This is careful to discard bytes left over when the stream is closed; otherwise we
   * may never exhaust the source stream and therefore not complete the cached response.
   */
  private Response cacheWritingResponse(final CacheRequest cacheRequest, Response response)
      throws IOException {
    // Some apps return a null body; for compatibility we treat that like a null cache request.
    if (cacheRequest == null) return response;
    Sink cacheBodyUnbuffered = cacheRequest.body();
    if (cacheBodyUnbuffered == null) return response;
    // The data to be cached: the network response body's source
    final BufferedSource source = response.body().source();
    // Okio, another read/write-stream framework from Square, wraps the raw cache sink
    final BufferedSink cacheBody = Okio.buffer(cacheBodyUnbuffered);
    
    // Analysis point 7
    Source cacheWritingSource = new Source() {
      boolean cacheRequestClosed;

      @Override public long read(Buffer sink, long byteCount) throws IOException {
        long bytesRead;
        try {
          // Read from the upstream source; bytesRead is the number of bytes just read
          bytesRead = source.read(sink, byteCount);
        } catch (IOException e) {
          if (!cacheRequestClosed) {
            cacheRequestClosed = true;
            cacheRequest.abort(); // Failed to write a complete cache response.
          }
          throw e;
        }
        // -1 means the source is exhausted: close the cache body, the cached response is complete
        if (bytesRead == -1) {
          if (!cacheRequestClosed) {
            cacheRequestClosed = true;
            cacheBody.close(); // The cache response is complete!
          }
          return -1;
        }

        sink.copyTo(cacheBody.buffer(), sink.size() - bytesRead, bytesRead);
        cacheBody.emitCompleteSegments();
        return bytesRead;
      }

      @Override public Timeout timeout() {
        return source.timeout();
      }

      @Override public void close() throws IOException {
        if (!cacheRequestClosed
            && !discard(this, HttpCodec.DISCARD_STREAM_TIMEOUT_MILLIS, MILLISECONDS)) {
          cacheRequestClosed = true;
          cacheRequest.abort();
        }
        source.close();
      }
    };

    String contentType = response.header("Content-Type");
    long contentLength = response.body().contentLength();
    // Analysis point 8
    return response.newBuilder()
        .body(new RealResponseBody(contentType, contentLength, Okio.buffer(cacheWritingSource)))
        .build();
  }

Source analysis

It opens with a couple of standard null checks. Analysis points 7 and 8 both rely on Okio, the other read/write-stream framework from Square, which we won't dissect here; it is enough to understand that Okio writes the body to the disk cache as it is being read, and the response rebuilt around that stream is what gets returned. (A tiny standalone sketch of this read-while-copying pattern follows.)
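A hedged, standalone sketch of the Okio pattern used above (this is not OkHttp code; the file arguments are arbitrary example values): every read from the upstream source is mirrored into a sink, so the data is "cached" purely as a side effect of the consumer reading it.

  import java.io.File;
  import java.io.IOException;
  import okio.Buffer;
  import okio.BufferedSink;
  import okio.BufferedSource;
  import okio.Okio;

  public class TeeDemo {
    public static void copyWhileReading(File in, File out) throws IOException {
      // `in` and `out` are arbitrary example files, not OkHttp cache files.
      try (BufferedSource source = Okio.buffer(Okio.source(in));
           BufferedSink cacheBody = Okio.buffer(Okio.sink(out))) {
        Buffer sink = new Buffer();
        long read;
        while ((read = source.read(sink, 8192)) != -1) {
          // Mirror of cacheWritingResponse: copy what was just read into the cache sink.
          sink.copyTo(cacheBody.buffer(), sink.size() - read, read);
          cacheBody.emitCompleteSegments();
          sink.clear(); // the real consumer would drain this buffer; here we just discard it
        }
      }
    }
  }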

3.2 How is the cache read?

To save you from scrolling back and forth, here is the short relevant slice of the cache interceptor again.

  @Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;

    long now = System.currentTimeMillis();
    
    // Analysis point 1: CacheStrategy.Factory(...).get()
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;
... omitted
}

Source analysis

Continuing from analysis point 1, let's first look at the cache strategy factory.


    public Factory(long nowMillis, Request request, Response cacheResponse) {
      this.nowMillis = nowMillis;
      this.request = request;
      this.cacheResponse = cacheResponse;

      if (cacheResponse != null) {
        // Local time the corresponding request was sent and local time its response was received
        this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
        this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
        Headers headers = cacheResponse.headers();
        for (int i = 0, size = headers.size(); i < size; i++) {
          String fieldName = headers.name(i);
          String value = headers.value(i);
          if ("Date".equalsIgnoreCase(fieldName)) {
            servedDate = HttpDate.parse(value);
            servedDateString = value;
          } else if ("Expires".equalsIgnoreCase(fieldName)) {
            expires = HttpDate.parse(value);
          } else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
            lastModified = HttpDate.parse(value);
            lastModifiedString = value;
          } else if ("ETag".equalsIgnoreCase(fieldName)) {
            etag = value;
          } else if ("Age".equalsIgnoreCase(fieldName)) {
            ageSeconds = HttpHeaders.parseSeconds(value, -1);
          }
        }
      }
    }


Source analysis

This is nothing more than a constructor: a for loop over the cached response's headers with a series of if branches. Here is what each header means:

  • Date: the time the message was sent
  • Expires: the time the resource expires
  • Last-Modified: the time the resource was last modified
  • ETag: the resource's unique identifier on the server
  • Age: when the server answers from its own cache, how many seconds have passed since that cached response was produced

Next, let's look at the get() method.

    /**
     * Returns a strategy to satisfy {@code request} using the a cached response {@code response}.
     */
    public CacheStrategy get() {
      CacheStrategy candidate = getCandidate();

      if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
        // We're forbidden from using the network and the cache is insufficient.
        return new CacheStrategy(null, null);
      }

      return candidate;
    }

Source analysis

Very little code here: the real work of picking a cache strategy is delegated to getCandidate(). Let's go deeper.


    /** Returns a strategy to use assuming the request can use the network. */
    private CacheStrategy getCandidate() {
      // No cached response.
      // Is there a cached response at all?
      if (cacheResponse == null) {
        return new CacheStrategy(request, null);
      }

      // Drop the cached response if it's missing a required handshake.
      // If this request is HTTPS but the cached response has no handshake info, the cache is unusable
      if (request.isHttps() && cacheResponse.handshake() == null) {
        return new CacheStrategy(request, null);
      }

      // isCacheable() was explained in the cache-writing section above, so it is not repeated here
      if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
      }

      // If the request specifies Cache-Control: no-cache (do not use the cache), or carries
      // If-Modified-Since / If-None-Match validation headers, the cached response may not be used directly
      CacheControl requestCaching = request.cacheControl();
      if (requestCaching.noCache() || hasConditions(request)) {
        return new CacheStrategy(request, null);
      }

      CacheControl responseCaching = cacheResponse.cacheControl();
      // If the cached response carries Cache-Control: immutable, its content will never change,
      // so the cache can be used directly; otherwise keep checking whether the cache is still usable
      if (responseCaching.immutable()) { // Analysis point 9
        return new CacheStrategy(null, cacheResponse);
      }


      // Age of the cached response: how long ago it was created
      long ageMillis = cacheResponseAge();
      // Freshness lifetime: how long this response stays fresh
      long freshMillis = computeFreshnessLifetime();
      
      // If the request specifies max-age, it limits how old a cached response the caller will accept,
      // so take the smaller of the response's freshness lifetime and the request's max-age
      if (requestCaching.maxAgeSeconds() != -1) {
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }
      // Cache-Control: min-fresh=[seconds] on the request: the cache must remain fresh for at least this long (the caller's idea of validity)
      long minFreshMillis = 0;
      if (requestCaching.minFreshSeconds() != -1) {
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }
      // Cache-Control: must-revalidate on the response: the cache may only be used after revalidating with the origin server
      // Cache-Control: max-stale=[seconds] on the request: a stale entry may still be used for this long after expiry;
      // with no value, any amount of staleness is acceptable; with a value, only within that window
      // must-revalidate overrides max-stale, so max-stale is only read when revalidation is not required
      long maxStaleMillis = 0;
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }

      // No server revalidation required, and (age + min-fresh) is less than
      // (freshness lifetime + max-stale):
      // the cached response may be used
      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        // Already stale, but still within the max-stale window: usable, just add the corresponding warning header
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        // If the cached response is more than a day old and carries no explicit expiry, add a warning as well
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        // Analysis point 10
        return new CacheStrategy(null, builder.build());
      }

      // Find a condition to add to the request. If the condition is satisfied, the response body
      // will not be transmitted.
      String conditionName;
      String conditionValue;
      if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
      } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
      } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
      } else {
        return new CacheStrategy(request, null); // No condition! Make a regular request.
      }

      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
          // Analysis point 11
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }


Source analysis

Phew, writing all these comments nearly finished me off, but we are only one step away from chewing through this bone. The explanation here has to be read together with the analysis points from earlier, so I'll paste them straight in again.

  • Analysis point 1: for now, just think of this as getting a cached response out of a cache strategy factory (how the cache is read is covered shortly)
  • networkRequest: null if this call will not use the network, non-null if it will
  • cacheResponse: null if this call will not use the cache, non-null if it will
  • Analysis point 2: if neither the network nor the cache may be used, the call returns a 504 error response
  • Analysis point 3: if the network is not used, read the cache and return the previously cached data directly
  • Analysis point 4: if the network is used and a cached response also exists, make the network request first, then update the cache, and finally return the fresh data
  • Analysis point 5: if the network is used and there is no local cache, simply fetch the data from the network
  • Analysis point 6: once the network data arrives, decide whether it should be cached and, if so, which responses are allowed to be cached
    // Analysis point 1: CacheStrategy.Factory(...).get()
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;

    public final class CacheStrategy {
      CacheStrategy(Request networkRequest, Response cacheResponse) {
        this.networkRequest = networkRequest;
        this.cacheResponse = cacheResponse;
      }
    }

Reading the analysis points together with this last snippet: the cached response is the second constructor argument of CacheStrategy, which means that for the cache to be used at all, the second argument must be non-null. Inside getCandidate(), the places that actually hand back a usable cache are analysis points 9, 10 and 11.

  • Analysis point 9: if the cached response carries Cache-Control: immutable, its content will never change, so the cached data can be returned with confidence
  • Analysis point 10: failing point 9, if the cached response is still within its freshness window, the cached data is used
  • Analysis point 11: failing points 9 and 10, as long as one of etag (the cached entity tag), lastModified (the last modification time) or servedDate (the time the server served the response) is present, a conditional request is built and the cached response is kept alongside it, so the cache can still be served if the server answers 304 (from the caller's side you can push the strategy toward these branches with CacheControl, as sketched below)
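A minimal sketch (the okhttp3.CacheControl constants are real API; the URL is a placeholder) tying this back to the caller's side: FORCE_CACHE makes get() return a cache-only strategy, falling back to the 504 of analysis point 2 when no usable entry exists, while FORCE_NETWORK skips the cache entirely.

  Request forceCache = new Request.Builder()
      .url("https://example.com/api")               // placeholder URL
      .cacheControl(CacheControl.FORCE_CACHE)       // only-if-cached, max-stale=MAX
      .build();

  Request forceNetwork = new Request.Builder()
      .url("https://example.com/api")               // placeholder URL
      .cacheControl(CacheControl.FORCE_NETWORK)     // no-cache
      .build();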

4. ConnectInterceptor (connect interceptor)

public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

Source analysis

The connect interceptor itself has very little code; the real connection logic is wrapped inside streamAllocation.newStream(), and the StreamAllocation object was created back in RetryAndFollowUpInterceptor (the retry/redirect interceptor). Let's look at newStream() first.

  public HttpCodec newStream(
      OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
    int connectTimeout = chain.connectTimeoutMillis();
    int readTimeout = chain.readTimeoutMillis();
    int writeTimeout = chain.writeTimeoutMillis();
    int pingIntervalMillis = client.pingIntervalMillis();
    boolean connectionRetryEnabled = client.retryOnConnectionFailure();

    try {
      RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
          writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks);
      HttpCodec resultCodec = resultConnection.newCodec(client, chain, this);

      synchronized (connectionPool) {
        codec = resultCodec;
        return resultCodec;
      }
    } catch (IOException e) {
      throw new RouteException(e);
    }
  }

Source analysis

The main logic here sits in findHealthyConnection(); let's go in and have a look.

  /**
   * Finds a connection and returns it if it is healthy. If it is unhealthy the process is repeated
   * until a healthy connection is found.
   */
  private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
      int writeTimeout, int pingIntervalMillis, boolean connectionRetryEnabled,
      boolean doExtensiveHealthChecks) throws IOException {
    while (true) {
      RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
          pingIntervalMillis, connectionRetryEnabled);

      // If this is a brand new connection, we can skip the extensive health checks.
      synchronized (connectionPool) {
        if (candidate.successCount == 0) {
          return candidate;
        }
      }

      // Do a (potentially slow) check to confirm that the pooled connection is still good. If it
      // isn't, take it out of the pool and start again.
      if (!candidate.isHealthy(doExtensiveHealthChecks)) {
        noNewStreams();
        continue;
      }

      return candidate;
    }
  }

Source analysis

Again not much code: it spins in an endless loop calling findConnection(), which is where the main logic lives. Let's keep digging.


  /**
   * Returns a connection to host a new stream. This prefers the existing connection if it exists,
   * then the pool, finally building a new connection.
   */
  private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
      int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
    boolean foundPooledConnection = false;
    RealConnection result = null;
    Route selectedRoute = null;
    Connection releasedConnection;
    Socket toClose;
    synchronized (connectionPool) {
      if (released) throw new IllegalStateException("released");
      if (codec != null) throw new IllegalStateException("codec != null");
      if (canceled) throw new IOException("Canceled");

      // Attempt to use an already-allocated connection. We need to be careful here because our
      // already-allocated connection may have been restricted from creating new streams.
      releasedConnection = this.connection;
      toClose = releaseIfNoNewStreams();
      // Check whether the already-allocated connection is still usable (keep-alive)
      if (this.connection != null) {
        // We had an already-allocated connection and it's good.
        result = this.connection;
        releasedConnection = null;
      }
      if (!reportedAcquired) {
        // If the connection was never reported acquired, don't report it as released!
        releasedConnection = null;
      }
     // Getting here means there is no reusable (keep-alive) already-allocated connection
      if (result == null) {
        // Attempt to get a connection from the pool.
        // Try to get a connection for this request from the connection pool
        Internal.instance.get(connectionPool, address, this, null);
        if (connection != null) {
          foundPooledConnection = true;
          result = connection;
        } else {
          selectedRoute = route;
        }
      }
    }
    closeQuietly(toClose);

    if (releasedConnection != null) {
      eventListener.connectionReleased(call, releasedConnection);
    }
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
    }
    // If a reusable (keep-alive) connection was obtained from the pool, return it
    if (result != null) {
      // If we found an already-allocated or pooled connection, we're done.
      return result;
    }
    // From here on, no matching connection was found in the pool
    // If we need a route selection, make one. This is a blocking operation.
    boolean newRouteSelection = false;
    
    // If there are more routes available, select the next route set and search the connection pool again with it
    if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
      newRouteSelection = true;
      routeSelection = routeSelector.next();
    }

    synchronized (connectionPool) {
      if (canceled) throw new IOException("Canceled");

      if (newRouteSelection) {
        // Now that we have a set of IP addresses, make another attempt at getting a connection from
        // the pool. This could match due to connection coalescing.
        List<Route> routes = routeSelection.getAll();
        // Iterate over all routes in this selection and look for a pooled connection that matches
        for (int i = 0, size = routes.size(); i < size; i++) {
          Route route = routes.get(i);
          Internal.instance.get(connectionPool, address, this, route);
          if (connection != null) {
            foundPooledConnection = true;
            result = connection;
            this.route = route;
            break;
          }
        }
      }
    
    // Still no pooled connection: prepare to create a brand-new RealConnection to connect
      if (!foundPooledConnection) {
        if (selectedRoute == null) {
          selectedRoute = routeSelection.next();
        }

        // Create a connection and assign it to this allocation immediately. This makes it possible
        // for an asynchronous cancel() to interrupt the handshake we're about to do.
        route = selectedRoute;
        refusedStreamCount = 0;
        result = new RealConnection(connectionPool, selectedRoute);
        acquire(result, false);
      }
    }
    
    // If a pooled connection was found on the second attempt, return it directly
    // If we found a pooled connection on the 2nd time around, we're done.
    if (foundPooledConnection) {
      eventListener.connectionAcquired(call, result);
      return result;
    }
    
    // Nothing found: perform the TCP + TLS handshake on the freshly created connection (blocking)
    // Do TCP + TLS handshakes. This is a blocking operation.
    result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
        connectionRetryEnabled, call, eventListener);
    routeDatabase().connected(result.route());

    Socket socket = null;
    synchronized (connectionPool) {
      reportedAcquired = true;
     // Put the newly connected connection into the connection pool
      // Pool the connection.
      Internal.instance.put(connectionPool, result);

      // If another multiplexed connection to the same address was created concurrently, then
      // release this connection and acquire that one.
      if (result.isMultiplexed()) {
        socket = Internal.instance.deduplicate(connectionPool, address, this);
        result = connection;
      }
    }
    closeQuietly(socket);

    eventListener.connectionAcquired(call, result);
    return result;
  }


Source analysis

This method is the heart of the connect interceptor. The logic: first check whether the already-allocated connection can be reused; if not, search the connection pool; if the pool has nothing, switch to another route and search again; and if all of that fails, open a brand-new connection, do the TCP + TLS handshake, put it into the connection pool, and return it. (The pool being searched is the one configured on the client, as sketched below.)
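A minimal sketch (standard okhttp3.ConnectionPool API): the pool findConnection() searches is the client's ConnectionPool. The values below, 5 idle connections with a 5-minute keep-alive, are OkHttp's defaults written out explicitly as an example.

  import java.util.concurrent.TimeUnit;
  import okhttp3.ConnectionPool;
  import okhttp3.OkHttpClient;

  public class PoolConfig {
    public static OkHttpClient build() {
      return new OkHttpClient.Builder()
          .connectionPool(new ConnectionPool(5, 5, TimeUnit.MINUTES))
          .build();
    }
  }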

5. CallServerInterceptor (call-server / read-write interceptor)

public final class CallServerInterceptor implements Interceptor {
  private final boolean forWebSocket;

  public CallServerInterceptor(boolean forWebSocket) {
    this.forWebSocket = forWebSocket;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    // httpCodec was set up by the connect interceptor
    HttpCodec httpCodec = realChain.httpStream();  
    // streamAllocation was created by the retry/redirect interceptor
    StreamAllocation streamAllocation = realChain.streamAllocation();
    RealConnection connection = (RealConnection) realChain.connection();
    // request is the one assembled by the bridge interceptor
    Request request = realChain.request();

    long sentRequestMillis = System.currentTimeMillis();

    realChain.eventListener().requestHeadersStart(realChain.call());
    // Write the request line and the headers assembled by the bridge interceptor
    httpCodec.writeRequestHeaders(request);
    realChain.eventListener().requestHeadersEnd(realChain.call(), request);

    Response.Builder responseBuilder = null;
    // permitsRequestBody: !(method.equals("GET") || method.equals("HEAD"))
    // We enter this if only when the method allows a body (neither GET nor HEAD) and the request body is non-null
    if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
      // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
      // Continue" response before transmitting the request body. If we don't get that, return
      // what we did get (such as a 4xx response) without ever transmitting the request body.
      if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
        httpCodec.flushRequest();
        realChain.eventListener().responseHeadersStart(realChain.call());
        responseBuilder = httpCodec.readResponseHeaders(true);
      }

      if (responseBuilder == null) {
        // Write the request body if the "Expect: 100-continue" expectation was met.
        // Start writing the request body
        realChain.eventListener().requestBodyStart(realChain.call());
        long contentLength = request.body().contentLength();
        CountingSink requestBodyOut =
            new CountingSink(httpCodec.createRequestBody(request, contentLength));
        BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
        // Write the request body into the outgoing buffered sink
        request.body().writeTo(bufferedRequestBody);
        bufferedRequestBody.close();
        realChain.eventListener()
            .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
      } else if (!connection.isMultiplexed()) {
        // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
        // from being reused. Otherwise we're still obligated to transmit the request body to
        // leave the connection in a consistent state.
        streamAllocation.noNewStreams();
      }
    }
    // Everything for the request has been written; flush it to the server
    httpCodec.finishRequest();

    if (responseBuilder == null) {
      realChain.eventListener().responseHeadersStart(realChain.call());
      responseBuilder = httpCodec.readResponseHeaders(false);
    }
    // Assemble the response from the headers just read off the wire
    Response response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();

    int code = response.code();
    if (code == 100) {
      // server sent a 100-continue even though we did not request one.
      // try again to read the actual response
      // The response code is 100: read the response headers again to get the real response
      responseBuilder = httpCodec.readResponseHeaders(false);

      response = responseBuilder
              .request(request)
              .handshake(streamAllocation.connection().handshake())
              .sentRequestAtMillis(sentRequestMillis)
              .receivedResponseAtMillis(System.currentTimeMillis())
              .build();

      code = response.code();
    }

    realChain.eventListener()
            .responseHeadersEnd(realChain.call(), response);

    if (forWebSocket && code == 101) {
      // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
      response = response.newBuilder()
          .body(Util.EMPTY_RESPONSE)
          .build();
    } else {
    // A normal HTTP request takes this else branch: openResponseBody wraps the response body stream
      response = response.newBuilder()
          .body(httpCodec.openResponseBody(response))
          .build();
    }

    if ("close".equalsIgnoreCase(response.request().header("Connection"))
        || "close".equalsIgnoreCase(response.header("Connection"))) {
      streamAllocation.noNewStreams();
    }

    if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
      throw new ProtocolException(
          "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
    }

    return response;
  }

... omitted
}

Source analysis

This interceptor pulls together everything the previous ones prepared: the HttpCodec set up by the connect interceptor, the request assembled by the bridge interceptor, the StreamAllocation created by the retry/redirect interceptor, and so on. Its job is to actually write the request to the server and read the response back, wrapping the response body and handing it up the chain. (The eventListener() hooks scattered through the method can be observed from your own code, as sketched below.)
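A hedged sketch (okhttp3.EventListener is real API, available since OkHttp 3.9): the eventListener() calls you see throughout CallServerInterceptor, such as requestHeadersStart and responseHeadersEnd, end up in whatever listener the caller registers. This one just prints two of those callbacks.

  OkHttpClient client = new OkHttpClient.Builder()
      .eventListener(new EventListener() {
        @Override public void requestHeadersStart(Call call) {
          System.out.println("writing request headers: " + call.request().url());
        }
        @Override public void responseHeadersEnd(Call call, Response response) {
          System.out.println("response headers read, code=" + response.code());
        }
      })
      .build();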

That more or less wraps up this article; let's finish with a flow diagram as a summary.

6. Interceptor Flow

As shown in the figure below:

The diagram should be easy to follow and gives you a complete picture of how a request flows through OkHttp's interceptors. With that, the walkthrough of the OkHttp source is done.

In the next article, I will hand-write a stripped-down version of OkHttp modeled on its structure, to reinforce our understanding of the OkHttp source.

This article is reposted from https://juejin.cn/post/7015517626881277966; in case of infringement, please contact for removal.
