1 Overview
Most Android applications use the HTTP protocol to access the network. Android has historically offered two APIs for HTTP (HttpURLConnection and HttpClient), so which one should be used?
For that discussion, see the article "Android访问网络,使用HttpURLConnection还是HttpClient?".
One more point: modern apps rarely support anything below Android 2.3, so following that article's conclusion you should use HttpURLConnection for network requests; besides, HttpClient was removed entirely in Android 6.0.
In the five-layer TCP/IP model, a Socket is a wrapper around the transport layer. A Socket is not a protocol itself but a programming interface (API); the transport layer currently offers the TCP and UDP protocols, and a Socket can specify which one to use. When a Socket connects over TCP, that Socket connection is a TCP connection. The connections HttpURLConnection establishes are TCP connections, and establishing a TCP connection requires the three-way handshake, after which slow start begins. The minimal sketch below makes the HTTP-over-TCP relationship concrete.
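This is an illustrative sketch only (not taken from the Android source; example.com and the request line are placeholders): it issues an HTTP/1.1 GET over a raw TCP Socket, which is the same kind of thing HttpURLConnection ultimately does on top of such a socket.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        // Opening the Socket performs the TCP three-way handshake.
        try (Socket socket = new Socket("example.com", 80)) {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.US_ASCII);
            // An HTTP/1.1 request is plain text written onto the TCP stream.
            out.write("GET / HTTP/1.1\r\n");
            out.write("Host: example.com\r\n");
            out.write("Connection: close\r\n");
            out.write("\r\n");
            out.flush();
            // The response comes back on the same TCP connection.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}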
2 Background
2.1 A brief history of HTTP
1> 1996: HTTP/1.0
A new TCP connection is established before each request, and once that request completes the connection is closed, so every request pays the cost of setting up a TCP connection again, which wastes performance and adds latency. Clients (HttpURLConnection included) tried to work around this by sending the request header Connection: keep-alive, but the extension was never widely supported, so the problem remained. HttpURLConnection also mitigates the cost with a connection pool (ConnectionPool); the number of TCP connections a pool may hold is unbounded, so in theory it could serve an unlimited number of concurrent HTTP requests, but of course no server would allow that.
2> 1999: HTTP/1.1
The HTTP/1.1 specification mandates persistent connections (Persistent Connection), so multiple requests can be sent over the same TCP connection, which largely removes the per-request connection setup cost and latency of HTTP/1.0; to close the connection, the request header Connection: close is added. To further cut the time spent waiting for responses, HTTP/1.1 also introduced pipelining (Pipelining), which sends several requests on a single TCP connection without waiting for the corresponding responses; the server, however, must return the responses in the order the requests were received, so responses can still block one another (head-of-line blocking). The figure below illustrates how Pipelining works:
HttpURLConnection does not support pipelining, so it handles request-response pairs the way shown on the left side of the figure above.
3> 2015: HTTP/2
The official HTTP/2 specification
A Chinese translation of the HTTP/2 specification; thanks to its author for sharing.
HTTP/2 introduces multiplexing (Multiplexing) so that multiple requests can run concurrently on one connection, which avoids the response blocking of HTTP/1.1. The figure below illustrates how multiplexing works:
The middle part of the figure represents the TCP connection; the 4 rows represent 4 parallel Streams, and each block can be thought of as a Frame. In HTTP/2 a request or response is split into multiple Frames for transmission; the official specification, HTTP Frames, describes the Frame format:
+-----------------------------------------------+
|                 Length (24)                   |
+---------------+---------------+---------------+
|   Type (8)    |   Flags (8)   |
+-+-------------+---------------+-------------------------------+
|R|                 Stream Identifier (31)                      |
+=+=============================================================+
|                   Frame Payload (0...)                      ...
+---------------------------------------------------------------+
A Frame consists of a header (9 bytes) and a payload (the Frame Payload area in the figure, which carries request or response data). The header is made up of the first five fields in the figure:
Length: the length of the payload. The default maximum is 2^14 (16,384) bytes; to allow values in the range [2^14 + 1, 2^24 - 1], the receiver advertises the largest payload it is willing to accept by sending a Frame of type SETTINGS.
Type: the frame type. The official HTTP/2 specification defines ten frame types; HEADERS, PUSH_PROMISE and CONTINUATION carry request or response headers (see Header Compression and Decompression), while DATA carries the request or response body.
Flags: express frame-type-specific semantics, for example
END_HEADERS: in HEADERS, PUSH_PROMISE and CONTINUATION frames, marks the end of the request or response headers.
END_STREAM: marks the end of the request or response.
R: a reserved bit.
Stream Identifier: identifies which Stream the Frame belongs to.
In HTTP/2 one request-response pair corresponds to one Stream; in other words, a Stream's whole job is to carry a single request-response exchange, after which it is finished. The receiver uses the StreamId (the Stream Identifier) carried in each Frame to tell different requests or responses apart and then reassembles the Frames into the complete request or response.
Note: the Frames carrying a request's or response's headers must be sent as one contiguous sequence, with no Frames of any other type or from any other Stream interleaved. A small sketch of decoding this header layout follows.
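As an illustration only (this is not OkHttp code; it assumes the 9 header bytes have already been read into a byte array b), the following class decodes the fields of an HTTP/2 frame header:
public class FrameHeader {
    final int length;   // 24-bit payload length
    final int type;     // 8-bit frame type, e.g. 0x0 = DATA, 0x1 = HEADERS
    final int flags;    // 8-bit flags, e.g. END_STREAM, END_HEADERS
    final int streamId; // 31-bit stream identifier (the reserved R bit is masked off)
    FrameHeader(byte[] b) {
        length   = ((b[0] & 0xff) << 16) | ((b[1] & 0xff) << 8) | (b[2] & 0xff);
        type     = b[3] & 0xff;
        flags    = b[4] & 0xff;
        // The most significant bit of the last four bytes is the reserved R bit and must be ignored.
        streamId = ((b[5] & 0x7f) << 24) | ((b[6] & 0xff) << 16) | ((b[7] & 0xff) << 8) | (b[8] & 0xff);
    }
}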
2.2 The Upgrade and ALPN negotiation mechanisms
Both mechanisms negotiate which application-layer protocol to use:
1> The Upgrade mechanism (introduced in HTTP/1.1)
The Upgrade mechanism is only used to negotiate the application-layer protocol when SSL/TLS is not in use. The client starts the negotiation by sending an HTTP/1.1 request that carries an Upgrade request header listing protocol names, for example:
GET / HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings:
Through the protocol list in the Upgrade header (ordered by descending preference) the client suggests that the server switch to one of those protocols; the request above suggests h2c (HTTP/2 running directly on top of TCP, with no SSL/TLS layer in between). If the server agrees to use one of the protocols listed in Upgrade, it answers as follows:
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c
[ HTTP/2 connection ...
If the server does not agree to, or does not support, any of the protocols listed in Upgrade, it simply ignores the header (treats the request as plain HTTP/1.1 and returns an HTTP/1.1 response):
HTTP/1.1 200 OK
Content-Length: 243
Content-Type: text/html
...
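As a sketch of the Upgrade handshake from the client side (this is not how HttpURLConnection works internally; server.example.com is a placeholder and an empty HTTP2-Settings payload is sent for simplicity), a client could probe a server for h2c support like this:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
public class H2cUpgradeProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server.example.com", 80)) {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.US_ASCII);
            out.write("GET / HTTP/1.1\r\n");
            out.write("Host: server.example.com\r\n");
            out.write("Connection: Upgrade, HTTP2-Settings\r\n");
            out.write("Upgrade: h2c\r\n");
            out.write("HTTP2-Settings: \r\n"); // base64url-encoded SETTINGS payload; empty here for simplicity
            out.write("\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String statusLine = in.readLine();
            if (statusLine != null && statusLine.contains(" 101 ")) {
                System.out.println("Server switched protocols: " + statusLine);
            } else {
                System.out.println("Server ignored the upgrade: " + statusLine);
            }
        }
    }
}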
2> The ALPN mechanism
First, the figure below shows how HTTPS, HTTP, SSL/TLS and TCP relate to one another:
Netscape developed the original SSL (Secure Sockets Layer) protocol; version 1.0 was never released because of serious security flaws. Version 2.0 was released in February 1995, but it also contained security defects, which led to the design of version 3.0, released in 1996. SSL 2.0 was prohibited by RFC 6176 in 2011, and SSL 3.0 was deprecated by RFC 7568 in June 2015. In January 1999, TLS (Transport Layer Security) 1.0 was defined in RFC 2246 as an upgrade of SSL 3.0; TLS 1.1 was defined in RFC 4346 in April 2006, and TLS 1.2 in RFC 5246 in August 2008.
Application-Layer Protocol Negotiation (ALPN) is a TLS extension used to negotiate the application-layer protocol without additional round trips. Google originally developed a TLS extension called NPN (Next Protocol Negotiation) for the SPDY protocol; as SPDY was superseded by HTTP/2 (in September 2015 Google announced it would drop SPDY support in favour of HTTP/2, effective in Chrome 51), NPN was revised into ALPN (Application Layer Protocol Negotiation). HttpURLConnection uses ALPN to negotiate the application-layer protocol: in the Client Hello of the TLS handshake the client lists the application-layer protocols it supports in the ALPN extension, ordered by descending preference, and if the server supports one of them it announces that protocol as the negotiation result in the ALPN extension of its Server Hello.
For the TLS handshake itself, the following articles are good references:
SSL/TLS协议运行机制的概述
TLS 握手优化详解
The HTTP/2 protocol itself does not require TLS, but in practice HTTP/2 and TLS are almost always deployed together, and today's mainstream browsers only support HTTP/2 over TLS. The sketch below shows ALPN from the client's point of view.
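This is a hedged sketch that assumes a runtime where the standard ALPN API is available (Java 9+ or Android 10+, i.e. SSLParameters#setApplicationProtocols); it offers h2 and http/1.1 in the Client Hello and prints whichever protocol the server chose in its Server Hello:
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
public class AlpnProbe {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443)) {
            SSLParameters params = socket.getSSLParameters();
            // The Client Hello carries this list in its ALPN extension, in descending order of preference.
            params.setApplicationProtocols(new String[] {"h2", "http/1.1"});
            socket.setSSLParameters(params);
            socket.startHandshake();
            // The Server Hello names the protocol the server picked.
            System.out.println("Negotiated protocol: " + socket.getApplicationProtocol());
        }
    }
}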
2.3 HTTP caching strategy
First, the figure below gives an overall view of the HTTP caching strategy:
The sections below walk through this caching strategy as it is implemented in the source code.
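One practical note before the walk-through: on Android, HttpURLConnection only caches responses after the application has installed a response cache. A typical setup (the directory name and the 10 MiB size are just example values) looks like this:
import android.content.Context;
import android.net.http.HttpResponseCache;
import java.io.File;
import java.io.IOException;
public class CacheSetup {
    // Call once, e.g. from Application#onCreate(), before issuing requests.
    public static void installCache(Context context) {
        try {
            File cacheDir = new File(context.getCacheDir(), "http");
            HttpResponseCache.install(cacheDir, 10 * 1024 * 1024); // 10 MiB
        } catch (IOException e) {
            // Caching is only an optimization; requests still work without it.
        }
    }
}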
2.4 The classes behind HttpURLConnection
2.4.1 ConnectionPool
Source: the okhttp sources embedded in the Android source tree. This class manages the reuse of TCP connections (RealConnection is the wrapper around a TCP connection, so it is simply called a TCP connection below) in order to reduce network latency. Let's see how it does that:
The class has two very important fields:
maxIdleConnections: the maximum number of idle TCP connections in the ConnectionPool, 5 by default.
keepAliveDurationNs: the longest time a TCP connection may stay idle in the ConnectionPool, 5 minutes by default.
1> First, how the pool stores and hands out connections:
/** Returns a recycled connection to {@code address}, or null if no such connection exists. */
RealConnection get(Address address, StreamAllocation streamAllocation) {
assert (Thread.holdsLock(this));
for (RealConnection connection : connections) {
// connection.allocationLimit() returns the maximum number of Streams per TCP connection; each Stream corresponds to one request-response pair,
// so the return value is how many request-response pairs one TCP connection can serve at the same time.
// For HTTP/1.x the maximum is 1, for HTTP/2 it is 4; see the method itself for details.
// address.equals(connection.getRoute().address) means the scheme, host and port of the URL string are the same; only when scheme, host and port all match can the same TCP connection possibly be reused.
if (connection.allocations.size() < connection.allocationLimit()
&& address.equals(connection.getRoute().address)
&& !connection.noNewStreams) {
streamAllocation.acquire(connection);
return connection;
}
}
return null;
}
void put(RealConnection connection) {
assert (Thread.holdsLock(this));
if (connections.isEmpty()) {
// When the first TCP connection is put into the pool, a background cleanup thread is started; it polls all TCP connections in the pool and closes the ones that meet the eviction criteria.
executor.execute(cleanupRunnable);
}
connections.add(connection);
}
2> Next, cleanupRunnable:
private Runnable cleanupRunnable = new Runnable() {
@Override public void run() {
while (true) {
long waitNanos = cleanup(System.nanoTime());
// a return value of -1 from cleanup() means the pool is empty, so the background cleanup thread exits
if (waitNanos == -1) return;
// a return value greater than 0 means the cleanup thread should pause for that many nanoseconds; a return value of 0 starts the next round of polling immediately.
if (waitNanos > 0) {
long waitMillis = waitNanos / 1000000L;
waitNanos -= (waitMillis * 1000000L);
synchronized (ConnectionPool.this) {
try {
ConnectionPool.this.wait(waitMillis, (int) waitNanos);
} catch (InterruptedException ignored) {
}
}
}
}
}
};
3> Finally the polling itself, i.e. the cleanup method:
long cleanup(long now) {
// the number of TCP connections in the pool that are currently in use
int inUseConnectionCount = 0;
// the number of TCP connections in the pool that are currently idle
int idleConnectionCount = 0;
// the TCP connection that has been idle the longest
RealConnection longestIdleConnection = null;
// the longest idle time among all TCP connections in the pool
long longestIdleDurationNs = Long.MIN_VALUE;
// Find either a connection to evict, or the time that the next eviction is due.
synchronized (this) {
for (Iterator i = connections.iterator(); i.hasNext(); ) {
RealConnection connection = i.next();
// a return value greater than zero from pruneAndGetAllocationCount means the TCP connection is in use
if (pruneAndGetAllocationCount(connection, now) > 0) {
// increment inUseConnectionCount and move on to the next TCP connection
inUseConnectionCount++;
continue;
}
// reaching this point means the TCP connection is idle, so increment idleConnectionCount
idleConnectionCount++;
long idleDurationNs = now - connection.idleAtNanos;
// if the condition below holds, this connection has been idle longer than any connection inspected so far
if (idleDurationNs > longestIdleDurationNs) {
// record the longest idle time among all TCP connections in the pool
longestIdleDurationNs = idleDurationNs;
// record the TCP connection that has been idle the longest
longestIdleConnection = connection;
}
}
// longestIdleDurationNs >= this.keepAliveDurationNs means the longest idle time in the pool has reached 5 minutes.
// idleConnectionCount > this.maxIdleConnections means there are more than 5 idle TCP connections in the pool
if (longestIdleDurationNs >= this.keepAliveDurationNs
|| idleConnectionCount > this.maxIdleConnections) {
// evict longestIdleConnection from the pool
connections.remove(longestIdleConnection);
} else if (idleConnectionCount > 0) {
// the longest idle time in the pool has not yet reached 5 minutes
return keepAliveDurationNs - longestIdleDurationNs;
} else if (inUseConnectionCount > 0) {
// every TCP connection is currently in use
return keepAliveDurationNs;
} else {
// No connections, idle or in use.
return -1;
}
}
// close the TCP connection that was just removed from the pool
Util.closeQuietly(longestIdleConnection.getSocket());
// start the next round of polling immediately
return 0;
}
2.4.2 ConfigAwareConnectionPool
This is a singleton that provides a shared ConnectionPool. It listens for network configuration change events; when the network configuration changes, the current ConnectionPool is invalidated, and a subsequent call to its get method creates a fresh ConnectionPool.
3 Source code analysis
Let's start with an example:
if (NETWORK_GET.equals(action)) {
//send a GET request
url = new URL("https://www.jianshu.com/recommendations/notes?category_id=56&utm_medium=index-banner-s&utm_source=desktop");
conn = (HttpURLConnection) url.openConnection();
//HttpURLConnection uses GET by default, so the setRequestMethod call below could be omitted
conn.setRequestMethod("GET");
// whether the response body may be read; defaults to true.
conn.setDoInput(true);
//use setRequestProperty to set a custom request header field
conn.setRequestProperty("action", NETWORK_GET);
//disable the response cache
conn.setUseCaches(false);
//once all parameters are configured, connect() establishes the TCP connection, but no data is fetched yet
//calling conn.connect() explicitly is optional; conn.getInputStream() calls connect() internally
conn.connect();
//only when getInputStream() is called does the server receive the complete request; the call then blocks while receiving the data the server returns
InputStream is = conn.getInputStream();
} else if (NETWORK_POST_KEY_VALUE.equals(action)) {
//send key-value data with POST
url = new URL("https://www.jianshu.com/recommendations/notes");
conn = (HttpURLConnection) url.openConnection();
//switch conn to the POST method via setRequestMethod
conn.setRequestMethod("POST");
//whether data may be sent to the server in a request body; defaults to false.
conn.setDoOutput(true);
//use setRequestProperty to set a custom request header: action
conn.setRequestProperty("action", NETWORK_POST_KEY_VALUE);
//obtain conn's output stream
OutputStream os = conn.getOutputStream();
//build the byte array for the key-value pairs below and use it as the request body
requestBody = new String("category_id=56&utm_medium=index-banner-s&utm_source=desktop").getBytes("UTF-8");
//write the request body into conn's output stream
os.write(requestBody);
//remember to call flush on the output stream
os.flush();
//close the output stream
os.close();
//the request body is only actually uploaded to the server when getInputStream() is called
InputStream is = conn.getInputStream();
}
The code above issues GET/POST requests with HttpURLConnection. The sequence diagram below gives an overall view of the flow:
The rest of this section walks through the diagram step by step.
3.1 Creating the URL object
Step 1: create a URL object from the URL string:
public URL(String spec) throws MalformedURLException {
this(null, spec);
}
public URL(URL context, String spec) throws MalformedURLException {
this(context, spec, null);
}
public URL(URL context, String spec, URLStreamHandler handler)
throws MalformedURLException
{
String original = spec;
int i, limit, c;
int start = 0;
String newProtocol = null;
boolean aRef=false;
boolean isRelative = false;
// Check for permission to specify a handler
if (handler != null) {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {
checkSpecifyHandler(sm);
}
}
try {
limit = spec.length();
while ((limit > 0) && (spec.charAt(limit - 1) <= ' ')) {
limit--; //eliminate trailing whitespace
}
while ((start < limit) && (spec.charAt(start) <= ' ')) {
start++; // eliminate leading whitespace
}
if (spec.regionMatches(true, start, "url:", 0, 4)) {
start += 4;
}
if (start < spec.length() && spec.charAt(start) == '#') {
/* we're assuming this is a ref relative to the context URL.
* This means protocols cannot start w/ '#', but we must parse
* ref URL's like: "hello:there" w/ a ':' in them.
*/
aRef=true;
}
for (i = start ; !aRef && (i < limit) &&
((c = spec.charAt(i)) != '/') ; i++) {
if (c == ':') {
String s = spec.substring(start, i).toLowerCase();
if (isValidProtocol(s)) {
newProtocol = s;
start = i + 1;
}
break;
}
}
// Only use our context if the protocols match.
protocol = newProtocol;
if ((context != null) && ((newProtocol == null) ||
newProtocol.equalsIgnoreCase(context.protocol))) {
// inherit the protocol handler from the context
// if not specified to the constructor
if (handler == null) {
handler = context.handler;
}
// If the context is a hierarchical URL scheme and the spec
// contains a matching scheme then maintain backwards
// compatibility and treat it as if the spec didn't contain
// the scheme; see 5.2.3 of RFC2396
if (context.path != null && context.path.startsWith("/"))
newProtocol = null;
if (newProtocol == null) {
protocol = context.protocol;
authority = context.authority;
userInfo = context.userInfo;
host = context.host;
port = context.port;
file = context.file;
path = context.path;
isRelative = true;
}
}
if (protocol == null) {
throw new MalformedURLException("no protocol: "+original);
}
// Get the protocol handler if not specified or the protocol
// of the context could not be used
if (handler == null &&
(handler = getURLStreamHandler(protocol)) == null) {
throw new MalformedURLException("unknown protocol: "+protocol);
}
this.handler = handler;
i = spec.indexOf('#', start);
if (i >= 0) {
ref = spec.substring(i + 1, limit);
limit = i;
}
/*
* Handle special case inheritance of query and fragment
* implied by RFC2396 section 5.2.2.
*/
if (isRelative && start == limit) {
query = context.query;
if (ref == null) {
ref = context.ref;
}
}
handler.parseURL(this, spec, start, limit);
} catch(MalformedURLException e) {
throw e;
} catch(Exception e) {
MalformedURLException exception = new MalformedURLException(e.getMessage());
exception.initCause(e);
throw exception;
}
}
The URL constructor is straightforward; it mainly does the following:
1> Parse the protocol out of the URL string (the protocol variable in the code above)
2> Obtain the URLStreamHandler for that protocol via getURLStreamHandler
3> Parse the URL string with the URLStreamHandler's parseURL method
Now step 2:
static URLStreamHandler getURLStreamHandler(String protocol) {
URLStreamHandler handler = handlers.get(protocol);
if (handler == null) {
......
// Fallback to built-in stream handler.
// Makes okhttp the default http/https handler
if (handler == null) {
try {
// BEGIN Android-changed
// Use of okhttp for http and https
// Removed unnecessary use of reflection for sun classes
if (protocol.equals("file")) {
handler = new sun.net.www.protocol.file.Handler();
} else if (protocol.equals("ftp")) {
handler = new sun.net.www.protocol.ftp.Handler();
} else if (protocol.equals("jar")) {
handler = new sun.net.www.protocol.jar.Handler();
} else if (protocol.equals("http")) {
handler = (URLStreamHandler)Class.
forName("com.android.okhttp.HttpHandler").newInstance();
} else if (protocol.equals("https")) {
handler = (URLStreamHandler)Class.
forName("com.android.okhttp.HttpsHandler").newInstance();
}
// END Android-changed
} catch (Exception e) {
throw new AssertionError(e);
}
}
......
}
return handler;
}
Since HttpURLConnection is an implementation of the HTTP protocol, only com.android.okhttp.HttpHandler and com.android.okhttp.HttpsHandler matter here. HttpsHandler extends HttpHandler; the difference is TLS support, i.e. after establishing the TCP connection it performs a TLS handshake. The next step is to download the okhttp sources embedded in the Android source tree, where HttpHandler and HttpsHandler can be found. A curious thing shows up, though: the package name of HttpHandler and HttpsHandler there is com.squareup.okhttp, while getURLStreamHandler above refers to com.android.okhttp. Why? A file in the source tree looks familiar:
Seeing it, the reason should be clear: when okhttp is built into Android its packages are renamed from com.squareup.okhttp to com.android.okhttp.
One more question: when did Android start using okhttp? Comparing URL.java from Android 4.4 (left) and Android 4.3 gives the answer, as shown below:
As the comparison shows, Android has handled the HTTP protocol with okhttp since Android 4.4.
Now step 3: URLStreamHandler's parseURL method parses the URL string, and the result is used to initialize the URL object. To understand how parseURL works, you need to know the structure of a URL string:
scheme:[//authority][/path][?query][#fragment]
scheme:[//[userInfo@]host[:port]][/path][?query][#fragment]
scheme:[//[user[:password]@]host[:port]][/path][?query][#fragment]
The three forms above progressively refine the authority part of the URL; as of Android API 26, URL.java refines it down to the second form.
For what each part means, see https://en.wikipedia.org/wiki/URL
For the URL string in the example above, https://www.jianshu.com/recommendations/notes?category_id=56&utm_medium=index-banner-s&utm_source=desktop:
scheme:https
authority:www.jianshu.com
host:www.jianshu.com
path:/recommendations/notes
query:category_id=56&utm_medium=index-banner-s&utm_source=desktop
There is no userInfo, port or fragment.
The parsing itself is done by URLStreamHandler's parseURL method, which extracts each part according to the structure above and initializes the URL object with the values; for the details, read the parseURL source yourself. A quick way to see the result is to print the components back out of the URL object, as the sketch below does.
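A small sketch (plain Java, reusing the example URL above) that prints the parsed components:
import java.net.URL;
public class UrlParts {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.jianshu.com/recommendations/notes"
                + "?category_id=56&utm_medium=index-banner-s&utm_source=desktop");
        System.out.println(url.getProtocol());  // https
        System.out.println(url.getAuthority()); // www.jianshu.com
        System.out.println(url.getHost());      // www.jianshu.com
        System.out.println(url.getPort());      // -1, no explicit port (getDefaultPort() would return 443)
        System.out.println(url.getPath());      // /recommendations/notes
        System.out.println(url.getQuery());     // category_id=56&utm_medium=index-banner-s&utm_source=desktop
        System.out.println(url.getRef());       // null, no fragment
    }
}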
3.2 Creating the HttpURLConnection instance
Next, step 5: calling URL's openConnection method:
public URLConnection openConnection() throws java.io.IOException {
return handler.openConnection(this);
}
First, HttpHandler's openConnection method, i.e. step 6:
@Override protected URLConnection openConnection(URL url) throws IOException {
return newOkUrlFactory(null /* proxy */).open(url);
}
Then HttpHandler's newOkUrlFactory method, i.e. step 7:
// CLEARTEXT_ONLY means no TLS, i.e. plaintext transport when the scheme in the URL string is http
private final static List<ConnectionSpec> CLEARTEXT_ONLY =
Collections.singletonList(ConnectionSpec.CLEARTEXT);
private static final CleartextURLFilter CLEARTEXT_FILTER = new CleartextURLFilter();
private final ConfigAwareConnectionPool configAwareConnectionPool =
ConfigAwareConnectionPool.getInstance();
// the default port for http is 80
@Override protected int getDefaultPort() {
return 80;
}
protected OkUrlFactory newOkUrlFactory(Proxy proxy) {
OkUrlFactory okUrlFactory = createHttpOkUrlFactory(proxy);
// For HttpURLConnections created through java.net.URL Android uses a connection pool that
// is aware when the default network changes so that pooled connections are not re-used when
// the default network changes.
okUrlFactory.client().setConnectionPool(configAwareConnectionPool.get());
return okUrlFactory;
}
/**
* Creates an OkHttpClient suitable for creating {@link java.net.HttpURLConnection} instances on
* Android.
*/
// Visible for android.net.Network.
public static OkUrlFactory createHttpOkUrlFactory(Proxy proxy) {
OkHttpClient client = new OkHttpClient();
// Explicitly set the timeouts to infinity.
client.setConnectTimeout(0, TimeUnit.MILLISECONDS);
client.setReadTimeout(0, TimeUnit.MILLISECONDS);
client.setWriteTimeout(0, TimeUnit.MILLISECONDS);
// Set the default (same protocol) redirect behavior. The default can be overridden for
// each instance using HttpURLConnection.setInstanceFollowRedirects().
client.setFollowRedirects(HttpURLConnection.getFollowRedirects());
// Do not permit http -> https and https -> http redirects.
client.setFollowSslRedirects(false);
// only cleartext transport is allowed (this applies when the scheme in the URL string is http, not https)
client.setConnectionSpecs(CLEARTEXT_ONLY);
// When we do not set the Proxy explicitly OkHttp picks up a ProxySelector using
// ProxySelector.getDefault().
if (proxy != null) {
client.setProxy(proxy);
}
// OkHttp requires that we explicitly set the response cache.
OkUrlFactory okUrlFactory = new OkUrlFactory(client);
// Use the installed NetworkSecurityPolicy to determine which requests are permitted over
// http.
OkUrlFactories.setUrlFilter(okUrlFactory, CLEARTEXT_FILTER);
ResponseCache responseCache = ResponseCache.getDefault();
if (responseCache != null) {
AndroidInternal.setResponseCache(okUrlFactory, responseCache);
}
return okUrlFactory;
}
private static final class CleartextURLFilter implements URLFilter {
@Override
public void checkURLPermitted(URL url) throws IOException {
String host = url.getHost();
if (!NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(host)) {
throw new IOException("Cleartext HTTP traffic to " + host + " not permitted");
}
}
}
From the code above, newOkUrlFactory mainly does the following:
1> Call createHttpOkUrlFactory to create the OkUrlFactory instance:
createHttpOkUrlFactory first creates an OkHttpClient instance and configures a few parameters on it:
Read/write timeouts: 0, meaning infinite, i.e. no timeout; the read timeout can be changed with HttpURLConnection's setReadTimeout.
Connect timeout: 0, meaning infinite, i.e. no timeout; it can be changed with HttpURLConnection's setConnectTimeout.
ConnectionSpecs: CLEARTEXT_ONLY, meaning no TLS, i.e. plaintext transport.
It then creates the OkUrlFactory instance from that OkHttpClient, sets the OkUrlFactory's urlFilter field to CLEARTEXT_FILTER (a CleartextURLFilter, which decides whether cleartext communication with a given host is permitted), and finally returns the OkUrlFactory instance.
2> Set the ConnectionPool on the OkHttpClient and return the OkUrlFactory instance; see 2.4.1 for a detailed discussion of ConnectionPool.
3> Then OkUrlFactory's open method is called, i.e. step 8:
HttpURLConnection open(URL url, Proxy proxy) {
String protocol = url.getProtocol();
OkHttpClient copy = client.copyWithDefaults();
copy.setProxy(proxy);
if (protocol.equals("http")) return new HttpURLConnectionImpl(url, copy, urlFilter);
if (protocol.equals("https")) return new HttpsURLConnectionImpl(url, copy, urlFilter);
throw new IllegalArgumentException("Unexpected protocol: " + protocol);
}
That completes HttpHandler's openConnection. Next comes HttpsHandler, which overrides HttpHandler's newOkUrlFactory and provides its own createHttpsOkUrlFactory:
/**
* The connection spec to use when connecting to an https:// server. Note that Android does
* not set the cipher suites or TLS versions to use so the socket's defaults will be used
* instead. When the SSLSocketFactory is provided by the app or GMS core we will not
* override the enabled ciphers or TLS versions set on the sockets it produces with a
* list hardcoded at release time. This is deliberate.
*/
private static final ConnectionSpec TLS_CONNECTION_SPEC = ConnectionSpecs.builder(true)
.allEnabledCipherSuites()
.allEnabledTlsVersions()
.supportsTlsExtensions(true)
.build();
// the list of application-layer protocols the client's ClientHello offers when negotiating via ALPN during the TLS handshake
private static final List<Protocol> HTTP_1_1_ONLY =
Collections.singletonList(Protocol.HTTP_1_1);
private final ConfigAwareConnectionPool configAwareConnectionPool =
ConfigAwareConnectionPool.getInstance();
// the default port for https is 443
@Override protected int getDefaultPort() {
return 443;
}
@Override
protected OkUrlFactory newOkUrlFactory(Proxy proxy) {
OkUrlFactory okUrlFactory = createHttpsOkUrlFactory(proxy);
// For HttpsURLConnections created through java.net.URL Android uses a connection pool that
// is aware when the default network changes so that pooled connections are not re-used when
// the default network changes.
okUrlFactory.client().setConnectionPool(configAwareConnectionPool.get());
return okUrlFactory;
}
/**
* Creates an OkHttpClient suitable for creating {@link HttpsURLConnection} instances on
* Android.
*/
// Visible for android.net.Network.
public static OkUrlFactory createHttpsOkUrlFactory(Proxy proxy) {
// The HTTPS OkHttpClient is an HTTP OkHttpClient with extra configuration.
OkUrlFactory okUrlFactory = HttpHandler.createHttpOkUrlFactory(proxy);
// All HTTPS requests are allowed.
OkUrlFactories.setUrlFilter(okUrlFactory, null);
OkHttpClient okHttpClient = okUrlFactory.client();
// Only enable HTTP/1.1 (implies HTTP/1.0). Disable SPDY / HTTP/2.0.
okHttpClient.setProtocols(HTTP_1_1_ONLY);
okHttpClient.setConnectionSpecs(Collections.singletonList(TLS_CONNECTION_SPEC));
// Android support certificate pinning via NetworkSecurityConfig so there is no need to
// also expose OkHttp's mechanism. The OkHttpClient underlying https HttpsURLConnections
// in Android should therefore always use the default certificate pinner, whose set of
// {@code hostNamesToPin} is empty.
okHttpClient.setCertificatePinner(CertificatePinner.DEFAULT);
// OkHttp does not automatically honor the system-wide HostnameVerifier set with
// HttpsURLConnection.setDefaultHostnameVerifier().
okUrlFactory.client().setHostnameVerifier(HttpsURLConnection.getDefaultHostnameVerifier());
// OkHttp does not automatically honor the system-wide SSLSocketFactory set with
// HttpsURLConnection.setDefaultSSLSocketFactory().
// See https://github.com/square/okhttp/issues/184 for details.
okHttpClient.setSslSocketFactory(HttpsURLConnection.getDefaultSSLSocketFactory());
return okUrlFactory;
}
As the source above shows, HttpsHandler replaces the createHttpOkUrlFactory logic of step 7 with createHttpsOkUrlFactory:
It first calls the parent HttpHandler's createHttpOkUrlFactory to obtain an OkUrlFactory instance, then sets the OkUrlFactory's urlFilter field to null, and then configures a few parameters on the OkHttpClient instance:
ConnectionSpecs: TLS_CONNECTION_SPEC, meaning TLS is required, i.e. encrypted transport.
protocols: HTTP_1_1_ONLY, the application-layer protocol list the client's ClientHello offers when negotiating via ALPN during the TLS handshake.
sslSocketFactory: HttpsURLConnection.getDefaultSSLSocketFactory().
From this analysis, URL's openConnection does not establish a TCP connection; it only creates an HttpURLConnectionImpl or HttpsURLConnectionImpl instance.
3.3 Calling HttpURLConnection's connect method to establish the TCP connection to the server
HttpsURLConnectionImpl wraps HttpURLConnectionImpl using the decorator pattern, and it does not decorate connect at all, so we can look straight at HttpURLConnectionImpl's connect method, i.e. step 12:
@Override public final void connect() throws IOException {
initHttpEngine();
boolean success;
do {
// execute() sends the request; it returns true if the request executed successfully and false if the request can be retried,
// and it throws if the request failed permanently.
success = execute(false);
} while (!success);
}
initHttpEngine is a private method of HttpURLConnectionImpl and therefore is not decorated, so we look at it directly, i.e. step 13:
private void initHttpEngine() throws IOException {
if (httpEngineFailure != null) {
throw httpEngineFailure;
} else if (httpEngine != null) {
return;
}
connected = true;
try {
if (doOutput) {
if (method.equals("GET")) {
// they are requesting a stream to write to. This implies a POST method
method = "POST";
} else if (!HttpMethod.permitsRequestBody(method)) {
throw new ProtocolException(method + " does not support writing");
}
}
// If the user set content length to zero, we know there will not be a request body.
httpEngine = newHttpEngine(method, null, null, null);
} catch (IOException e) {
httpEngineFailure = e;
throw e;
}
}
In initHttpEngine, if doOutput is true and the request method is GET, the method is forcibly replaced with POST. Why? First, what are doInput and doOutput for?
doInput: whether the response body returned by the server may be read; defaults to true.
doOutput: whether data may be sent to the server in a request body; defaults to false.
A GET request carries no request body, while a POST request does (for which methods permit a body, see HttpMethod.permitsRequestBody), so when doOutput is true and the method is GET, the method is forcibly replaced with POST, as the small sketch below demonstrates.
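A sketch demonstrating that behaviour (it reuses the POST URL from the example above and assumes the server is reachable; the point is only the printed method, which is expected to be POST once the engine has been initialized):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class DoOutputForcesPost {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.jianshu.com/recommendations/notes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // The method is still the default "GET" here...
        conn.setDoOutput(true); // ...but asking to write a request body implies POST.
        try (OutputStream os = conn.getOutputStream()) {
            os.write("category_id=56".getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // actually sends the request and reads the status line
        // initHttpEngine() silently rewrote the method.
        System.out.println(conn.getRequestMethod()); // POST
        conn.disconnect();
    }
}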
Since newHttpEngine is also a private method of HttpURLConnectionImpl (and thus not decorated), we look at it directly:
private HttpEngine newHttpEngine(String method, StreamAllocation streamAllocation,
RetryableSink requestBody, Response priorResponse)
throws MalformedURLException, UnknownHostException {
// OkHttp's Call API requires a placeholder body; the real body will be streamed separately.
RequestBody placeholderBody = HttpMethod.requiresRequestBody(method)
? EMPTY_REQUEST_BODY
: null;
URL url = getURL();
HttpUrl httpUrl = Internal.instance.getHttpUrlChecked(url.toString());
// build the Request instance from the request's url, method and request headers
// (the headers set via HttpURLConnection's setRequestProperty)
Request.Builder builder = new Request.Builder()
.url(httpUrl)
.method(method, placeholderBody);
Headers headers = requestHeaders.build();
for (int i = 0, size = headers.size(); i < size; i++) {
builder.addHeader(headers.name(i), headers.value(i));
}
boolean bufferRequestBody = false;
if (HttpMethod.permitsRequestBody(method)) { // does this request carry a request body?
// HTTP/1.1 and later use persistent connections, so when a request carries a body the server must be told where the body ends.
// There are two ways to signal the end of the body: Content-Length and Transfer-Encoding;
// see [HTTP 协议中的 Transfer-Encoding](https://imququ.com/post/transfer-encoding-header-in-http.html) for details
if (fixedContentLength != -1) {
// HttpURLConnection's setFixedLengthStreamingMode was called, so the end of the body is signalled with Content-Length
builder.header("Content-Length", Long.toString(fixedContentLength));
} else if (chunkLength > 0) {
// HttpURLConnection's setChunkedStreamingMode was called, so the end of the body is signalled with Transfer-Encoding
builder.header("Transfer-Encoding", "chunked");
} else {
// neither body-end mode was set
bufferRequestBody = true;
}
// Add a content type for the request body, if one isn't already present.
if (headers.get("Content-Type") == null) {
builder.header("Content-Type", "application/x-www-form-urlencoded");
}
}
if (headers.get("User-Agent") == null) {
builder.header("User-Agent", defaultUserAgent());
}
Request request = builder.build();
// If we're currently not using caches, make sure the engine's client doesn't have one.
OkHttpClient engineClient = client;
if (Internal.instance.internalCache(engineClient) != null && !getUseCaches()) {
engineClient = client.clone().setCache(null);
}
return new HttpEngine(engineClient, request, bufferRequestBody, true, false, streamAllocation,
requestBody, priorResponse);
}
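The two explicit body-end modes referenced in the comments above are selected from application code with setFixedLengthStreamingMode and setChunkedStreamingMode; if neither is called, the body is buffered in memory first (the bufferRequestBody branch). A sketch of both modes (URL and body contents are placeholders):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class StreamingModes {
    static void fixedLength(URL url, byte[] body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Length known up front -> a Content-Length header; the body is streamed straight to the socket.
        conn.setFixedLengthStreamingMode(body.length);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        conn.getResponseCode(); // sends the request and reads the status
    }
    static void chunked(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Length unknown -> Transfer-Encoding: chunked; 0 lets the implementation pick a chunk size.
        conn.setChunkedStreamingMode(0);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("streamed data".getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode();
    }
}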
Next, HttpEngine's constructor and the related code:
public HttpEngine(OkHttpClient client, Request request, boolean bufferRequestBody,
boolean callerWritesRequestBody, boolean forWebSocket, StreamAllocation streamAllocation,
RetryableSink requestBodyOut, Response priorResponse) {
this.client = client;
this.userRequest = request;
this.bufferRequestBody = bufferRequestBody;
this.callerWritesRequestBody = callerWritesRequestBody;
this.forWebSocket = forWebSocket;
this.streamAllocation = streamAllocation != null
? streamAllocation
: new StreamAllocation(client.getConnectionPool(), createAddress(client, request));
this.requestBodyOut = requestBodyOut;
this.priorResponse = priorResponse;
}
private static Address createAddress(OkHttpClient client, Request request) {
SSLSocketFactory sslSocketFactory = null;
HostnameVerifier hostnameVerifier = null;
CertificatePinner certificatePinner = null;
if (request.isHttps()) {
// sslSocketFactory is only non-null when the scheme is https
sslSocketFactory = client.getSslSocketFactory();
hostnameVerifier = client.getHostnameVerifier();
certificatePinner = client.getCertificatePinner();
}
return new Address(request.httpUrl().host(), request.httpUrl().port(), client.getDns(),
client.getSocketFactory(), sslSocketFactory, hostnameVerifier, certificatePinner,
client.getAuthenticator(), client.getProxy(), client.getProtocols(),
client.getConnectionSpecs(), client.getProxySelector());
}
Back to HttpURLConnectionImpl's connect method: since execute is a private method of HttpURLConnectionImpl (and thus not decorated), we look at it directly, i.e. step 14:
private boolean execute(boolean readResponse) throws IOException {
boolean releaseConnection = true;
// as step 7 showed, when the scheme is http, urlFilter is a CleartextURLFilter instance;
// CleartextURLFilter decides whether cleartext communication with the given host is permitted.
if (urlFilter != null) {
urlFilter.checkURLPermitted(httpEngine.getRequest().url());
}
try {
// send the request
httpEngine.sendRequest();
Connection connection = httpEngine.getConnection();
if (connection != null) {
route = connection.getRoute();
handshake = connection.getHandshake();
} else {
route = null;
handshake = null;
}
if (readResponse) {
// read the response
httpEngine.readResponse();
}
releaseConnection = false;
return true;
}
......
}
Next, HttpEngine's sendRequest method, i.e. step 15:
public void sendRequest() throws RequestException, RouteException, IOException {
if (cacheStrategy != null) return; // Already sent.
if (httpStream != null) throw new IllegalStateException();
// add the default request headers
Request request = networkRequest(userRequest);
// look up the cached response for this request; when HttpURLConnection's setUseCaches
// has set the useCaches field (true by default) to false, responseCache is null
InternalCache responseCache = Internal.instance.internalCache(client);
Response cacheCandidate = responseCache != null
? responseCache.get(request)
: null;
long now = System.currentTimeMillis();
// given the request and its cached response, apply the caching strategy to decide which request and response satisfy it
cacheStrategy = new CacheStrategy.Factory(now, request, cacheCandidate).get();
// when the resulting request is non-null (i.e. networkRequest != null), the cached response is missing or stale
// and must be fetched from the server; otherwise the cached response is used
networkRequest = cacheStrategy.networkRequest;
cacheResponse = cacheStrategy.cacheResponse;
if (responseCache != null) {
responseCache.trackResponse(cacheStrategy);
}
if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}
// networkRequest being non-null means the cached response for this request is missing or stale and must be fetched from the server again
if (networkRequest != null) {
// establish the Socket connection
httpStream = connect();
httpStream.setHttpEngine(this);
// If the caller's control flow writes the request body, we need to create that stream
// immediately. And that means we need to immediately write the request headers, so we can
// start streaming the request body. (We may already have a request body if we're retrying a
// failed POST.)
if (callerWritesRequestBody && permitsRequestBody(networkRequest) && requestBodyOut == null) {
long contentLength = OkHeaders.contentLength(request);
//as noted in step 13, bufferRequestBody is true when there is a request body but no body-end mode was set
if (bufferRequestBody) {
if (contentLength > Integer.MAX_VALUE) {
throw new IllegalStateException("Use setFixedLengthStreamingMode() or "
+ "setChunkedStreamingMode() for requests larger than 2 GiB.");
}
if (contentLength != -1) {
// the body length is known (the request headers contain Content-Length); this only happens when Content-Length was set via HttpURLConnection's setRequestProperty.
// writeRequestHeaders eventually writes the request headers into a RealBufferedSink instance, namely the sink field mentioned in step 21 below.
httpStream.writeRequestHeaders(networkRequest);
// create a RetryableSink instance as requestBodyOut; RetryableSink holds a Buffer field
// that caches the request body, so writing the body through this requestBodyOut merely buffers it in memory
requestBodyOut = new RetryableSink((int) contentLength);
} else {
// the body length is unknown (no valid Content-Length header), so the request headers cannot be written until the whole body is ready.
requestBodyOut = new RetryableSink();
}
} else {
// there is a request body and its end is signalled by either Content-Length or Transfer-Encoding (the request headers contain one of the two);
// writeRequestHeaders eventually writes the request headers into a RealBufferedSink instance, namely the sink field mentioned in step 21 below.
httpStream.writeRequestHeaders(networkRequest);
// createRequestBody creates a different kind of Sink depending on whether the request headers carry Content-Length or Transfer-Encoding;
// request body data written through that Sink eventually ends up in the RealBufferedSink instance, namely the sink field mentioned in step 21 below.
requestBodyOut = httpStream.createRequestBody(networkRequest, contentLength);
}
}
} else { // use the cached response for this request
if (cacheResponse != null) {
// We have a valid cached response. Promote it to the user response immediately.
this.userResponse = cacheResponse.newBuilder()
.request(userRequest)
.priorResponse(stripBody(priorResponse))
.cacheResponse(stripBody(cacheResponse))
.build();
} else {
// We're forbidden from using the network, and the cache is insufficient.
this.userResponse = new Response.Builder()
.request(userRequest)
.priorResponse(stripBody(priorResponse))
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(EMPTY_BODY)
.build();
}
// when the response headers contain Content-Encoding: gzip, unzip() decompresses the body with a GzipSource.
userResponse = unzip(userResponse);
}
}
private Request networkRequest(Request request) throws IOException {
Request.Builder result = request.newBuilder();
if (request.header("Host") == null) {
result.header("Host", Util.hostHeader(request.httpUrl(), false));
}
// tell the server to keep the connection alive
if (request.header("Connection") == null) {
result.header("Connection", "Keep-Alive");
}
// tell the server which compression the client accepts
if (request.header("Accept-Encoding") == null) {
transparentGzip = true;
result.header("Accept-Encoding", "gzip");
}
CookieHandler cookieHandler = client.getCookieHandler();
if (cookieHandler != null) {
// Capture the request headers added so far so that they can be offered to the CookieHandler.
// This is mostly to stay close to the RI; it is unlikely any of the headers above would
// affect cookie choice besides "Host".
Map<String, List<String>> headers = OkHeaders.toMultimap(result.build().headers(), null);
Map<String, List<String>> cookies = cookieHandler.get(request.uri(), headers);
// Add any new cookies to the request.
OkHeaders.addCookies(result, cookies);
}
if (request.header("User-Agent") == null) {
result.header("User-Agent", Version.userAgent());
}
return result.build();
}
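One practical consequence of networkRequest(): Accept-Encoding: gzip is only added for you, and the response only transparently decompressed, when the application has not set that header itself (the transparentGzip flag above). If the application does set it, the raw bytes are returned and must be decompressed manually; a sketch:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;
public class ManualGzip {
    static InputStream openBody(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Setting the header ourselves disables transparent decompression.
        conn.setRequestProperty("Accept-Encoding", "gzip");
        InputStream body = conn.getInputStream();
        // So the body must be unzipped manually when the server actually compressed it.
        if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
            body = new GZIPInputStream(body);
        }
        return body;
    }
}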
The code above mentioned the CacheStrategy class, which implements the caching strategy of 2.3; let's walk through how CacheStrategy is used:
public Factory(long nowMillis, Request request, Response cacheResponse) {
this.nowMillis = nowMillis;
this.request = request;
this.cacheResponse = cacheResponse;
if (cacheResponse != null) {
// when a cached response exists for the request, record some of its header values; they are used later to decide whether the cached response is still fresh
Headers headers = cacheResponse.headers();
for (int i = 0, size = headers.size(); i < size; i++) {
String fieldName = headers.name(i);
String value = headers.value(i);
if ("Date".equalsIgnoreCase(fieldName)) {
// Date is the date and time at which the server generated the cached response, e.g. Date: Tue, 15 Nov 1994 08:12:31 GMT
servedDate = HttpDate.parse(value);
servedDateString = value;
} else if ("Expires".equalsIgnoreCase(fieldName)) {
// Expires is a date and time after which the response is considered stale, e.g. Expires: Thu, 01 Dec 1994 16:00:00 GMT
expires = HttpDate.parse(value);
} else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
// Last-Modified is the last modification date of the cached response, e.g. Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT
lastModified = HttpDate.parse(value);
lastModifiedString = value;
} else if ("ETag".equalsIgnoreCase(fieldName)) {
// ETag is an identifier for a specific version of the cached response, e.g. ETag: "737060cd8c284d8af7ad3082f209582d"
etag = value;
} else if ("Age".equalsIgnoreCase(fieldName)) {
// Age is how long the response has been held in a cache, in seconds
ageSeconds = HeaderParser.parseSeconds(value, -1);
} else if (OkHeaders.SENT_MILLIS.equalsIgnoreCase(fieldName)) {
// SENT_MILLIS is a header OkHttp adds to record when the request headers were sent
sentRequestMillis = Long.parseLong(value);
} else if (OkHeaders.RECEIVED_MILLIS.equalsIgnoreCase(fieldName)) {
// RECEIVED_MILLIS is a header OkHttp adds to record when the client received the response
receivedResponseMillis = Long.parseLong(value);
}
}
}
}
/**
* Returns a strategy to satisfy {@code request} using the a cached response
* {@code response}.
*/
public CacheStrategy get() {
CacheStrategy candidate = getCandidate();
if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
// We're forbidden from using the network and the cache is insufficient.
return new CacheStrategy(null, null);
}
return candidate;
}
/** Returns a strategy to use assuming the request can use the network. */
private CacheStrategy getCandidate() {
if (cacheResponse == null) {
// there is no cached response for this request; this is the 'no' branch of step 1 in 2.3.
return new CacheStrategy(request, null);
}
// Drop the cached response if it's missing a required handshake.
if (request.isHttps() && cacheResponse.handshake() == null) {
return new CacheStrategy(request, null);
}
// reaching this point means we are in the 'yes' branch of step 1 in 2.3.
// isCacheable decides whether the response to this request is allowed to be cached; the condition below holds when caching is not allowed,
// which is the 'no' branch of step 2 in 2.3.
if (!isCacheable(cacheResponse, request)) {
return new CacheStrategy(request, null);
}
CacheControl requestCaching = request.cacheControl();
if (requestCaching.noCache() || hasConditions(request)) {
return new CacheStrategy(request, null);
}
long ageMillis = cacheResponseAge();
long freshMillis = computeFreshnessLifetime();
if (requestCaching.maxAgeSeconds() != -1) {
freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
}
long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}
long maxStaleMillis = 0;
CacheControl responseCaching = cacheResponse.cacheControl();
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}
// reaching this point means we are in the 'yes' branch of step 2 in 2.3.
// if the condition below holds, the cached response is still fresh, i.e. it has not expired,
// which is the 'yes' branch of step 3 in 2.3.
if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
Response.Builder builder = cacheResponse.newBuilder();
if (ageMillis + minFreshMillis >= freshMillis) {
builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
}
long oneDayMillis = 24 * 60 * 60 * 1000L;
if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
}
return new CacheStrategy(null, builder.build());
}
Request.Builder conditionalRequestBuilder = request.newBuilder();
// reaching this point means the cached response has expired, i.e. the 'no' branch of step 3 in 2.3;
// it must now be validated with the server, and how it is validated depends on the cached response itself; there are three cases
if (etag != null) {
// if the cached response carries an ETag, validate it with the If-None-Match request header
conditionalRequestBuilder.header("If-None-Match", etag);
} else if (lastModified != null) {
// if the cached response carries Last-Modified, validate it with the If-Modified-Since request header
conditionalRequestBuilder.header("If-Modified-Since", lastModifiedString);
} else if (servedDate != null) {
// if the cached response carries Date, validate it with the If-Modified-Since request header
conditionalRequestBuilder.header("If-Modified-Since", servedDateString);
}
Request conditionalRequest = conditionalRequestBuilder.build();
return hasConditions(conditionalRequest)
? new CacheStrategy(conditionalRequest, cacheResponse)
: new CacheStrategy(conditionalRequest, null);
}
For the meaning of the individual request and response header fields, see the list of HTTP header fields. To see the validation headers above in action, the sketch below issues a conditional request by hand.
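A sketch of such a conditional GET issued manually (the URL is the one from the example and the ETag value is only a placeholder copied from the sample header above):
import java.net.HttpURLConnection;
import java.net.URL;
public class ConditionalGet {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.jianshu.com/recommendations/notes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Normally this value comes from the ETag header of a previously cached response.
        conn.setRequestProperty("If-None-Match", "\"737060cd8c284d8af7ad3082f209582d\"");
        int code = conn.getResponseCode();
        if (code == HttpURLConnection.HTTP_NOT_MODIFIED) {
            System.out.println("304: the cached copy is still valid, no body was sent");
        } else {
            System.out.println("Resource changed, a fresh body was returned: " + code);
        }
        conn.disconnect();
    }
}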
Next, HttpEngine's connect method, i.e. step 16:
private HttpStream connect() throws RouteException, RequestException, IOException {
boolean doExtensiveHealthChecks = !networkRequest.method().equals("GET");
return streamAllocation.newStream(client.getConnectTimeout(),
client.getReadTimeout(), client.getWriteTimeout(),
client.getRetryOnConnectionFailure(), doExtensiveHealthChecks);
}
Next, StreamAllocation's newStream method, i.e. step 17:
public HttpStream newStream(int connectTimeout, int readTimeout, int writeTimeout,
boolean connectionRetryEnabled, boolean doExtensiveHealthChecks)
throws RouteException, IOException {
try {
// find a healthy connection
RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
writeTimeout, connectionRetryEnabled, doExtensiveHealthChecks);
// set up an HttpStream on the connection that was found; the HttpStream is what sends the request and receives the response
HttpStream resultStream;
if (resultConnection.framedConnection != null) { // the HTTP/2 case
resultStream = new Http2xStream(this, resultConnection.framedConnection);
} else { // the HTTP/1.x case
resultConnection.getSocket().setSoTimeout(readTimeout);
resultConnection.source.timeout().timeout(readTimeout, MILLISECONDS);
resultConnection.sink.timeout().timeout(writeTimeout, MILLISECONDS);
resultStream = new Http1xStream(this, resultConnection.source, resultConnection.sink);
}
synchronized (connectionPool) {
resultConnection.streamCount++;
stream = resultStream;
return resultStream;
}
} catch (IOException e) {
throw new RouteException(e);
}
}
Next, StreamAllocation's findHealthyConnection method, i.e. step 18:
/**
* Loops looking for a healthy connection until one is found.
*/
private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
int writeTimeout, boolean connectionRetryEnabled, boolean doExtensiveHealthChecks)
throws IOException, RouteException {
while (true) {
RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
connectionRetryEnabled);
// if candidate is a brand-new connection (i.e. it has served zero Streams), skip the health check below and return it directly.
synchronized (connectionPool) {
if (candidate.streamCount == 0) {
return candidate;
}
}
// Otherwise do a potentially-slow check to confirm that the pooled connection is still good.
if (candidate.isHealthy(doExtensiveHealthChecks)) {
return candidate;
}
connectionFailed();
}
}
Next, StreamAllocation's findConnection method, i.e. step 19:
/**
* If no reusable connection is found, a new one is created: the new connection is first put into the connection pool,
* then used to establish the Socket connection to the server, and finally returned.
*/
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
boolean connectionRetryEnabled) throws IOException, RouteException {
synchronized (connectionPool) {
if (released) throw new IllegalStateException("released");
if (stream != null) throw new IllegalStateException("stream != null");
if (canceled) throw new IOException("Canceled");
RealConnection allocatedConnection = this.connection;
if (allocatedConnection != null && !allocatedConnection.noNewStreams) {
return allocatedConnection;
}
// look in the connection pool: if a connection was previously created for the same Address (as long as the scheme, host and port
// of the URL string match, the Address is generally equal; see Address.equals) and that connection has not reached its Stream limit (1 for HTTP/1.x, 4 for HTTP/2),
// then a reusable connection has been found
RealConnection pooledConnection = Internal.instance.get(connectionPool, address, this);
if (pooledConnection != null) {
// a reusable connection was found in the pool; return it directly
this.connection = pooledConnection;
return pooledConnection;
}
if (routeSelector == null) {
// RouteSelector's constructor calls resetNextProxy, which obtains the system default ProxySelector
// and calls ProxySelector.select (with the address's url as argument) to get the proxy list for the http or https
// protocol; by default the list is empty, and a proxy can be configured as described at http://www.blogs8.cn/posts/EU5L296.
// The proxy list is then stored in the proxies field, and a Proxy.NO_PROXY entry is appended at the end;
// Proxy.NO_PROXY has type Type.DIRECT, i.e. a direct connection with no proxy, which is also the default.
routeSelector = new RouteSelector(address, routeDatabase());
}
}
// RouteSelector's next method first takes the first proxy in proxies (when no proxy is configured,
// that is Proxy.NO_PROXY) and then creates the Route instance from it
Route route = routeSelector.next();
RealConnection newConnection = new RealConnection(route);
acquire(newConnection);
// put the newly created connection into the pool for later reuse
synchronized (connectionPool) {
Internal.instance.put(connectionPool, newConnection);
this.connection = newConnection;
if (canceled) throw new IOException("Canceled");
}
// use the newly created connection to establish the Socket connection to the server
newConnection.connect(connectTimeout, readTimeout, writeTimeout, address.getConnectionSpecs(),
connectionRetryEnabled);
routeDatabase().connected(newConnection.getRoute());
return newConnection;
}
Next, RealConnection's connect method, i.e. step 20:
public void connect(int connectTimeout, int readTimeout, int writeTimeout,
List<ConnectionSpec> connectionSpecs, boolean connectionRetryEnabled) throws RouteException {
if (protocol != null) throw new IllegalStateException("already connected");
RouteException routeException = null;
ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);
Proxy proxy = route.getProxy();
Address address = route.getAddress();
// if getSslSocketFactory() is null, the scheme is http and cleartext transport will be used,
// so the check below throws if connectionSpecs does not contain ConnectionSpec.CLEARTEXT.
if (route.getAddress().getSslSocketFactory() == null
&& !connectionSpecs.contains(ConnectionSpec.CLEARTEXT)) {
throw new RouteException(new UnknownServiceException(
"CLEARTEXT communication not supported: " + connectionSpecs));
}
// protocol becomes non-null in two cases:
// 1> the scheme is http: once the Socket connection is established, protocol is set to Protocol.HTTP_1_1
// 2> the scheme is https: once the Socket connection is established and the TLS handshake succeeds, protocol is set to Protocol.HTTP_1_1
// So in HttpURLConnection the protocol version is HTTP/1.1 whether the scheme is https or http;
// step 21 explains why in detail
while (protocol == null) {
try {
// as the comments in findConnection noted, no proxy is configured by default, so proxy.type() == Proxy.Type.DIRECT holds,
// and a Socket instance is therefore created via address.getSocketFactory().createSocket()
rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
? address.getSocketFactory().createSocket()
: new Socket(proxy);
// use rawSocket to initiate the TCP connection to the server at the given host and port
connectSocket(connectTimeout, readTimeout, writeTimeout, connectionSpecSelector);
} catch (IOException e) {
Util.closeQuietly(socket);
Util.closeQuietly(rawSocket);
socket = null;
rawSocket = null;
source = null;
sink = null;
handshake = null;
protocol = null;
if (routeException == null) {
routeException = new RouteException(e);
} else {
routeException.addConnectException(e);
}
if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
throw routeException;
}
}
}
}
Next, RealConnection's connectSocket method, i.e. step 21:
/** Does all the work necessary to build a full HTTP or HTTPS connection on a raw socket. */
private void connectSocket(int connectTimeout, int readTimeout, int writeTimeout,
ConnectionSpecSelector connectionSpecSelector) throws IOException {
rawSocket.setSoTimeout(readTimeout);
try {
// use rawSocket to establish the TCP connection to the server at the given host and port
Platform.get().connectSocket(rawSocket, route.getSocketAddress(), connectTimeout);
} catch (ConnectException e) {
throw new ConnectException("Failed to connect to " + route.getSocketAddress());
}
// wrap rawSocket's InputStream in a RealBufferedSource instance and its OutputStream in a RealBufferedSink instance;
// these two are what read the response and send the request; see https://www.jianshu.com/p/0bc80063afb3 for how they work
source = Okio.buffer(Okio.source(rawSocket));
sink = Okio.buffer(Okio.sink(rawSocket));
// the condition below only holds when the scheme is https
if (route.getAddress().getSslSocketFactory() != null) {
// perform the TLS handshake
connectTls(readTimeout, writeTimeout, connectionSpecSelector);
} else { // when the scheme is http, set the protocol version to HTTP/1.1
protocol = Protocol.HTTP_1_1;
socket = rawSocket;
}
if (protocol == Protocol.SPDY_3 || protocol == Protocol.HTTP_2) {
socket.setSoTimeout(0); // Framed connection timeouts are set per-stream.
// framedConnection is only initialized when the protocol is HTTP/2 (or SPDY); it deals with the Frame structure of HTTP/2.
FramedConnection framedConnection = new FramedConnection.Builder(true)
.socket(socket, route.getAddress().url().host(), source, sink)
.protocol(protocol)
.build();
framedConnection.sendConnectionPreface();
// Only assign the framed connection once the preface has been sent successfully.
this.framedConnection = framedConnection;
}
}
private void connectTls(int readTimeout, int writeTimeout,
ConnectionSpecSelector connectionSpecSelector) throws IOException {
if (route.requiresTunnel()) {
createTunnel(readTimeout, writeTimeout);
}
Address address = route.getAddress();
SSLSocketFactory sslSocketFactory = address.getSslSocketFactory();
boolean success = false;
SSLSocket sslSocket = null;
try {
// create an SSLSocket on top of rawSocket; the SSLSocket encrypts outgoing requests and decrypts incoming responses
sslSocket = (SSLSocket) sslSocketFactory.createSocket(
rawSocket, address.getUriHost(), address.getUriPort(), true /* autoClose */);
// configure ciphers, TLS versions and extensions on sslSocket in preparation for the TLS handshake
ConnectionSpec connectionSpec = connectionSpecSelector.configureSecureSocket(sslSocket);
// reaching this point means the scheme must be https, so connectionSpec is
// HttpsHandler.TLS_CONNECTION_SPEC (see step 7 for why), and the condition below always holds
if (connectionSpec.supportsTlsExtensions()) {
// the third argument below is the protocol list offered via ALPN during the TLS handshake to negotiate the application-layer protocol;
// since the scheme is https, address.getProtocols() is HTTP_1_1_ONLY (see step 7 for why),
// i.e. only HTTP/1.1 is offered
Platform.get().configureTlsExtensions(
sslSocket, address.getUriHost(), address.getProtocols());
}
// Force handshake. This can throw!
sslSocket.startHandshake();
Handshake unverifiedHandshake = Handshake.get(sslSocket.getSession());
// Verify that the socket's certificates are acceptable for the target host.
if (!address.getHostnameVerifier().verify(address.getUriHost(), sslSocket.getSession())) {
X509Certificate cert = (X509Certificate) unverifiedHandshake.peerCertificates().get(0);
throw new SSLPeerUnverifiedException("Hostname " + address.getUriHost() + " not verified:"
+ "\n certificate: " + CertificatePinner.pin(cert)
+ "\n DN: " + cert.getSubjectDN().getName()
+ "\n subjectAltNames: " + OkHostnameVerifier.allSubjectAltNames(cert));
}
// Check that the certificate pinner is satisfied by the certificates presented.
if (address.getCertificatePinner() != CertificatePinner.DEFAULT) {
TrustRootIndex trustRootIndex = trustRootIndex(address.getSslSocketFactory());
List<Certificate> certificates = new CertificateChainCleaner(trustRootIndex)
.clean(unverifiedHandshake.peerCertificates());
address.getCertificatePinner().check(address.getUriHost(), certificates);
}
// the code above is the TLS handshake itself (worth studying on its own); reaching this point means the handshake succeeded,
// so the application-layer protocol name negotiated via ALPN is recorded
String maybeProtocol = connectionSpec.supportsTlsExtensions()
? Platform.get().getSelectedProtocol(sslSocket)
: null;
socket = sslSocket;
// update source and sink so that they wrap sslSocket's input and output streams.
source = Okio.buffer(Okio.source(socket));
sink = Okio.buffer(Okio.sink(socket));
handshake = unverifiedHandshake;
protocol = maybeProtocol != null
? Protocol.get(maybeProtocol)
: Protocol.HTTP_1_1;
success = true;
} catch (AssertionError e) {
if (Util.isAndroidGetsocknameError(e)) throw new IOException(e);
throw e;
} finally {
if (sslSocket != null) {
Platform.get().afterHandshake(sslSocket);
}
if (!success) {
closeQuietly(sslSocket);
}
}
}
3.4 Writing the request body when there is one
HttpsURLConnectionImpl does not decorate HttpURLConnectionImpl's getOutputStream at all, so we look at HttpURLConnectionImpl's getOutputStream directly, i.e. step 25:
@Override public final OutputStream getOutputStream() throws IOException {
connect();
// data written through the OutputStream first goes into the sink instance below, which is created from requestBodyOut of step 15.
// As step 15 showed, when the OutputStream is backed by a RetryableSink, data written through this sink is kept in memory;
// otherwise, for HTTP/1.x the data is written into the sink field mentioned in step 21, and for HTTP/2 the data is written into that sink field one Frame (16 KB by default) at a time.
BufferedSink sink = httpEngine.getBufferedRequestBody();
if (sink == null) {
throw new ProtocolException("method does not support a request body: " + method);
} else if (httpEngine.hasResponse()) {
throw new ProtocolException("cannot write request body after response has been read");
}
return sink.outputStream();
}
Next, HttpEngine's getBufferedRequestBody method, i.e. step 26:
public BufferedSink getBufferedRequestBody() {
BufferedSink result = bufferedRequestBody;
if (result != null) return result;
Sink requestBody = getRequestBody();
return requestBody != null
? (bufferedRequestBody = Okio.buffer(requestBody))
: null;
}
/** Returns the request body or null if this request doesn't have a body. */
public Sink getRequestBody() {
if (cacheStrategy == null) throw new IllegalStateException();
// requestBodyOut is the requestBodyOut mentioned in step 15
return requestBodyOut;
}
3.5 Reading the response from the server
HttpsURLConnectionImpl does not decorate HttpURLConnectionImpl's getInputStream at all, so we look at HttpURLConnectionImpl's getInputStream directly, i.e. step 29:
@Override public final InputStream getInputStream() throws IOException {
if (!doInput) {
throw new ProtocolException("This protocol does not support input");
}
HttpEngine response = getResponse();
// if the requested file does not exist, throw an exception formerly the
// Error page from the server was returned if the requested file was
// text/html this has changed to return FileNotFoundException for all
// file types
if (getResponseCode() >= HTTP_BAD_REQUEST) {
throw new FileNotFoundException(url.toString());
}
return response.getResponse().body().byteStream();
}
Since getResponse is a private method of HttpURLConnectionImpl (and thus not decorated), we look at it directly, i.e. step 30:
private HttpEngine getResponse() throws IOException {
initHttpEngine();
if (httpEngine.hasResponse()) {
return httpEngine;
}
while (true) {
if (!execute(true)) {
continue;
}
......
}
}
Again, execute is a private method of HttpURLConnectionImpl (and thus not decorated), so we look at it directly, i.e. step 31:
private boolean execute(boolean readResponse) throws IOException {
......
if (readResponse) {
httpEngine.readResponse();
}
......
}
Next, HttpEngine's readResponse method, i.e. step 32:
public void readResponse() throws IOException {
if (userResponse != null) {
return; // Already ready.
}
if (networkRequest == null && cacheResponse == null) {
throw new IllegalStateException("call sendRequest() first!");
}
if (networkRequest == null) {
return; // No network response to read.
}
Response networkResponse;
if (forWebSocket) {
httpStream.writeRequestHeaders(networkRequest);
networkResponse = readNetworkResponse();
} else if (!callerWritesRequestBody) {
networkResponse = new NetworkInterceptorChain(0, networkRequest).proceed(networkRequest);
} else {
// as newHttpEngine in step 13 showed, forWebSocket is false and callerWritesRequestBody is true, so execution ends up here.
// Move whatever is left in bufferedRequestBody (i.e. less than one Segment of data) into requestBodyOut.
if (bufferedRequestBody != null && bufferedRequestBody.buffer().size() > 0) {
bufferedRequestBody.emit();
}
if (sentRequestMillis == -1) {
// sentRequestMillis == -1 means the request headers have not yet been written into the sink field mentioned in step 21, because the length of the request body was unknown.
if (OkHeaders.contentLength(networkRequest) == -1
&& requestBodyOut instanceof RetryableSink) {
long contentLength = ((RetryableSink) requestBodyOut).contentLength();
networkRequest = networkRequest.newBuilder()
// add the Content-Length header to tell the server the length of the request body
.header("Content-Length", Long.toString(contentLength))
.build();
}
// writeRequestHeaders eventually writes the request headers into the RealBufferedSink instance, i.e. the sink field mentioned in step 21; this corresponds to step 33
httpStream.writeRequestHeaders(networkRequest);
}
if (requestBodyOut != null) {
if (bufferedRequestBody != null) {
// This also closes the wrapped requestBodyOut.
bufferedRequestBody.close();
} else {
requestBodyOut.close();
}
if (requestBodyOut instanceof RetryableSink) {
// as step 15 showed, when requestBodyOut is a RetryableSink the request body written into it is held in memory; the call below writes that in-memory body into the sink field mentioned in step 21; this corresponds to step 34.
httpStream.writeRequestBody((RetryableSink) requestBodyOut);
}
}
// read the response from the server
networkResponse = readNetworkResponse();
}
receiveHeaders(networkResponse.headers());
// If we have a cache response too, then we're doing a conditional get.
if (cacheResponse != null) {
if (validate(cacheResponse, networkResponse)) {
// reaching this point means the stale cached response is still usable, so the cached response is used directly; this is the 'yes' branch of step 5 in 2.3
userResponse = cacheResponse.newBuilder()
.request(userRequest)
.priorResponse(stripBody(priorResponse))
// combine() refreshes the cached response's freshness information, e.g. updating the Last-Modified field; this corresponds to step 7 in 2.3
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();
releaseStreamAllocation();
// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
InternalCache responseCache = Internal.instance.internalCache(client);
responseCache.trackConditionalCacheHit();
responseCache.update(cacheResponse, stripBody(userResponse));
userResponse = unzip(userResponse);
return;
} else {
closeQuietly(cacheResponse.body());
}
}
// reaching this point means the cached response is no longer usable, so networkResponse is used; this is the 'no' branch of step 5 in 2.3
userResponse = networkResponse.newBuilder()
.request(userRequest)
.priorResponse(stripBody(priorResponse))
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
if (hasBody(userResponse)) {
// maybeCache() handles caching of the network response, corresponding to step 8 in 2.3; note that only responses with a body are cached
maybeCache();
userResponse = unzip(cacheWritingResponse(storeRequest, userResponse));
}
}
/**
* Returns true when the cached response should be used; otherwise the response returned by the server is used.
* This method decides whether a stale cached response is still usable.
*/
private static boolean validate(Response cached, Response network) {
if (network.code() == HTTP_NOT_MODIFIED) {
return true;
}
// The HTTP spec says that if the network's response is older than our
// cached response, we may return the cache's response. Like Chrome (but
// unlike Firefox), this client prefers to return the newer response.
Date lastModified = cached.headers().getDate("Last-Modified");
if (lastModified != null) {
Date networkLastModified = network.headers().getDate("Last-Modified");
if (networkLastModified != null
&& networkLastModified.getTime() < lastModified.getTime()) {
return true;
}
}
return false;
}
Next, HttpEngine's readNetworkResponse method, i.e. step 35:
private Response readNetworkResponse() throws IOException {
// all of the request data has been written into the sink field mentioned in step 21;
// finishRequest() below flushes whatever is left in that sink (i.e. less than one Segment of data) into the Socket's OutputStream.
httpStream.finishRequest();
// read the status line and the response headers through httpStream, and record OkHeaders.SENT_MILLIS and OkHeaders.RECEIVED_MILLIS in the response headers.
Response networkResponse = httpStream.readResponseHeaders()
.request(networkRequest)
.handshake(streamAllocation.connection().getHandshake())
.header(OkHeaders.SENT_MILLIS, Long.toString(sentRequestMillis))
.header(OkHeaders.RECEIVED_MILLIS, Long.toString(System.currentTimeMillis()))
.build();
if (!forWebSocket) {
// create a RealResponseBody response body instance; data read through its byteStream ultimately comes from the source field mentioned in step 21.
networkResponse = networkResponse.newBuilder()
.body(httpStream.openResponseBody(networkResponse))
.build();
}
if ("close".equalsIgnoreCase(networkResponse.request().header("Connection"))
|| "close".equalsIgnoreCase(networkResponse.header("Connection"))) {
streamAllocation.noNewStreams();
}
return networkResponse;
}
private void maybeCache() throws IOException {
InternalCache responseCache = Internal.instance.internalCache(client);
if (responseCache == null) return;
if (!CacheStrategy.isCacheable(userResponse, networkRequest)) {
// reaching this point means this response may not be cached; this is the 'no' branch of step 8 in 2.3
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
// remove the stale cached response
responseCache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
return;
}
// reaching this point means the network response may be cached; this is the 'yes' branch of step 8 in 2.3
storeRequest = responseCache.put(stripBody(userResponse));
}
That completes the source analysis of HttpURLConnection.
4 Summary
First, the figure below summarizes how HttpURLConnection operates:
1> A TCP connection to the server is established through the Socket API; a connection may stay idle for at most 5 minutes.
All established TCP connections are managed centrally in the connection pool; see 2.4.1 for the details.
2> Next, the application-layer protocol to use is determined.
When the scheme is http, HTTP/1.1 is used directly.
When the scheme is https, the application-layer protocol is negotiated with the server via ALPN (see 2.2 for the negotiation details); since only HTTP/1.1 is offered, the result is always HTTP/1.1.
3> Finally, requests are sent and responses received over the TCP connection, as shown below:
The figure is drawn for HTTP/1.1. OkHttp does not currently support pipelining, so HttpURLConnection behaves like the left side of the figure: a TCP connection serves only one request-response pair at a time, and if another request is issued before the previous response has come back, a new TCP connection is created. One practical consequence for connection reuse is sketched below.
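A practical corollary (a sketch, not taken from the source): a pooled connection can only be reused for a later request-response pair once the previous response body has been read to the end and its stream closed.
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class ReuseFriendlyRead {
    static String fetch(String spec) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(spec).openConnection();
        StringBuilder sb = new StringBuilder();
        try (InputStream is = conn.getInputStream();
             BufferedReader reader = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8))) {
            String line;
            // Draining the body and closing the stream allows the underlying connection to be reused from the pool.
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        // conn.disconnect() would instead signal that further requests to this server are unlikely in the near future.
        return sb.toString();
    }
}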
Although HttpURLConnection ends up using HTTP/1.1 by default, the figure below also illustrates how requests and responses travel over a TCP connection under HTTP/2:
A single TCP connection can serve at most 4 request-response pairs at the same time (see 2.4.1 for why). The 4 rows in the figure represent 4 parallel Streams (identified by StreamId), and each block represents a Frame (requests and responses are split into multiple Frames; see 2.1). Each Frame carries its StreamId, so when the client receives several responses at once it can use the StreamId to tell them apart.
The three steps above summarize how HttpURLConnection operates. Next, let's summarize what is done to a request before it is written to the Socket/SSLSocket. First, the figure below shows the internal structure of an HTTP request and response:
The request structure in that figure is the final format written into the Socket/SSLSocket; the next figure shows how that final format is produced:
Everything above the TLS row in the figure is processing that HttpURLConnection itself has to implement; the operations involved are:
For HTTP/1.1 (left side of the figure), the request is not transformed at all; it is written into the Socket/SSLSocket exactly in the structure shown in the request/response structure figure;
For HTTP/2 there is no request line; its contents are split into key-value pairs and placed among the request headers. The headers are first compressed (implemented in Hpack.java) into a header block, so the request-line contents are compressed as well. For more on header compression see:
HPACK: Header Compression for HTTP/2 -- the official specification dedicated to header compression
HTTP/2 头部压缩技术介绍
The header block is then split into multiple header block fragments, each carried in its own Frame; see 2.1 for the details, with the implementation in Http2.java.