Blocking/Non-blocking and Synchronous/Asynchronous
With blocking I/O, when a read request is issued the thread is suspended until the kernel has the data ready and has copied it from kernel space into the application's buffer; only when the copy completes does the read call return. The application can then parse the data in its buffer.
With a non-blocking read, the call returns immediately if the data is not ready yet. The application can keep polling the kernel until the data is ready, at which point the kernel copies it into the application buffer and the read call completes. Note that this final read, the one that actually fetches the data, is still synchronous: "synchronous" here refers to the copy from kernel space into the user buffer.
With plain non-blocking I/O we therefore have to poll the kernel over and over to ask whether the I/O is ready, which is what led to the select/poll multiplexing techniques underpinning NIO.
Strictly speaking, all of these multiplexing techniques are still synchronous calls. Why? Because synchronous versus asynchronous describes how the data is ultimately obtained: in each of the cases above, the final read call that fetches the data is synchronous. Inside that read, the kernel copies the data from kernel space into application space, and this happens synchronously within the read function itself, so if the kernel's copy is slow, the read call spends a correspondingly long time in that synchronous step.
Truly asynchronous I/O is different: after we issue aio_read, the kernel copies the data from kernel space into application space by itself. The copy is asynchronous and entirely kernel-driven, so unlike the synchronous cases above, the application never has to initiate the copy.
This is what we call AIO.
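Before diving into the Selector source, here is a minimal sketch of what "non-blocking without multiplexing" looks like with plain NIO, i.e. the busy-polling that select/poll exist to avoid. The host and port are hypothetical.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class BusyPollingRead {
    public static void main(String[] args) throws IOException, InterruptedException {
        SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 9000));
        ch.configureBlocking(false);
        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n;
        while ((n = ch.read(buf)) == 0) {   // data not ready: read() returns immediately with 0
            Thread.sleep(10);               // busy-wait; exactly what select/poll let us avoid
        }
        // once n > 0, the kernel has already copied n bytes into buf,
        // and that copy happened synchronously inside read()
        System.out.println(n > 0 ? "read " + n + " bytes" : "stream closed");
        ch.close();
    }
}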
With that background, let's walk through the JDK code, starting from Selector.open():
Selector.open();
public static Selector open() throws IOException {
return SelectorProvider.provider().openSelector();
}
provider() creates (or returns the already-created) SelectorProvider:
public static SelectorProvider provider() {
synchronized (lock) {
if (provider != null)
return provider;
return AccessController.doPrivileged(
new PrivilegedAction<>() {
public SelectorProvider run() {
if(loadProviderFromProperty())
return provider;
if(loadProviderAsService())
return provider;
provider=sun.nio.ch.DefaultSelectorProvider.create();
return provider;
}
});
}
}
This ends up in DefaultSelectorProvider.create(), which builds the SelectorProvider; on Windows the provider created is a WindowsSelectorProvider:
public static SelectorProvider create() {
return new sun.nio.ch.WindowsSelectorProvider();
}
It then calls WindowsSelectorProvider.openSelector():
public AbstractSelector openSelector() throws IOException {
return new WindowsSelectorImpl(this);
}
Let's see what this constructor does:
WindowsSelectorImpl(SelectorProvider sp) throws IOException {
super(sp);
pollWrapper = new PollArrayWrapper(INIT_CAP);
wakeupPipe = Pipe.open();
wakeupSourceFd = ((SelChImpl)wakeupPipe.source()).getFDVal();
// Disable the Nagle algorithm so that the wakeup is more immediate
SinkChannelImpl sink = (SinkChannelImpl)wakeupPipe.sink();
(sink.sc).socket().setTcpNoDelay(true);
wakeupSinkFd = ((SelChImpl)sink).getFDVal();
pollWrapper.addWakeupSocket(wakeupSourceFd, 0);
}
Here super(sp) is called; in SelectorImpl it runs:
keys = ConcurrentHashMap.newKeySet();
selectedKeys = new HashSet<>();
publicKeys = Collections.unmodifiableSet(keys);
publicSelectedKeys = Util.ungrowableSet(selectedKeys);
This initializes the key sets and stores the SelectorProvider on the Selector object.
Next, a PollArrayWrapper is created. This object is central: Java uses this native array to hand the registered (fd, events) pairs over to the C code that calls select, and the read/write events select polls all come from this array. It is a thin wrapper over native memory; its class comment reads:
Manipulates a native array of structs corresponding to (fd, events) pairs.
typedef struct pollfd {
SOCKET fd; // 4 bytes
short events; // 2 bytes
} pollfd_t;
Its fields and constructor look like this:
private AllocatedNativeObject pollArray; // The fd array
long pollArrayAddress; // pollArrayAddress
@Native private static final short FD_OFFSET = 0; // fd offset in pollfd
@Native private static final short EVENT_OFFSET = 4; // events offset in pollfd
static short SIZE_POLLFD = 8; // sizeof pollfd struct
private int size; // Size of the pollArray
PollArrayWrapper(int newSize) {
int allocationSize = newSize * SIZE_POLLFD;
pollArray = new AllocatedNativeObject(allocationSize, true);
pollArrayAddress = pollArray.address();
this.size = newSize;
}
The corresponding struct on the C side is:
typedef struct {
jint fd;
jshort events;
} pollfd;
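As a hypothetical illustration (these are not the JDK's actual helper methods), this is how the byte offsets above address the i-th (fd, events) entry inside the native array; poll0 later reads the same memory as a pollfd* array:

public class PollfdLayout {
    static final int SIZE_POLLFD = 8, FD_OFFSET = 0, EVENT_OFFSET = 4;

    // address of the fd field of entry i
    static long fdOffset(long pollArrayAddress, int i) {
        return pollArrayAddress + (long) i * SIZE_POLLFD + FD_OFFSET;
    }

    // address of the events field of entry i
    static long eventOffset(long pollArrayAddress, int i) {
        return pollArrayAddress + (long) i * SIZE_POLLFD + EVENT_OFFSET;
    }

    public static void main(String[] args) {
        long base = 0x1000;                                          // pretend this is pollArray.address()
        System.out.println(Long.toHexString(fdOffset(base, 2)));     // 1010: fd of the 3rd entry
        System.out.println(Long.toHexString(eventOffset(base, 2)));  // 1014: events of the 3rd entry
    }
}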
We will come back to this later. Next, wakeupPipe = Pipe.open(): what exactly is a Pipe? Start with its Javadoc:
/**
* A pair of channels that implements a unidirectional pipe.
* A pipe consists of a pair of channels: A writable {@link
* Pipe.SinkChannel sink} channel and a readable {@link Pipe.SourceChannel source}
* channel. Once some bytes are written to the sink channel they can be read
* from the source channel in exactly the order in which they were written.
*
* Whether or not a thread writing bytes to a pipe will block until another
* thread reads those bytes, or some previously-written bytes, from the pipe is
* system-dependent and therefore unspecified. Many pipe implementations will
* buffer up to a certain number of bytes between the sink and source channels,
* but such buffering should not be assumed.
*
*
* @author Mark Reinhold
* @author JSR-51 Expert Group
* @since 1.4
*/
So a Pipe is made up of two channels. Let's continue into Pipe.open():
public static Pipe open() throws IOException {
return SelectorProvider.provider().openPipe();
}
public Pipe openPipe() throws IOException {
return new PipeImpl(this);
}
PipeImpl(final SelectorProvider sp) throws IOException {
try {
AccessController.doPrivileged(new Initializer(sp));
} catch (PrivilegedActionException x) {
throw (IOException)x.getCause();
}
}
Here AccessController.doPrivileged(new Initializer(sp)) runs the initialization as a privileged action. For easier reading, the full PipeImpl class is reproduced below:
class PipeImpl
extends Pipe
{
// Number of bytes in the secret handshake.
private static final int NUM_SECRET_BYTES = 16;
// Random object for handshake values
private static final Random RANDOM_NUMBER_GENERATOR = new SecureRandom();
// Source and sink channels
private SourceChannel source;
private SinkChannel sink;
private class Initializer
implements PrivilegedExceptionAction<Void>
{
private final SelectorProvider sp;
private IOException ioe = null;
private Initializer(SelectorProvider sp) {
this.sp = sp;
}
@Override
public Void run() throws IOException {
LoopbackConnector connector = new LoopbackConnector();
connector.run();
if (ioe instanceof ClosedByInterruptException) {
ioe = null;
Thread connThread = new Thread(connector) {
@Override
public void interrupt() {}
};
connThread.start();
for (;;) {
try {
connThread.join();
break;
} catch (InterruptedException ex) {}
}
Thread.currentThread().interrupt();
}
if (ioe != null)
throw new IOException("Unable to establish loopback connection", ioe);
return null;
}
private class LoopbackConnector implements Runnable {
@Override
public void run() {
ServerSocketChannel ssc = null;
SocketChannel sc1 = null;
SocketChannel sc2 = null;
try {
// Create secret with a backing array.
ByteBuffer secret = ByteBuffer.allocate(NUM_SECRET_BYTES);
ByteBuffer bb = ByteBuffer.allocate(NUM_SECRET_BYTES);
// Loopback address
InetAddress lb = InetAddress.getLoopbackAddress();
assert(lb.isLoopbackAddress());
InetSocketAddress sa = null;
for(;;) {
// Bind ServerSocketChannel to a port on the loopback
// address
if (ssc == null || !ssc.isOpen()) {
ssc = ServerSocketChannel.open();
ssc.socket().bind(new InetSocketAddress(lb, 0));
sa = new InetSocketAddress(lb, ssc.socket().getLocalPort());
}
// Establish connection (assume connections are eagerly
// accepted)
sc1 = SocketChannel.open(sa);
RANDOM_NUMBER_GENERATOR.nextBytes(secret.array());
do {
sc1.write(secret);
} while (secret.hasRemaining());
secret.rewind();
// Get a connection and verify it is legitimate
sc2 = ssc.accept();
do {
sc2.read(bb);
} while (bb.hasRemaining());
bb.rewind();
if (bb.equals(secret))
break;
sc2.close();
sc1.close();
}
// Create source and sink channels
source = new SourceChannelImpl(sp, sc1);
sink = new SinkChannelImpl(sp, sc2);
} catch (IOException e) {
try {
if (sc1 != null)
sc1.close();
if (sc2 != null)
sc2.close();
} catch (IOException e2) {}
ioe = e;
} finally {
try {
if (ssc != null)
ssc.close();
} catch (IOException e2) {}
}
}
}
}
PipeImpl(final SelectorProvider sp) throws IOException {
try {
AccessController.doPrivileged(new Initializer(sp));
} catch (PrivilegedActionException x) {
throw (IOException)x.getCause();
}
}
public SourceChannel source() {
return source;
}
public SinkChannel sink() {
return sink;
}
}
We can see that Initializer.run() is invoked, which in turn runs LoopbackConnector.run().
Two channels are created over a loopback socket connection: source = new SourceChannelImpl(sp, sc1) and sink = new SinkChannelImpl(sp, sc2); the "secret handshake" written through them verifies that the accepted connection is the one we opened and that data can actually be sent and received. The whole point of this pair is to be able to wake up a select call that is blocking without a timeout, i.e. to end the blocking from another thread.
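To make the wakeup mechanism concrete, here is a minimal, self-contained sketch using only the public API; it mirrors what WindowsSelectorImpl does internally with wakeupSinkFd/wakeupSourceFd. A select() with no timeout blocks until another thread writes a byte to the pipe's sink, which makes the registered source readable and ends the blocking.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class PipeWakeupDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        Thread waker = new Thread(() -> {
            try {
                Thread.sleep(1000);
                // Writing one byte to the sink makes the source readable,
                // which causes the blocked select() below to return.
                pipe.sink().write(ByteBuffer.wrap(new byte[]{1}));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        waker.start();

        int n = selector.select();   // no timeout: would block forever without the wakeup byte
        System.out.println("select returned " + n);
        waker.join();
    }
}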
Now back to the WindowsSelectorImpl constructor to see what happens next:
wakeupSourceFd = ((SelChImpl)wakeupPipe.source()).getFDVal();
// Disable the Nagle algorithm so that the wakeup is more immediate
SinkChannelImpl sink = (SinkChannelImpl)wakeupPipe.sink();
(sink.sc).socket().setTcpNoDelay(true);
wakeupSinkFd = ((SelChImpl)sink).getFDVal();
pollWrapper.addWakeupSocket(wakeupSourceFd, 0);
Here the SinkChannel's TCP connection has the Nagle algorithm disabled (so a wakeup write is flushed immediately), and the SourceChannel's fd (file descriptor) is put into pollWrapper as a wakeup entry registered for read events. That fd is later consumed by the native poll implementation; see the following method in WindowsSelectorImpl.c:
JNIEXPORT jint JNICALL
Java_sun_nio_ch_WindowsSelectorImpl_00024SubSelector_poll0(JNIEnv *env, jobject this,
jlong pollAddress, jint numfds,
jintArray returnReadFds, jintArray returnWriteFds,
jintArray returnExceptFds, jlong timeout)
{
DWORD result = 0;
pollfd *fds = (pollfd *) pollAddress;
int i;
FD_SET readfds, writefds, exceptfds;
struct timeval timevalue, *tv;
static struct timeval zerotime = {0, 0};
int read_count = 0, write_count = 0, except_count = 0;
#ifdef _WIN64
int resultbuf[FD_SETSIZE + 1];
#endif
if (timeout == 0) {
tv = &zerotime;
} else if (timeout < 0) {
tv = NULL;
} else {
jlong sec = timeout / 1000;
tv = &timevalue;
//
// struct timeval members are signed 32-bit integers so the
// signed 64-bit jlong needs to be clamped
//
if (sec > INT_MAX) {
tv->tv_sec = INT_MAX;
tv->tv_usec = 0;
} else {
tv->tv_sec = (long)sec;
tv->tv_usec = (long)((timeout % 1000) * 1000);
}
}
/* Set FD_SET structures required for select */
for (i = 0; i < numfds; i++) {
if (fds[i].events & POLLIN) {
readfds.fd_array[read_count] = fds[i].fd;
read_count++;
}
if (fds[i].events & (POLLOUT | POLLCONN))
{
writefds.fd_array[write_count] = fds[i].fd;
write_count++;
}
exceptfds.fd_array[except_count] = fds[i].fd;
except_count++;
}
readfds.fd_count = read_count;
writefds.fd_count = write_count;
exceptfds.fd_count = except_count;
/* Call select */
if ((result = select(0 , &readfds, &writefds, &exceptfds, tv))
== SOCKET_ERROR) {
/* Bad error - this should not happen frequently */
/* Iterate over sockets and call select() on each separately */
FD_SET errreadfds, errwritefds, errexceptfds;
readfds.fd_count = 0;
writefds.fd_count = 0;
exceptfds.fd_count = 0;
for (i = 0; i < numfds; i++) {
/* prepare select structures for the i-th socket */
errreadfds.fd_count = 0;
errwritefds.fd_count = 0;
if (fds[i].events & POLLIN) {
errreadfds.fd_array[0] = fds[i].fd;
errreadfds.fd_count = 1;
}
if (fds[i].events & (POLLOUT | POLLCONN))
{
errwritefds.fd_array[0] = fds[i].fd;
errwritefds.fd_count = 1;
}
errexceptfds.fd_array[0] = fds[i].fd;
errexceptfds.fd_count = 1;
/* call select on the i-th socket */
if (select(0, &errreadfds, &errwritefds, &errexceptfds, &zerotime)
== SOCKET_ERROR) {
/* This socket causes an error. Add it to exceptfds set */
exceptfds.fd_array[exceptfds.fd_count] = fds[i].fd;
exceptfds.fd_count++;
} else {
/* This socket does not cause an error. Process result */
if (errreadfds.fd_count == 1) {
readfds.fd_array[readfds.fd_count] = fds[i].fd;
readfds.fd_count++;
}
if (errwritefds.fd_count == 1) {
writefds.fd_array[writefds.fd_count] = fds[i].fd;
writefds.fd_count++;
}
if (errexceptfds.fd_count == 1) {
exceptfds.fd_array[exceptfds.fd_count] = fds[i].fd;
exceptfds.fd_count++;
}
}
}
}
/* Return selected sockets. */
/* Each Java array consists of sockets count followed by sockets list */
#ifdef _WIN64
resultbuf[0] = readfds.fd_count;
for (i = 0; i < (int)readfds.fd_count; i++) {
resultbuf[i + 1] = (int)readfds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnReadFds, 0,
readfds.fd_count + 1, resultbuf);
resultbuf[0] = writefds.fd_count;
for (i = 0; i < (int)writefds.fd_count; i++) {
resultbuf[i + 1] = (int)writefds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnWriteFds, 0,
writefds.fd_count + 1, resultbuf);
resultbuf[0] = exceptfds.fd_count;
for (i = 0; i < (int)exceptfds.fd_count; i++) {
resultbuf[i + 1] = (int)exceptfds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnExceptFds, 0,
exceptfds.fd_count + 1, resultbuf);
#else
(*env)->SetIntArrayRegion(env, returnReadFds, 0,
readfds.fd_count + 1, (jint *)&readfds);
(*env)->SetIntArrayRegion(env, returnWriteFds, 0,
writefds.fd_count + 1, (jint *)&writefds);
(*env)->SetIntArrayRegion(env, returnExceptFds, 0,
exceptfds.fd_count + 1, (jint *)&exceptfds);
#endif
return 0;
}
The fd added to pollWrapper above is consumed in this part of that function:
/* Set FD_SET structures required for select */
for (i = 0; i < numfds; i++) {
if (fds[i].events & POLLIN) {
readfds.fd_array[read_count] = fds[i].fd;
read_count++;
}
if (fds[i].events & (POLLOUT | POLLCONN))
{
writefds.fd_array[write_count] = fds[i].fd;
write_count++;
}
exceptfds.fd_array[except_count] = fds[i].fd;
except_count++;
}
These fill the FD_SET structures that select polls. On the Java side, the poll event masks are defined as follows:
/**
* Event masks for the various poll system calls.
* They will be set platform dependent in the static initializer below.
*/
public static final short POLLIN;
public static final short POLLOUT;
public static final short POLLERR;
public static final short POLLHUP;
public static final short POLLNVAL;
public static final short POLLCONN;
static native short pollinValue();
static native short polloutValue();
static native short pollerrValue();
static native short pollhupValue();
static native short pollnvalValue();
static native short pollconnValue();
static {
IOUtil.load();
initIDs();
POLLIN = pollinValue();
POLLOUT = polloutValue();
POLLERR = pollerrValue();
POLLHUP = pollhupValue();
POLLNVAL = pollnvalValue();
POLLCONN = pollconnValue();
}
These correspond to the native select/poll event values; on Windows, Net.c returns them as follows:
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_pollinValue(JNIEnv *env, jclass this)
{
return (jshort)POLLIN;
}
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_polloutValue(JNIEnv *env, jclass this)
{
return (jshort)POLLOUT;
}
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_pollerrValue(JNIEnv *env, jclass this)
{
return (jshort)POLLERR;
}
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_pollhupValue(JNIEnv *env, jclass this)
{
return (jshort)POLLHUP;
}
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_pollnvalValue(JNIEnv *env, jclass this)
{
return (jshort)POLLNVAL;
}
JNIEXPORT jshort JNICALL
Java_sun_nio_ch_Net_pollconnValue(JNIEnv *env, jclass this)
{
return (jshort)POLLCONN;
}
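For orientation, a channel's interest ops are translated into these poll masks before the registration reaches the native layer. The following is a simplified sketch of that mapping, not the JDK's actual code (the real translation lives in the concrete channel implementations such as ServerSocketChannelImpl); the POLLIN/POLLOUT/POLLCONN parameters stand in for the Net.* values shown above, and java.nio.channels.SelectionKey is assumed to be imported.

// Simplified, hypothetical mapping from NIO interest ops to poll event masks.
static short toPollEvents(int interestOps, short POLLIN, short POLLOUT, short POLLCONN) {
    short events = 0;
    if ((interestOps & SelectionKey.OP_ACCEPT)  != 0) events |= POLLIN;   // a pending connection makes the listen socket "readable"
    if ((interestOps & SelectionKey.OP_READ)    != 0) events |= POLLIN;
    if ((interestOps & SelectionKey.OP_WRITE)   != 0) events |= POLLOUT;
    if ((interestOps & SelectionKey.OP_CONNECT) != 0) events |= POLLCONN;
    return events;
}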
Back in WindowsSelectorImpl, note this field: a single native select() call handles at most 1024 sockets.
// Maximum number of sockets for select().
// Should be INIT_CAP times a power of 2
private static final int MAX_SELECTABLE_FDS = 1024;
A note on this limit: in the select model the number of descriptors is bounded by FD_SETSIZE (on Linux, fd_set is backed by an array of longs of length __FD_SETSIZE/__NFDBITS, the former being 1024 and the latter platform dependent, so an fd_set can hold at most __FD_SETSIZE bits); poll has no such descriptor limit. In this Java implementation, however, the 1024 value is only used to decide whether helper threads are needed to share the poll work, so it does not cap how many channels can be registered.
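As a rough illustration of that point (this is not the exact JDK bookkeeping, which also reserves one wakeup-socket slot per batch), the number of helper threads grows with the channel count roughly like this:

// Each select() batch handles at most MAX_SELECTABLE_FDS entries; the main
// thread polls the first batch and helper threads poll the remaining ones.
int MAX_SELECTABLE_FDS = 1024;
int totalChannels = 3000;                                                     // hypothetical
int batches = (totalChannels + MAX_SELECTABLE_FDS - 1) / MAX_SELECTABLE_FDS;  // 3
int helperThreads = batches - 1;                                              // 2 helpers besides the main thread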
That completes the walk through Selector.open(). Next, select(timeout):
public final int select(long timeout) throws IOException {
if (timeout < 0)
throw new IllegalArgumentException("Negative timeout");
return lockAndDoSelect(null, (timeout == 0) ? -1 : timeout);
}
private int lockAndDoSelect(Consumer<SelectionKey> action, long timeout)
throws IOException
{
synchronized (this) {
ensureOpen();
if (inSelect)
throw new IllegalStateException("select in progress");
inSelect = true;
try {
synchronized (publicSelectedKeys) {
return doSelect(action, timeout);
}
} finally {
inSelect = false;
}
}
}
This leads into doSelect(action, timeout):
protected int doSelect(Consumer<SelectionKey> action, long timeout)
throws IOException
{
assert Thread.holdsLock(this);
this.timeout = timeout; // set selector timeout
processUpdateQueue();
processDeregisterQueue();
if (interruptTriggered) {
resetWakeupSocket();
return 0;
}
// Calculate number of helper threads needed for poll. If necessary
// threads are created here and start waiting on startLock
adjustThreadsCount();
finishLock.reset(); // reset finishLock
// Wakeup helper threads, waiting on startLock, so they start polling.
// Redundant threads will exit here after wakeup.
startLock.startThreads();
// do polling in the main thread. Main thread is responsible for
// first MAX_SELECTABLE_FDS entries in pollArray.
try {
begin();
try {
subSelector.poll();
} catch (IOException e) {
finishLock.setException(e); // Save this exception
}
// Main thread is out of poll(). Wakeup others and wait for them
if (threads.size() > 0)
finishLock.waitForHelperThreads();
} finally {
end();
}
// Done with poll(). Set wakeupSocket to nonsignaled for the next run.
finishLock.checkForException();
processDeregisterQueue();
int updated = updateSelectedKeys(action);
// Done with poll(). Set wakeupSocket to nonsignaled for the next run.
resetWakeupSocket();
return updated;
}
This is the core of the select path. First, processUpdateQueue():
/**
* Process new registrations and changes to the interest ops.
*/
private void processUpdateQueue() {
assert Thread.holdsLock(this);
synchronized (updateLock) {
SelectionKeyImpl ski;
// new registrations
while ((ski = newKeys.pollFirst()) != null) {
if (ski.isValid()) {
growIfNeeded();
channelArray[totalChannels] = ski;
ski.setIndex(totalChannels);
pollWrapper.putEntry(totalChannels, ski);
totalChannels++;
MapEntry previous = fdMap.put(ski);
assert previous == null;
}
}
// changes to interest ops
while ((ski = updateKeys.pollFirst()) != null) {
int events = ski.translateInterestOps();
int fd = ski.getFDVal();
if (ski.isValid() && fdMap.containsKey(fd)) {
int index = ski.getIndex();
assert index >= 0 && index < totalChannels;
pollWrapper.putEventOps(index, events);
}
}
}
}
It first drains newly registered keys, then keys whose interest set has changed, registering the former in the poll array and updating the poll events of the latter.
Next, processDeregisterQueue():
/**
* Invoked by selection operations to process the cancelled-key set
*/
protected final void processDeregisterQueue() throws IOException {
assert Thread.holdsLock(this);
assert Thread.holdsLock(publicSelectedKeys);
Set<SelectionKey> cks = cancelledKeys();
synchronized (cks) {
if (!cks.isEmpty()) {
Iterator<SelectionKey> i = cks.iterator();
while (i.hasNext()) {
SelectionKeyImpl ski = (SelectionKeyImpl)i.next();
i.remove();
// remove the key from the selector
implDereg(ski);
selectedKeys.remove(ski);
keys.remove(ski);
// remove from channel's key set
deregister(ski);
SelectableChannel ch = ski.channel();
if (!ch.isOpen() && !ch.isRegistered())
((SelChImpl)ch).kill();
}
}
}
}
This removes cancelled keys, deregistering them from the selector, the selected-key set, the key set and the channel. Then adjustThreadsCount():
// After some channels registered/deregistered, the number of required
// helper threads may have changed. Adjust this number.
private void adjustThreadsCount() {
if (threadsCount > threads.size()) {
// More threads needed. Start more threads.
for (int i = threads.size(); i < threadsCount; i++) {
SelectThread newThread = new SelectThread(i);
threads.add(newThread);
newThread.setDaemon(true);
newThread.start();
}
} else if (threadsCount < threads.size()) {
// Some threads become redundant. Remove them from the threads List.
for (int i = threads.size() - 1 ; i >= threadsCount; i--)
threads.remove(i).makeZombie();
}
}
adjustThreadsCount compares the required helper-thread count (which grows as registrations exceed multiples of MAX_SELECTABLE_FDS, i.e. 1024) with the threads currently alive, starting new helpers or retiring redundant ones as needed. The helper thread class SelectThread looks like this:
// Represents a helper thread used for select.
private final class SelectThread extends Thread {
private final int index; // index of this thread
final SubSelector subSelector;
private long lastRun = 0; // last run number
private volatile boolean zombie;
// Creates a new thread
private SelectThread(int i) {
super(null, null, "SelectorHelper", 0, false);
this.index = i;
this.subSelector = new SubSelector(i);
//make sure we wait for next round of poll
this.lastRun = startLock.runsCounter;
}
void makeZombie() {
zombie = true;
}
boolean isZombie() {
return zombie;
}
public void run() {
while (true) { // poll loop
// wait for the start of poll. If this thread has become
// redundant, then exit.
if (startLock.waitForStart(this))
return;
// call poll()
try {
subSelector.poll(index);
} catch (IOException e) {
// Save this exception and let other threads finish.
finishLock.setException(e);
}
// notify main thread, that this thread has finished, and
// wakeup others, if this thread is the first to finish.
finishLock.threadFinished();
}
}
}
As you can see, each helper thread simply runs poll() on its own slice of the poll array.
finishLock.reset() and startLock.startThreads() then reset the finish-synchronization state and release the helper threads waiting on startLock so they begin polling. Next comes the poll itself:
private int poll() throws IOException{ // poll for the main thread
return poll0(pollWrapper.pollArrayAddress,
Math.min(totalChannels, MAX_SELECTABLE_FDS),
readFds, writeFds, exceptFds, timeout);
}
The native poll0 lives in WindowsSelectorImpl.c; its source was shown above, and the actual work is done by the select function:
select(0 , &readfds, &writefds, &exceptfds, tv)
Afterwards the ready descriptors are written back into returnReadFds, returnWriteFds and returnExceptFds and returned to Java, as follows:
resultbuf[0] = readfds.fd_count;
for (i = 0; i < (int)readfds.fd_count; i++) {
resultbuf[i + 1] = (int)readfds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnReadFds, 0,
readfds.fd_count + 1, resultbuf);
resultbuf[0] = writefds.fd_count;
for (i = 0; i < (int)writefds.fd_count; i++) {
resultbuf[i + 1] = (int)writefds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnWriteFds, 0,
writefds.fd_count + 1, resultbuf);
resultbuf[0] = exceptfds.fd_count;
for (i = 0; i < (int)exceptfds.fd_count; i++) {
resultbuf[i + 1] = (int)exceptfds.fd_array[i];
}
(*env)->SetIntArrayRegion(env, returnExceptFds, 0,
exceptfds.fd_count + 1, resultbuf);
Next comes the most important step: int updated = updateSelectedKeys(action);
// Update ops of the corresponding Channels. Add the ready keys to the
// ready queue.
private int updateSelectedKeys(Consumer<SelectionKey> action) {
updateCount++;
int numKeysUpdated = 0;
numKeysUpdated += subSelector.processSelectedKeys(updateCount, action);
for (SelectThread t: threads) {
numKeysUpdated += t.subSelector.processSelectedKeys(updateCount, action);
}
return numKeysUpdated;
}
Notice that the helper threads' sub-selectors are also included in the numKeysUpdated tally here. Let's continue:
private int processSelectedKeys(long updateCount, Consumer<SelectionKey> action) {
int numKeysUpdated = 0;
numKeysUpdated += processFDSet(updateCount, action, readFds,
Net.POLLIN,
false);
numKeysUpdated += processFDSet(updateCount, action, writeFds,
Net.POLLCONN |
Net.POLLOUT,
false);
numKeysUpdated += processFDSet(updateCount, action, exceptFds,
Net.POLLIN |
Net.POLLCONN |
Net.POLLOUT,
true);
return numKeysUpdated;
}
This processes readFds, writeFds and exceptFds; by this point the native code has already deposited the ready descriptors into these three arrays.
/**
* updateCount is used to tell if a key has been counted as updated
* in this select operation.
*
* me.updateCount <= updateCount
*/
private int processFDSet(long updateCount,
Consumer<SelectionKey> action,
int[] fds, int rOps,
boolean isExceptFds)
{
int numKeysUpdated = 0;
for (int i = 1; i <= fds[0]; i++) {
int desc = fds[i];
if (desc == wakeupSourceFd) {
synchronized (interruptLock) {
interruptTriggered = true;
}
continue;
}
MapEntry me = fdMap.get(desc);
// If me is null, the key was deregistered in the previous
// processDeregisterQueue.
if (me == null)
continue;
SelectionKeyImpl sk = me.ski;
// The descriptor may be in the exceptfds set because there is
// OOB data queued to the socket. If there is OOB data then it
// is discarded and the key is not added to the selected set.
if (isExceptFds &&
(sk.channel() instanceof SocketChannelImpl) &&
discardUrgentData(desc))
{
continue;
}
int updated = processReadyEvents(rOps, sk, action);
if (updated > 0 && me.updateCount != updateCount) {
me.updateCount = updateCount;
numKeysUpdated++;
}
}
return numKeysUpdated;
}
}
First, the loop checks whether a descriptor is the wakeupSourceFd; if so, select() was woken up deliberately and interruptTriggered is recorded. For the exceptfds set it then checks whether the readiness was caused by OOB (out-of-band) data on a socket channel; if so, the urgent data is discarded and the key is not added to the selected set.
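The layout of these int[] arrays matches the native code shown earlier: index 0 holds the number of ready descriptors, followed by the descriptors themselves. A tiny illustration (the fd values are made up):

// readFds/writeFds/exceptFds as filled in by poll0: [count, fd1, fd2, ..., fdN]
int[] readFds = {3, 52, 76, 80};        // hypothetical descriptor values
int readyCount = readFds[0];            // 3 descriptors are ready for reading
for (int i = 1; i <= readyCount; i++) {
    int fd = readFds[i];                // 52, 76, 80 in turn
    // processFDSet looks fd up in fdMap and processes the key's ready events
}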
Then processReadyEvents(rOps, sk, action) handles the ready events:
/**
* Invoked by selection operations to handle ready events. If an action
* is specified then it is invoked to handle the key, otherwise the key
* is added to the selected-key set (or updated when it is already in the
* set).
*/
protected final int processReadyEvents(int rOps,
SelectionKeyImpl ski,
Consumer<SelectionKey> action) {
if (action != null) {
ski.translateAndSetReadyOps(rOps);
if ((ski.nioReadyOps() & ski.nioInterestOps()) != 0) {
action.accept(ski);
ensureOpen();
return 1;
}
} else {
assert Thread.holdsLock(publicSelectedKeys);
if (selectedKeys.contains(ski)) {
if (ski.translateAndUpdateReadyOps(rOps)) {
return 1;
}
} else {
ski.translateAndSetReadyOps(rOps);
if ((ski.nioReadyOps() & ski.nioInterestOps()) != 0) {
selectedKeys.add(ski);
return 1;
}
}
}
return 0;
}
At this point things are clear: this is where keys are added to selectedKeys, which is why selector.selectedKeys() later hands us the keys whose channels have ready events.
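This is also why application code removes each key from the selected-key set after handling it: as the selectedKeys.contains(ski) branch above shows, a key left in the set only has its ready ops updated on the next select. A minimal usage sketch, assuming a Selector named selector with registered channels:

Iterator<SelectionKey> it = selector.selectedKeys().iterator();
while (it.hasNext()) {
    SelectionKey key = it.next();
    it.remove();                        // the selector never removes keys from this set itself
    if (key.isAcceptable()) {
        // accept the new connection
    } else if (key.isReadable()) {
        // read from the channel
    }
}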
Now look at ski.translateAndSetReadyOps(rOps):
boolean translateAndSetReadyOps(int ops) {
return channel.translateAndSetReadyOps(ops, this);
}
The channel here is a ServerSocketChannelImpl:
public boolean translateAndSetReadyOps(int ops, SelectionKeyImpl ski) {
return translateReadyOps(ops, 0, ski);
}
/**
* Translates native poll revent set into a ready operation set
*/
public boolean translateReadyOps(int ops, int initialOps, SelectionKeyImpl ski) {
int intOps = ski.nioInterestOps();
int oldOps = ski.nioReadyOps();
int newOps = initialOps;
if ((ops & Net.POLLNVAL) != 0) {
// This should only happen if this channel is pre-closed while a
// selection operation is in progress
// ## Throw an error if this channel has not been pre-closed
return false;
}
if ((ops & (Net.POLLERR | Net.POLLHUP)) != 0) {
newOps = intOps;
ski.nioReadyOps(newOps);
return (newOps & ~oldOps) != 0;
}
if (((ops & Net.POLLIN) != 0) &&
((intOps & SelectionKey.OP_ACCEPT) != 0))
newOps |= SelectionKey.OP_ACCEPT;
ski.nioReadyOps(newOps);
return (newOps & ~oldOps) != 0;
}
So the native poll events are translated into NIO ready ops and stored in the SelectionKeyImpl's nioReadyOps; callers then decide, based on which ops are ready, how to handle the key.
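A worked example of the bit logic above, for a server socket whose interest set is OP_ACCEPT and which receives a native POLLIN:

// Values taken from the SelectionKey constants and translateReadyOps above.
int intOps = SelectionKey.OP_ACCEPT;          // 1 << 4 = 16
int oldOps = 0;                               // nothing was ready before this poll
int newOps = 0;                               // initialOps
newOps |= SelectionKey.OP_ACCEPT;             // POLLIN arrived and OP_ACCEPT is in the interest set
boolean keyUpdated = (newOps & ~oldOps) != 0; // true: OP_ACCEPT is newly ready, so the key counts as updated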
Now a quick look at ServerSocketChannel.open():
public static ServerSocketChannel open() throws IOException {
return SelectorProvider.provider().openServerSocketChannel();
}
public ServerSocketChannel openServerSocketChannel() throws IOException {
return new ServerSocketChannelImpl(this);
}
ServerSocketChannelImpl(SelectorProvider sp) throws IOException {
super(sp);
this.fd = Net.serverSocket(true);
this.fdVal = IOUtil.fdVal(fd);
}
This creates the underlying server socket and records its file descriptor. Next, configure non-blocking mode:
serverSocketChannel.configureBlocking(false);
/**
* Adjusts this channel's blocking mode.
*
* If the given blocking mode is different from the current blocking
* mode then this method invokes the {@link #implConfigureBlocking
* implConfigureBlocking} method, while holding the appropriate locks, in
* order to change the mode.
*/
public final SelectableChannel configureBlocking(boolean block)
throws IOException
{
synchronized (regLock) {
if (!isOpen())
throw new ClosedChannelException();
boolean blocking = !nonBlocking;
if (block != blocking) {
if (block && haveValidKeys())
throw new IllegalBlockingModeException();
implConfigureBlocking(block);
nonBlocking = !block;
}
}
return this;
}
@Override
protected void implConfigureBlocking(boolean block) throws IOException {
acceptLock.lock();
try {
synchronized (stateLock) {
ensureOpen();
IOUtil.configureBlocking(fd, block);
}
} finally {
acceptLock.unlock();
}
}
What this ultimately does is put the file descriptor into non-blocking mode. On POSIX platforms the native code looks like this, the key call being fcntl(fd, F_SETFL, newflags) (on Windows the equivalent is an ioctlsocket call with FIONBIO):
static int
configureBlocking(int fd, jboolean blocking)
{
int flags = fcntl(fd, F_GETFL);
int newflags = blocking ? (flags & ~O_NONBLOCK) : (flags | O_NONBLOCK);
return (flags == newflags) ? 0 : fcntl(fd, F_SETFL, newflags);
}
JNIEXPORT void JNICALL
Java_sun_nio_ch_IOUtil_configureBlocking(JNIEnv *env, jclass clazz,
jobject fdo, jboolean blocking)
{
if (configureBlocking(fdval(env, fdo), blocking) < 0)
JNU_ThrowIOExceptionWithLastError(env, "Configure blocking failed");
}
Next, the bind call:
public final ServerSocketChannel bind(SocketAddress local)
throws IOException
{
return bind(local, 0);
}
@Override
public ServerSocketChannel bind(SocketAddress local, int backlog) throws IOException {
synchronized (stateLock) {
ensureOpen();
if (localAddress != null)
throw new AlreadyBoundException();
InetSocketAddress isa = (local == null)
? new InetSocketAddress(0)
: Net.checkAddress(local);
SecurityManager sm = System.getSecurityManager();
if (sm != null)
sm.checkListen(isa.getPort());
NetHooks.beforeTcpBind(fd, isa.getAddress(), isa.getPort());
Net.bind(fd, isa.getAddress(), isa.getPort());
Net.listen(fd, backlog < 1 ? 50 : backlog);
localAddress = Net.localAddress(fd);
}
return this;
}
This works much like a plain ServerSocket: it binds the address and starts listening (with a default backlog of 50 when none is supplied).
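Putting the pieces analyzed so far together, here is a minimal, self-contained non-blocking accept/read skeleton built only from the calls covered in this article (port 9000 is an arbitrary example):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniNioServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                       // WindowsSelectorImpl on Windows
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);                           // puts the fd into non-blocking mode
        server.bind(new InetSocketAddress(9000));                  // bind + listen
        server.register(selector, SelectionKey.OP_ACCEPT);         // queued onto newKeys

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            if (selector.select() == 0)                            // doSelect(): poll0/select underneath
                continue;
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);                      // synchronous copy from kernel space
                    if (n == -1) {
                        key.cancel();                              // goes onto the cancelled-key set
                        client.close();
                    }
                }
            }
        }
    }
}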
Next, registration with the Selector. The ops passed to register() are defined in SelectionKey as follows:
// -- Operation bits and bit-testing convenience methods --
/**
* Operation-set bit for read operations.
*
* Suppose that a selection key's interest set contains
* {@code OP_READ} at the start of a selection operation. If the selector
* detects that the corresponding channel is ready for reading, has reached
* end-of-stream, has been remotely shut down for further reading, or has
* an error pending, then it will add {@code OP_READ} to the key's
* ready-operation set.
*/
public static final int OP_READ = 1 << 0;
/**
* Operation-set bit for write operations.
*
* Suppose that a selection key's interest set contains
* {@code OP_WRITE} at the start of a selection operation. If the selector
* detects that the corresponding channel is ready for writing, has been
* remotely shut down for further writing, or has an error pending, then it
* will add {@code OP_WRITE} to the key's ready set.
*/
public static final int OP_WRITE = 1 << 2;
/**
* Operation-set bit for socket-connect operations.
*
* Suppose that a selection key's interest set contains
* {@code OP_CONNECT} at the start of a selection operation. If the selector
* detects that the corresponding socket channel is ready to complete its
* connection sequence, or has an error pending, then it will add
* {@code OP_CONNECT} to the key's ready set.
*/
public static final int OP_CONNECT = 1 << 3;
/**
* Operation-set bit for socket-accept operations.
*
* Suppose that a selection key's interest set contains
* {@code OP_ACCEPT} at the start of a selection operation. If the selector
* detects that the corresponding server-socket channel is ready to accept
* another connection, or has an error pending, then it will add
* {@code OP_ACCEPT} to the key's ready set.
*/
public static final int OP_ACCEPT = 1 << 4;
Let's step into register():
public final SelectionKey register(Selector sel, int ops)
throws ClosedChannelException
{
return register(sel, ops, null);
}
public final SelectionKey register(Selector sel, int ops, Object att)
throws ClosedChannelException
{
if ((ops & ~validOps()) != 0)
throw new IllegalArgumentException();
if (!isOpen())
throw new ClosedChannelException();
synchronized (regLock) {
if (isBlocking())
throw new IllegalBlockingModeException();
synchronized (keyLock) {
// re-check if channel has been closed
if (!isOpen())
throw new ClosedChannelException();
SelectionKey k = findKey(sel);
if (k != null) {
k.attach(att);
k.interestOps(ops);
} else {
// New registration
k = ((AbstractSelector)sel).register(this, ops, att);
addKey(k);
}
return k;
}
}
}
findKey(sel):
private SelectionKey findKey(Selector sel) {
assert Thread.holdsLock(keyLock);
if (keys == null)
return null;
for (int i = 0; i < keys.length; i++)
if ((keys[i] != null) && (keys[i].selector() == sel))
return keys[i];
return null;
}
It checks whether the channel is already registered with this Selector: if so, only the attachment and interest ops are updated; if not, a new registration is made.
The key lines are k = ((AbstractSelector)sel).register(this, ops, att); and addKey(k);:
@Override
protected final SelectionKey register(AbstractSelectableChannel ch,
int ops,
Object attachment)
{
if (!(ch instanceof SelChImpl))
throw new IllegalSelectorException();
SelectionKeyImpl k = new SelectionKeyImpl((SelChImpl)ch, this);
k.attach(attachment);
// register (if needed) before adding to key set
implRegister(k);
// add to the selector's key set, removing it immediately if the selector
// is closed. The key is not in the channel's key set at this point but
// it may be observed by a thread iterating over the selector's key set.
keys.add(k);
try {
k.interestOps(ops);
} catch (ClosedSelectorException e) {
assert ch.keyFor(this) == null;
keys.remove(k);
k.cancel();
throw e;
}
return k;
}
@Override
protected void implRegister(SelectionKeyImpl ski) {
ensureOpen();
synchronized (updateLock) {
newKeys.addLast(ski);
}
}
The newKeys deque that implRegister appends to here is exactly what processUpdateQueue() drains inside select() to add the channel to the poll array and apply its interest ops. Then addKey(k):
// -- Utility methods for the key set --
private void addKey(SelectionKey k) {
assert Thread.holdsLock(keyLock);
int i = 0;
if ((keys != null) && (keyCount < keys.length)) {
// Find empty element of key array
for (i = 0; i < keys.length; i++)
if (keys[i] == null)
break;
} else if (keys == null) {
keys = new SelectionKey[2];
} else {
// Grow key array
int n = keys.length * 2;
SelectionKey[] ks = new SelectionKey[n];
for (i = 0; i < keys.length; i++)
ks[i] = keys[i];
keys = ks;
i = keyCount;
}
keys[i] = k;
keyCount++;
}
This stores the newly registered SelectionKeyImpl in the channel's keys array, which is used later to find the key for a given selector and to check events.
That concludes the analysis of select on Windows. Java wraps the native select function in quite a lot of machinery, but the overall idea is straightforward. On Linux there is also epoll, which is far more efficient than select: select and poll both have to scan every registered descriptor on each call, so performance drops as the number of events grows, whereas epoll does not rescan everything; the operating system only returns the events that actually need handling, so it scales with the number of file descriptors. That is also why raising the connection count on Linux usually requires raising the file-descriptor limits. Windows additionally has an AIO implementation, i.e. truly asynchronous, non-blocking I/O; epoll is not truly asynchronous, but its performance is still excellent. On Linux, asynchronous I/O arrives with io_uring in recent kernels, which supports asynchronous socket operations and is worth keeping an eye on.