一. Focus of this article:
This article looks at the techniques and multi-threading strategies that jetty-server uses under the hood to achieve high throughput and high performance as an HTTP server. It is organized in three parts:
1. Server instantiation: new Server();
2. Starting the server: server.start();
3. Accepting and handling HTTP requests (covered briefly).
二. new Server():
/* ------------------------------------------------------------ */
/** Convenience constructor
* Creates server and a {@link ServerConnector} at the passed port.
* @param port The port of a network HTTP connector (or 0 for a randomly allocated port).
* @see NetworkConnector#getLocalPort()
*/
public Server(@Name("port")int port)
{
//1. Initialize the thread pool
this((ThreadPool)null);
//2. Initialize the ServerConnector
ServerConnector connector=new ServerConnector(this);
//3. Set the port
connector.setPort(port);
//4. Associate the Connector with the Server
setConnectors(new Connector[]{connector});
}
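For orientation, here is a minimal sketch of how this convenience constructor is typically used when embedding Jetty; the port and the join() call are illustrative, not taken from the text above:
import org.eclipse.jetty.server.Server;

public class EmbeddedJettyDemo
{
    public static void main(String[] args) throws Exception
    {
        // new Server(8080): creates the QueuedThreadPool and a ServerConnector, as shown above
        Server server = new Server(8080);
        // start(): runs the LifeCycle start-up described in the next section
        server.start();
        // join(): blocks the main thread until the server's thread pool stops
        server.join();
    }
}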
1. Initializing the thread pool: this((ThreadPool)null)
A server container like Jetty cannot simply create a new thread in the background for every task; spawning raw threads on demand is not reliable or scalable, so a thread pool is a must. Jetty does not use the JDK's thread pool, however; it provides its own implementation, QueuedThreadPool (a usage sketch follows the constructor below).
public Server(@Name("threadpool") ThreadPool pool)
{
_threadPool=pool!=null?pool:new QueuedThreadPool();
addBean(_threadPool);
setServer(this);
}
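A hedged sketch of passing your own pool to this constructor instead of relying on the default; the sizes and the name are illustrative, and QueuedThreadPool(maxThreads, minThreads, idleTimeout) is one of its public constructors:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class CustomPoolDemo
{
    public static void main(String[] args) throws Exception
    {
        // Illustrative sizes: at most 200 threads, at least 8, 60 s idle timeout
        QueuedThreadPool pool = new QueuedThreadPool(200, 8, 60_000);
        pool.setName("demo-pool");
        Server server = new Server(pool);   // the Server(ThreadPool) constructor shown above
        server.start();
        server.join();
    }
}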
QueuedThreadPool implements the SizedThreadPool interface:
@ManagedObject("A thread pool")
public class QueuedThreadPool extends AbstractLifeCycle implements SizedThreadPool, Dumpable
The key method of QueuedThreadPool is execute():
execute() simply offers the job into the _jobs queue. _jobs is declared as a BlockingQueue and holds every task waiting to be run. A BlockingQueue is not a particularly fast structure, but that is acceptable here because execute() is not called at a very high rate, so the offer/poll traffic stays low. If execute() were called very frequently, this queue would be a candidate for optimization (see the sketch after the code below).
public void execute(Runnable job)
{
if (LOG.isDebugEnabled())
LOG.debug("queue {}",job);
if (!isRunning() || !_jobs.offer(job))
{
LOG.warn("{} rejected {}", this, job);
throw new RejectedExecutionException(job.toString());
}
else
{
// Make sure there is at least one thread executing the job.
if (getThreads() == 0)
startThreads(1);
}
}
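To make the offer/poll hand-off concrete, here is a small sketch that is not Jetty code, only the JDK pattern the pool relies on; the queue size and messages are illustrative:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class OfferPollDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        BlockingQueue<Runnable> jobs = new ArrayBlockingQueue<>(8);

        // Producer side (what execute() does): offer never blocks, it simply fails when the queue is full
        boolean accepted = jobs.offer(() -> System.out.println("job ran"));
        if (!accepted)
            throw new IllegalStateException("queue full -> execute() would throw RejectedExecutionException");

        // Consumer side (what a pool thread does): poll a job and run it
        Runnable job = jobs.poll(1, TimeUnit.SECONDS);
        if (job != null)
            job.run();
    }
}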
By default Jetty uses its own BlockingArrayQueue, created in the constructor that is invoked by default:
public QueuedThreadPool(@Name("maxThreads") int maxThreads, @Name("minThreads") int minThreads, @Name("idleTimeout") int idleTimeout, @Name("queue") BlockingQueue<Runnable> queue, @Name("threadGroup") ThreadGroup threadGroup)
{
setMinThreads(minThreads);
setMaxThreads(maxThreads);
setIdleTimeout(idleTimeout);
setStopTimeout(5000);
if (queue==null)
{
int capacity=Math.max(_minThreads, 8);
queue=new BlockingArrayQueue<>(capacity, capacity);
}
_jobs=queue;
_threadGroup=threadGroup;
}
BlockingArrayQueue is Jetty's own implementation; although it adds some optimizations, it is still a blocking queue, so its performance is only moderate:
public class BlockingArrayQueue<E> extends AbstractList<E> implements BlockingQueue<E>
2. Initializing the ServerConnector:
A ServerConnector represents one server-side listener: it handles HTTP connections and the underlying NIO machinery (channels, selectors, and so on), and it extends AbstractConnector via AbstractNetworkConnector.
@ManagedObject("HTTP connector using NIO ByteChannels and Selectors")
public class ServerConnector extends AbstractNetworkConnector
@ManagedObject("AbstractNetworkConnector")
public abstract class AbstractNetworkConnector extends AbstractConnector implements NetworkConnector
The initialization involves seven steps (a hand-wiring sketch follows the constructor code below):
1. Initialize the ScheduledExecutorScheduler
2. Initialize the ByteBufferPool
3. Register the ConnectionFactory instances
4. Read the number of available CPU cores
5. Adjust the number of acceptors
6. Create the acceptor thread array
7. Initialize the ServerConnectorManager
public AbstractConnector(
Server server,
Executor executor,
Scheduler scheduler,
ByteBufferPool pool,
int acceptors,
ConnectionFactory... factories)
{
_server=server;
_executor=executor!=null?executor:_server.getThreadPool();
//1. Initialize the ScheduledExecutorScheduler
if (scheduler==null)
scheduler=_server.getBean(Scheduler.class);
_scheduler=scheduler!=null?scheduler:new ScheduledExecutorScheduler();
//2. Initialize the ByteBufferPool
if (pool==null)
pool=_server.getBean(ByteBufferPool.class);
_byteBufferPool = pool!=null?pool:new ArrayByteBufferPool();
addBean(_server,false);
addBean(_executor);
if (executor==null)
unmanage(_executor); // inherited from server
addBean(_scheduler);
addBean(_byteBufferPool);
//3. Register the ConnectionFactory instances
for (ConnectionFactory factory:factories)
addConnectionFactory(factory);
//4. Read the number of available CPU cores
int cores = Runtime.getRuntime().availableProcessors();
//5. Adjust the number of acceptors
if (acceptors < 0)
acceptors=Math.max(1, Math.min(4,cores/8));
if (acceptors > cores)
LOG.warn("Acceptors should be <= availableProcessors: " + this);
//6. Create the acceptor thread array
_acceptors = new Thread[acceptors];
}
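For reference, a hedged sketch of wiring a connector by hand instead of using new Server(port); passing -1 for acceptors and selectors asks Jetty to apply the heuristics described below, and the port is illustrative:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class ManualConnectorDemo
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        // -1 for acceptors and selectors lets Jetty compute the counts from the available cores
        ServerConnector connector = new ServerConnector(server, -1, -1);
        connector.setPort(8080);            // illustrative port
        server.addConnector(connector);
        server.start();
        server.join();
    }
}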
1) Initializing the ScheduledExecutorScheduler:
Some tasks in Jetty must run periodically, for example checking the state of something every so often; the scheduler drives those tasks.
2) Initializing the ByteBufferPool:
When the Jetty server receives a large number of HTTP requests (TCP connections underneath), it constantly needs ByteBuffers for reading from and writing to channels. Allocating them that frequently hurts performance, so the ByteBufferPool holds reusable ByteBuffer objects, cutting down on object churn and GC pressure. Allocation itself is not the expensive part (new is a very fast JVM operation); the real cost is reclaiming all of those objects, especially young-generation collections, which put a heavy load on the JVM.
However, a pool that many threads draw objects from cannot be designed casually. A pool has to be thread-safe, and if that safety is achieved with slow blocking (such as synchronized), the pool can easily be worse than simply allocating new objects and letting the GC reclaim them; an object pool built on heavy locking performs very poorly.
So an efficient object pool has to be both thread-safe and lock-free (a sketch follows below).
ByteBufferPool also differs from an ordinary object pool. In an ordinary pool, such as a connection pool, every pooled object is interchangeable. ByteBuffers are not: one request may need a 1 KB buffer, another 2 KB, another 10 KB, another 2 MB, and the pool cannot keep N instances of every possible capacity. That makes ByteBufferPool somewhat more complex than a plain object pool.
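To illustrate the "thread-safe but lock-free" requirement, here is a minimal sketch that is not Jetty's code: a single-size pool built on ConcurrentLinkedDeque (the same structure Jetty's Bucket uses, as shown later). Jetty's real pool additionally buckets buffers by capacity:
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedDeque;

// Illustrative only: a pool for buffers of a single capacity, with no size limit
public class SimpleBufferPool
{
    private final Queue<ByteBuffer> free = new ConcurrentLinkedDeque<>();
    private final int capacity;

    public SimpleBufferPool(int capacity)
    {
        this.capacity = capacity;
    }

    public ByteBuffer acquire()
    {
        ByteBuffer buffer = free.poll();                    // lock-free pop; null when the pool is empty
        return buffer != null ? buffer : ByteBuffer.allocate(capacity);
    }

    public void release(ByteBuffer buffer)
    {
        buffer.clear();
        free.offer(buffer);                                 // lock-free push back into the pool
    }
}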
ByteBufferPool has several implementations in Jetty; here we look at the default one, ArrayByteBufferPool:
public ArrayByteBufferPool(int minSize, int increment, int maxSize, int maxQueue)
{
if (minSize<=0)
minSize=0;
if (increment<=0)
increment=1024;
if (maxSize<=0)
maxSize=64*1024;
if (minSize>=increment)
throw new IllegalArgumentException("minSize >= increment");
if ((maxSize%increment)!=0 || increment>=maxSize)
throw new IllegalArgumentException("increment must be a divisor of maxSize");
_min=minSize;
_inc=increment;
//One Bucket per size, so maxSize/increment = 64 Buckets are needed
//Bucket array for direct buffers
_direct=new ByteBufferPool.Bucket[maxSize/increment];
//Bucket array for heap buffers
_indirect=new ByteBufferPool.Bucket[maxSize/increment];
_maxQueue=maxQueue;
int size=0;
for (int i=0;i<_direct.length;i++)
{
size+=_inc;
//Create a Bucket of each size and put it into the Bucket arrays.
_direct[i]=new ByteBufferPool.Bucket(this,size,_maxQueue);
_indirect[i]=new ByteBufferPool.Bucket(this,size,_maxQueue);
}
}
minSize: the smallest size handled by the pool.
increment: the step between bucket sizes.
maxSize: the largest pooled size.
For example, with minSize=0, increment=1024 and maxSize=64*1024,
there are 64 distinct bucket sizes.
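A hedged sketch of how a requested size maps onto a bucket under those defaults; the rounding mirrors the bucket sizes created in the constructor above, and the variable names are mine:
// minSize=0, increment=1024, maxSize=64*1024  ->  64 buckets of 1024, 2048, ..., 65536 bytes
int increment = 1024;
int requested = 3000;                           // caller asks for 3000 bytes
int index = (requested - 1) / increment;        // -> 2
int bucketSize = (index + 1) * increment;       // -> 3072, the smallest bucket that fits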
The structure that stores the ByteBuffers is the Bucket:
class Bucket
{
private final Deque<ByteBuffer> _queue = new ConcurrentLinkedDeque<>();
private final ByteBufferPool _pool;
private final int _capacity;
private final AtomicInteger _space;
The important fields:
_capacity is the capacity, in bytes, of the ByteBuffers this Bucket serves (one Bucket per size).
_queue is the container that actually stores the ByteBuffers, a ConcurrentLinkedDeque (a lock-free implementation with very good concurrency).
When ArrayByteBufferPool is initialized, a Bucket is created for every size and stored in the Bucket arrays (_direct and _indirect), but the contents of each _queue are populated lazily.
acquire(): requests a ByteBuffer from the ArrayByteBufferPool.
//size: how large a ByteBuffer is needed; direct: whether to use direct (off-heap) memory
public ByteBuffer acquire(int size, boolean direct)
{
// Find the Bucket for this size in _direct[] or _indirect[];
// it is null when the requested size falls outside the pooled range
ByteBufferPool.Bucket bucket = bucketFor(size,direct);
if (bucket==null)
return newByteBuffer(size,direct);// no bucket: allocate a fresh buffer
// A matching bucket was found: take a ByteBuffer from its internal queue
return bucket.acquire(direct);
}
release(): returns a ByteBuffer to the pool.
public void release(ByteBuffer buffer)
{
if (buffer!=null)
{
//Find the matching Bucket
ByteBufferPool.Bucket bucket = bucketFor(buffer.capacity(),buffer.isDirect());
if (bucket!=null)
//Return the buffer to that Bucket's internal queue
bucket.release(buffer);
}
}
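Typical usage of the pool from connection-handling code looks roughly like this; a sketch, not an excerpt from Jetty, and the 4 KB size is illustrative:
import java.nio.ByteBuffer;
import org.eclipse.jetty.io.ArrayByteBufferPool;
import org.eclipse.jetty.io.ByteBufferPool;

public class BufferPoolUsageDemo
{
    public static void main(String[] args)
    {
        ByteBufferPool pool = new ArrayByteBufferPool();   // defaults: minSize=0, increment=1024, maxSize=64k
        ByteBuffer buffer = pool.acquire(4096, true);      // a direct buffer served from the 4096-byte bucket
        try
        {
            // ... fill the buffer from a channel, or write its contents out ...
        }
        finally
        {
            pool.release(buffer);                          // return it to its bucket for reuse
        }
    }
}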
Edge case:
If a thread asks for a 128 KB ByteBuffer, acquire() simply allocates a 128 KB buffer, but on release() it is not handed back to the pool: a 128 KB buffer never enters any Bucket's internal queue.
3) Registering the ConnectionFactory instances:
A ConnectionFactory creates connection objects; HttpConnectionFactory is a typical example.
4) Reading the number of available CPU cores:
int cores = Runtime.getRuntime().availableProcessors();
5) Adjusting the number of acceptor threads:
The acceptors are the threads that accept incoming client connections.
How many acceptor threads should there be? As a rule of thumb from server-side design, no more than four:
if (acceptors < 0)
acceptors=Math.max(1, Math.min(4,cores/8));
6) Creating the acceptor thread array:
_acceptors = new Thread[acceptors];
7) Initializing the ServerConnectorManager:
ServerConnectorManager extends SelectorManager, the manager responsible for the selectors.
A selector is the NIO component that tracks the readiness of channels.
_manager = newSelectorManager(getExecutor(), getScheduler(),selectors);
The number of selector threads is computed as:
Math.max(1,Math.min(cpus/2,threads/16));
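As a worked example of the two sizing heuristics above; the core count is illustrative, and 200 is QueuedThreadPool's default maxThreads:
int cores = 8;                                                      // Runtime.getRuntime().availableProcessors()
int maxThreads = 200;                                               // QueuedThreadPool default

int acceptors = Math.max(1, Math.min(4, cores / 8));                // -> 1
int selectors = Math.max(1, Math.min(cores / 2, maxThreads / 16));  // -> 4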
3. Setting the port:
Sets the port the connector will listen on.
4. Associating the Server with the Connector:
setConnectors(new Connector[]{connector});
三. Server.start():
org.eclipse.jetty.util.component.AbstractLifeCycle:
public final void start() throws Exception
{
synchronized (_lock)
{
try
{
if (_state == __STARTED || _state == __STARTING)
return;
//1. Set the starting state
setStarting();
//2. Run the start-up logic in doStart()
doStart();
//3. Mark start-up as complete
setStarted();
}
catch (Throwable e)
{
setFailed(e);
throw e;
}
}
}
1. Setting the starting state:
private void setStarting()
{
if (LOG.isDebugEnabled())
LOG.debug("starting {}",this);
_state = __STARTING;
for (Listener listener : _listeners)
listener.lifeCycleStarting(this);
}
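The listeners notified here can be registered by application code. A minimal sketch using the AbstractLifeCycleListener adapter; the port and the log messages are illustrative:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.component.AbstractLifeCycle;
import org.eclipse.jetty.util.component.LifeCycle;

public class LifeCycleListenerDemo
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server(8080);                  // illustrative port
        server.addLifeCycleListener(new AbstractLifeCycle.AbstractLifeCycleListener()
        {
            @Override
            public void lifeCycleStarting(LifeCycle event)
            {
                // Invoked from setStarting(), before doStart() runs
                System.out.println("starting: " + event);
            }

            @Override
            public void lifeCycleStarted(LifeCycle event)
            {
                // Invoked from setStarted(), after doStart() completes
                System.out.println("started: " + event);
            }
        });
        server.start();
        server.join();
    }
}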
2. The start-up logic, doStart():
Many Jetty objects, Server and the Connectors included, implement the LifeCycle interface. LifeCycle models an object that has a life cycle and declares methods such as start() and stop(); the object's entire lifespan is expressed through it. Because objects like Server and Connector are created, started, stopped and eventually destroyed, all such objects implement LifeCycle.
@Override
protected void doStart() throws Exception
{
// Create an error handler if there is none
if (_errorHandler==null)
_errorHandler=getBean(ErrorHandler.class);
if (_errorHandler==null)
setErrorHandler(new ErrorHandler());
if (_errorHandler instanceof ErrorHandler.ErrorPageMapper)
LOG.warn("ErrorPageMapper not supported for Server level Error Handling");
_errorHandler.setServer(this);
//If the Server should be stopped when the jvm exits, register
//with the shutdown handler thread.
if (getStopAtShutdown())
ShutdownThread.register(this);
//Register the Server with the handler thread for receiving
//remote stop commands
//1. Register the ShutdownMonitor, used to stop the Server remotely
ShutdownMonitor.register(this);
//Start a thread waiting to receive "stop" commands.
ShutdownMonitor.getInstance().start(); // initialize
LOG.info("jetty-" + getVersion());
if (!Jetty.STABLE)
{
LOG.warn("THIS IS NOT A STABLE RELEASE! DO NOT USE IN PRODUCTION!");
LOG.warn("Download a stable release from http://download.eclipse.org/jetty/");
}
HttpGenerator.setJettyVersion(HttpConfiguration.SERVER_VERSION);
// Check that the thread pool size is enough.
//2. Get the thread pool; it was fully initialized back during instantiation.
SizedThreadPool pool = getBean(SizedThreadPool.class);
int max=pool==null?-1:pool.getMaxThreads();
int selectors=0;
int acceptors=0;
for (Connector connector : _connectors)
{
if (connector instanceof AbstractConnector)
{
AbstractConnector abstractConnector = (AbstractConnector)connector;
Executor connectorExecutor = connector.getExecutor();
if (connectorExecutor != pool)
{
// Do not count the selectors and acceptors from this connector at
// the server level, because the connector uses a dedicated executor.
continue;
}
acceptors += abstractConnector.getAcceptors();
if (connector instanceof ServerConnector)
{
// The SelectorManager uses 2 threads for each selector,
// one for the normal and one for the low priority strategies.
selectors += 2 * ((ServerConnector)connector).getSelectorManager().getSelectorCount();
}
}
}
int needed=1+selectors+acceptors;
if (max>0 && needed>max)
throw new IllegalStateException(String.format("Insufficient threads: max=%d < needed(acceptors=%d + selectors=%d + request=1)",max,acceptors,selectors));
MultiException mex=new MultiException();
try
{
super.doStart();
}
catch(Throwable e)
{
mex.add(e);
}
// start connectors last
for (Connector connector : _connectors)
{
try
{
connector.start();
}
catch(Throwable e)
{
mex.add(e);
}
}
if (isDumpAfterStart())
dumpStdErr();
mex.ifExceptionThrow();
LOG.info(String.format("Started @%dms",Uptime.getUptime()));
}
1. ShutdownMonitor:
A separate thread is started so that the Server can be stopped remotely:
//remote stop commands
ShutdownMonitor.register(this);
//Start a thread waiting to receive "stop" commands.
ShutdownMonitor.getInstance().start(); // initialize
2. Getting the thread pool:
The thread pool was already created during instantiation; Jetty retrieves it through its own bean-management machinery:
// Check that the thread pool size is enough.
SizedThreadPool pool = getBean(SizedThreadPool.class);
int max=pool==null?-1:pool.getMaxThreads();
int selectors=0;
int acceptors=0;
3. Counting the selectors and acceptors:
The selectors of every connector that shares the server's thread pool are added up, as are the acceptors (see the worked example after the code below).
The total number of threads required is needed = 1 + selectors + acceptors.
If needed exceeds the pool's maxThreads (200 by default), start-up is aborted: if that many threads were already tied up as selectors and acceptors, the system considers the server unable to run.
int max=pool==null?-1:pool.getMaxThreads();
int selectors=0;
int acceptors=0;
for (Connector connector : _connectors)
{
if (connector instanceof AbstractConnector)
{
AbstractConnector abstractConnector = (AbstractConnector)connector;
Executor connectorExecutor = connector.getExecutor();
if (connectorExecutor != pool)
{
// Do not count the selectors and acceptors from this connector at
// the server level, because the connector uses a dedicated executor.
continue;
}
acceptors += abstractConnector.getAcceptors();
if (connector instanceof ServerConnector)
{
// The SelectorManager uses 2 threads for each selector,
// one for the normal and one for the low priority strategies.
selectors += 2 * ((ServerConnector)connector).getSelectorManager().getSelectorCount();
}
}
}
//Total number of threads required:
int needed=1+selectors+acceptors;
If needed exceeds maxThreads (200 by default), start-up is aborted:
if (max>0 && needed>max)
throw new IllegalStateException(String.format("Insufficient threads: max=%d < needed(acceptors=%d + selectors=%d + request=1)",max,acceptors,selectors));
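Continuing the earlier sizing example, a quick check of the thread budget; all numbers are illustrative:
int acceptors = 1;                              // from the acceptor heuristic
int selectorThreads = 2 * 4;                    // SelectorManager counts 2 threads per selector
int needed = 1 + selectorThreads + acceptors;   // -> 10
// With the default maxThreads of 200 this passes easily; the IllegalStateException
// above only fires when needed exceeds the pool's maxThreads.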
4. Managing the beans and starting them:
QueuedThreadPool implements LifeCycle, so it is started here along with the other managed beans:
/**
* Starts the managed lifecycle beans in the order they were added.
*/
@Override
protected void doStart() throws Exception
{
if (_destroyed)
throw new IllegalStateException("Destroyed container cannot be restarted");
// indicate that we are started, so that addBean will start other beans added.
_doStarted = true;
// start our managed and auto beans
for (Bean b : _beans)
{
if (b._bean instanceof LifeCycle)
{
LifeCycle l = (LifeCycle)b._bean;
switch(b._managed)
{
case MANAGED:
if (!l.isRunning())
start(l);
break;
case AUTO:
if (l.isRunning())
unmanage(b);
else
{
manage(b);
start(l);
}
break;
}
}
}
super.doStart();
}
The start-up of QueuedThreadPool:
@Override
protected void doStart() throws Exception
{
super.doStart();
_threadsStarted.set(0);
//Start a number of threads: create each thread, set its attributes, and start it.
startThreads(_minThreads);
}
//Start the worker threads
private boolean startThreads(int threadsToStart)
{
while (threadsToStart > 0 && isRunning())
{
int threads = _threadsStarted.get();
if (threads >= _maxThreads)
return false;
if (!_threadsStarted.compareAndSet(threads, threads + 1))
continue;
boolean started = false;
try
{
//Create the thread around _runnable (which implements Runnable)
Thread thread = newThread(_runnable);
//Set the thread's attributes
thread.setDaemon(isDaemon());
thread.setPriority(getThreadsPriority());
thread.setName(_name + "-" + thread.getId());
_threads.add(thread);
//Start the thread
thread.start();
started = true;
--threadsToStart;
}
finally
{
if (!started)
_threadsStarted.decrementAndGet();
}
}
return true;
}
//Inside _runnable: a loop keeps polling _jobs. Once started, a worker thread pulls tasks from _jobs here; this is the consumer side of the execute() method shown earlier.
while (job != null && isRunning())
{
if (LOG.isDebugEnabled())
LOG.debug("run {}",job);
//Actually run the job on this thread.
runJob(job);
if (LOG.isDebugEnabled())
LOG.debug("ran {}",job);
if (Thread.interrupted())
{
ignore=true;
break loop;
}
job = _jobs.poll();
}
5. Starting the connectors:
1. Get the ConnectionFactory
2. Create the selector threads and start them
3. Create the acceptor threads
// start connectors last
for (Connector connector : _connectors)
{
try
{
//Start the connector
connector.start();
}
catch(Throwable e)
{
mex.add(e);
}
}
1. Getting the ConnectionFactory:
//org.eclipse.jetty.server.AbstractConnector
protected void doStart() throws Exception
{
if(_defaultProtocol==null)
throw new IllegalStateException("No default protocol for "+this);
//Get the ConnectionFactory:
_defaultConnectionFactory = getConnectionFactory(_defaultProtocol);
if(_defaultConnectionFactory==null)
throw new IllegalStateException("No protocol factory for default protocol '"+_defaultProtocol+"' in "+this);
SslConnectionFactory ssl = getConnectionFactory(SslConnectionFactory.class);
if (ssl != null)
{
String next = ssl.getNextProtocol();
ConnectionFactory cf = getConnectionFactory(next);
if (cf == null)
throw new IllegalStateException("No protocol factory for SSL next protocol: '" + next + "' in " + this);
}
super.doStart();
_stopping=new CountDownLatch(_acceptors.length);
for (int i = 0; i < _acceptors.length; i++)
{
Acceptor a = new Acceptor(i);
addBean(a);
getExecutor().execute(a);
}
LOG.info("Started {}", this);
}
The Connector then relies on the parent LifeCycle machinery to get all of its beans running:
2. Creating the selector threads and starting them:
org.eclipse.jetty.io.SelectorManager
@Override
protected void doStart() throws Exception
{
addBean(new ReservedThreadExecutor(getExecutor(),_reservedThreads),true);
for (int i = 0; i < _selectors.length; i++)
{
ManagedSelector selector = newSelector(i);
_selectors[i] = selector;
addBean(selector);
}
super.doStart();
}
3. Creating the acceptor threads and starting them:
org.eclipse.jetty.server.AbstractConnector.doStart():
_stopping=new CountDownLatch(_acceptors.length);
for (int i = 0; i < _acceptors.length; i++)
{
Acceptor a = new Acceptor(i);
addBean(a);
getExecutor().execute(a);
}
4. What does an Acceptor actually do?
org.eclipse.jetty.server.AbstractConnector.Acceptor.run():
@Override
public void run()
{
final Thread thread = Thread.currentThread();
String name=thread.getName();
//Set the thread name
_name=String.format("%s-acceptor-%d@%x-%s",name,_id,hashCode(),AbstractConnector.this.toString());
thread.setName(_name);
//Set the thread priority
int priority=thread.getPriority();
if (_acceptorPriorityDelta!=0)
thread.setPriority(Math.max(Thread.MIN_PRIORITY,Math.min(Thread.MAX_PRIORITY,priority+_acceptorPriorityDelta)));
synchronized (AbstractConnector.this)
{
//Store this thread in the _acceptors array
_acceptors[_id] = thread;
}
try
{
//Keep accepting while the connector is running
while (isRunning())
{
try (Locker.Lock lock = _locker.lock())
{
if (!_accepting && isRunning())
{
_setAccepting.await();
continue;
}
}
catch (InterruptedException e)
{
continue;
}
try
{
//Accept a connection
accept(_id);
}
catch (Throwable x)
{
if (!handleAcceptFailure(x))
break;
}
}
}
finally
{
thread.setName(name);
if (_acceptorPriorityDelta!=0)
thread.setPriority(priority);
synchronized (AbstractConnector.this)
{
_acceptors[_id] = null;
}
CountDownLatch stopping=_stopping;
if (stopping!=null)
stopping.countDown();
}
}
The code that listens for client connections:
org.eclipse.jetty.server.ServerConnector:
@Override
public void accept(int acceptorID) throws IOException
{
ServerSocketChannel serverChannel = _acceptChannel;
if (serverChannel != null && serverChannel.isOpen())
{
SocketChannel channel = serverChannel.accept();
//The acceptor thread blocks in accept() above until a client connects
accepted(channel);
}
}
If the number of acceptors is zero, there is no dedicated accept thread, so the listening channel is put into non-blocking mode; if it is non-zero, dedicated threads call accept(), so the channel stays in blocking mode. A plain-NIO sketch of the two modes follows the code below.
org.eclipse.jetty.server.ServerConnector:
@Override
protected void doStart() throws Exception
{
super.doStart();
if (getAcceptors()==0)
{
//No dedicated accept threads, so switch the listening channel to non-blocking mode
_acceptChannel.configureBlocking(false);
_acceptor.set(_manager.acceptor(_acceptChannel));
}
}
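To illustrate the difference outside of Jetty, here is a minimal NIO sketch of the two accept modes; the ports and thread name are illustrative:
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptModesDemo
{
    public static void main(String[] args) throws Exception
    {
        // Mode 1: acceptors > 0 -- a dedicated thread calls the blocking accept()
        ServerSocketChannel blocking = ServerSocketChannel.open();
        blocking.bind(new InetSocketAddress(8081));           // illustrative port
        Thread acceptor = new Thread(() ->
        {
            try
            {
                while (true)
                {
                    SocketChannel c = blocking.accept();      // blocks until a client connects
                    c.configureBlocking(false);               // the accepted channel is then handled non-blocking
                }
            }
            catch (Exception ignored) {}
        }, "demo-acceptor-0");
        acceptor.start();

        // Mode 2: acceptors == 0 -- the listening channel itself is non-blocking
        // and accept readiness is reported by a selector
        ServerSocketChannel nonBlocking = ServerSocketChannel.open();
        nonBlocking.bind(new InetSocketAddress(8082));        // illustrative port
        nonBlocking.configureBlocking(false);
        Selector selector = Selector.open();
        nonBlocking.register(selector, SelectionKey.OP_ACCEPT);
        while (true)
        {
            selector.select();                                // wakes up when a connection can be accepted
            selector.selectedKeys().clear();
            SocketChannel c = nonBlocking.accept();           // never blocks; may return null
            if (c != null)
                c.configureBlocking(false);
        }
    }
}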
四. Handling HTTP requests:
1. A successful accept
2. Processing the request
1. A successful accept:
Picking up where the acceptor left off:
org.eclipse.jetty.server.ServerConnector:
@Override
public void accept(int acceptorID) throws IOException
{
ServerSocketChannel serverChannel = _acceptChannel;
if (serverChannel != null && serverChannel.isOpen())
{
SocketChannel channel = serverChannel.accept();
//The acceptor thread blocks in accept() above until a client connects
accepted(channel);
}
}
A closer look at accepted():
org.eclipse.jetty.server.ServerConnector:
private void accepted(SocketChannel channel) throws IOException
{
//1. Switch the accepted channel to non-blocking mode
channel.configureBlocking(false);
//2. Configure the socket
Socket socket = channel.socket();
configure(socket);
//3. The real processing: hand the channel over to the SelectorManager
_manager.accept(channel);
}
The processing itself has two steps:
1. Choose an available ManagedSelector.
2. Let that ManagedSelector handle the channel.
org.eclipse.jetty.io.SelectorManager
public void accept(SelectableChannel channel, Object attachment)
{
//Choose an available ManagedSelector
final ManagedSelector selector = chooseSelector(channel);
//Submit the task to it
selector.submit(selector.new Accept(channel, attachment));
}
1. Choosing an available ManagedSelector:
org.eclipse.jetty.io.SelectorManager
private ManagedSelector chooseSelector(SelectableChannel channel)
{
// Ideally we would like to have all connections from the same client end
// up on the same selector (to try to avoid smearing the data from a single
// client over all cores), but because of proxies, the remote address may not
// really be the client - so we have to hedge our bets to ensure that all
// channels don't end up on the one selector for a proxy.
ManagedSelector candidate1 = null;
if (channel != null)
{
try
{
if (channel instanceof SocketChannel)
{
SocketAddress remote = ((SocketChannel)channel).getRemoteAddress();
if (remote instanceof InetSocketAddress)
{
byte[] addr = ((InetSocketAddress)remote).getAddress().getAddress();
if (addr != null)
{
int s = addr[addr.length - 1] & 0xFF;
candidate1 = _selectors[s % getSelectorCount()];
}
}
}
}
catch (IOException x)
{
LOG.ignore(x);
}
}
// The ++ increment here is not atomic, but it does not matter,
// so long as the value changes sometimes, then connections will
// be distributed over the available selectors.
//_selectorIndex is a plain field and is not thread-safe; it does not need to be.
//As long as its value keeps changing, connections get distributed across the
//selectors, and making it thread-safe (even with an atomic) would only cost
//performance. In other words, thread safety is sometimes unnecessary.
long s = _selectorIndex++;
int index = (int)(s % getSelectorCount());
ManagedSelector candidate2 = _selectors[index];
if (candidate1 == null || candidate1.size() >= candidate2.size() * 2)
return candidate2;
return candidate1;
}
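To make the address-based choice concrete, a tiny illustrative calculation; the address and selector count are made up:
// A client at 192.168.1.57 connecting to a connector with 4 selectors
byte[] addr = new byte[]{(byte)192, (byte)168, 1, 57};
int lastByte = addr[addr.length - 1] & 0xFF;   // 57
int index = lastByte % 4;                      // -> candidate1 is selector 1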
2. Submitting the task:
org.eclipse.jetty.io.ManagedSelector
public void submit(Runnable change)
{
if (LOG.isDebugEnabled())
LOG.debug("Queued change {} on {}", change, this);
Selector selector = null;
try (Locker.Lock lock = _locker.lock())
{
_actions.offer(change);
if (_selecting)
{
selector = _selector;
// To avoid the extra select wakeup.
_selecting = false;
}
}
if (selector != null)
selector.wakeup();
}
2. Processing the request:
org.eclipse.jetty.io.ManagedSelector, the run() of a submitted task that registers the channel with the selector:
@Override
public void run()
{
try
{
channel.register(_selector, SelectionKey.OP_CONNECT, this);
}
catch (Throwable x)
{
failed(x);
}
}