In the classic model, the server creates a new thread for every client request. The server opens a socket and listens for incoming client connections; in principle, each accepted connection is handed off to a newly created thread for processing.
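A minimal sketch of that thread-per-connection model (an illustrative plain-socket echo server, not Tomcat's actual connector code):

import java.io.*;
import java.net.*;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();          // blocks until a client connects
                new Thread(() -> handle(client)).start(); // one new thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());        // trivial request handling
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Creating an unbounded number of threads like this is exactly the cost a thread pool is meant to contain, which is where the discussion below goes.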
A servlet goes through three life-cycle phases.
Phase 1: initialization. The container calls the servlet's init() method once, after the servlet instance is created.
Phase 2: request handling. From the first client request onward, the container calls the servlet's service() method (which dispatches to doGet()/doPost()) for every request.
Phase 3: termination. When the server shuts down or the application is undeployed, the container calls the destroy() method.
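A minimal HttpServlet sketch (illustrative only, using the javax.servlet API) showing where the three phases land:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class LifecycleServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Phase 1: called once by the container after the servlet is instantiated
        System.out.println("init: load configuration, open resources");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Phase 2: called (via service()) for every matching request,
        // possibly by many container threads at the same time
        resp.getWriter().println("hello");
    }

    @Override
    public void destroy() {
        // Phase 3: called once when the server shuts down or the app is undeployed
        System.out.println("destroy: release resources");
    }
}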
Servlet thread-safety analysis: inside the container a servlet exists as a singleton, so in principle it is exposed to thread-safety problems. But a servlet, as typically written, holds no shared mutable state: every request gets its own request and response objects, and per-request data lives in local variables, so there is nothing for concurrent request threads to corrupt, and the servlet is thread safe. The moment a servlet declares a mutable instance field, that field is shared by all request threads and the guarantee is gone.
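A hypothetical example to make the contrast concrete (names are illustrative, javax.servlet API assumed):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class CounterServlet extends HttpServlet {

    // Shared by every request thread -> NOT thread safe (lost updates under load)
    private int hits = 0;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        hits++;   // unsynchronized read-modify-write on shared state

        // Local variables and the per-request objects are confined to this thread -> safe
        String name = req.getParameter("name");
        resp.getWriter().println("hello " + name + ", hit #" + hits);
    }
}

Removing the field, replacing it with an AtomicInteger, or synchronizing access would restore thread safety.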
Some thoughts on how to size a thread pool sensibly.
First, why use multiple threads at all? A single-threaded program cannot exploit the full performance of the CPU; a multithreaded program improves application performance precisely by squeezing more work out of the available cores. A good reference for how a production server configures its pool is Tomcat's StandardThreadExecutor:
public class StandardThreadExecutor extends LifecycleMBeanBase
        implements Executor, ResizableExecutor {

    // ---------------------------------------------- Properties

    /**
     * Default thread priority
     */
    protected int threadPriority = Thread.NORM_PRIORITY;

    /**
     * Run threads in daemon or non-daemon state
     */
    protected boolean daemon = true;

    /**
     * Default name prefix for the thread name
     */
    protected String namePrefix = "tomcat-exec-";

    /**
     * max number of threads
     */
    protected int maxThreads = 200;

    /**
     * min number of threads
     */
    protected int minSpareThreads = 25;

    /**
     * idle time in milliseconds
     */
    protected int maxIdleTime = 60000;

    /**
     * The executor we use for this component
     */
    protected ThreadPoolExecutor executor = null;

    /**
     * the name of this thread pool
     */
    protected String name;

    /**
     * prestart threads?
     */
    protected boolean prestartminSpareThreads = false;

    /**
     * The maximum number of elements that can queue up before we reject them
     */
    protected int maxQueueSize = Integer.MAX_VALUE;

    /**
     * After a context is stopped, threads in the pool are renewed. To avoid
     * renewing all threads at the same time, this delay is observed between 2
     * threads being renewed.
     */
    protected long threadRenewalDelay =
            org.apache.tomcat.util.threads.Constants.DEFAULT_THREAD_RENEWAL_DELAY;

    private TaskQueue taskqueue = null;

    // ---------------------------------------------- Constructors

    public StandardThreadExecutor() {
        //empty constructor for the digester
    }

    // ---------------------------------------------- Public Methods

    @Override
    protected void initInternal() throws LifecycleException {
        super.initInternal();
    }

    /**
     * Start the component and implement the requirements
     * of {@link org.apache.catalina.util.LifecycleBase#startInternal()}.
     *
     * @exception LifecycleException if this component detects a fatal error
     *  that prevents this component from being used
     */
    @Override
    protected void startInternal() throws LifecycleException {
        taskqueue = new TaskQueue(maxQueueSize);
        TaskThreadFactory tf = new TaskThreadFactory(namePrefix, daemon, getThreadPriority());
        executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), maxIdleTime,
                TimeUnit.MILLISECONDS, taskqueue, tf);
        executor.setThreadRenewalDelay(threadRenewalDelay);
        if (prestartminSpareThreads) {
            executor.prestartAllCoreThreads();
        }
        taskqueue.setParent(executor);
        setState(LifecycleState.STARTING);
    }

    /**
     * Stop the component and implement the requirements
     * of {@link org.apache.catalina.util.LifecycleBase#stopInternal()}.
     *
     * @exception LifecycleException if this component detects a fatal error
     *  that needs to be reported
     */
    @Override
    protected void stopInternal() throws LifecycleException {
        setState(LifecycleState.STOPPING);
        if (executor != null) executor.shutdownNow();
        executor = null;
        taskqueue = null;
    }

    @Override
    protected void destroyInternal() throws LifecycleException {
        super.destroyInternal();
    }

    @Override
    public void execute(Runnable command, long timeout, TimeUnit unit) {
        if (executor != null) {
            executor.execute(command, timeout, unit);
        } else {
            throw new IllegalStateException("StandardThreadExecutor not started.");
        }
    }

    @Override
    public void execute(Runnable command) {
        if (executor != null) {
            try {
                executor.execute(command);
            } catch (RejectedExecutionException rx) {
                //there could have been contention around the queue
                if (!((TaskQueue) executor.getQueue()).force(command)) {
                    throw new RejectedExecutionException("Work queue full.");
                }
            }
        } else {
            throw new IllegalStateException("StandardThreadPool not started.");
        }
    }
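For comparison, here is the same kind of pool expressed with the plain JDK java.util.concurrent.ThreadPoolExecutor; the sizes below are illustrative assumptions, not Tomcat defaults. (Note that StandardThreadExecutor actually builds org.apache.tomcat.util.threads.ThreadPoolExecutor, Tomcat's own subclass, which is why calls such as setThreadRenewalDelay and execute(command, timeout, unit) appear above.)

import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                cores,                              // core pool size (cf. minSpareThreads)
                cores * 4,                          // max pool size   (cf. maxThreads)
                60_000, TimeUnit.MILLISECONDS,      // idle keep-alive (cf. maxIdleTime)
                new LinkedBlockingQueue<>(1000),    // bounded queue   (cf. maxQueueSize)
                r -> {
                    Thread t = new Thread(r, "demo-exec"); // all demo threads share this name
                    t.setDaemon(true);
                    return t;
                });

        pool.execute(() -> System.out.println("handled by " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}

One caveat: the JDK pool only grows beyond its core size once the queue is full, whereas Tomcat's TaskQueue deliberately refuses to queue until maxThreads is reached, so Tomcat prefers starting new threads over queueing.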
The biggest value of nginx here is load balancing: it only dispatches requests and does nothing else that would consume CPU, I/O, or memory, so the machine's capacity is spent entirely on handling and forwarding requests.
Leaky-bucket algorithm: no matter what rate external requests arrive at, they are passed on to the downstream application at a constant rate.
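To make the idea concrete, here is a minimal leaky-bucket sketch in Java (illustrative only; nginx implements this inside its C modules):

public class LeakyBucket {
    private final double ratePerMs;   // drain rate, e.g. 3 requests/ms = 3000 r/s
    private final double capacity;    // burst size the bucket can absorb
    private double water = 0;         // current bucket level
    private long lastLeakMs = System.currentTimeMillis();

    public LeakyBucket(double ratePerSecond, double burst) {
        this.ratePerMs = ratePerSecond / 1000.0;
        this.capacity = burst;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        water = Math.max(0, water - (now - lastLeakMs) * ratePerMs); // leak since last call
        lastLeakMs = now;
        if (water + 1 <= capacity) {   // room for this request
            water += 1;
            return true;
        }
        return false;                  // overflow -> reject (nginx would answer 503)
    }
}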
Configuration in nginx:
1. The ngx_http_limit_req_module module
Syntax: limit_req zone=name [burst=number] [nodelay];
Default: -
Context: http, server, location
Binds the location to a previously defined shared memory zone and rate threshold. Requests above the rate are put into a queue and delayed; once the number of delayed requests exceeds the burst threshold, further requests are terminated with a 503 response.
limit_req_zone $binary_remote_addr zone=one:10m rate=3000r/s;
server {
    location /search/ {
        limit_req zone=one burst=100;
    }
}
With this configuration, requests from one client IP average at most 3000 per second, and up to 100 excess requests may be queued and delayed.
2. The ngx_http_limit_conn_module module
Syntax: limit_conn zone number;
Default: -
Context: http, server, location
Points to a previously defined shared memory zone and sets the maximum number of connections allowed per key value. When the connection count exceeds the threshold, 503 is returned.
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;
server {
    ...
    limit_conn perip 10;
    limit_conn perserver 200;
}
The configuration above means that one client IP may hold at most 10 concurrent connections, and a single virtual server is limited to 200 connections in total.
In practice, the prerequisite for building a high-concurrency system is to split work that currently runs on a single thread into pieces that can run on multiple threads. By Amdahl's law, speedup = 1 / ((1 - P) + P/N), where P is the parallelizable fraction and N is the number of cores, so the larger the parallel fraction of an application, the more fully a multi-core machine can be used; for example, with P = 0.75 on 8 cores the theoretical speedup is only about 2.9. The split therefore has to follow the structure of the business. The specific principles are as follows:
Once we have a thorough understanding of the business, we can configure the thread pool sensibly. The principles are as follows: