Redis event handling mechanism, and more

There is already a very detailed source-code walkthrough of Redis's event handling mechanism online, based on version 2.0.4; the articles are:
Redis source code analysis 8 – event handling (part 1)

Redis source code analysis 8 – event handling (part 2)

Redis source code analysis 8 – event handling (part 3)


A short summary:

Initialization: in redis.c, initServer calls aeCreateEventLoop and registers the only time event that currently exists: serverCron.
Usage: in main() in redis.c:
1) aeSetBeforeSleepProc is called to register the beforeSleep function. The source I read is 2.2.1, which changes this slightly compared with 2.0.4: it adds handling of clients that had been blocked on BLPOP and have since been unblocked.
2) aeMain is called to enter the main loop, which repeatedly runs beforesleep and aeProcessEvents. The program's exit path is in serverCron: when SIGTERM is received it calls prepareForShutdown and then exits.
  2.1) In ae.c, aeProcessEvents handles file events first and then time events. A small trick: the expiry time of the nearest time event is computed first (aeSearchNearestTimer) and passed to aeApiPoll as the timeout, so time events can be handled right after the file events. In particular, when aeApiPoll is backed by the callback-style epoll or kqueue, no CPU time is wasted polling for timers. (A simplified sketch of this loop follows.)
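To make that flow concrete, here is a heavily simplified sketch in C. The function names follow ae.c (aeMain, aeProcessEvents, aeSearchNearestTimer, aeApiPoll), but the struct fields and the helper bodies below are placeholders invented for illustration, not the real Redis code.

/* Simplified sketch of the ae.c main loop; struct fields and helper bodies
 * are placeholders, NOT the real Redis definitions. */
#include <stddef.h>
#include <sys/time.h>

typedef struct aeEventLoop {
    int stop;
    void (*beforesleep)(struct aeEventLoop *el);
    /* ... registered file events and time events live here ... */
} aeEventLoop;

/* Placeholder: the real function scans the registered time events and
 * returns the number of milliseconds until the nearest one expires. */
static long long aeSearchNearestTimer(aeEventLoop *el) { (void)el; return 100; }

/* Placeholder: the real function blocks in epoll/kqueue/select for at most
 * *tvp and returns how many file descriptors became ready. */
static int aeApiPoll(aeEventLoop *el, struct timeval *tvp) { (void)el; (void)tvp; return 0; }

static void aeProcessEvents(aeEventLoop *el) {
    /* Sleep no longer than the nearest time event, so time events can be
     * handled right after the file events without busy-polling. */
    long long ms = aeSearchNearestTimer(el);
    struct timeval tv = { .tv_sec = ms / 1000, .tv_usec = (ms % 1000) * 1000 };

    int numready = aeApiPoll(el, &tv);
    (void)numready;
    /* 1) handle the ready file events here ...                    */
    /* 2) ... then the expired time events (serverCron runs here). */
}

void aeMain(aeEventLoop *el) {
    el->stop = 0;
    while (!el->stop) {
        if (el->beforesleep != NULL)
            el->beforesleep(el);   /* e.g. serve clients unblocked from BLPOP */
        aeProcessEvents(el);
        el->stop = 1;              /* demo only: the real loop exits via serverCron */
    }
}

int main(void) {
    aeEventLoop el = { .stop = 0, .beforesleep = NULL };
    aeMain(&el);
    return 0;
}

The detail to notice is the timeout handed to aeApiPoll: it is derived from the nearest time event, so the loop never spins while waiting for serverCron to become due.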

As a side note, here is some comparison material on select, epoll, and kqueue:

---------------------<< the following is from iteye >>-------------------------
First, select:
1. Socket count limit: the number of sockets this mode can handle is determined by FD_SETSIZE; the kernel default is 32*32 = 1024.
2. Operation limit: scheduling is done by traversing all FD_SETSIZE sockets; no matter which sockets are active, the whole set is scanned once (a select-style scan is sketched after this excerpt).
Next, poll:
1. Almost no limit on socket count: in this mode the fd list is kept in an array whose size is not fixed (4k by default).
2. Operation limit: same as select.
Finally, epoll:
1. No limit on socket count: same as poll.
2. No operation limit: based on the callback ("reflection") mode provided by the kernel; when a socket becomes active the kernel invokes that socket's callback, so there is no need to traverse and poll.

In most cases callbacks are more efficient than traversal, but are they still when every socket is active? In that case all the callbacks get woken up, which leads to contention for resources. Since every socket has to be handled anyway, traversal is then the simplest and most effective implementation.

For example:
For an IM server, the server-to-server connections are all long-lived but few in number, typically 60-70 per machine (say, with an ICE-style architecture), yet the requests are very frequent and dense. In that case waking callbacks is not necessarily better than traversing with select.
For a web portal server, the load is short-lived HTTP requests from browser clients, and there are a lot of them: a reasonably busy site easily sees thousands of requests per minute, while the server also holds even more idle sockets waiting to time out. There is no need to traverse and handle every socket, because the connections waiting to time out are the majority, so epoll works better here.
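To illustrate the traversal cost mentioned above, here is a minimal select()-based read loop. It watches only stdin so it stays self-contained; a real server would put every client socket into the watched array, subject to the FD_SETSIZE cap.

/* Minimal select() sketch: every call rebuilds the fd_set, and the kernel
 * (and then we) scan all descriptors up to maxfd, active or not. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    int watched[] = { STDIN_FILENO };          /* in a server: all client sockets */
    int nwatched = 1;

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < nwatched; i++) {   /* rebuilt on EVERY iteration */
            FD_SET(watched[i], &readfds);
            if (watched[i] > maxfd) maxfd = watched[i];
        }

        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
        int n = select(maxfd + 1, &readfds, NULL, NULL, &tv);
        if (n < 0) { perror("select"); return 1; }
        if (n == 0) continue;                  /* timeout, nothing ready */

        /* Linear scan: every watched fd is tested to find the ready ones,
         * and FD_SETSIZE (usually 1024) caps how many can be watched at all. */
        for (int i = 0; i < nwatched; i++) {
            if (FD_ISSET(watched[i], &readfds)) {
                char buf[256];
                ssize_t r = read(watched[i], buf, sizeof(buf));
                if (r <= 0) return 0;          /* EOF or error: stop the demo */
                printf("fd %d: read %zd bytes\n", watched[i], r);
            }
        }
    }
}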
---------------------<< the following is from stackoverflow >>-------------------------
poll / select
1) Two flavours (BSD vs. System V) of more or less the same thing.
2) Somewhat old and slow, somewhat awkward usage, but there is virtually no platform that does not support them.
3) Waits until "something happens" on a set of descriptors
   3.1) Allows one thread/process to handle many requests at a time.
   3.2) No multi-core usage.
4) Needs to copy list of descriptors from user to kernel space every time you wait. Needs to perform a linear search over descriptors. This limits its effectiveness.
5) Does not scale well to "thousands" (in fact, hard limit around 1024 on most systems, or as low as 64 on some).
6) Use it because it's portable if you only deal with a dozen descriptors anyway (no performance issues there), or if you must support platforms that don't have anything better. Don't use otherwise. (A minimal poll() sketch follows this list.)
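By contrast with select's hard FD_SETSIZE cap, poll() takes an ordinary array of struct pollfd, so there is no fixed limit, but the array is still copied into the kernel on every call and scanned linearly afterwards. A minimal sketch, again watching only stdin for brevity:

/* Minimal poll() sketch: the watched set is an ordinary array (no FD_SETSIZE
 * cap), but it is still copied in on every call and scanned linearly after. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct pollfd fds[1];                  /* in a server: one slot per client */
    int nfds = 1;
    fds[0].fd = STDIN_FILENO;
    fds[0].events = POLLIN;

    for (;;) {
        int n = poll(fds, nfds, 1000);     /* timeout in milliseconds */
        if (n < 0) { perror("poll"); return 1; }
        if (n == 0) continue;              /* timeout, nothing ready */

        /* Linear scan over revents to locate the ready descriptors. */
        for (int i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[256];
                ssize_t r = read(fds[i].fd, buf, sizeof(buf));
                if (r <= 0) return 0;      /* EOF or error: stop the demo */
                printf("fd %d: read %zd bytes\n", fds[i].fd, r);
            }
        }
    }
}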

epoll

1) Linux only.
2) Concept of expensive modifications vs. efficient waits (sketched in code after this list):
   2.1) Copies information about descriptors to kernel space when descriptors are added (epoll_ctl).
        This is usually something that happens rarely.
   2.2) Does not need to copy data to kernel space when waiting for events (epoll_wait).
        This is usually something that happens very often.
   2.3) Adds the waiter (or rather its epoll structure) to the descriptors' wait queues.
      2.3.1) The descriptor therefore knows who is listening and directly signals waiters when appropriate, rather than waiters searching a list of descriptors.
      2.3.2) This is the opposite of how poll works.
      2.3.3) O(1) and very fast instead of O(n).
3) Works very well with timerfd and eventfd (stunning timer resolution and accuracy, too).
4) Some minor pitfalls:
   4.1) An epoll wakes all threads waiting on it (this is "works as intended"), therefore the naive way of using epoll with threads is useless.
   4.2) Does not work as one would expect with file reads/writes ("always ready").
   4.3) Could not be used with AIO until recently; now possible via eventfd, but it requires a (to date) undocumented function.
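A minimal sketch of the "register once, wait often" split from point 2 above. It watches only stdin so it stays self-contained; a real server would register its listening socket and each accepted connection with epoll_ctl in the same way.

/* Minimal epoll sketch: descriptors are registered once with epoll_ctl
 * (the "expensive" part), then epoll_wait returns only the ready ones. */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void) {
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }

    /* Registration: copies this descriptor's info into the kernel once. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
        perror("epoll_ctl");
        return 1;
    }

    for (;;) {
        /* Waiting: no descriptor list is passed in; the kernel hands back
         * only the descriptors that are actually ready. */
        struct epoll_event events[64];
        int n = epoll_wait(epfd, events, 64, 1000);   /* 1s timeout */
        if (n < 0) { perror("epoll_wait"); return 1; }

        for (int i = 0; i < n; i++) {
            char buf[256];
            ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
            if (r <= 0) { close(epfd); return 0; }    /* EOF or error */
            printf("fd %d: read %zd bytes\n", events[i].data.fd, r);
        }
    }
}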


kqueue
1) The BSD analogue of epoll; different usage, similar effect (a minimal sketch follows).
2) Rumoured to be faster (I've never used it, so cannot tell if that is true).
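For comparison, the same "register once, wait often" pattern with kqueue (BSD/macOS only). This is only a minimal sketch watching stdin; a real server would register its sockets instead.

/* Minimal kqueue sketch (BSD/macOS): register interest with EV_SET/kevent,
 * then call kevent again to wait; only ready events are returned. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();
    if (kq < 0) { perror("kqueue"); return 1; }

    /* Registration: add a read filter for stdin to the kqueue. */
    struct kevent change;
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) < 0) { perror("kevent add"); return 1; }

    for (;;) {
        /* Waiting: kevent fills 'events' with only the ready descriptors. */
        struct kevent events[64];
        struct timespec timeout = { .tv_sec = 1, .tv_nsec = 0 };
        int n = kevent(kq, NULL, 0, events, 64, &timeout);
        if (n < 0) { perror("kevent wait"); return 1; }

        for (int i = 0; i < n; i++) {
            char buf[256];
            ssize_t r = read((int)events[i].ident, buf, sizeof(buf));
            if (r <= 0) { close(kq); return 0; }   /* EOF or error */
            printf("fd %d: read %zd bytes\n", (int)events[i].ident, r);
        }
    }
}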
