Part 1: Main flow analysis
1. main
1.1 initServer: event handler setup
1.2 InitServerLast
1.2.1 initThreadedIO spawns the I/O worker threads; note the loop below:
/* Spawn and initialize the I/O threads. */
for (int i = 0; i < server.io_threads_num; i++) {
    /* Things we do for all the threads including the main thread. */
    io_threads_list[i] = listCreate();
    if (i == 0) continue; /* Thread 0 is the main thread. */
    /* Things we do only for the additional threads. */
    pthread_t tid;
    pthread_mutex_init(&io_threads_mutex[i],NULL);
    io_threads_pending[i] = 0;
    pthread_mutex_lock(&io_threads_mutex[i]); /* Thread will be stopped. */
    if (pthread_create(&tid,NULL,IOThreadMain,(void*)(long)i) != 0) {
        serverLog(LL_WARNING,"Fatal: Can't initialize IO thread.");
        exit(1);
    }
    io_threads[i] = tid;
}
1.2.2 Note how the spawn loop treats index 0: it simply continues. Presumably this is so that the thread count configured in the conf file is the total number of threads doing I/O rather than the number of extra I/O threads; index 0 stands for the current thread, i.e. the main thread (see the sketch below).
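This reading is consistent with the guard clauses at the top of initThreadedIO: when io_threads_num is 1 no worker is spawned at all and the main thread keeps handling every read and write itself. A rough paraphrase of those guards (not the verbatim source; the exact log message differs):

void initThreadedIO(void) {
    io_threads_active = 0; /* Workers start out deactivated. */

    /* One configured thread means "main thread only": spawn nothing. */
    if (server.io_threads_num == 1) return;

    /* Refuse to start with an excessive thread count. */
    if (server.io_threads_num > IO_THREADS_MAX_NUM) {
        serverLog(LL_WARNING, "Fatal: too many I/O threads configured.");
        exit(1);
    }

    /* ... the spawn loop quoted in 1.2.1 follows here ... */
}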
2. Main loop: beforeSleep
3. handleClientsWithPendingReadsUsingThreads, called from beforeSleep
3.1 The clients in server.clients_pending_read are distributed round-robin across the N io_threads_list queues:
/* Distribute the clients across N different lists. */
listIter li;
listNode *ln;
listRewind(server.clients_pending_read,&li);
int item_id = 0;
while((ln = listNext(&li))) {
    client *c = listNodeValue(ln);
    int target_id = item_id % server.io_threads_num;
    listAddNodeTail(io_threads_list[target_id],c);
    item_id++;
}
3.2 The main thread selects the read operation by setting the atomic flag io_threads_op to IO_THREADS_OP_READ, then starts the I/O worker threads by publishing each queue length through the atomic counters io_threads_pending[j]; after that it processes its own queue, io_threads_list[0] (see the sketch after the snippet below).
/* Give the start condition to the waiting threads, by setting the
 * start condition atomic var. */
io_threads_op = IO_THREADS_OP_READ;
for (int j = 1; j < server.io_threads_num; j++) {
    int count = listLength(io_threads_list[j]);
    io_threads_pending[j] = count;
}
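While the workers run, the main thread drains its own share, io_threads_list[0], the same way a worker would. Roughly (paraphrased from the read path in Redis 6.0 networking.c):

/* The main thread also processes a slice of clients itself. */
listRewind(io_threads_list[0],&li);
while((ln = listNext(&li))) {
    client *c = listNodeValue(ln);
    readQueryFromClient(c->conn);
}
listEmpty(io_threads_list[0]);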
3.3 The main thread then busy-waits until every worker's pending counter has dropped to zero, i.e. until all per-thread queues have been drained:
/* Wait for all the other threads to end their work. */
while(1) {
    unsigned long pending = 0;
    for (int j = 1; j < server.io_threads_num; j++)
        pending += io_threads_pending[j];
    if (pending == 0) break;
}
3.4 The main thread then executes the clients' parsed read requests against the db (sketch below).
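Concretely, once the spin-wait returns, the main thread walks server.clients_pending_read again, clears the pending flag, and runs the commands the workers have already parsed, so db access stays on the main thread. A simplified sketch (the real loop also handles the CLIENT_PENDING_COMMAND case and several error paths):

/* Run the list of clients again to execute the freshly parsed buffers. */
while (listLength(server.clients_pending_read)) {
    listNode *ln = listFirst(server.clients_pending_read);
    client *c = listNodeValue(ln);
    c->flags &= ~CLIENT_PENDING_READ;
    listDelNode(server.clients_pending_read, ln);

    /* Parsing is done; this call now executes commands on the main thread. */
    processInputBuffer(c);
}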
3.5 The I/O worker thread, IOThreadMain, spin-waits for work, polling the atomic counter io_threads_pending[id] (the mutex trick that lets the main thread pause an idle worker is sketched after 3.6):
while(1) {
    /* Wait for start */
    for (int j = 0; j < 1000000; j++) {
        if (io_threads_pending[id] != 0) break;
    }
    /* Give the main thread a chance to stop this thread. */
    if (io_threads_pending[id] == 0) {
        pthread_mutex_lock(&io_threads_mutex[id]);
        pthread_mutex_unlock(&io_threads_mutex[id]);
        continue;
    }
3.6 Once io_threads_pending[id] is non-zero, the worker walks its queue and, depending on io_threads_op, either parses the clients' requests (read) or sends query results back to the clients (write):
    listIter li;
    listNode *ln;
    listRewind(io_threads_list[id],&li);
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        if (io_threads_op == IO_THREADS_OP_WRITE) {
            writeToClient(c,0);
        } else if (io_threads_op == IO_THREADS_OP_READ) {
            readQueryFromClient(c->conn);
        } else {
            serverPanic("io_threads_op value is unknown");
        }
    }
    listEmpty(io_threads_list[id]);
    io_threads_pending[id] = 0;
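The mutex lock/unlock pair in 3.5 ("Give the main thread a chance to stop this thread") exists because the main thread can deactivate the workers while the threaded path is idle: it holds io_threads_mutex[id], so a worker that finds no pending work blocks on that mutex instead of spinning forever. A rough sketch of the activate/deactivate pair (paraphrased; the real stopThreadedIO also flushes any pending threaded reads first):

void startThreadedIO(void) {
    /* Release each worker's mutex so the loop in IOThreadMain runs freely. */
    for (int j = 1; j < server.io_threads_num; j++)
        pthread_mutex_unlock(&io_threads_mutex[j]);
    io_threads_active = 1;
}

void stopThreadedIO(void) {
    /* Re-take the mutexes: an idle worker whose counter is 0 will block
     * inside pthread_mutex_lock() in its loop until startThreadedIO(). */
    for (int j = 1; j < server.io_threads_num; j++)
        pthread_mutex_lock(&io_threads_mutex[j]);
    io_threads_active = 0;
}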
4. handleClientsWithPendingWritesUsingThreads, also called from beforeSleep; the strategy mirrors the threaded handling of reads described above (condensed sketch below).
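For reference, a condensed sketch of the write-side flow, assuming the same distribute/start/wait pattern as the read path (this is not the verbatim source; it omits the small-load shortcut that falls back to the single-threaded path, the clearing of CLIENT_PENDING_WRITE, and the re-installation of write handlers for clients with unsent data):

int handleClientsWithPendingWritesUsingThreads(void) {
    int processed = listLength(server.clients_pending_write);
    if (processed == 0) return 0;

    /* Fan the clients out to the per-thread lists, round-robin (as in 3.1). */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_write,&li);
    int item_id = 0;
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id],c);
        item_id++;
    }

    /* Start the workers in write mode (as in 3.2). */
    io_threads_op = IO_THREADS_OP_WRITE;
    for (int j = 1; j < server.io_threads_num; j++)
        io_threads_pending[j] = listLength(io_threads_list[j]);

    /* The main thread writes its own slice, then spin-waits (as in 3.2/3.3). */
    listRewind(io_threads_list[0],&li);
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        writeToClient(c,0);
    }
    listEmpty(io_threads_list[0]);

    while(1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }

    listEmpty(server.clients_pending_write);
    return processed;
}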
Part 2: Key module analysis
1. readQueryFromClient(connection *conn)
1.1 In multi-threaded mode postponeClientRead kicks in
1.1.1 postponeClientRead: if the client has not yet been added to the global server.clients_pending_read queue, it is added now and marked with the CLIENT_PENDING_READ flag, so the read is deferred to the next round of threaded read handling:
int postponeClientRead(client *c) {
    if (io_threads_active &&
        server.io_threads_do_reads &&
        !ProcessingEventsWhileBlocked &&
        !(c->flags & (CLIENT_MASTER|CLIENT_SLAVE|CLIENT_PENDING_READ)))
    {
        c->flags |= CLIENT_PENDING_READ;
        listAddNodeHead(server.clients_pending_read,c);
        return 1;
    } else {
        return 0;
    }
}
1.1.2 Once a client has been added to server.clients_pending_read within a time slice, it is not added a second time. This guarantees that within one time slice all of a client's requests are handled by the same I/O thread, avoiding the inconsistencies that could arise if one client's requests were scattered across several I/O threads.
1.2 In single-threaded mode, or when the client already carries the CLIENT_PENDING_READ flag, the read request is parsed directly and the db is queried (see the sketch below).
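In code, the branch point sits right at the top of readQueryFromClient: if postponeClientRead returns 1 the function returns immediately and the actual socket read happens later on a worker; otherwise (single-threaded mode, or a worker re-entering with CLIENT_PENDING_READ already set) it falls through to read and parse the query. A rough sketch, not the full function:

void readQueryFromClient(connection *conn) {
    client *c = connGetPrivateData(conn);

    /* Deferred path: just queue the client for the next threaded round. */
    if (postponeClientRead(c)) return;

    /* Direct path: read from the socket into the query buffer, then parse
     * (and, when running on the main thread, execute) the query via
     * processInputBuffer(). */
    /* ... connRead() and buffer handling elided ... */
}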
Part 3: Summary
The core idea behind Redis's multi-threading is to leave the existing db command execution untouched and to parallelize only the front stage (parsing incoming network requests) and the back stage (writing responses back to the network); according to the author, those two stages are where most of the time goes.