Muduo_Day5 (EventLoopThread and EventLoopThreadPool)

The EventLoopThread class

A program can have more than one IO thread, and an IO thread need not be the main thread: an EventLoop can be created and run in any thread, and any thread that creates and runs an EventLoop is by definition an IO thread.
The EventLoopThread class encapsulates an IO thread. It creates a thread whose thread function constructs an EventLoop object, stores the object's address in the loop_ member, and then notify()s a condition variable to wake up startLoop(). After thread_.start() there are two threads running: the one that called EventLoopThread::startLoop(), and the new one executing EventLoopThread::threadFunc() (the function bound with boost::bind in the constructor).
The constructor:

EventLoopThread::EventLoopThread(const ThreadInitCallback& cb)
  : loop_(NULL),
    exiting_(false),
    thread_(boost::bind(&EventLoopThread::threadFunc, this)),  // bind the thread function
    mutex_(),
    cond_(mutex_),
    callback_(cb)
{
}

The startLoop() function:

EventLoop* EventLoopThread::startLoop()
{
  assert(!thread_.started());
  thread_.start();  // start the thread; from here on there are two threads:
  // the one calling EventLoopThread::startLoop() and the new one
  // running EventLoopThread::threadFunc() (the IO thread)
  {
    MutexLockGuard lock(mutex_);
    while (loop_ == NULL)
    {
      cond_.wait();
    }
  }
  return loop_;
}

The threadFunc() function:

void EventLoopThread::threadFunc()
{
  EventLoop loop;
  if (callback_)
  {
    callback_(&loop);
  }

  {
    MutexLockGuard lock(mutex_);
    // loop_ points to a stack object; once threadFunc() returns, the pointer is
    // invalid because the stack object is destroyed automatically. But threadFunc()
    // returning means the thread has exited, so the EventLoopThread object no longer
    // serves any purpose anyway. Hence this is not a real problem.
    loop_ = &loop;
    cond_.notify();
  }
  loop.loop();
}
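The two-thread handshake above (threadFunc() publishes the loop's address under the mutex, startLoop() waits on the condition variable until it is non-NULL) can be sketched with standard C++11 primitives. LoopThread and Loop below are hypothetical stand-ins for muduo's classes, and this Loop::loop() returns immediately instead of polling:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical stand-in for muduo's EventLoop; the real loop() polls fds.
struct Loop {
  void loop() {}
};

class LoopThread {
 public:
  ~LoopThread() {
    if (thread_.joinable()) thread_.join();
  }

  // Mirrors EventLoopThread::startLoop(): start the IO thread, then block
  // until threadFunc() has published the loop's address.
  Loop* startLoop() {
    thread_ = std::thread([this] { threadFunc(); });
    std::unique_lock<std::mutex> lock(mutex_);
    cond_.wait(lock, [this] { return loop_ != nullptr; });
    return loop_;
  }

 private:
  // Mirrors EventLoopThread::threadFunc(): the loop lives on this thread's stack.
  void threadFunc() {
    Loop loop;
    {
      std::lock_guard<std::mutex> lk(mutex_);
      loop_ = &loop;       // publish the address ...
      cond_.notify_one();  // ... and wake the caller of startLoop()
    }
    loop.loop();           // the real loop would run here until quit()
  }

  Loop* loop_ = nullptr;
  std::thread thread_;
  std::mutex mutex_;
  std::condition_variable cond_;
};
```

In muduo the same handshake is written with MutexLockGuard and Condition; the stack-allocated loop is safe for the reason given in the comments of threadFunc() above.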

Test program:

#include <muduo/net/EventLoop.h>
#include <muduo/net/EventLoopThread.h>
#include <stdio.h>
#include <unistd.h>
using namespace muduo;
using namespace muduo::net;
void runInThread()
{
  printf("runInThread(): pid = %d, tid = %d\n",
         getpid(), CurrentThread::tid());
}

int main()
{
  printf("main(): pid = %d, tid = %d\n",
         getpid(), CurrentThread::tid());

  EventLoopThread loopThread;
  EventLoop* loop = loopThread.startLoop();
  // Asynchronous call: runInThread is handed to the IO thread that owns loop,
  // and that IO thread executes it
  loop->runInLoop(runInThread);
  sleep(1);
  // runAfter() also calls runInLoop() internally, so this is asynchronous too:
  // it adds a 2 s timer in the IO thread
  loop->runAfter(2, runInThread);
  sleep(3);
  loop->quit();
  // ~EventLoopThread() will call loop_->quit();
  printf("exit main().\n");
}

Output:

main(): pid = 18547, tid = 18547
runInThread(): pid = 18547, tid = 18548
runInThread(): pid = 18547, tid = 18548
exit main().

Analysis: the main thread creates an EventLoopThread object, whose startLoop() returns the address of an EventLoop object; inside that function, thread_.start() creates a new thread, which becomes the IO thread. Since the main thread is not the IO thread, calling runInLoop() performs a cross-thread call: runInThread is appended to the task queue via queueInLoop(), and the IO thread is then woken with wakeup(). The IO thread takes runInThread() out of the queue in doPendingFunctors() and executes it; the output shows that the IO thread's tid differs from the main thread's. Likewise, for loop->runAfter(2, runInThread): when timerfd_ becomes readable, handleRead() is called first and then the callback runInThread() is executed.
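The cross-thread call path described here (runInLoop() -> queueInLoop() -> wakeup() -> doPendingFunctors()) can be modeled with a small self-contained sketch. MiniLoop is hypothetical; where the real EventLoop writes to wakeupFd_ so that poll() returns promptly, this sketch merely queues:

```cpp
#include <functional>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

// Minimal model of EventLoop's cross-thread call path (a sketch, not muduo's
// real implementation).
class MiniLoop {
 public:
  MiniLoop() : threadId_(std::this_thread::get_id()) {}

  bool isInLoopThread() const {
    return std::this_thread::get_id() == threadId_;
  }

  // Run cb in the loop thread: directly if we are already in it,
  // otherwise defer it to the task queue.
  void runInLoop(std::function<void()> cb) {
    if (isInLoopThread()) {
      cb();
    } else {
      queueInLoop(std::move(cb));
    }
  }

  void queueInLoop(std::function<void()> cb) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_.push_back(std::move(cb));
    // the real muduo loop would also wakeup() the IO thread here via wakeupFd_
  }

  // Called by the loop thread on each iteration; swapping the vector keeps the
  // critical section short, so queueInLoop() is not blocked while callbacks run.
  void doPendingFunctors() {
    std::vector<std::function<void()>> fns;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      fns.swap(pending_);
    }
    for (std::function<void()>& f : fns) f();
  }

 private:
  std::thread::id threadId_;
  std::mutex mutex_;
  std::vector<std::function<void()>> pending_;
};
```

A call from the owning thread runs synchronously; a call from any other thread sits in pending_ until the owning thread next invokes doPendingFunctors().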

The EventLoopThreadPool thread-pool class

The key step in implementing a multi-threaded TcpServer with the one-loop-per-thread model is to pick a loop from the event-loop pool for each new TcpConnection. In other words, the multi-threaded TcpServer's own EventLoop is used only to accept new connections, while each new connection does its IO on another EventLoop; in the single-threaded case, by contrast, the TcpServer's EventLoop is shared with its TcpConnections.

(Figure: Multiple Reactors)

The IO thread pool starts a number of IO threads and keeps each of them running an event loop.

EventLoopThreadPool implementation:

class EventLoopThreadPool : boost::noncopyable
{
 public:
  typedef boost::function<void(EventLoop*)> ThreadInitCallback;

  EventLoopThreadPool(EventLoop* baseLoop);
  ~EventLoopThreadPool();
  void setThreadNum(int numThreads) { numThreads_ = numThreads; }
  void start(const ThreadInitCallback& cb = ThreadInitCallback());
  EventLoop* getNextLoop();   // Called by TcpServer each time it creates a
  // TcpConnection, to obtain an EventLoop. When the loop list loops_ is empty
  // (i.e. the server is single-threaded) it returns baseLoop_, the loop
  // TcpServer itself uses; otherwise it picks an EventLoop by round-robin.
 
 private:
  EventLoop* baseLoop_; // the same EventLoop the Acceptor belongs to
  bool started_;
  int numThreads_;      // number of threads, excluding the mainReactor
  int next_;            // index of the EventLoop to pick for the next new connection
  boost::ptr_vector<EventLoopThread> threads_;  // list of IO threads
  std::vector<EventLoop*> loops_;               // list of EventLoops
};
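The selection policy behind getNextLoop() can be sketched as follows (Pool and Loop are hypothetical stand-ins; muduo's real method additionally asserts that it is called in the baseLoop_ thread):

```cpp
#include <cstddef>
#include <vector>

struct Loop {};  // hypothetical stand-in for muduo's EventLoop

// Round-robin selection as performed by EventLoopThreadPool::getNextLoop().
class Pool {
 public:
  Pool(Loop* baseLoop, std::vector<Loop*> loops)
      : baseLoop_(baseLoop), loops_(loops) {}

  Loop* getNextLoop() {
    if (loops_.empty()) {
      return baseLoop_;  // single-threaded server: reuse the base loop
    }
    Loop* loop = loops_[next_];
    next_ = (next_ + 1) % loops_.size();  // advance the round-robin cursor
    return loop;
  }

 private:
  Loop* baseLoop_;
  std::vector<Loop*> loops_;
  std::size_t next_ = 0;
};
```

Round-robin keeps the connection count roughly balanced across the sub-reactors without any coordination between them.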

Implementation of the start() function:

void EventLoopThreadPool::start(const ThreadInitCallback& cb)
{
  assert(!started_);
  baseLoop_->assertInLoopThread();

  started_ = true;

  for (int i = 0; i < numThreads_; ++i)
  {
    EventLoopThread* t = new EventLoopThread(cb);
    threads_.push_back(t);
    loops_.push_back(t->startLoop());   // start the EventLoopThread; cb is invoked before its loop enters the event loop
  }
  if (numThreads_ == 0 && cb)
  {
    // there is only one EventLoop (baseLoop_); invoke cb before it enters its event loop
    cb(baseLoop_);
  }
}

Here baseLoop_ is the same as the private member EventLoop* loop_ in the TcpServer and Acceptor classes: the mainReactor in the main thread handles the listening socket, while events on connected sockets are handled by the subReactors in the pool, chosen by round-robin. Accordingly, the only change needed when TcpServer creates a new connection is in newConnection(): instead of handing TcpServer's own loop_ to every TcpConnection, it now obtains an ioLoop from the EventLoopThreadPool each time.

void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  // pick an EventLoop by round-robin
  EventLoop* ioLoop = threadPool_->getNextLoop();
 .....
  TcpConnectionPtr conn(...
  ...
  ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}

Test program:

#include <muduo/net/TcpServer.h>
#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>

#include <boost/bind.hpp>

#include <stdio.h>

using namespace muduo;
using namespace muduo::net;

class TestServer
{
 public:
  TestServer(EventLoop* loop,
             const InetAddress& listenAddr, int numThreads)
    : loop_(loop),
      server_(loop, listenAddr, "TestServer"),
      numThreads_(numThreads)
  {
    server_.setConnectionCallback(
        boost::bind(&TestServer::onConnection, this, _1));
    server_.setMessageCallback(
        boost::bind(&TestServer::onMessage, this, _1, _2, _3));
    server_.setThreadNum(numThreads);
  }

  void start()
  {
      server_.start();
  }

 private:
  void onConnection(const TcpConnectionPtr& conn)
  {
    if (conn->connected())
    {
      printf("onConnection(): new connection [%s] from %s\n",
             conn->name().c_str(),
             conn->peerAddress().toIpPort().c_str());
    }
    else
    {
      printf("onConnection(): connection [%s] is down\n",
             conn->name().c_str());
    }
  }

  void onMessage(const TcpConnectionPtr& conn,
                   const char* data,
                   ssize_t len)
  {
    printf("onMessage(): received %zd bytes from connection [%s]\n",
           len, conn->name().c_str());
  }

  EventLoop* loop_;
  TcpServer server_;
  int numThreads_;
};


int main()
{
  printf("main(): pid = %d\n", getpid());

  InetAddress listenAddr(8888);
  EventLoop loop;

  TestServer server(&loop, listenAddr,4);
  server.start();

  loop.loop();
}

Open two terminals, each running "nc 127.0.0.1 8888"; type "aaaa" in one and "bbbb" in the other. The result:

main(): pid = 1492
20191012 13:28:49.581281Z  1492 TRACE updateChannel fd = 4 events = 3 - EPollPoller.cc:104
20191012 13:28:49.581472Z  1492 TRACE EventLoop EventLoop created 0x7FFED67C0870 in thread 1492 - EventLoop.cc:62
20191012 13:28:49.581490Z  1492 TRACE updateChannel fd = 5 events = 3 - EPollPoller.cc:104
20191012 13:28:49.581799Z  1493 TRACE updateChannel fd = 9 events = 3 - EPollPoller.cc:104
20191012 13:28:49.581844Z  1493 TRACE EventLoop EventLoop created 0x7FAC2121FA40 in thread 1493 - EventLoop.cc:62
20191012 13:28:49.581868Z  1493 TRACE updateChannel fd = 10 events = 3 - EPollPoller.cc:104
20191012 13:28:49.581902Z  1493 TRACE loop EventLoop 0x7FAC2121FA40 start looping - EventLoop.cc:94
20191012 13:28:49.582118Z  1494 TRACE updateChannel fd = 12 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582167Z  1494 TRACE EventLoop EventLoop created 0x7FAC20A1EA40 in thread 1494 - EventLoop.cc:62
20191012 13:28:49.582185Z  1494 TRACE updateChannel fd = 13 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582207Z  1494 TRACE loop EventLoop 0x7FAC20A1EA40 start looping - EventLoop.cc:94
20191012 13:28:49.582407Z  1495 TRACE updateChannel fd = 15 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582561Z  1495 TRACE EventLoop EventLoop created 0x7FAC1BFFEA40 in thread 1495 - EventLoop.cc:62
20191012 13:28:49.582613Z  1495 TRACE updateChannel fd = 16 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582651Z  1495 TRACE loop EventLoop 0x7FAC1BFFEA40 start looping - EventLoop.cc:94
20191012 13:28:49.582814Z  1496 TRACE updateChannel fd = 18 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582854Z  1496 TRACE EventLoop EventLoop created 0x7FAC1B7FDA40 in thread 1496 - EventLoop.cc:62
20191012 13:28:49.582871Z  1496 TRACE updateChannel fd = 19 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582891Z  1496 TRACE loop EventLoop 0x7FAC1B7FDA40 start looping - EventLoop.cc:94
20191012 13:28:49.582936Z  1492 TRACE updateChannel fd = 6 events = 3 - EPollPoller.cc:104
20191012 13:28:49.582971Z  1492 TRACE loop EventLoop 0x7FFED67C0870 start looping - EventLoop.cc:94
20191012 13:29:03.904403Z  1492 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:03.904700Z  1492 TRACE printActiveChannels {6: IN }  - EventLoop.cc:257
20191012 13:29:03.904778Z  1492 INFO  TcpServer::newConnection [TestServer] - new connection [TestServer:0.0.0.0:8888#1] from 127.0.0.1:35708 - TcpServer.cc:93
20191012 13:29:03.904815Z  1492 DEBUG TcpConnection TcpConnection::ctor[TestServer:0.0.0.0:8888#1] at 0x18BB020 fd=20 - TcpConnection.cc:62
20191012 13:29:03.904833Z  1492 TRACE newConnection [1] usecount=1 - TcpServer.cc:111
20191012 13:29:03.904858Z  1492 TRACE newConnection [2] usecount=2 - TcpServer.cc:113
20191012 13:29:03.904894Z  1492 TRACE newConnection [5] usecount=3 - TcpServer.cc:122
20191012 13:29:03.904913Z  1493 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:03.904971Z  1493 TRACE printActiveChannels {10: IN }  - EventLoop.cc:257
20191012 13:29:03.905012Z  1493 TRACE connectEstablished [3] usecount=3 - TcpConnection.cc:78
20191012 13:29:03.905024Z  1493 TRACE updateChannel fd = 20 events = 3 - EPollPoller.cc:104
onConnection(): new connection [TestServer:0.0.0.0:8888#1] from 127.0.0.1:35708
20191012 13:29:03.905071Z  1493 TRACE connectEstablished [4] usecount=3 - TcpConnection.cc:83
20191012 13:29:16.368826Z  1492 TRACE printActiveChannels {6: IN }  - EventLoop.cc:257
20191012 13:29:16.368843Z  1492 INFO  TcpServer::newConnection [TestServer] - new connection [TestServer:0.0.0.0:8888#2] from 127.0.0.1:35712 - TcpServer.cc:93
20191012 13:29:16.368851Z  1492 DEBUG TcpConnection TcpConnection::ctor[TestServer:0.0.0.0:8888#2] at 0x18BB310 fd=21 - TcpConnection.cc:62
20191012 13:29:16.368857Z  1492 TRACE newConnection [1] usecount=1 - TcpServer.cc:111
20191012 13:29:16.368866Z  1492 TRACE newConnection [2] usecount=2 - TcpServer.cc:113
20191012 13:29:16.368877Z  1492 TRACE newConnection [5] usecount=3 - TcpServer.cc:122
20191012 13:29:16.368881Z  1494 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:16.368896Z  1494 TRACE printActiveChannels {13: IN }  - EventLoop.cc:257
20191012 13:29:16.368907Z  1494 TRACE connectEstablished [3] usecount=3 - TcpConnection.cc:78
20191012 13:29:16.368911Z  1494 TRACE updateChannel fd = 21 events = 3 - EPollPoller.cc:104
onConnection(): new connection [TestServer:0.0.0.0:8888#2] from 127.0.0.1:35712
20191012 13:29:16.368923Z  1494 TRACE connectEstablished [4] usecount=3 - TcpConnection.cc:83
20191012 13:29:18.455550Z  1494 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:18.455578Z  1494 TRACE printActiveChannels {21: IN }  - EventLoop.cc:257
20191012 13:29:18.455582Z  1494 TRACE handleEvent [6] usecount=2 - Channel.cc:67
onMessage(): received 5 bytes from connection [TestServer:0.0.0.0:8888#2]
20191012 13:29:18.455603Z  1494 TRACE handleEvent [12] usecount=2 - Channel.cc:69
20191012 13:29:21.601139Z  1493 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:21.601201Z  1493 TRACE printActiveChannels {20: IN }  - EventLoop.cc:257
20191012 13:29:21.601215Z  1493 TRACE handleEvent [6] usecount=2 - Channel.cc:67
onMessage(): received 5 bytes from connection [TestServer:0.0.0.0:8888#1]
20191012 13:29:21.601272Z  1493 TRACE handleEvent [12] usecount=2 - Channel.cc:69
20191012 13:29:23.948582Z  1494 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:23.948648Z  1494 TRACE printActiveChannels {21: IN }  - EventLoop.cc:257
20191012 13:29:23.948662Z  1494 TRACE handleEvent [6] usecount=2 - Channel.cc:67
20191012 13:29:23.948692Z  1494 TRACE handleClose fd = 21 state = 2 - TcpConnection.cc:144
20191012 13:29:23.948706Z  1494 TRACE updateChannel fd = 21 events = 0 - EPollPoller.cc:104
onConnection(): connection [TestServer:0.0.0.0:8888#2] is down
20191012 13:29:23.948737Z  1494 TRACE handleClose [7] usecount=3 - TcpConnection.cc:152
20191012 13:29:23.948768Z  1494 TRACE handleClose [11] usecount=4 - TcpConnection.cc:155
20191012 13:29:23.948777Z  1494 TRACE handleEvent [12] usecount=3 - Channel.cc:69
20191012 13:29:23.948801Z  1492 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:23.948822Z  1492 TRACE printActiveChannels {5: IN }  - EventLoop.cc:257
20191012 13:29:23.948837Z  1492 INFO  TcpServer::removeConnectionInLoop [TestServer] - connection TestServer:0.0.0.0:8888#2 - TcpServer.cc:153
20191012 13:29:23.948845Z  1492 TRACE removeConnectionInLoop [8] usecount=2 - TcpServer.cc:157
20191012 13:29:23.948867Z  1492 TRACE removeConnectionInLoop [9] usecount=1 - TcpServer.cc:159
20191012 13:29:23.948890Z  1492 TRACE removeConnectionInLoop [10] usecount=2 - TcpServer.cc:170
20191012 13:29:23.948908Z  1494 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:23.948922Z  1494 TRACE printActiveChannels {13: IN }  - EventLoop.cc:257
20191012 13:29:23.948935Z  1494 TRACE removeChannel fd = 21 - EPollPoller.cc:147
20191012 13:29:23.948948Z  1494 DEBUG ~TcpConnection TcpConnection::dtor[TestServer:0.0.0.0:8888#2] at 0x18BB310 fd=21 - TcpConnection.cc:69
20191012 13:29:24.584651Z  1493 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:24.584702Z  1493 TRACE printActiveChannels {20: IN }  - EventLoop.cc:257
20191012 13:29:24.584712Z  1493 TRACE handleEvent [6] usecount=2 - Channel.cc:67
20191012 13:29:24.584735Z  1493 TRACE handleClose fd = 20 state = 2 - TcpConnection.cc:144
20191012 13:29:24.584749Z  1493 TRACE updateChannel fd = 20 events = 0 - EPollPoller.cc:104
onConnection(): connection [TestServer:0.0.0.0:8888#1] is down
20191012 13:29:24.584773Z  1493 TRACE handleClose [7] usecount=3 - TcpConnection.cc:152
20191012 13:29:24.584795Z  1493 TRACE handleClose [11] usecount=4 - TcpConnection.cc:155
20191012 13:29:24.584801Z  1493 TRACE handleEvent [12] usecount=3 - Channel.cc:69
20191012 13:29:24.584803Z  1492 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:24.584840Z  1492 TRACE printActiveChannels {5: IN }  - EventLoop.cc:257
20191012 13:29:24.584860Z  1492 INFO  TcpServer::removeConnectionInLoop [TestServer] - connection TestServer:0.0.0.0:8888#1 - TcpServer.cc:153
20191012 13:29:24.584866Z  1492 TRACE removeConnectionInLoop [8] usecount=2 - TcpServer.cc:157
20191012 13:29:24.584877Z  1492 TRACE removeConnectionInLoop [9] usecount=1 - TcpServer.cc:159
20191012 13:29:24.584897Z  1492 TRACE removeConnectionInLoop [10] usecount=2 - TcpServer.cc:170
20191012 13:29:24.584906Z  1493 TRACE poll 1 events happended - EPollPoller.cc:65
20191012 13:29:24.584944Z  1493 TRACE printActiveChannels {10: IN }  - EventLoop.cc:257
20191012 13:29:24.584959Z  1493 TRACE removeChannel fd = 20 - EPollPoller.cc:147
20191012 13:29:24.584971Z  1493 DEBUG ~TcpConnection TcpConnection::dtor[TestServer:0.0.0.0:8888#1] at 0x18BB020 fd=20 - TcpConnection.cc:69

Result analysis: 4 pool threads are created, so there are five IO threads in total: the main thread plus the 4 subReactor threads in the pool. server.start() launches the 4 pool threads, calling in turn TcpServer::start() -> EventLoopThreadPool::start() -> EventLoopThread::startLoop(), and also starts the mainReactor listening:

void TcpServer::start()
{
    loop_->runInLoop(boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
}

File-descriptor analysis:
A process already has fds 0, 1, and 2 open.
Each Reactor's EventLoop constructs a poller, by default an EPollPoller, giving it EPollPoller::epollfd_.
In addition there are two channels (EventLoop::timerQueue_'s timerfd_ and EventLoop::wakeupFd_)
whose readable events poll() watches continuously until the event loop ends, so every Reactor owns these 3 fds.
The mainReactor additionally has Acceptor::acceptSocket_.sockfd_ (the listenfd) and Acceptor::idleFd_ (a spare fd opened on /dev/null). So in the run above, the mainReactor has epollfd_ = 3, timerfd_ = 4, wakeupFd_ = 5, sockfd_ = 6, idleFd_ = 7; the remaining fds, three per thread, belong to the 4 IO threads.
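A quick sanity check of this accounting (valid only under the assumption that nothing else in the process opens descriptors first): 3 standard fds, 5 for the mainReactor, and 3 for each of the 4 subReactors leave fd 20 as the first free descriptor, which matches the log:

```cpp
// fd accounting for the run above (assumption: no other fds are opened).
int firstConnectionFd() {
  const int stdFds = 3;          // stdin, stdout, stderr (0, 1, 2)
  const int mainReactorFds = 5;  // epollfd_=3, timerfd_=4, wakeupFd_=5, sockfd_=6, idleFd_=7
  const int subReactors = 4;
  const int perSubReactor = 3;   // epollfd_, timerfd_, wakeupFd_ each
  return stdFds + mainReactorFds + subReactors * perSubReactor;  // 20
}
```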

TRACE updateChannel fd = 20 events = 3 - EPollPoller.cc:104

That is why a newly accepted connection's socket fd can only start from 20:
The flow:
When nc connects, sockfd_ becomes readable; the mainReactor accepts the connection with acceptor_, and Acceptor::handleRead() invokes the TcpServer::newConnection() callback. That function obtains an EventLoop* ioLoop by round-robin via threadPool_->getNextLoop(), creates a new TcpConnection object conn bound to that ioLoop, and sets conn's callbacks with setConnectionCallback and setMessageCallback. It then calls ioLoop->runInLoop(), which wakes up the chosen IO thread: here the first IO thread's wakeupFd_ (10) becomes readable, handleEvent() handles it, then doPendingFunctors() runs TcpConnection::connectEstablished(), after which Channel::handleEvent() and so on take over.
