muduo Source Code Analysis: the TcpServer Class

The previous post covered the Acceptor class: a thin wrapper over Channel and Socket that is simple for callers to use, thanks to the underlying Reactor architecture. This post moves on to how muduo handles connection establishment, the first of the "three and a half events" muduo talks about. As you might guess, the TcpServer class is in turn a wrapper over Acceptor and Poller.
 

The Connection Handling Process

First, TcpServer registers a Channel with the Poller through its Acceptor. The Channel watches for readable events on the accept socket, and Acceptor::newConnectionCallback is set to TcpServer::newConnection:

    acceptor_->setNewConnectionCallback(boost::bind(&TcpServer::newConnection, this, _1, _2));	// install TcpServer::newConnection as Acceptor's newConnectionCallback

Then, when a client connects, the Poller returns this Channel, and Channel::handleEvent -> Acceptor::handleRead is invoked. The Acceptor accepts the connection and calls the callback installed above, Acceptor::newConnectionCallback, i.e. TcpServer::newConnection.
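For reference, the hand-off happens inside Acceptor::handleRead; trimmed down (error handling and logging omitted), it looks roughly like this:

void Acceptor::handleRead()   // invoked by Channel when the accept socket becomes readable
{
  loop_->assertInLoopThread();
  InetAddress peerAddr;
  int connfd = acceptSocket_.accept(&peerAddr);   // accept the new connection
  if (connfd >= 0)
  {
    if (newConnectionCallback_)
      newConnectionCallback_(connfd, peerAddr);   // i.e. TcpServer::newConnection
    else
      sockets::close(connfd);                     // no one to hand it to; close it
  }
}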

Next, for every accepted connection, TcpServer creates a TcpConnection to manage it. TcpConnection is the most complex class in muduo and is managed through a shared_ptr, because its lifetime is ambiguous; more on that in a later post.

Finally, TcpConnection::connectEstablished is invoked, which in turn calls the user-supplied connectionCallback.
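connectEstablished itself is short; roughly (asserts and logging omitted), it does the following:

void TcpConnection::connectEstablished()
{
  loop_->assertInLoopThread();
  setState(kConnected);
  channel_->tie(shared_from_this());        // tie the Channel to this connection's lifetime
  channel_->enableReading();                // register readable events with the Poller
  connectionCallback_(shared_from_this());  // invoke the user's connection callback
}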

(The classes are wired together through callback functions.)

 

TcpServer.h

class TcpServer : boost::noncopyable
{
 public:
  typedef boost::function<void(EventLoop*)> ThreadInitCallback;
  enum Option
  {
    kNoReusePort,
    kReusePort,
  };

  //TcpServer(EventLoop* loop, const InetAddress& listenAddr);
  TcpServer(EventLoop* loop,				// constructor
            const InetAddress& listenAddr,
            const string& nameArg,
            Option option = kNoReusePort);
  ~TcpServer();  // force out-line dtor, for scoped_ptr members.

  const string& ipPort() const { return ipPort_; }
  const string& name() const { return name_; }
  EventLoop* getLoop() const { return loop_; }

  void setThreadNum(int numThreads);			// set how many I/O loop threads this server runs
  void setThreadInitCallback(const ThreadInitCallback& cb)
  { threadInitCallback_ = cb; }
  /// valid after calling start()
  boost::shared_ptr<EventLoopThreadPool> threadPool()
  { return threadPool_; }

  void start();			// start the TcpServer

  void setConnectionCallback(const ConnectionCallback& cb)	// set the connection callback
  { connectionCallback_ = cb; }

  void setMessageCallback(const MessageCallback& cb)		// set the message callback
  { messageCallback_ = cb; }

  void setWriteCompleteCallback(const WriteCompleteCallback& cb)	// set the write-complete callback
  { writeCompleteCallback_ = cb; }

 private:
  void newConnection(int sockfd, const InetAddress& peerAddr);	// installed as Acceptor::newConnectionCallback

  void removeConnection(const TcpConnectionPtr& conn);

  void removeConnectionInLoop(const TcpConnectionPtr& conn);

  typedef std::map<string, TcpConnectionPtr> ConnectionMap;	// connection list kept in an associative map

  EventLoop* loop_;  // the acceptor loop
  const string ipPort_;	// "ip:port" string of the listen address
  const string name_;	// server name
  boost::scoped_ptr<Acceptor> acceptor_;			// Acceptor used to accept connections
  boost::shared_ptr<EventLoopThreadPool> threadPool_;
  ConnectionCallback connectionCallback_;			// connection callback
  MessageCallback messageCallback_;				// message callback
  WriteCompleteCallback writeCompleteCallback_;			// write-complete callback
  ThreadInitCallback threadInitCallback_;
  AtomicInt32 started_;			// started flag
  // always in loop thread
  int nextConnId_;			// next connection id
  ConnectionMap connections_;	        // connection list
};

A few important members:

boost::scoped_ptr<Acceptor> acceptor_; — the Acceptor analyzed in the previous post, used to accept connections. Since it is used only inside TcpServer, it is held by a scoped_ptr.

EventLoop* loop_; — the key class of the Reactor.

ConnectionMap connections_; — a std::map<string, TcpConnectionPtr> that manages the TcpConnections. Strictly speaking, TcpServer manages each TcpConnection through a shared_ptr (TcpConnectionPtr), mainly because TcpConnection has an ambiguous lifetime; users of the muduo library also receive TcpConnectionPtr as callback parameters. Each connection has a unique name, generated when it is created.
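For reference, TcpConnectionPtr and the callback types are typedefs from muduo's Callbacks.h; roughly:

typedef boost::shared_ptr<TcpConnection> TcpConnectionPtr;
typedef boost::function<void (const TcpConnectionPtr&)> ConnectionCallback;
typedef boost::function<void (const TcpConnectionPtr&,
                              Buffer*,
                              Timestamp)> MessageCallback;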
 


TcpServer::TcpServer()

TcpServer::TcpServer(EventLoop* loop,
                     const InetAddress& listenAddr,
                     const string& nameArg,
                     Option option)
  : loop_(CHECK_NOTNULL(loop)),
    ipPort_(listenAddr.toIpPort()),
    name_(nameArg),
    acceptor_(new Acceptor(loop, listenAddr, option == kReusePort)),
    threadPool_(new EventLoopThreadPool(loop, name_)),
    connectionCallback_(defaultConnectionCallback),
    messageCallback_(defaultMessageCallback),
    nextConnId_(1)
{
  acceptor_->setNewConnectionCallback(boost::bind(&TcpServer::newConnection, this, _1, _2));	// install the newConnectionCallback
}
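
TcpServer::start()

Although not quoted line by line above, start() is what actually launches the thread pool and puts the Acceptor into the listening state; it looks roughly like this:

void TcpServer::start()
{
  if (started_.getAndSet(1) == 0)	// make start() idempotent
  {
    threadPool_->start(threadInitCallback_);	// spin up the I/O loop threads

    assert(!acceptor_->listenning());
    loop_->runInLoop(
        boost::bind(&Acceptor::listen, get_pointer(acceptor_)));	// start listening in the acceptor loop
  }
}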

 

TcpServer::~TcpServer()

TcpServer::~TcpServer()
{
  loop_->assertInLoopThread();
  LOG_TRACE << "TcpServer::~TcpServer [" << name_ << "] destructing";

  for (ConnectionMap::iterator it(connections_.begin());
      it != connections_.end(); ++it)
  {
    TcpConnectionPtr conn(it->second);	// local copy keeps this TcpConnection alive
    it->second.reset();			// release the map's reference
    conn->getLoop()->runInLoop(
      boost::bind(&TcpConnection::connectDestroyed, conn));	// destroy it in its own loop thread
  }
}

 

TcpServer::newConnection

void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)	// handler for new connections
{
  loop_->assertInLoopThread();
  EventLoop* ioLoop = threadPool_->getNextLoop();	// pick the next EventLoop from the pool (round-robin)
  char buf[64];
  snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);    // generate a unique name, e.g. "-0.0.0.0:8090#1"
  ++nextConnId_;			// advance to the next connection id
  string connName = name_ + buf;

  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toIpPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));	// build the local address
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  TcpConnectionPtr conn(new TcpConnection(ioLoop, connName, sockfd, localAddr, peerAddr));	// create the TcpConnection, handing it the chosen EventLoop

  connections_[connName] = conn;		// add the TcpConnection to TcpServer's map

  // set TcpConnection's "three and a half events" callbacks,
  // forwarding the callbacks the user gave TcpServer
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  conn->setCloseCallback(
      boost::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe

  // run conn->connectEstablished() in the connection's own loop thread
  ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}

TcpServer::removeConnection()

void TcpServer::removeConnection(const TcpConnectionPtr& conn)
{
  // may be called from the connection's I/O thread (via its close callback),
  // so hop back to the acceptor loop, which owns connections_
  // FIXME: unsafe
  loop_->runInLoop(boost::bind(&TcpServer::removeConnectionInLoop, this, conn));
}

void TcpServer::removeConnectionInLoop(const TcpConnectionPtr& conn)
{
  loop_->assertInLoopThread();
  LOG_INFO << "TcpServer::removeConnectionInLoop [" << name_
           << "] - connection " << conn->name();
  size_t n = connections_.erase(conn->name());		// remove the TcpConnection from the map
  (void)n;
  assert(n == 1);
  EventLoop* ioLoop = conn->getLoop();
  ioLoop->queueInLoop(
      boost::bind(&TcpConnection::connectDestroyed, conn));
}
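
Where does removeConnection get invoked? TcpConnection installs it as closeCallback_ (see newConnection above) and calls it at the end of TcpConnection::handleClose when the peer closes the connection. Trimmed down (asserts and logging omitted), handleClose looks roughly like:

void TcpConnection::handleClose()
{
  loop_->assertInLoopThread();
  setState(kDisconnected);
  channel_->disableAll();                          // stop watching this fd

  TcpConnectionPtr guardThis(shared_from_this());  // keep *this alive through the callbacks
  connectionCallback_(guardThis);
  closeCallback_(guardThis);                       // -> TcpServer::removeConnection; must be the last line
}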

 

Usage Example

#include <muduo/net/EventLoop.h>
#include <muduo/net/TcpServer.h>
#include <muduo/net/InetAddress.h>
#include <iostream>
#include <string>

void onConnection(const muduo::net::TcpConnectionPtr& conn)
{
    if (conn->connected()) {
        std::cout << "New connection" << std::endl;
    } else {
        std::cout << "Connection closed" << std::endl;
    }
}

void onMessage(const muduo::net::TcpConnectionPtr& conn,
               muduo::net::Buffer* buffer,
               muduo::Timestamp receiveTime)
{
    const std::string readbuf = buffer->retrieveAllAsString();
    std::cout << "Received " << readbuf.size() << " bytes." << std::endl
              << "Content: " << readbuf << std::endl;
}

int main()
{
    muduo::net::EventLoop loop;
    muduo::net::TcpServer server(&loop, muduo::net::InetAddress(8090), "DemoServer");  // listen address and server name
    server.setConnectionCallback(onConnection);
    server.setMessageCallback(onMessage);
    server.start();
    loop.loop();
}

As you can see, TcpServer is quite convenient to use: set the appropriate callbacks, call start(), and run the loop.
Behind the scenes, TcpServer quietly does a lot of work: socket, bind, listen, epoll_wait, accept, and so on.
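Assuming muduo is installed, the example should build with something along the lines of g++ demo.cc -lmuduo_net -lmuduo_base -lpthread (exact library names and paths depend on how muduo was built).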
