The Acceptor class is typically created by TcpServer and is responsible for handling the connect requests sent by clients. It owns an acceptSocket_ and an acceptChannel_ member.
1. Creating the Acceptor:
TcpServer::TcpServer(EventLoop* loop,
                     const InetAddress& listenAddr,
                     const string& _name)
  : loop_(CHECK_NOTNULL(loop)),
    hostport_(listenAddr.toHostPort()),
    name_(_name),
    acceptor_(new Acceptor(loop, listenAddr)),
    threadPool_(new EventLoopThreadPool(loop)),
    connectionCallback_(defaultConnectionCallback),
    messageCallback_(defaultMessageCallback),
    started_(false),
    nextConnId_(1)
{
  acceptor_->setNewConnectionCallback(
      boost::bind(&TcpServer::newConnection, this, _1, _2));
}
The Acceptor constructor creates a non-blocking listening socket and its channel, and installs Acceptor::handleRead as acceptChannel_'s read callback:
Acceptor::Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport)
  : loop_(loop),
    acceptSocket_(sockets::createNonblockingOrDie()),  // create a listening socket
    acceptChannel_(loop, acceptSocket_.fd()),          // create the listening channel
    listenning_(false),
    idleFd_(::open("/dev/null", O_RDONLY | O_CLOEXEC))
{
  acceptSocket_.setReuseAddr(true);
  acceptSocket_.bindAddress(listenAddr);
  acceptChannel_.setReadCallback(
      boost::bind(&Acceptor::handleRead, this));
}
Every time a client connects, the server calls Acceptor::handleRead() to handle it:
void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr(0);
  //FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toHostPort();
    // LOG_TRACE << "Accepts of " << hostport;
    if (newConnectionCallback_)
    {
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
}
Next, look at the acceptSocket_.accept(&peerAddr) call:
int sockets::accept(int sockfd, struct sockaddr_in* addr)
{
  socklen_t addrlen = sizeof *addr;
#if VALGRIND
  int connfd = ::accept(sockfd, sockaddr_cast(addr), &addrlen);
  setNonBlockAndCloseOnExec(connfd);
#else
  int connfd = ::accept4(sockfd, sockaddr_cast(addr), &addrlen,
                         SOCK_NONBLOCK | SOCK_CLOEXEC);
#endif
  if (connfd < 0)
  {
    int savedErrno = errno;
    LOG_SYSERR << "Socket::accept";
    switch (savedErrno)
    {
      case EAGAIN:
      case ECONNABORTED:
      case EINTR:
      case EPROTO: // ???
      case EPERM:
        // expected errors
        break;
      case EBADF:
      case EFAULT:
      case EINVAL:
      case EMFILE: // per-process limit of open file descriptors ???
      case ENFILE:
      case ENOBUFS:
      case ENOMEM:
      case ENOTSOCK:
      case EOPNOTSUPP:
        // unexpected errors
        LOG_FATAL << "unexpected error of ::accept";
        break;
      default:
        LOG_FATAL << "unknown error of ::accept " << savedErrno;
        break;
    }
  }
  return connfd;
}
It accepts the client connection and makes the connection socket non-blocking as well. Because the listening socket itself is non-blocking, accept() returns immediately with EAGAIN when the backlog is empty instead of blocking the loop thread; that is why the FIXME above suggests looping until there are no more pending connections.
2. Now let's see how Acceptor::acceptChannel_ gets registered with the EventLoop:
void TcpServer::start()
{
  if (!started_)
  {
    started_ = true;
    threadPool_->start();
  }
  if (!acceptor_->listenning())
  {
    loop_->runInLoop(
        boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}
Starting the TcpServer calls Acceptor::listen(), which, as the name suggests, begins listening for connection requests from peers:
void Acceptor::listen()
{
  loop_->assertInLoopThread();
  listenning_ = true;
  acceptSocket_.listen();
  acceptChannel_.enableReading();
}
Channel::enableReading() is an inline function that sets the channel's interested events to kReadEvent = POLLIN | POLLPRI:
void enableReading() { events_ |= kReadEvent; update(); }
Note that enableReading() also calls Channel::update(), which does nothing else but forward to EventLoop::updateChannel(Channel* channel):
void Channel::update() { loop_->updateChannel(this); }
EventLoop::updateChannel(Channel* channel) looks like this:
void EventLoop::updateChannel(Channel* channel)
{
  assert(channel->getLoop() == this);
  assertInLoopThread();
  poller_->updateChannel(channel);
}
It first asserts that the channel's loop is the currently running loop, and that updateChannel is being called in that loop's own thread. Two invariants follow:
1. A channel belongs to exactly one loop; a channel owned by one loop must not be used from another loop.
2. A loop belongs to exactly one thread: one loop per thread.
Next comes poller_->updateChannel. muduo supports two I/O-multiplexing backends, poll and epoll, with epoll as the default. Let's see what epoll's updateChannel does:
void EPollPoller::updateChannel(Channel* channel)
{
  Poller::assertInLoopThread();
  LOG_TRACE << "fd = " << channel->fd() << " events = " << channel->events();
  if (channel->index() < 0)
  {
    // a new one, add with EPOLL_CTL_ADD
    int fd = channel->fd();
    assert(channels_.find(fd) == channels_.end());
    channel->set_index(1);
    channels_[fd] = channel;
    update(EPOLL_CTL_ADD, channel);
  }
  else
  {
    // update existing one with EPOLL_CTL_MOD
    int fd = channel->fd();
    assert(channels_.find(fd) != channels_.end());
    assert(channels_[fd] == channel);
    assert(channel->index() == 1);
    update(EPOLL_CTL_MOD, channel);
  }
}
Poller::assertInLoopThread() guarantees that updateChannel is called in the loop thread that owns it. Channel::index_ marks whether the channel has been registered with the EPollPoller:
if it is negative, the channel is not yet registered, so it is added to the channels_ map and update(EPOLL_CTL_ADD, channel) is called;
otherwise, update(EPOLL_CTL_MOD, channel) is called.
Now look at EPollPoller::update:
void EPollPoller::update(int operation, Channel* channel)
{
  struct epoll_event event;
  bzero(&event, sizeof event);
  event.events = channel->events();
  event.data.ptr = channel;
  if (::epoll_ctl(epollfd_, operation, channel->fd(), &event) < 0)
  {
    LOG_SYSFATAL << "epoll_ctl op=" << operation;
  }
}
Now it is clear: the interested events are registered with the kernel, and the pointer to the current channel is stored in event.data.ptr (note that the socket fd itself is not what gets stored). This is a clever design whose payoff shows up at dispatch time: when an event fires, epoll_wait returns that same pointer, so the poller goes straight from the event back to its Channel without any fd-to-channel lookup.
With that, the server side is basically assembled; the key was how the listening socket gets wired up to epoll.