Links
https://github.com/libevent/libevent
https://libevent.org/
These documents are Copyright (c) 2009-2012 by Nick Mathewson, and are made available under the Creative Commons Attribution-Noncommercial-Share Alike license, version 3.0. Future versions may be made available under a less restrictive license.
Additionally, the source code examples in these documents are also licensed under the so-called "3-Clause" or "Modified" BSD license. See the license_bsd file distributed with these documents for the full terms.
For the latest version of this document, see http://www.wangafu.net/~nickm/libevent-book/TOC.html
To get the source for the latest version of this document, install git and run "git clone git://github.com/nmathewson/libevent-book.git"
About this document
This document will teach you how to use Libevent 2.0 (and later) to write fast, portable, asynchronous network IO programs in C. We assume:
That you already know C.
That you already know the basic C networking calls (socket(), connect(), and so on).
A note on the examples
The examples here should work fine on Linux, FreeBSD, OpenBSD, NetBSD, Mac OS X, Solaris, and Android. Some of the examples may not compile on Windows.
Most beginning programmers start with blocking IO calls. An IO call is synchronous: when you make it, it does not return until the operation is completed, or until enough time has passed that your network stack gives up. When you call "connect()" on a TCP connection, for example, your operating system queues a SYN packet to the host on the other side of the TCP connection. It does not return control to your application until either it has received a SYN ACK packet from that host, or until enough time has passed that it decides to give up.
Here's an example of a really simple client that uses blocking network calls. It opens a connection to www.google.com, sends it a simple HTTP request, and writes the response to stdout.
Example: A simple blocking HTTP client
/* For sockaddr_in */
#include <netinet/in.h>
/* For socket functions */
#include <sys/socket.h>
/* For gethostbyname */
#include <netdb.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
int main(int c, char **v)
{
const char query[] =
"GET / HTTP/1.0\r\n"
"Host: www.google.com\r\n"
"\r\n";
const char hostname[] = "www.google.com";
struct sockaddr_in sin;
struct hostent *h;
const char *cp;
int fd;
ssize_t n_written, remaining;
char buf[1024];
/* Look up the IP address for the hostname. Watch out; this isn't
threadsafe on most platforms. */
h = gethostbyname(hostname);
if (!h) {
fprintf(stderr, "Couldn't lookup %s: %s", hostname, hstrerror(h_errno));
return 1;
}
if (h->h_addrtype != AF_INET) {
fprintf(stderr, "No ipv6 support, sorry.");
return 1;
}
/* Allocate a new socket */
fd = socket(AF_INET, SOCK_STREAM, 0);
if (fd < 0) {
perror("socket");
return 1;
}
/* Connect to the remote host. */
sin.sin_family = AF_INET;
sin.sin_port = htons(80);
sin.sin_addr = *(struct in_addr*)h->h_addr;
if (connect(fd, (struct sockaddr*) &sin, sizeof(sin))) {
perror("connect");
close(fd);
return 1;
}
/* Write the query. */
/* XXX Can send succeed partially? */
cp = query;
remaining = strlen(query);
while (remaining) {
n_written = send(fd, cp, remaining, 0);
if (n_written <= 0) {
perror("send");
return 1;
}
remaining -= n_written;
cp += n_written;
}
/* Get an answer back. */
while (1) {
ssize_t result = recv(fd, buf, sizeof(buf), 0);
if (result == 0) {
break;
} else if (result < 0) {
perror("recv");
close(fd);
return 1;
}
fwrite(buf, 1, result, stdout);
}
close(fd);
return 0;
}
All of the network calls in the code above are blocking: gethostbyname does not return until it has resolved www.google.com or failed; connect does not return until it has connected; the recv calls do not return until they have received data or a close; and the send call does not return until it has at least flushed its output to the kernel's write buffers.
Now, blocking IO is not necessarily evil. If there's nothing else you want your program to be doing in the meantime, blocking IO will work fine for you. But suppose you need to write a program that handles multiple connections at once. To make the example concrete: suppose you want to read input from two connections, and you don't know which connection will get input first. You can't write
Bad Example
/* This won't work. */
char buf[1024];
int i, n;
while (i_still_want_to_read()) {
for (i=0; i<n_sockets; ++i) {
n = recv(fd[i], buf, sizeof(buf), 0);
if (n==0)
handle_close(fd[i]);
else if (n<0)
handle_error(fd[i], errno);
else
handle_input(fd[i], buf, n);
}
}
because if data arrives on fd[2] first, your program won't even try reading from fd[2] until the reads from fd[0] and fd[1] have gotten some data and finished.
Sometimes people solve this problem with multithreading, or with multi-process servers. One of the simplest ways to do multithreading is to use a separate process (or thread) to handle each connection. Since each connection has its own process, a blocking IO call that waits on one connection won't make any of the other connections' processes block.
Here's another example program. It is a trivial server that listens for TCP connections on port 40713, reads data from its input one line at a time, and writes out the ROT13 obfuscation of each line as it arrives. It uses the Unix fork() call to create a new process for each incoming connection.
Example: Forking ROT13 server
/* For sockaddr_in */
#include <netinet/in.h>
/* For socket functions */
#include <sys/socket.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#define MAX_LINE 16384
char rot13_char(char c)
{
/* We don't want to use isalpha here; setting the locale would change
* which characters are considered alphabetical. */
if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
return c + 13;
else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
return c - 13;
else
return c;
}
void child(int fd)
{
char outbuf[MAX_LINE+1];
size_t outbuf_used = 0;
ssize_t result;
while (1) {
char ch;
result = recv(fd, &ch, 1, 0);
if (result == 0) {
break;
} else if (result == -1) {
perror("read");
break;
}
/* We do this test to keep the user from overflowing the buffer. */
if (outbuf_used < sizeof(outbuf)) {
outbuf[outbuf_used++] = rot13_char(ch);
}
if (ch == '\n') {
send(fd, outbuf, outbuf_used, 0);
outbuf_used = 0;
continue;
}
}
}
void run(void)
{
int listener;
struct sockaddr_in sin;
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = 0;
sin.sin_port = htons(40713);
listener = socket(AF_INET, SOCK_STREAM, 0);
#ifndef WIN32
{
int one = 1;
setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
}
#endif
if (bind(listener, (struct sockaddr*)&sin, sizeof(sin)) < 0) {
perror("bind");
return;
}
if (listen(listener, 16)<0) {
perror("listen");
return;
}
while (1) {
struct sockaddr_storage ss;
socklen_t slen = sizeof(ss);
int fd = accept(listener, (struct sockaddr*)&ss, &slen);
if (fd < 0) {
perror("accept");
} else {
if (fork() == 0) {
child(fd);
exit(0);
}
}
}
}
int main(int c, char **v)
{
run();
return 0;
}
So, do we now have a perfect solution for handling multiple connections at once? Can I stop writing this book and go work on something else? Not quite. First off, process creation (and even thread creation) can be pretty expensive on some platforms. In real life, you'd want to use a thread pool instead of creating new processes. But more fundamentally, threads don't scale as far as you might like. If your program needs to handle thousands or tens of thousands of connections at a time, dealing with tens of thousands of threads will not be as efficient as trying to have only a few threads per CPU.
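To make that thread-pool idea a little more concrete, here is a rough sketch (not part of the original text) of one simple way to do it: start a fixed number of worker threads up front and let each of them accept() on the shared listener, reusing the blocking child() loop from the forking server above. NUM_WORKERS, worker(), and run_threaded() are made-up names for this sketch, not a standard API.
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>
#define NUM_WORKERS 8
void child(int fd);   /* the blocking per-connection loop defined in the example above */
static void *worker(void *arg)
{
    int listener = *(int*)arg;
    while (1) {
        int fd = accept(listener, NULL, NULL);   /* blocks until a client connects */
        if (fd >= 0) {
            child(fd);                           /* serve this connection to completion... */
            close(fd);                           /* ...then go back and wait for the next one */
        }
    }
    return NULL;
}
/* Call this with an already bound and listening socket, in place of the fork() accept loop. */
static void run_threaded(int listener)
{
    pthread_t workers[NUM_WORKERS];
    int i;
    for (i = 0; i < NUM_WORKERS; ++i)
        pthread_create(&workers[i], NULL, worker, &listener);
    for (i = 0; i < NUM_WORKERS; ++i)
        pthread_join(workers[i], NULL);          /* in this sketch the workers never exit */
}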
But if threading isn't the answer to handling multiple connections, what is? In the Unix paradigm, you make your sockets nonblocking. The Unix call to do this is:
fcntl(fd,F_SETFL,O_NONBLOCK);
where fd is the file descriptor for the socket.
[A file descriptor is the number the kernel assigns to the socket when you open it. You use this number to make Unix calls referring to the socket.]
Once you've made fd (the socket) nonblocking, then from that point on, whenever you make a network call on fd, the call will either complete the operation immediately or return a special error code indicating "I couldn't make any progress now, try again." So our two-socket example might be naively written as:
Bad Example: busy-polling all sockets
/* This will work, but the performance will be unforgivably bad. */
int i, n;
char buf[1024];
for (i=0; i < n_sockets; ++i)
fcntl(fd[i], F_SETFL, O_NONBLOCK);
while (i_still_want_to_read()) {
for (i=0; i < n_sockets; ++i) {
n = recv(fd[i], buf, sizeof(buf), 0);
if (n == 0) {
handle_close(fd[i]);
} else if (n < 0) {
if (errno == EAGAIN)
; /* The kernel didn't have any data for us to read. */
else
handle_error(fd[i], errno);
} else {
handle_input(fd[i], buf, n);
}
}
}
Now that we're using nonblocking sockets, the code above would work… but only barely. The performance will be awful, for two reasons. First, when there is no data to read on either connection, the loop will spin indefinitely, using up all your CPU cycles. Second, when you try to handle more than one or two connections with this approach, you'll make a kernel call for each one, whether it has any data for you or not. So what we need is a way to tell the kernel "wait until one of these sockets is ready to give me some data, and tell me which ones are ready."
The oldest solution that people still use for this problem is select(). The select() call takes three sets of fds (implemented as bit arrays): one for reading, one for writing, and one for "exceptions". It waits until a socket from one of the sets is ready, and alters the sets to contain only the sockets ready for use.
Here is our example again, using select:
Example: Using select
/* If you only have a couple dozen fds, this version won't be awful */
fd_set readset;
int i, n;
char buf[1024];
while (i_still_want_to_read()) {
int maxfd = -1;
FD_ZERO(&readset);
/* Add all of the interesting fds to readset */
for (i=0; i < n_sockets; ++i) {
if (fd[i]>maxfd) maxfd = fd[i];
FD_SET(fd[i], &readset);
}
/* Wait until one or more fds are ready to read */
select(maxfd+1, &readset, NULL, NULL, NULL);
/* Process all of the fds that are still set in readset */
for (i=0; i < n_sockets; ++i) {
if (FD_ISSET(fd[i], &readset)) {
n = recv(fd[i], buf, sizeof(buf), 0);
if (n == 0) {
handle_close(fd[i]);
} else if (n < 0) {
if (errno == EAGAIN)
; /* The kernel didn't have any data for us to read. */
else
handle_error(fd[i], errno);
} else {
handle_input(fd[i], buf, n);
}
}
}
}
And here's a reimplementation of our ROT13 server, using select() this time.
Example: select()-based ROT13 server
/* For sockaddr_in */
#include <netinet/in.h>
/* For socket functions */
#include <sys/socket.h>
/* For fcntl */
#include <fcntl.h>
/* for select */
#include <sys/select.h>
#include <assert.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#define MAX_LINE 16384
char rot13_char(char c)
{
/* We don't want to use isalpha here; setting the locale would change
* which characters are considered alphabetical. */
if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
return c + 13;
else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
return c - 13;
else
return c;
}
struct fd_state {
char buffer[MAX_LINE];
size_t buffer_used;
int writing;
size_t n_written;
size_t write_upto;
};
struct fd_state * alloc_fd_state(void)
{
struct fd_state *state = malloc(sizeof(struct fd_state));
if (!state)
return NULL;
state->buffer_used = state->n_written = state->writing =
state->write_upto = 0;
return state;
}
void free_fd_state(struct fd_state *state)
{
free(state);
}
void make_nonblocking(int fd)
{
fcntl(fd, F_SETFL, O_NONBLOCK);
}
int do_read(int fd, struct fd_state *state)
{
char buf[1024];
int i;
ssize_t result;
while (1) {
result = recv(fd, buf, sizeof(buf), 0);
if (result <= 0)
break;
for (i=0; i < result; ++i) {
if (state->buffer_used < sizeof(state->buffer))
state->buffer[state->buffer_used++] = rot13_char(buf[i]);
if (buf[i] == '\n') {
state->writing = 1;
state->write_upto = state->buffer_used;
}
}
}
if (result == 0) {
return 1;
} else if (result < 0) {
if (errno == EAGAIN)
return 0;
return -1;
}
return 0;
}
int do_write(int fd, struct fd_state *state)
{
while (state->n_written < state->write_upto) {
ssize_t result = send(fd, state->buffer + state->n_written,
state->write_upto - state->n_written, 0);
if (result < 0) {
if (errno == EAGAIN)
return 0;
return -1;
}
assert(result != 0);
state->n_written += result;
}
if (state->n_written == state->buffer_used)
state->n_written = state->write_upto = state->buffer_used = 0;
state->writing = 0;
return 0;
}
void run(void)
{
int listener;
struct fd_state *state[FD_SETSIZE];
struct sockaddr_in sin;
int i, maxfd;
fd_set readset, writeset, exset;
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = 0;
sin.sin_port = htons(40713);
for (i = 0; i < FD_SETSIZE; ++i)
state[i] = NULL;
listener = socket(AF_INET, SOCK_STREAM, 0);
make_nonblocking(listener);
#ifndef WIN32
{
int one = 1;
setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
}
#endif
if (bind(listener, (struct sockaddr*)&sin, sizeof(sin)) < 0) {
perror("bind");
return;
}
if (listen(listener, 16)<0) {
perror("listen");
return;
}
FD_ZERO(&readset);
FD_ZERO(&writeset);
FD_ZERO(&exset);
while (1) {
maxfd = listener;
FD_ZERO(&readset);
FD_ZERO(&writeset);
FD_ZERO(&exset);
FD_SET(listener, &readset);
for (i=0; i < FD_SETSIZE; ++i) {
if (state[i]) {
if (i > maxfd)
maxfd = i;
FD_SET(i, &readset);
if (state[i]->writing) {
FD_SET(i, &writeset);
}
}
}
if (select(maxfd+1, &readset, &writeset, &exset, NULL) < 0) {
perror("select");
return;
}
if (FD_ISSET(listener, &readset)) {
struct sockaddr_storage ss;
socklen_t slen = sizeof(ss);
int fd = accept(listener, (struct sockaddr*)&ss, &slen);
if (fd < 0) {
perror("accept");
} else if (fd >= FD_SETSIZE) {
close(fd);
} else {
make_nonblocking(fd);
state[fd] = alloc_fd_state();
assert(state[fd]);/*XXX*/
}
}
for (i=0; i < maxfd+1; ++i) {
int r = 0;
if (i == listener)
continue;
if (FD_ISSET(i, &readset)) {
r = do_read(i, state[i]);
}
if (r == 0 && FD_ISSET(i, &writeset)) {
r = do_write(i, state[i]);
}
if (r) {
free_fd_state(state[i]);
state[i] = NULL;
close(i);
}
}
}
}
int main(int c, char **v)
{
setvbuf(stdout, NULL, _IONBF, 0);
run();
return 0;
}
But we're still not done. Because generating and reading the select() bit arrays takes time proportional to the largest fd that you provided to select(), the select() call scales terribly when the number of sockets is high.
[On the userspace side, generating and reading the bit arrays takes time proportional to the number of fds that you provided to select(). But on the kernel side, reading the bit arrays takes time proportional to the largest fd in the bit array, which tends to be around the total number of fds in use in the whole program, regardless of how many fds are added to the sets passed to select().]
Different operating systems have provided different replacement functions for select(). These include poll(), epoll(), kqueue(), evports, and /dev/poll. All of them give better performance than select(), and all except poll() give O(1) performance for adding a socket, removing a socket, and noticing that a socket is ready for IO.
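To give a feel for what these replacements look like, here is a brief sketch (not part of the original text) of the earlier polling loop rewritten against Linux's epoll interface. It assumes the same hypothetical helpers (fd[], n_sockets, i_still_want_to_read(), handle_input(), handle_close(), handle_error()) used in the select() example above.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>
/* The same hypothetical helpers used in the select() example above: */
int i_still_want_to_read(void);
void handle_input(int fd, char *buf, size_t len);
void handle_close(int fd);
void handle_error(int fd, int err);
void read_with_epoll(int fd[], int n_sockets)
{
    char buf[1024];
    struct epoll_event ev, events[64];
    int i, nready;
    ssize_t n;
    int epfd = epoll_create1(0);                      /* one epoll instance for all the sockets */
    for (i = 0; i < n_sockets; ++i) {
        ev.events = EPOLLIN;                          /* we care about readability */
        ev.data.fd = fd[i];
        epoll_ctl(epfd, EPOLL_CTL_ADD, fd[i], &ev);   /* registering a socket is O(1) */
    }
    while (i_still_want_to_read()) {
        /* Block until at least one socket is ready; only the ready fds are returned. */
        nready = epoll_wait(epfd, events, 64, -1);
        for (i = 0; i < nready; ++i) {
            int ready_fd = events[i].data.fd;
            n = recv(ready_fd, buf, sizeof(buf), 0);
            if (n == 0)
                handle_close(ready_fd);
            else if (n < 0 && errno != EAGAIN)
                handle_error(ready_fd, errno);
            else if (n > 0)
                handle_input(ready_fd, buf, n);
        }
    }
}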
Unfortunately, none of the efficient interfaces is a ubiquitous standard. Linux has epoll(), the BSDs (including Darwin) have kqueue(), Solaris has evports and /dev/poll… and none of these operating systems has any of the others. So if you want to write a portable high-performance asynchronous application, you'll need an abstraction that wraps all of these interfaces and provides whichever one of them is the most efficient.
And that's what the lowest level of the Libevent API does for you. It provides a consistent interface to various select() replacements, using the most efficient version available on the computer where it's running.
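As a small, concrete illustration of that (this snippet is not part of the original text), the program below just creates an event_base and asks it which backend it picked using event_base_get_method(); on Linux it will typically report "epoll", and on the BSDs or Mac OS X "kqueue".
#include <event2/event.h>
#include <stdio.h>
int main(void)
{
    struct event_base *base = event_base_new();   /* picks the fastest backend available here */
    if (!base) {
        fprintf(stderr, "Couldn't create an event_base\n");
        return 1;
    }
    printf("Libevent is using the \"%s\" backend\n", event_base_get_method(base));
    event_base_free(base);
    return 0;
}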
Here's yet another version of our asynchronous ROT13 server. This time, it uses Libevent 2 instead of select(). Note that the fd_sets are gone now: instead, we associate and disassociate events with a struct event_base, which might be implemented in terms of select(), poll(), epoll(), kqueue(), and so on.
Example: A low-level ROT13 server with Libevent
/* For sockaddr_in */
#include <netinet/in.h>
/* For socket functions */
#include <sys/socket.h>
/* For fcntl */
#include <fcntl.h>
#include <event2/event.h>
#include <assert.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#define MAX_LINE 16384
void do_read(evutil_socket_t fd, short events, void *arg);
void do_write(evutil_socket_t fd, short events, void *arg);
char rot13_char(char c)
{
/* We don't want to use isalpha here; setting the locale would change
* which characters are considered alphabetical. */
if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
return c + 13;
else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
return c - 13;
else
return c;
}
struct fd_state {
char buffer[MAX_LINE];
size_t buffer_used;
size_t n_written;
size_t write_upto;
struct event *read_event;
struct event *write_event;
};
struct fd_state * alloc_fd_state(struct event_base *base, evutil_socket_t fd)
{
struct fd_state *state = malloc(sizeof(struct fd_state));
if (!state)
return NULL;
state->read_event = event_new(base, fd, EV_READ|EV_PERSIST, do_read, state);
if (!state->read_event) {
free(state);
return NULL;
}
state->write_event =
event_new(base, fd, EV_WRITE|EV_PERSIST, do_write, state);
if (!state->write_event) {
event_free(state->read_event);
free(state);
return NULL;
}
state->buffer_used = state->n_written = state->write_upto = 0;
assert(state->write_event);
return state;
}
void free_fd_state(struct fd_state *state)
{
event_free(state->read_event);
event_free(state->write_event);
free(state);
}
void do_read(evutil_socket_t fd, short events, void *arg)
{
struct fd_state *state = arg;
char buf[1024];
int i;
ssize_t result;
while (1) {
assert(state->write_event);
result = recv(fd, buf, sizeof(buf), 0);
if (result <= 0)
break;
for (i=0; i < result; ++i) {
if (state->buffer_used < sizeof(state->buffer))
state->buffer[state->buffer_used++] = rot13_char(buf[i]);
if (buf[i] == '\n') {
assert(state->write_event);
event_add(state->write_event, NULL);
state->write_upto = state->buffer_used;
}
}
}
if (result == 0) {
free_fd_state(state);
} else if (result < 0) {
if (errno == EAGAIN) // XXXX use evutil macro
return;
perror("recv");
free_fd_state(state);
}
}
void do_write(evutil_socket_t fd, short events, void *arg)
{
struct fd_state *state = arg;
while (state->n_written < state->write_upto) {
ssize_t result = send(fd, state->buffer + state->n_written,
state->write_upto - state->n_written, 0);
if (result < 0) {
if (errno == EAGAIN) // XXX use evutil macro
return;
free_fd_state(state);
return;
}
assert(result != 0);
state->n_written += result;
}
if (state->n_written == state->buffer_used)
state->n_written = state->write_upto = state->buffer_used = 0;
event_del(state->write_event);
}
void do_accept(evutil_socket_t listener, short event, void *arg)
{
struct event_base *base = arg;
struct sockaddr_storage ss;
socklen_t slen = sizeof(ss);
int fd = accept(listener, (struct sockaddr*)&ss, &slen);
if (fd < 0) { // XXXX eagain??
perror("accept");
} else if (fd > FD_SETSIZE) {
close(fd); // XXX replace all closes with EVUTIL_CLOSESOCKET */
} else {
struct fd_state *state;
evutil_make_socket_nonblocking(fd);
state = alloc_fd_state(base, fd);
assert(state); /*XXX err*/
assert(state->write_event);
event_add(state->read_event, NULL);
}
}
void run(void)
{
evutil_socket_t listener;
struct sockaddr_in sin;
struct event_base *base;
struct event *listener_event;
base = event_base_new();
if (!base)
return; /*XXXerr*/
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = 0;
sin.sin_port = htons(40713);
listener = socket(AF_INET, SOCK_STREAM, 0);
evutil_make_socket_nonblocking(listener);
#ifndef WIN32
{
int one = 1;
setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
}
#endif
if (bind(listener, (struct sockaddr*)&sin, sizeof(sin)) < 0) {
perror("bind");
return;
}
if (listen(listener, 16)<0) {
perror("listen");
return;
}
listener_event = event_new(base, listener, EV_READ|EV_PERSIST, do_accept, (void*)base);
/*XXX check it */
event_add(listener_event, NULL);
event_base_dispatch(base);
}
int main(int c, char **v)
{
setvbuf(stdout, NULL, _IONBF, 0);
run();
return 0;
}
(Other things to note in the code: instead of typing the sockets as "int", we use the type evutil_socket_t. Instead of calling fcntl(O_NONBLOCK) to make the sockets nonblocking, we call evutil_make_socket_nonblocking. These changes make our code compatible with the divergent parts of the Win32 networking API.)
What about convenience? (and what about Windows?)
You may have noticed that as our code has gotten more and more efficient, it has also gotten more and more complex. Back when we were forking, we didn't have to manage a buffer for each connection: we just had a separate stack-allocated buffer for each process. We didn't need to explicitly track whether each socket was reading or writing: that was implicit in where we were in the code. And we didn't need a structure to track how much of each operation had completed: we just used loops and stack variables.
Moreover, if you're deeply experienced with networking on Windows, you'll realize that Libevent probably isn't getting optimal performance when it's used as in the example above. On Windows, the way you do fast asynchronous IO is not with a select()-like interface: it's by using the IOCP (IO Completion Ports) API. Unlike the other fast networking APIs, IOCP does not alert your program when a socket is ready for an operation that your program then has to perform. Instead, the program tells the Windows networking stack to start a network operation, and IOCP tells the program when the operation has finished.
Fortunately, the Libevent 2 "bufferevents" interface solves both of these problems: it makes programs much simpler to write, and provides an interface that can be implemented efficiently on both Windows and Unix.
Here's the ROT13 server one last time, using the bufferevents API.
Example: A simpler ROT13 server with Libevent
/* For sockaddr_in */
#include <netinet/in.h>
/* For socket functions */
#include <sys/socket.h>
/* For fcntl */
#include <fcntl.h>
#include <event2/event.h>
#include <event2/buffer.h>
#include <event2/bufferevent.h>
#include <assert.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#define MAX_LINE 16384
void do_read(evutil_socket_t fd, short events, void *arg);
void do_write(evutil_socket_t fd, short events, void *arg);
char rot13_char(char c)
{
/* We don't want to use isalpha here; setting the locale would change
* which characters are considered alphabetical. */
if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
return c + 13;
else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
return c - 13;
else
return c;
}
void readcb(struct bufferevent *bev, void *ctx)
{
struct evbuffer *input, *output;
char *line;
size_t n;
int i;
input = bufferevent_get_input(bev);
output = bufferevent_get_output(bev);
while ((line = evbuffer_readln(input, &n, EVBUFFER_EOL_LF))) {
for (i = 0; i < n; ++i)
line[i] = rot13_char(line[i]);
evbuffer_add(output, line, n);
evbuffer_add(output, "\n", 1);
free(line);
}
if (evbuffer_get_length(input) >= MAX_LINE) {
/* Too long; just process what there is and go on so that the buffer
* doesn't grow infinitely long. */
char buf[1024];
while (evbuffer_get_length(input)) {
int n = evbuffer_remove(input, buf, sizeof(buf));
for (i = 0; i < n; ++i)
buf[i] = rot13_char(buf[i]);
evbuffer_add(output, buf, n);
}
evbuffer_add(output, "\n", 1);
}
}
void errorcb(struct bufferevent *bev, short error, void *ctx)
{
if (error & BEV_EVENT_EOF) {
/* connection has been closed, do any clean up here */
/* ... */
} else if (error & BEV_EVENT_ERROR) {
/* check errno to see what error occurred */
/* ... */
} else if (error & BEV_EVENT_TIMEOUT) {
/* must be a timeout event handle, handle it */
/* ... */
}
bufferevent_free(bev);
}
void do_accept(evutil_socket_t listener, short event, void *arg)
{
struct event_base *base = arg;
struct sockaddr_storage ss;
socklen_t slen = sizeof(ss);
int fd = accept(listener, (struct sockaddr*)&ss, &slen);
if (fd < 0) {
perror("accept");
} else if (fd > FD_SETSIZE) {
close(fd);
} else {
struct bufferevent *bev;
evutil_make_socket_nonblocking(fd);
bev = bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
bufferevent_setcb(bev, readcb, NULL, errorcb, NULL);
bufferevent_setwatermark(bev, EV_READ, 0, MAX_LINE);
bufferevent_enable(bev, EV_READ|EV_WRITE);
}
}
void run(void)
{
evutil_socket_t listener;
struct sockaddr_in sin;
struct event_base *base;
struct event *listener_event;
base = event_base_new();
if (!base)
return; /*XXXerr*/
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = 0;
sin.sin_port = htons(40713);
listener = socket(AF_INET, SOCK_STREAM, 0);
evutil_make_socket_nonblocking(listener);
#ifndef WIN32
{
int one = 1;
setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
}
#endif
if (bind(listener, (struct sockaddr*)&sin, sizeof(sin)) < 0) {
perror("bind");
return;
}
if (listen(listener, 16)<0) {
perror("listen");
return;
}
listener_event = event_new(base, listener, EV_READ|EV_PERSIST, do_accept, (void*)base);
/*XXX check it */
event_add(listener_event, NULL);
event_base_dispatch(base);
}
int main(int c, char **v)
{
setvbuf(stdout, NULL, _IONBF, 0);
run();
return 0;
}
How efficient is all of this, really?
XXXX write an efficiency section here. The benchmarks on the libevent page are really out of date.