The code is as follows:
import argparse
import asyncio
import functools
import logging
import random
import urllib.parse

loop = asyncio.get_event_loop()


@asyncio.coroutine
def print_http_headers(no, url, keepalive):
    url = urllib.parse.urlsplit(url)
    wait_for = functools.partial(asyncio.wait_for, timeout=3, loop=loop)
    query = ('HEAD {url.path} HTTP/1.1\r\n'
             'Host: {url.hostname}\r\n'
             '\r\n').format(url=url).encode('utf-8')
    rd, wr = yield from wait_for(asyncio.open_connection(url.hostname, 80))
    while True:
        wr.write(query)
        while True:
            line = yield from wait_for(rd.readline())
            if not line:  # end of connection
                wr.close()
                return no
            line = line.decode('utf-8').rstrip()
            if not line:  # end of header
                break
            logging.debug('(%d) HTTP header> %s' % (no, line))
        yield from asyncio.sleep(random.randint(1, keepalive//2))


@asyncio.coroutine
def do_requests(args):
    conn_pool = set()
    waiter = asyncio.Future()

    def _on_complete(fut):
        conn_pool.remove(fut)
        exc = fut.exception()  # check for failure first: result() would re-raise it
        if exc is not None:
            logging.info('conn exception: {}'.format(exc))
        else:
            logging.info('conn#{} result'.format(fut.result()))
        if not conn_pool:
            waiter.set_result('event loop is done')

    for i in range(args.connections):
        fut = asyncio.async(print_http_headers(i, args.url, args.keepalive))
        fut.add_done_callback(_on_complete)
        conn_pool.add(fut)
        if i % 10 == 0:
            yield from asyncio.sleep(0.01)
    logging.info((yield from waiter))


def main():
    parser = argparse.ArgumentParser(description='asyncli')
    parser.add_argument('url', help='page address')
    parser.add_argument('-c', '--connections', type=int, default=1,
                        help='number of simultaneous connections')
    parser.add_argument('-k', '--keepalive', type=int, default=60,
                        help='HTTP keepalive timeout in seconds')
    args = parser.parse_args()
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    loop.run_until_complete(do_requests(args))
    loop.close()


if __name__ == '__main__':
    main()
Testing and Analysis
Hardware: CPU 2.3GHz / 2 cores, RAM 2GB
Software: CentOS 6.5 (kernel 2.6.32), Python 3.3 (pip install asyncio), nginx 1.4.7
Settings: ulimit -n 10240; nginx worker connection limit raised to 10240
Start the web server; a single worker process is enough:
# ../sbin/nginx
# ps ax | grep nginx
2007 ? Ss 0:00 nginx: master process ../sbin/nginx
2008 ? S 0:00 nginx: worker process
Start the benchmark tool, opening 10k connections against nginx's default test page:
$ python asyncli.py http://10.211.55.8/ -c 10000
Average requests per second, computed from the nginx access log:
# tail -1000000 access.log | awk '{ print $4 }' | sort | uniq -c | awk '{ cnt+=1; sum+=$1 } END { printf "avg = %d\n", sum/cnt }'
avg = 548
Excerpt of top output:
VIRT RES SHR S %CPU %MEM TIME+ COMMAND
657m 115m 3860 R 60.2 6.2 4:30.02 python
54208 10m 848 R 7.0 0.6 0:30.79 nginx
Summary:
1. The Python implementation is concise and clear: under 80 lines, standard library only, and the logic is easy to follow. Imagine building the same thing on the C/C++ standard library, and you will appreciate the saying "Life is short, use Python."
2. Python's runtime efficiency is not impressive. Once the connections are established, the client and the server do roughly symmetric send/receive work, yet in the top output above Python uses about 10x nginx's CPU and 10x its RAM, which multiplies out to a roughly 100x efficiency gap (CPU x RAM) and says something about the distance between Python and C. The comparison is admittedly extreme: nginx is not only written in C but deeply optimized for CPU and RAM usage. Still, a two-order-of-magnitude gap on similar workloads, barring a bug, means the two were designed from different starting points: Python puts readability and ease of use first and performance second, while nginx is a highly tuned web server in which even writing a module is laborious, and reusing its asynchronous framework is harder still. The trade-off between development efficiency and runtime efficiency is always with us.
3. Single-threaded async IO vs. multi-threaded sync IO. The example above is single-threaded async IO; even without writing a demo it is clear that multi-threaded sync IO would be far less efficient. One thread per connection? With 10k threads, the thread stacks alone would take 600+ MB (64KB x 10000) of memory, and once you add thread context switching and the GIL, it is basically a nightmare. A minimal sketch of the threaded alternative follows.
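For contrast, here is a minimal sketch of what the thread-per-connection synchronous client would look like. This is illustrative only: the helper name fetch_headers and the thread count are made up, and the target host is the test address used above.

import socket
import threading

NUM_THREADS = 100  # scaling this to 10k threads is where the stack-memory estimate above comes from

def fetch_headers(no, host, port=80):
    # One blocking connection per thread: the thread sits idle in recv()
    # for the whole network round trip, holding its stack the entire time.
    sock = socket.create_connection((host, port), timeout=3)
    try:
        sock.sendall(b'HEAD / HTTP/1.1\r\nHost: ' + host.encode('utf-8') + b'\r\n\r\n')
        sock.recv(4096)  # read (part of) the response headers
    finally:
        sock.close()

threads = [threading.Thread(target=fetch_headers, args=(i, '10.211.55.8'))
           for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()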
asyncio core concepts
The following are the four core concepts you need to understand when learning asyncio; a toy example after the list ties them together, and the references below give more detail.
1. Event loop. The key to achieving asynchrony on a single thread is this high-level event loop, and it executes synchronously.
2. Future. Asynchronous IO is made up of many asynchronous tasks, and each task is controlled by a future.
3. Coroutine. The concrete execution logic of each asynchronous task is expressed as a coroutine.
4. Generator (yield & yield from). Used extensively inside asyncio; a syntax detail that cannot be ignored.
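A toy sketch tying the four concepts together, in the same Python 3.3/3.4 style as the code above (the coroutine name slow_add is made up):

import asyncio

@asyncio.coroutine                       # 3. coroutine: the task's concrete logic
def slow_add(a, b):
    yield from asyncio.sleep(0.1)        # 4. yield from delegates to a subgenerator
    return a + b

loop = asyncio.get_event_loop()          # 1. event loop: the single-threaded scheduler
fut = asyncio.async(slow_add(1, 2))      # 2. future/task: a handle on the pending result
print(loop.run_until_complete(fut))     # prints 3
loop.close()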
References
1. asyncio – Asynchronous I/O, event loop, coroutines and tasks, https://docs.python.org/3/library/asyncio.html
2. PEP 3156, Asynchronous IO Support Rebooted: the "asyncio" Module, http://legacy.python.org/dev/peps/pep-3156/
3. PEP 380, Syntax for Delegating to a Subgenerator, http://legacy.python.org/dev/peps/pep-0380/
4. PEP 342, Coroutines via Enhanced Generators, http://legacy.python.org/dev/peps/pep-0342/