Crawler Learning, Part 12: A First Try at Multiprocess Crawling

    The crawlers written in earlier posts were all serial, single-threaded programs; as the number of pages and the volume of data grow, they slow down noticeably. This post uses the process-pool API of Python's multiprocessing library to test how much a multiprocess crawler improves throughput, scraping the text section of qiushibaike for each post's user ID, joke text, laugh count, and comment count. Since this is only a performance test, the scraped data is not saved. Here is the code:

import requests
import re
import time
from multiprocessing import Pool

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36'
}

def re_scraper(url):
    res = requests.get(url,headers=headers)
    # NOTE: the HTML tags inside these regex patterns were stripped from the
    # original post; they are reconstructed here to match qiushibaike's page
    # markup at the time of writing.
    ids = re.findall(r'<h2>(.*?)</h2>', res.text, re.S)
    contents = re.findall(r'<div class="content">.*?<span>(.*?)</span>', res.text, re.S)
    laughs = re.findall(r'<span class="stats-vote"><i class="number">(\d+)</i>', res.text, re.S)
    comments = re.findall(r'<i class="number">(\d+)</i>评论', res.text, re.S)
    for id, content, laugh, comment in zip(ids, contents, laughs, comments):
        info = {
            'id': id,
            'content': content,
            'laugh': laugh,
            'comment': comment
        }
    # Crawl only, don't store anything -- this is purely a performance test
    return

if __name__ == '__main__':
    urls = ['https://www.qiushibaike.com/text/page/{}/'.format(str(i)) for i in range(1, 36)]

    # Serial (single-process) crawl
    start_1 = time.time()
    for url in urls:
        re_scraper(url)
    end_1 = time.time()
    print("Serial crawler time:", end_1 - start_1)

    # Pool of 2 worker processes
    start_2 = time.time()
    pool = Pool(processes=2)
    pool.map(re_scraper, urls)
    end_2 = time.time()
    print("2-process crawler time:", end_2 - start_2)

    # Pool of 4 worker processes
    start_3 = time.time()
    pool = Pool(processes=4)
    pool.map(re_scraper, urls)
    end_3 = time.time()
    print("4-process crawler time:", end_3 - start_3)

Running the original script gives the following results:

Serial crawler time: 21.60699963569641
2-process crawler time: 11.561000347137451
4-process crawler time: 5.212999820709228

As the numbers show, the multiprocess crawler delivers a clear speedup: two processes roughly halve the crawl time, and four processes cut it to about a quarter, which is close to linear scaling for this workload.
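The speedup scales so cleanly because re_scraper() spends most of its time waiting on HTTP responses: the work is I/O-bound, not CPU-bound. For this kind of workload a thread pool often performs comparably with less startup overhead, since the GIL is released while waiting on the network. As a sketch only (no timing claims), multiprocessing.dummy exposes the same Pool API backed by threads, so swapping the import is the only change:

from multiprocessing.dummy import Pool as ThreadPool  # same Pool API, backed by threads
import time

start = time.time()
pool = ThreadPool(4)            # four worker threads instead of processes
pool.map(re_scraper, urls)      # identical call; re_scraper is network-bound
pool.close()
pool.join()
print("4-thread crawler time:", time.time() - start)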
