Is it faster to scrape videos with Python using multithreading or coroutines?

I've recently been learning Python web scraping. After getting a basic grasp of multithreading and coroutines and following some online tutorials to write a few small scrapers, I could scrape text, images, and simple videos, and I suddenly wanted to test whether downloading videos is faster with multithreading or with coroutines. So I set up a simple experiment: scrape one page of videos from 糗事百科 (qiushibaike.com), about 25 videos, once with a single thread, once with multithreading, and once with coroutines, and compare the results.

The code is below. The core logic of the three versions is largely the same. I'm a beginner, so the code is a bit rough; please go easy on me.

Single-threaded:

 import requests
 from lxml import etree
 import time

 def getVideo(url):
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers = headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         video_src_list = div.xpath('./div/video/source/@src')
         # print(video_src)
         for video_src in video_src_list:
             name = video_src.rsplit("/",1)[1]
             # print(name)
             with open(f"video/{name}",mode='wb') as f:
                 f.write(requests.get("http:"+video_src,headers = headers).content)
                 print(f"{name}下载完成!")
 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     getVideo(url) # single-threaded download
     print(time.time()-t1)
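
One small improvement worth mentioning for the single-threaded version: requests can also download in streaming mode, so a large video never has to sit entirely in memory before being written to disk. A minimal sketch of that idea (the downloadStream name and the 64 KB chunk size are illustrative choices, not part of the original code):

 import requests

 def downloadStream(video_src, name, headers):
     # stream=True fetches the response body lazily instead of loading it all at once
     with requests.get("http:" + video_src, headers=headers, stream=True) as resp:
         with open(f"video/{name}", mode='wb') as f:
             # write the video to disk in 64 KB chunks
             for chunk in resp.iter_content(chunk_size=64 * 1024):
                 f.write(chunk)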

Multithreaded:

 
 import requests
 from lxml import etree
 from concurrent.futures import ThreadPoolExecutor
 import time

 def getVideoSrcList(url):
     video_src_list = []
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers = headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         # accumulate matches instead of overwriting, in case more than one div matches
         video_src_list += div.xpath('./div/video/source/@src')
     return video_src_list
 def getVideo(url):
     name = url.rsplit("/",1)[1]
     with open(f"video/{name}",mode='wb') as f:
         f.write(requests.get("http:" + url).content)
         print(f"{name}下载完成!")
 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     with ThreadPoolExecutor(50) as t: # a thread pool with 50 worker threads
         for src in getVideoSrcList(url):
             t.submit(getVideo, url=src) # submit each download to the pool
     print("over")
     print(time.time()-t1)
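
A side note on the thread pool: submit returns a Future, so you can also hold on to the futures and surface any exception raised inside a worker thread instead of letting it vanish silently. A small sketch of that pattern, reusing getVideo and getVideoSrcList from above (downloadAll is just an illustrative name):

 from concurrent.futures import ThreadPoolExecutor, as_completed

 def downloadAll(url):
     with ThreadPoolExecutor(50) as t:
         # keep the futures so errors can be checked afterwards
         futures = [t.submit(getVideo, url=src) for src in getVideoSrcList(url)]
         for fut in as_completed(futures):
             # result() re-raises any exception that occurred in the worker thread
             fut.result()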

Coroutines:

 import asyncio
 import aiohttp
 import aiofiles
 import time
 import requests
 from lxml import etree
 async def getVideo(url):
     tasks = []
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers = headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         video_src_list = div.xpath('./div/video/source/@src')
         for video_src in video_src_list:
             name = video_src.rsplit("/",1)[1]
             # queue up an asynchronous download task
             tasks.append(asyncio.create_task(download(name,video_src)))
     # wait for all queued downloads to finish (guard against an empty task list)
     if tasks:
         await asyncio.wait(tasks)

 async def download(name,src):
     async with aiohttp.ClientSession() as session:
         # the src string comes without the "http:" scheme, so prepend it
         async with session.get("http:"+ src) as reqs:
             async with aiofiles.open(f"video/{name}",mode='wb') as f:
                 # save the video asynchronously; note that in aiohttp reqs.content is a
                 # stream object rather than bytes (using it directly raises an error),
                 # so the raw bytes come from await reqs.read()
                 await f.write(await reqs.read())
 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     # asyncio.run(getVideo(url))
     loop = asyncio.get_event_loop()
     loop.run_until_complete(getVideo(url))
     loop.close()
     print(time.time()-t1)
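
Two things stand out in the coroutine version: it still fetches the listing page with blocking requests, and it opens a brand-new ClientSession for every single video. A commonly recommended aiohttp pattern is to share one session across all downloads and wait on them with asyncio.gather; a sketch of that idea (downloadAll and the 10-task Semaphore limit are illustrative assumptions, not part of the original code):

 import asyncio
 import aiohttp
 import aiofiles

 async def downloadAll(src_list):
     sem = asyncio.Semaphore(10)  # cap concurrent downloads at 10 (arbitrary limit)
     async with aiohttp.ClientSession() as session:  # one session shared by all downloads
         async def fetchOne(src):
             name = src.rsplit("/", 1)[1]
             async with sem:
                 async with session.get("http:" + src) as resp:
                     async with aiofiles.open(f"video/{name}", mode='wb') as f:
                         await f.write(await resp.read())
         # gather schedules every download concurrently and waits for all of them
         await asyncio.gather(*(fetchOne(src) for src in src_list))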

Writing the coroutine version involved quite a few pitfalls and took a fair bit of digging before it finally ran end to end; it really wasn't easy. I timed each version in the code, and the result was that multithreading took the least time, coroutines came second, and the single-threaded version took the longest (the per-video ClientSession noted above may be part of why the coroutine version trails the 50-thread pool in this test). The timings are shown below:

Multithreading time:

(screenshot: timing output of the multithreaded run)

Coroutine time:

(screenshot: timing output of the coroutine run)

Single-threaded time:

(screenshot: timing output of the single-threaded run)

Feedback and pointers are very welcome in the comments, or feel free to message me directly!
