【python】Sharing a multi-threaded crawler that downloads meme images

Libraries needed: requests, lxml, os, threading, queue

A multi-threaded crawler can be several times faster than a single-threaded one, because downloading is mostly waiting on network I/O, so several requests can be in flight at the same time. A single thread is like one truck hauling goods back and forth, while multiple threads are several trucks hauling at once; the throughput is naturally different.
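As a rough illustration of the idea (the URL below is just a placeholder, not the meme site used later in this post), five threads issue their requests at once, so the total time is roughly that of the slowest download rather than the sum of all of them:

# Minimal sketch: several downloads run concurrently instead of one after another.
# The URL below is a placeholder used only for illustration.
import threading
import requests

urls = ['https://httpbin.org/image/png'] * 5

def fetch(u):
    r = requests.get(u, timeout=10)
    print(u, len(r.content), 'bytes')

threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all; elapsed time is close to the slowest request, not the sum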

The script can also be extended to save the images to the desktop (to be added tomorrow).
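For reference, a minimal sketch of that desktop-saving idea, assuming the desktop lives at ~/Desktop (the folder name biaoqingbao is just an example and is not part of the script below):

# Sketch only: create a folder on the desktop and save files into it.
# Assumes a standard ~/Desktop path; 'biaoqingbao' is an arbitrary example folder name.
import os

desktop = os.path.join(os.path.expanduser('~'), 'Desktop')
save_dir = os.path.join(desktop, 'biaoqingbao')
os.makedirs(save_dir, exist_ok=True)
# each image would then be written with open(os.path.join(save_dir, filename), 'wb')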

The script crawls efficiently with a pool of five worker threads fed by thread-safe queues; an alternative using Python's built-in ThreadPoolExecutor is sketched after the script.

import requests
from lxml import etree
import os
import threading
from queue import Queue

b = 0                       # fallback counter for images that have no alt text
b_lock = threading.Lock()   # protects b, since several download threads update it
anquanquurls = Queue(100)   # thread-safe queue of image URLs
anquanqunames = Queue(100)  # thread-safe queue of image names (alt text)
yeshu = int(input('Enter the number of pages to crawl: '))
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    'Referer': 'https://www.doutula.com/',
}

def geturl():
    """Producer: scrape every list page and push image URLs and names onto the queues."""
    for i in range(yeshu):
        urll = 'https://www.pkdoutu.com/article/list/?page={}'.format(i + 1)
        print('Collecting all image URLs on page {}'.format(i + 1))
        response = requests.get(urll, headers=headers)
        content = response.content.decode('utf8')
        html = etree.HTML(content)
        urlss = html.xpath('//div/div/img/@data-backup')
        names = html.xpath('//div[@class="random_article"]/div/img/@alt')
        for url in urlss:
            anquanquurls.put(url)
        for name in names:
            anquanqunames.put(name)
    # One sentinel per worker so the download threads know when to stop.
    for _ in range(5):
        anquanquurls.put(None)
        anquanqunames.put(None)

def xiazai():
    """Consumer: take a URL and a name off the queues, download the image, save it."""
    global b
    while True:
        url = anquanquurls.get()
        name = anquanqunames.get()
        if url is None or name is None:   # producer is done, exit this thread
            break
        hou = os.path.splitext(url)[1]    # keep the original file extension
        response = requests.get(url, headers=headers)
        content = response.content
        if len(name) == 0:                # no alt text: name the file with the counter
            with b_lock:
                path = "{}{}".format(b, hou)
                b += 1
        else:
            path = "{}{}".format(name, hou)
        with open(path, mode='wb') as f:
            f.write(content)
        print('{}  downloaded'.format(path))

def duoxiancheng():
    """Start one producer thread and five downloader threads."""
    t = threading.Thread(target=geturl)
    t.start()
    for i in range(5):
        t = threading.Thread(target=xiazai)
        t.start()

duoxiancheng()
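As mentioned above, the download stage can also be written with Python's standard-library thread pool instead of manual threads and sentinels. A minimal sketch, reusing the headers dict from the script and assuming pairs is a list of (url, name) tuples collected the same way geturl does it:

# Sketch of the download stage with concurrent.futures.ThreadPoolExecutor.
# Assumes `pairs` holds (url, name) tuples gathered by the scraping step above.
import os
import requests
from concurrent.futures import ThreadPoolExecutor

def download(pair):
    url, name = pair
    ext = os.path.splitext(url)[1]
    data = requests.get(url, headers=headers, timeout=10).content
    path = '{}{}'.format(name, ext) if name else os.path.basename(url)
    with open(path, 'wb') as f:
        f.write(data)
    return path

with ThreadPoolExecutor(max_workers=5) as pool:
    for path in pool.map(download, pairs):
        print(path, 'downloaded')

The executor starts and stops its own worker threads, so no sentinel values or manual joins are needed.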
