Crawling Doutula Memes with Python Multithreading

1. Technologies used:

  • lxml: parse the HTML pages

  • Requests library: fetch the page content

  • re: strip illegal characters from file names

  • os: build the file names (split out the extension)

  • Queue: thread-safe coordination between the threads

  • urllib: download the collected images (a quick sketch of these calls follows below)
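
As a quick orientation (my own sketch, not part of the original write-up), here is roughly how each of these libraries is used; the inline HTML snippet and the example.com URL are made-up placeholders standing in for a real listing page:

import os
import re
from lxml import etree

# requests.get(page_url, headers=...).text would fetch the live page;
# a tiny inline snippet stands in for it here, just to show the calls.
snippet = '<div><img data-original="http://example.com/funny.jpg" alt="funny? face!"></div>'
html = etree.HTML(snippet)                      # lxml: build a DOM tree from the HTML
img = html.xpath('//img')[0]                    # lxml: select nodes with XPath
img_url, alt = img.get('data-original'), img.get('alt')
name = re.sub(r'[\??\.,。!!*]', '', alt)         # re: strip characters illegal in Windows file names
filename = name + os.path.splitext(img_url)[1]  # os: reuse the URL's file extension
print(filename)                                 # -> funny face.jpg
# urllib's request.urlretrieve(img_url, filename) would then save the image to disk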

2. Design approach:

The multithreading here follows the producer-consumer pattern: producers parse the pages and collect the URL of every image on a page, while consumers download those images to disk, i.e. they do the I/O work. Five producers and five consumers are created. A minimal skeleton of the pattern is sketched below.
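
As a minimal sketch of that pattern (independent of the crawler, with integers standing in for image URLs), producers put work onto a shared queue.Queue and consumers take it off; the queue itself provides the thread-safe hand-off:

import threading
from queue import Queue, Empty

task_q = Queue()  # producers put work here, consumers take it out

def producer(n):
    # stands in for parsing a page and collecting image URLs
    for i in range(n):
        task_q.put(i)

def consumer():
    while True:
        try:
            item = task_q.get(timeout=1)  # wait briefly, then give up
        except Empty:
            break                         # nothing left to do
        print('handled', item)            # stands in for the download / IO step

producers = [threading.Thread(target=producer, args=(10,)) for _ in range(5)]
consumers = [threading.Thread(target=consumer) for _ in range(5)]
for t in producers + consumers:
    t.start()
for t in producers + consumers:
    t.join()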

3. Demo:

import requests
from lxml import etree
import os
import re
from urllib import request
from queue import Queue, Empty
import threading
HEADERS = {
    'User-Agent':
        'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Mobile Safari/537.36'
}
class Producer(threading.Thread):
    def __init__(self, page_queue, img_queue, *args, **kwargs):
        super(Producer, self).__init__(*args, **kwargs)
        self.pq = page_queue
        self.iq = img_queue
    def run(self):
        while True:
            try:
                # Non-blocking get: avoids hanging forever if another producer
                # grabs the last page between an empty() check and a get()
                url = self.pq.get(block=False)
            except Empty:
                break
            self.parse_page(url)
    def getHtml(self, url):
        r = requests.get(url, headers=HEADERS)
        r.encoding = r.apparent_encoding
        return r.text
    def parse_page(self, url):
        text = self.getHtml(url)
        html = etree.HTML(text)
        imgs = html.xpath('//div[@class="page-content text-center"]//img[@class!="gif"]')
        imgurls = []
        alts = []
        for img in imgs:
            img_url = img.get('data-original')
            alt = img.get('alt')
            if img_url not in imgurls:
                imgurls.append(img_url)  # each URL appears twice on the page, so skip duplicates
            if alt not in alts:
                alts.append(alt)
        for value in zip(imgurls, alts):
            imgurl, alt = value
            alt1 = re.sub(r'[\??\.,。!!*]', '', alt)  # these characters are illegal in Windows file names, so strip them with re
            suffix = os.path.splitext(imgurl)[1]
            filename = alt1 + suffix
            self.iq.put((imgurl, filename))
class Consumer(threading.Thread):
    def __init__(self, page_queue, img_queue, *args, **kwargs):
        super(Consumer, self).__init__(*args, **kwargs)
        self.pq = page_queue
        self.iq = img_queue
    def run(self):
        while True:
            # Exit once every page has been parsed and every image downloaded
            if self.pq.empty() and self.iq.empty():
                break
            try:
                # A timeout guards against blocking forever if the image queue
                # drains right after the check above
                imgurl, filename = self.iq.get(timeout=3)
            except Empty:
                continue
            request.urlretrieve(imgurl, 'images/' + filename)
            print(filename + ' downloaded')
if __name__ == '__main__':
    os.makedirs('images', exist_ok=True)  # urlretrieve needs the target folder to exist
    page_queue = Queue(100)
    img_queue = Queue(1000)
    for i in range(1, 50):  # pages 1-49 of the listing
        url = 'http://www.doutula.com/photo/list/?page=' + str(i)
        page_queue.put(url)
    for x in range(5):
        t = Producer(page_queue, img_queue)
        t.start()
    for x in range(5):
        t = Consumer(page_queue, img_queue)
        t.start()
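
One caveat worth noting (my addition, not part of the original demo): deciding when a thread should exit by polling empty() is inherently racy. A cleaner shutdown, sketched below with a print standing in for the actual download, is to push one sentinel value per consumer once all the work has been queued, so consumers can use a plain blocking get():

import threading
from queue import Queue

SENTINEL = None  # special value that tells a consumer to stop

def consumer(q):
    while True:
        item = q.get()           # plain blocking get, no empty() race
        if item is SENTINEL:
            break
        imgurl, filename = item
        print('would download', imgurl, 'as', filename)  # stands in for request.urlretrieve

q = Queue()
workers = [threading.Thread(target=consumer, args=(q,)) for _ in range(5)]
for t in workers:
    t.start()
for i in range(20):
    q.put(('http://example.com/%d.jpg' % i, '%d.jpg' % i))
for _ in range(5):               # one sentinel per consumer
    q.put(SENTINEL)
for t in workers:
    t.join()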

Comparing this against a single-threaded crawl of the same pages, the speed-up is clearly noticeable.
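
One simple way to make that comparison (a sketch under my own assumptions, not from the original post) is to record the wall-clock time around each run and only read the clock once every thread has been joined:

import time

def timed(label, run):
    # run() should start the crawl and block until it finishes
    start = time.perf_counter()
    run()
    print('%s took %.1f s' % (label, time.perf_counter() - start))

# Hypothetical usage, assuming `threads` holds the started Producer/Consumer threads:
# timed('multi-threaded crawl', lambda: [t.join() for t in threads])
# The single-threaded baseline is the same parse/download loop run directly in main.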
