Python Web Scraping Tutorial: Downloading Free and Paid Assets from ibaotu.com [Source Code Included]

Most of you probably know ibaotu.com (包图网): it hosts a huge library of design assets and is very handy to use, but unfortunately it's expensive. Today I'll show you how to scrape these assets with Python and save them locally!

To scrape a website's content, we need to answer a few questions first:

1. How do we follow the site's next-page links?

2. Is the target resource static or dynamic (video, image, etc.)?

3. How is the site's data structured in the page markup?
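As a quick illustration of questions 1 and 3, the sketch below runs the same XPath queries the full script uses, but against a small inline HTML fragment that mimics the structure I observed on ibaotu.com's listing pages (an assumption -- the real markup may change at any time):

```python
# Sketch: the XPath queries for video sources, titles, and the next-page
# link, tested against a hand-written fragment that mimics ibaotu.com's
# listing-page markup (an assumption, not the live page).
from lxml import etree

sample = """
<html><body>
  <div class="video-play"><video src="//video.example.com/a.mp4"></video></div>
  <span class="video-title">Sample clip</span>
  <a class="next" href="//ibaotu.com/shipin/7-0-0-0-2-1.html">next</a>
</body></html>
"""

html = etree.HTML(sample)

# Question 3: each video's source URL and title live in predictable nodes
srcs = html.xpath('//div[@class="video-play"]/video/@src')
titles = html.xpath('//span[@class="video-title"]/text()')

# Question 1: pagination hangs off an <a class="next"> element
next_href = html.xpath('//a[@class="next"]/@href')

print(srcs)       # ['//video.example.com/a.mp4']
print(titles)     # ['Sample clip']
print(next_href)  # ['//ibaotu.com/shipin/7-0-0-0-2-1.html']
```

Note that the `src` and `href` values are protocol-relative (they start with `//`), which is why the full script prefixes them with `"http:"` before requesting them.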

[Image 1]

The full source code:


import requests
from lxml import etree
import threading


class Spider(object):
    def __init__(self):
        self.headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"}
        self.offset = 1

    def start_work(self, url):
        print("Scraping page %d ..." % self.offset)
        # Note: self.offset is shared by all threads, so the count is only approximate.
        self.offset += 1
        response = requests.get(url=url, headers=self.headers)
        html = etree.HTML(response.content.decode())

        # Extract every video URL and title on this page
        video_src = html.xpath('//div[@class="video-play"]/video/@src')
        video_title = html.xpath('//span[@class="video-title"]/text()')
        self.write_file(video_src, video_title)

        # The last page has no usable "next" link, so stop there
        next_hrefs = html.xpath('//a[@class="next"]/@href')
        if not next_hrefs or not next_hrefs[0]:
            return
        # Recurse into the next page
        self.start_work("http:" + next_hrefs[0])

    def write_file(self, video_src, video_title):
        for src, title in zip(video_src, video_title):
            response = requests.get("http:" + src, headers=self.headers)
            # Strip "/" from the title so it is a valid file name
            file_name = "".join((title + ".mp4").split("/"))
            print("Downloading %s" % file_name)
            with open('E://python//demo//mp4//' + file_name, "wb") as f:
                f.write(response.content)


if __name__ == "__main__":
    spider = Spider()
    # Launch one thread per starting listing page
    for i in range(0, 3):
        # spider.start_work(url="https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html")
        t = threading.Thread(target=spider.start_work, args=("https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html",))
        t.start()
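One caveat: the three threads share a single Spider instance, so `self.offset` is incremented from several threads at once and the printed page numbers are only approximate. A minimal sketch of one way to fix that (a hypothetical helper, not part of the original script) is to guard the counter with a `threading.Lock`:

```python
# Thread-safe counter sketch -- a hypothetical helper, not part of the
# original script, shown only to illustrate guarding shared state with a Lock.
import threading

class Counter(object):
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Hold the lock so concurrent increments never interleave
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = Counter()

def worker():
    for _ in range(1000):
        counter.increment()

# Three threads hammering the same counter, like the three scraper threads
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)  # 3000 -- every increment from all three threads counted
```

The spider could use such a counter in place of the bare `self.offset += 1` if accurate page numbering matters.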

Results

[Image 2]

