Downloading Images with Scrapy: Bing Wallpapers

First, a little preamble:
I've been learning the Scrapy framework recently, and without any Python programming background it has been quite a struggle!
I found a free video course online, watched it, and jumped straight in.
The tutorial was "Python最火爬虫框架Scrapy入门与实践", and it really was well taught!

1. Creating the project
First, create a crawler project with the Scrapy command-line tool:

scrapy startproject bingScrapy

Then change into the bingScrapy directory and generate the spider file:

cd bingScrapy
scrapy genspider bingScrapy ioliu.cn

I used https://bing.ioliu.cn/ as the source for the Bing wallpapers; many thanks to its author for providing such an excellent project.

2. Writing the spider

bingScrapy\spiders\bingScrapy.py

# -*- coding: utf-8 -*-
import scrapy

from bingScrapy.items import BingscrapyItem


class BingscrapySpider(scrapy.Spider):
    name = 'bingScrapy'
    allowed_domains = ['ioliu.cn']
    start_urls = ['https://bing.ioliu.cn/?p=1']

    def parse(self, response):
        container = response.xpath("//div[@class='container']/div[@class='item']/div")
        # follow the "next page" link until the pagination runs out
        next_page = response.xpath("//div[@class='page']/a[2]/@href").extract_first()
        if next_page:
            yield scrapy.Request('https://bing.ioliu.cn' + next_page, callback=self.parse)
        for i in container:
            item = BingscrapyItem()
            item['time'] = i.xpath(".//div[@class='description']/p[1]/em[1]/text()").extract_first()
            item['name'] = i.xpath(".//div[@class='description']/h3/text()").extract_first()
            item['image_urls'] = i.xpath(".//img/@src").extract()
            yield item
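One note on the pagination request: concatenating 'https://bing.ioliu.cn' + next_page only works when the extracted link is a root-relative path such as /?p=2 (the form this site's pagination appears to use). Scrapy's response.urljoin(next_page) handles every case; a minimal sketch of the same idea with the standard library:

```python
from urllib.parse import urljoin

base = 'https://bing.ioliu.cn/?p=1'
# a root-relative pagination link (assumed form of the site's "next" href)
print(urljoin(base, '/?p=2'))   # https://bing.ioliu.cn/?p=2
# urljoin leaves already-absolute links untouched
print(urljoin(base, 'https://bing.ioliu.cn/?p=3'))  # https://bing.ioliu.cn/?p=3
```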

bingScrapy\items.py

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class BingscrapyItem(scrapy.Item):
    # fields populated by the spider
    name = scrapy.Field()
    time = scrapy.Field()
    image_urls = scrapy.Field()

bingScrapy\pipelines.py

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import re

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class BingscrapyPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # request every image URL; if a single URL were passed instead of a
        # list, you could yield one Request directly without the loop
        for image_url in item['image_urls']:
            # meta carries the item from the spider into file_path below
            yield scrapy.Request(image_url, meta={'item': item})

    # rename the files; without overriding this method each image is saved
    # under its hash, i.e. an unreadable jumble of characters
    def file_path(self, request, response=None, info=None):
        item = request.meta['item']
        name = item['name']
        # strip characters that are illegal in Windows file names; skip this
        # step and you get garbled names or failed downloads
        name = re.sub(r'[\/\\\:\*\?\"\<\>\|]', '_', name)
        # the folder component is what sorts the images into subdirectories
        folder = item['time']
        filename = u'full/{0}/{1}{2}'.format(folder, name, '.jpg')
        return filename
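The re.sub call above replaces every character that Windows forbids in file names with an underscore. A quick standalone check of that same pattern (the sample title is made up for illustration):

```python
import re

def sanitize(name):
    # replace characters Windows forbids in file names with an underscore
    return re.sub(r'[\/\\\:\*\?\"\<\>\|]', '_', name)

print(sanitize('Winter: Lake/Forest?'))  # Winter_ Lake_Forest_
```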

bingScrapy\settings.py

ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 3
IMAGES_STORE = r'D:\bing'  # raw string, so '\b' is not read as an escape
ITEM_PIPELINES = {
   'bingScrapy.pipelines.BingscrapyPipeline': 300,
}
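One pitfall with the IMAGES_STORE path: in a regular Python string literal, '\b' is an escape sequence (the backspace character), not a backslash followed by b, so the path silently comes out wrong. Use a raw string (or forward slashes) instead:

```python
# '\b' in a regular string literal is the backspace character \x08
plain = 'D:\bing'
raw = r'D:\bing'
print(len(plain), len(raw))  # 6 7
print('\x08' in plain)       # True
```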

And with that, the crawler is basically complete!

3. Running it

scrapy crawl bingScrapy

That's the result of my recent studies. The code may not be the most rigorous or efficient, but for a non-professional I'm fairly happy with it!
One reminder: when you hit errors while running the code, Google is your friend!
