Scrapy Basics: Item Pipeline

I. Introduction to the Item Pipeline

Every item scraped by a spider is sent to the Item Pipeline. According to the priority values assigned in the ITEM_PIPELINES setting, each pipeline component processes the item in turn and decides whether to drop it (so it receives no further processing) or pass it on to the next pipeline component. The main uses of the Item Pipeline are:

1. Cleaning data (see the sketch after this list)

2. Validating data (e.g. checking that certain item fields are not empty)

3. Filtering duplicates

4. Storing scraped data
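
As a minimal sketch of use 1 (data cleaning), the pipeline below strips stray whitespace from a scraped field. The name field is a hypothetical example field, not something Scrapy defines; adjust it to your own item schema:

from itemadapter import ItemAdapter

class StripWhitespacePipeline:
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        # 'name' is a hypothetical field used only for illustration.
        if adapter.get('name'):
            adapter['name'] = adapter['name'].strip()
        return item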

II. Item Pipeline Methods

1. process_item

Signature: process_item(self, item, spider)
Parameters:
item (item object) -- the item being processed
spider (Spider object) -- the spider that scraped the item
Usage: Every item pipeline component must implement this method, and it is called for each item. It must either return an item object (or an instance of any subclass) or raise a DropItem exception; dropped items are not processed by any further pipeline components.

2. open_spider

Signature: open_spider(self, spider)
Parameter: spider (Spider object) -- the spider that was opened
Usage: Called when the spider is opened.

3. close_spider

Signature: close_spider(self, spider)
Parameter: spider (Spider object) -- the spider that was closed
Usage: Called when the spider is closed.

4. from_crawler

Signature: from_crawler(cls, crawler)
Parameter: crawler (Crawler object) -- the crawler that uses this pipeline
Usage: If defined, this class method is called to create a pipeline instance from a Crawler. The arguments it passes to the class when constructing the returned instance are received by __init__, which is how a pipeline typically pulls its configuration out of the crawler settings.
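
A minimal sketch tying these four methods together (SOME_SETTING is a hypothetical setting name used only for illustration):

class ExamplePipeline:
    def __init__(self, some_setting):
        # Receives whatever from_crawler passed to cls(...).
        self.some_setting = some_setting

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this to build the pipeline; the keyword
        # argument below is forwarded to __init__.
        return cls(some_setting=crawler.settings.get('SOME_SETTING'))

    def open_spider(self, spider):
        pass  # acquire resources (files, connections) here

    def close_spider(self, spider):
        pass  # release them here

    def process_item(self, item, spider):
        return item  # or raise DropItem(...)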

III. Item Pipeline Examples

1. Validating data: price validation (drop items whose price field is empty)

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class PricePipeline:
    vat_factor = 1.15

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter.get('price'):
            # Add VAT to prices that were scraped without it.
            if adapter.get('price_excludes_vat'):
                adapter['price'] = adapter['price'] * self.vat_factor
            return item
        else:
            # Items without a price are discarded.
            raise DropItem(f"Missing price in {item}")

2. Storing data: writing items to a JSON file

import json

from itemadapter import ItemAdapter

class JsonWriterPipeline:
    def open_spider(self, spider):
        # Open the output file once, when the spider starts.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # Serialize each item as one line of JSON (JSON Lines format).
        line = json.dumps(ItemAdapter(item).asdict()) + "\n"
        self.file.write(line)
        return item
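
This example exists mainly to show how to write a pipeline; if the goal is simply to persist items as JSON, Scrapy's built-in Feed exports are usually the better tool. For instance, the FEEDS setting (available since Scrapy 2.1) can do the same job without any pipeline code:

FEEDS = {
    'items.jl': {'format': 'jsonlines'},
}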

3. Storing data: writing items to MongoDB

import pymongo

from itemadapter import ItemAdapter

class MongoPipeline:
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read the connection configuration from the settings.
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items'),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(ItemAdapter(item).asdict())
        return item
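
For this pipeline to work, the connection settings must be defined in settings.py. The values below are placeholders for a local MongoDB instance; MONGO_DATABASE falls back to 'items' if omitted:

MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'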

4. Storing data: taking a screenshot of the item

This pipeline makes a request to a locally running Splash instance (a JavaScript rendering service) to render a screenshot of the URL stored in the item. The pipeline is written with coroutine syntax: after the response to the request has been downloaded, the screenshot is saved to a file and the filename is added to the item.

import hashlib
from urllib.parse import quote
import scrapy
from itemadapter import ItemAdapter
from scrapy.utils.defer import maybe_deferred_to_future
class ScreenshotPipeline:

    SPLASH_URL = "http://localhost:8050/render.png?url={}"
    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        encoded_item_url = quote(adapter["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        response = await maybe_deferred_to_future(spider.crawler.engine.download(request, spider))
        if response.status != 200:
            # Error happened, return item.
            return item
        # Save screenshot to file, filename will be hash of url.
        url = adapter["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = f"{url_hash}.png"
        with open(filename, "wb") as f:
            f.write(response.body)
        # Store filename in item.
        adapter["screenshot_filename"] = filename
        return item
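
This example assumes a Splash instance is listening on localhost:8050 and a Scrapy version recent enough to provide maybe_deferred_to_future (added in Scrapy 2.6). Assuming Docker is available, a local Splash instance can be started with `docker run -p 8050:8050 scrapinghub/splash`.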

5. Filtering duplicates: a duplicate filter

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class DuplicatesPipeline:
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        # Assumes each item carries a unique 'id' field.
        adapter = ItemAdapter(item)
        if adapter['id'] in self.ids_seen:
            raise DropItem(f"Duplicate item found: {item!r}")
        else:
            self.ids_seen.add(adapter['id'])
            return item
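
Note that ids_seen is held in memory for the lifetime of the spider, so for very large or long-running crawls you may want to back the filter with an on-disk store or a database instead.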

IV. Activating an Item Pipeline

To activate an Item Pipeline component, add its class to the ITEM_PIPELINES setting. The integer value assigned to each class in this setting determines the order in which the components run: items pass through pipelines from lower to higher values, and it is customary to define these numbers in the 0-1000 range. For example:

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}
