Learning Scrapy, Part 6: Item Pipelines

Reference: https://docs.scrapy.org/en/latest/topics/item-pipeline.html#topics-item-pipeline

Architecture diagram:

[Figure: Scrapy architecture overview, showing where the item pipeline sits in the data flow]

Item Pipeline 

An item pipeline is just a set of simple classes that process items: each component takes an item as input and returns an item as output, and several such classes chained together form the pipeline.

Typical uses:

  • Cleansing data
  • Validating scraped data
  • Dropping duplicate items (see the sketch right after this list)
  • Storing items in a database
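
As a quick illustration of the duplicate-dropping case, here is a minimal sketch modelled on the DuplicatesPipeline example in the official docs; it assumes every item carries a unique 'id' field:

from scrapy.exceptions import DropItem

class DuplicatesPipeline(object):

    def __init__(self):
        # ids of items already seen during this crawl
        self.ids_seen = set()

    def process_item(self, item, spider):
        # assumption: every item has an 'id' field that uniquely identifies it
        if item['id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        self.ids_seen.add(item['id'])
        return item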

Writing your own item pipeline

process_item(self, item, spider)

This method must be implemented. It has to return a dict or an Item (or a Twisted Deferred, see the sketch after the parameter list below), or raise a DropItem exception; raising DropItem stops the item from travelling any further through the pipeline.

Parameters:
  • item (Item object or a dict) – the item scraped
  • spider (Spider object) – the spider which scraped the item
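
Returning a Deferred is useful when process_item has to do blocking work without stalling the rest of the crawl. A minimal sketch, assuming items are plain dicts; the pipeline name, the _heavy_work helper and the 'processed' key are made up for illustration:

from twisted.internet import threads

class BlockingWorkPipeline(object):

    def process_item(self, item, spider):
        # run the blocking step in Twisted's thread pool; the returned
        # Deferred fires with the (possibly modified) item
        return threads.deferToThread(self._heavy_work, item)

    def _heavy_work(self, item):
        # placeholder for a slow, blocking operation
        item['processed'] = True
        return item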

Optionally, you can also implement the following methods:

open_spider(self, spider)

Called when the spider is opened. It is a good place for initialization work, such as opening files or writing a log message (the JsonWriterPipeline example below uses it to open its output file).

Parameters: spider (Spider object) – the spider which was opened

close_spider(self, spider)

Called when the spider is closed; use it for cleanup work such as closing files or connections.

Parameters: spider (Spider object) – the spider which was closed

from_crawler(cls, crawler)

If present, this classmethod is called to create the pipeline instance from a Crawler. Through the crawler you can reach all core components of the Scrapy architecture, such as settings and signals, which lets the pipeline hook its own functionality into Scrapy (a small sketch follows the parameter description below; the MongoPipeline example later also uses it to read settings).

Parameters: crawler (Crawler object) – crawler that uses this pipeline
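
To make "access to all components" concrete, here is a minimal sketch of a pipeline whose from_crawler reads a setting and hooks into the spider_closed signal; the pipeline name and the PIPELINE_VERBOSE setting are made up for illustration:

from scrapy import signals

class SignalAwarePipeline(object):

    def __init__(self, verbose):
        self.verbose = verbose
        self.count = 0

    @classmethod
    def from_crawler(cls, crawler):
        # the crawler exposes settings, signals, stats and other core objects
        pipeline = cls(verbose=crawler.settings.getbool('PIPELINE_VERBOSE', False))
        crawler.signals.connect(pipeline.spider_closed, signal=signals.spider_closed)
        return pipeline

    def spider_closed(self, spider):
        if self.verbose:
            spider.logger.info("pipeline processed %d items", self.count)

    def process_item(self, item, spider):
        self.count += 1
        return item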

Item pipeline example

from scrapy.exceptions import DropItem

class PricePipeline(object):

    # multiplier applied to prices that were scraped without VAT
    vat_factor = 1.15

    def process_item(self, item, spider):
        if item.get('price'):
            if item.get('price_excludes_vat'):
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            # items with no price at all are dropped from the pipeline
            raise DropItem("Missing price in %s" % item)

This example just adds a bit of data-shaping logic: it adjusts the price for items that do not include VAT and drops items that are missing a price altogether.

Write items to a JSON file

import json

class JsonWriterPipeline(object):

    def open_spider(self, spider):
        # open the output file once, when the spider starts
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # write each item as one JSON object per line (JSON Lines format)
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

Write items to MongoDB

import pymongo

class MongoPipeline(object):

    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # read the connection details from the project settings
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        # connect once per spider run
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert every scraped item into the configured collection
        self.db[self.collection_name].insert_one(dict(item))
        return item

Activating an Item Pipeline component

Once your item pipeline is written, it needs to be enabled before it takes effect. This is done through the ITEM_PIPELINES setting; the integer assigned to each class determines the order in which pipelines run (lower values run first, and values conventionally stay within the 0-1000 range). For example:

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}
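
The setting above applies to every spider in the project. If a pipeline should only run for one spider, one option is to override the setting per spider through the custom_settings class attribute; a minimal sketch (the spider name and module path are made up for illustration):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    # per-spider overrides take precedence over the project-wide settings.py
    custom_settings = {
        'ITEM_PIPELINES': {
            'myproject.pipelines.JsonWriterPipeline': 800,
        },
    }

    def parse(self, response):
        pass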

 
