Reference: https://docs.scrapy.org/en/latest/topics/item-pipeline.html#topics-item-pipeline
Architecture: an item pipeline is just a chain of simple classes that process Items. Each component takes an Item as input and returns an Item, and several such classes chained together form the pipeline.
Typical usage: a pipeline component is a plain Python class that implements the following methods.
process_item(self, item, spider)
Required. It must return a dict, an Item, or a Twisted Deferred, or raise a DropItem exception; raising DropItem stops the item from travelling any further through the pipeline.
Parameters: item (Item object or a dict) – the item scraped; spider (Spider object) – the spider which scraped the item
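The Deferred case above is worth a quick illustration. A minimal sketch, assuming the item exposes a 'name' field and that the cleanup step is blocking work worth pushing off the reactor thread (the class and field names are made up, not from the Scrapy docs):
from twisted.internet import threads

class AsyncCleanupPipeline(object):
    # Sketch only: because process_item returns a Deferred, Scrapy waits for it
    # to fire before handing the item to the next pipeline component.
    def process_item(self, item, spider):
        # Run the (assumed) blocking cleanup in Twisted's thread pool.
        return threads.deferToThread(self._clean, item)

    def _clean(self, item):
        # Hypothetical field: strip surrounding whitespace from 'name'.
        item['name'] = item.get('name', '').strip()
        return item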
Optional methods:
open_spider(self, spider)
Called when the spider is opened; a good place for initialization work, such as opening resources or writing a log line.
Parameters: spider (Spider object) – the spider which was opened
close_spider(self, spider)
Called when the spider is closed; used for cleanup work.
Parameters: spider (Spider object) – the spider which was closed
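A minimal sketch that uses both hooks together; the counter and the log messages are illustrative, not taken from the original post:
class ItemCountPipeline(object):
    def open_spider(self, spider):
        # Initialization when the spider starts.
        self.count = 0
        spider.logger.info("ItemCountPipeline: spider %s opened", spider.name)

    def close_spider(self, spider):
        # Cleanup / summary when the spider finishes.
        spider.logger.info("ItemCountPipeline: processed %d items", self.count)

    def process_item(self, item, spider):
        self.count += 1
        return item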
from_crawler(cls, crawler)
A class method. When present, Scrapy calls it to create the pipeline instance from a Crawler; through the crawler the pipeline can reach all core Scrapy components (settings, stats, signals, and so on), which is how it hooks into the rest of the framework.
Parameters: crawler (Crawler object) – crawler that uses this pipeline
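A sketch of what that access buys you, pulling a threshold from the settings and bumping a stats counter; the PRICE_THRESHOLD setting name is hypothetical:
class StatsAwarePipeline(object):
    def __init__(self, stats, threshold):
        self.stats = stats
        self.threshold = threshold

    @classmethod
    def from_crawler(cls, crawler):
        # The crawler exposes settings, stats, signals, the engine, and so on.
        return cls(
            stats=crawler.stats,
            threshold=crawler.settings.getfloat('PRICE_THRESHOLD', 0.0),  # hypothetical setting
        )

    def process_item(self, item, spider):
        self.stats.inc_value('pipeline/items_seen')
        if item.get('price') and item['price'] < self.threshold:
            self.stats.inc_value('pipeline/items_below_threshold')
        return item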
from scrapy.exceptions import DropItem

class PricePipeline(object):
    vat_factor = 1.15

    def process_item(self, item, spider):
        if item.get('price'):
            # Add VAT if the scraped price excludes it.
            if item.get('price_excludes_vat'):
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            # Dropping the item stops it from reaching later pipeline components.
            raise DropItem("Missing price in %s" % item)
The PricePipeline above adds a small amount of logic to massage the item data: it adjusts the price for VAT and drops any item that has no price. The next example writes every scraped item to a JSON Lines file:
import json

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # Open the output file once, when the spider starts.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # One JSON object per line (JSON Lines format).
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
The following pipeline stores items in MongoDB, getting the connection details through from_crawler:
import pymongo

class MongoPipeline(object):
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Pull the connection details from the project settings.
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(dict(item))
        return item
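For the MongoPipeline above to work, the two settings it reads must exist in settings.py; the values below are placeholders:
# settings.py
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'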
Once an item pipeline is written, it must be enabled in the ITEM_PIPELINES setting before it takes effect. The integer assigned to each class determines the order in which the components run: items flow from lower numbers to higher, and the values are customarily kept in the 0-1000 range. Example:
ITEM_PIPELINES = {
'myproject.pipelines.PricePipeline': 300,
'myproject.pipelines.JsonWriterPipeline': 800,
}
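If a pipeline should only run for certain spiders, ITEM_PIPELINES can also be overridden per spider through the custom_settings class attribute; the spider below is a made-up example:
import scrapy

class ProductSpider(scrapy.Spider):
    name = 'products'  # hypothetical spider
    custom_settings = {
        'ITEM_PIPELINES': {
            'myproject.pipelines.PricePipeline': 300,
        },
    }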