Fixing out-of-order chapters when downloading a novel with Scrapy

Because Scrapy downloads pages asynchronously, the chapters of a novel arrive and get processed out of order.

The chapter order can be attached to each item and preserved as follows:

After parsing the index page to get every chapter's information (title, URL, and position), pass a keyword argument 'order' to the callback parse_item() via Request()'s cb_kwargs; it records the chapter's position in the table of contents.
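Note that cb_kwargs was added in Scrapy 1.7. On older versions the same position can travel in Request.meta instead; a minimal sketch of that alternative (the spider name here is hypothetical, and this is not the code used below):

# Scrapy < 1.7 has no cb_kwargs; carry the position in Request.meta instead
import scrapy

class OldStyleSpider(scrapy.Spider):
    name = 'old_style'
    start_urls = ['https://www.biquge.biz/0_844/']

    def parse(self, response):
        for i, a in enumerate(response.xpath('//div[@id="list"]//dd/a')):
            url = response.urljoin(a.xpath('@href').get())
            yield scrapy.Request(url, callback=self.parse_item,
                                 meta={'order': i + 1})

    def parse_item(self, response):
        order = response.meta['order']  # read the position back out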

items.py:

# -*- coding: utf-8 -*-
import scrapy

class XiaoshuoItem(scrapy.Item):
    order = scrapy.Field()      # position in the table of contents; the sort key
    name = scrapy.Field()       # chapter title
    content = scrapy.Field()    # chapter body

xiaoshuo_spider.py:

# -*- coding: utf-8 -*-
import scrapy
from scrapy import Request
from Xiaoshuo.items import XiaoshuoItem

class XiaoshuoSpider(scrapy.Spider):
    name = 'Xiaoshuo_spider'
    start_urls = ['https://www.biquge.biz/0_844/']

    def parse(self, response):
        """Parse every chapter link on the index page and request it,
        passing the chapter's position via cb_kwargs={'order': i + 1}."""
        sels = response.xpath('//div[@id="list"]//dd/a')
        for i, a in enumerate(sels):
            # Equivalent shortcut: response.follow() resolves the relative href itself
            # yield response.follow(a, callback=self.parse_item, cb_kwargs={'order': i + 1})
            yield Request(response.urljoin(a.xpath('@href').get()),
                          callback=self.parse_item, cb_kwargs={'order': i + 1})

    def parse_item(self, response, order):
        """The chapter's position on the index page arrives here as 'order'."""
        item = XiaoshuoItem()
        item['order'] = order
        item['name'] = response.xpath('//h1/text()').get()
        item['content'] = response.xpath('//div[@id="content"]').get()
        return item

pipelines.py:

# -*- coding: utf-8 -*-
class XiaoshuoPipeline(object):
    def open_spider(self, spider):
        """Create a list to collect every item."""
        self.items = []

    def process_item(self, item, spider):
        """Append each downloaded item; at this point the chapters are out of order."""
        self.items.append(item)
        return item

    def close_spider(self, spider):
        """When the spider closes, sort the items by 'order' and merge them into a single HTML file."""
        with open('御魂者传奇.html', 'w', encoding='utf-8') as f:
            # The markup in these strings was lost in extraction;
            # reconstructed here as a minimal HTML wrapper
            header = '<html><head><meta charset="utf-8"></head><body>\n'
            footer = '</body></html>\n'
            f.write(header)

            # Sort all chapters by the 'order' field
            self.items.sort(key=lambda i: i['order'])

            for item in self.items:
                cont = '\n<h2>{}</h2>\n{}\n'.format(item['name'], item['content'])
                f.write(cont)
            f.write(footer)
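For a very long novel, buffering every chapter in memory until close_spider() may be a concern. A hypothetical alternative (ChapterFilePipeline is my name for it, not part of this project) writes each chapter to its own zero-padded file as soon as it arrives, so nothing already saved is lost if the crawl is interrupted:

# -*- coding: utf-8 -*-
import os

class ChapterFilePipeline(object):
    def open_spider(self, spider):
        # One directory holds one file per chapter
        os.makedirs('chapters', exist_ok=True)

    def process_item(self, item, spider):
        # Zero-padding the order makes plain filename sorting match chapter order
        path = os.path.join('chapters', '{:04d}.html'.format(item['order']))
        with open(path, 'w', encoding='utf-8') as f:
            f.write('<h2>{}</h2>\n{}\n'.format(item['name'], item['content']))
        return item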

settings.py: (enable the item pipeline)

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'Xiaoshuo.pipelines.XiaoshuoPipeline': 300,
}
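With the pipeline enabled, the crawl is normally started with scrapy crawl Xiaoshuo_spider from the project root. It can also be launched from a plain Python script; a sketch using Scrapy's CrawlerProcess (assuming the script sits next to scrapy.cfg so the project settings are found):

# run.py -- start the spider without the scrapy command line
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('Xiaoshuo_spider')   # the spider's name attribute
process.start()                    # blocks until the crawl finishes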

If this article helped you, please leave me a comment.
