Multi-level crawling with Scrapy Request objects

Summary

A crawler rarely fetches just one URL at a time; that would be slower than copying and pasting by hand. Usually a page contains many links, each of which needs to be crawled in turn. How do we do that?
The general approach: crawl the outermost page first, extract the URLs that need to be crawled from it, iterate over them, and build a Request object (the crawl unit) for each one, letting Scrapy fetch them one by one. When a fetch succeeds, a callback function is invoked to parse the result.
This requires knowing a few important parameters of the scrapy.Request object (a short sketch follows the list):
url: the address the Request will fetch (crawl)
callback: the function invoked after the request succeeds; it accepts either a callable or a method name as a string. Note that it must not be written as a function call, e.g. self.parse rather than self.parse() (an easy habit to fall into)
meta: a dict whose contents are passed along to the response object, readable there as response.meta
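A minimal sketch of building such a request by hand (the spider name and the callback name detail_parse are made up for illustration):

import scrapy


class DemoSpider(scrapy.Spider):
    name = "demo"
    start_urls = ['http://www.ches.org.cn/ches/slkp/slkpsy/']

    def parse(self, response):
        # build a follow-up Request explicitly
        yield scrapy.Request(
            url=response.url,
            callback=self.detail_parse,     # pass the method itself, NOT self.detail_parse()
            meta={'source': response.url},  # an arbitrary dict, read back as response.meta
            dont_filter=True,               # skip the duplicate filter so this demo request runs
        )

    def detail_parse(self, response):
        # meta values set on the Request come back on the response
        self.logger.info(response.meta['source'])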

In addition, you need to understand yield.
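A plain-Python illustration, independent of Scrapy: a function containing yield becomes a generator; each yield hands one value to the caller, and on the next iteration execution resumes right where it left off inside the loop. Scrapy relies on this to receive items and requests one at a time:

def gen():
    for i in range(3):
        # execution pauses here; it resumes on the caller's next iteration
        yield i


for value in gen():
    print(value)  # prints 0, then 1, then 2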

# coding:utf-8
import scrapy

from ..items import KPItem


class AppendixSpider(scrapy.Spider):
    # spider name
    name = "appendix"
    allowed_domains = ['ches.org.cn']
    # target URLs, crawled automatically when the spider starts; the list may hold several links
    start_urls = ['http://www.ches.org.cn/ches/slkp/slkpsy/']

    # callback invoked automatically once a start URL is fetched; named parse by default, takes self and response
    def parse(self, response):
        # extract the category titles and their links from the listing page
        title_list = response.xpath("/html/body/div[5]/div/div[1]/div[2]/ul/li/a/p/text()").extract()
        url_list = response.xpath("/html/body/div[5]/div/div[1]/div[2]/ul/li/a/@href").extract()
        for title, href in zip(title_list, url_list):
            # write the scraped data into a fresh item for each link
            kp = KPItem()
            kp['type'] = title
            url = 'http://www.ches.org.cn/ches' + href[5:] + '/'
            # note: use yield here, because items and requests are handed over one at a time;
            # yield works like return, except the loop resumes where it left off on the next call.
            # meta={'item': kp} carries the item along to the callback.
            yield scrapy.Request(url, callback=self.title_parse, meta={'item': kp, 'url': url})

    def title_parse(self, response):
        # retrieve the KPItem and the section URL passed via meta
        kp = response.meta['item']
        purl = response.meta['url']
        title_list = response.xpath("/html/body/div/div/div[4]/div[1]/div/div[1]/div/div/div/div/div[1]/h5/a/text()").extract()
        url_list = response.xpath("/html/body/div/div/div[4]/div[1]/div/div[1]/div/div/div/div/div[1]/h5/a/@href").extract()
        time_list = response.xpath("/html/body/div/div/div[4]/div[1]/div/div[1]/div/div/div/div/div[2]/h6/text()").extract()
        for title, url, time in zip(title_list, url_list, time_list):
            # copy the item so each request carries its own instance;
            # otherwise every request would mutate the same shared dict
            item = kp.copy()
            item['title'] = title
            item['pubTime'] = time
            url = purl + url[2:]
            yield scrapy.Request(url, callback=self.content_parse, meta={'item': item})

    def content_parse(self, response):
        # retrieve the KPItem passed via meta
        kp = response.meta['item']
        content = ''
        # select every <p> inside the article body div
        p_list = response.xpath("//div[@class='juzhongtupian']//p")
        for p in p_list:
            c = p.xpath('string(.)').extract_first()
            img = p.xpath('.//img/@src').extract_first()
            if img:
                # an image paragraph: store the absolute picture URL
                kp['picture'] = 'http://www.ches.org.cn/ches' + img[2:]
                content = content + '#' + kp['picture']
            elif c:
                # a text paragraph: c is already a string, append it directly
                content = content + '#' + c
        kp['content'] = content
        yield kp
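For reference, the KPItem imported at the top is not shown in the post; judging by the fields the spider fills in, its definition in items.py would look roughly like this (a sketch, not the author's actual file):

import scrapy


class KPItem(scrapy.Item):
    # fields filled in by the spider above
    type = scrapy.Field()     # category title from the listing page
    title = scrapy.Field()    # article title
    pubTime = scrapy.Field()  # publication time
    picture = scrapy.Field()  # absolute image URL, if present
    content = scrapy.Field()  # '#'-joined text and image paragraphs

With the project set up, the spider runs with scrapy crawl appendix (add -o result.json to write the collected items to a file).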
