PW05

1. Creating the Crawler Project

Connected to the server over Xshell and ran scrapy startproject quotes to create a project named quotes.
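
For reference, this is a minimal sketch of the project skeleton that scrapy startproject generates (the exact layout can vary slightly between Scrapy versions); only items.py and the spiders/ folder are touched in the steps below:

quotes/
    scrapy.cfg            # deployment configuration
    quotes/               # the project's Python package
        __init__.py
        items.py          # item definitions (edited in section 2)
        middlewares.py
        pipelines.py
        settings.py
        spiders/          # spider modules go here (section 3)
            __init__.py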


2. Defining the Item

Downloaded items.py from the quotes folder and modified it; the code is as follows:

import scrapy


class QuotesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    content = scrapy.Field()  # text of the quote
    author = scrapy.Field()   # name of the author
    tags = scrapy.Field()     # list of tag strings
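
A Scrapy Item behaves much like a dictionary restricted to its declared fields; as a quick illustrative check (not part of the project files), it can be exercised in a Python shell:

>>> from quotes.items import QuotesItem
>>> item = QuotesItem(author='Some Author')
>>> item['content'] = 'Some quote text'
>>> item['author']
'Some Author'
>>> item['year'] = 2024   # undeclared field, raises KeyError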

3. Writing the Spider File

Created quotesspider.py and uploaded it to the spiders folder. The code of quotesspider.py is as follows:

import scrapy
from quotes.items import QuotesItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/page/1']

    def parse(self, response):
        # Each quote on the page sits inside a <div class="quote"> block.
        for motto in response.xpath('//div[@class="quote"]'):
            item = QuotesItem()
            item['content'] = motto.xpath('./span[@class="text"]/text()').extract_first()
            item['author'] = motto.xpath('.//small[@class="author"]/text()').extract_first()
            item['tags'] = motto.xpath('.//a[@class="tag"]//text()').extract()
            yield item

        # Follow the "Next" link, if present, and parse the next page the same way.
        next_page = response.xpath('//a[contains(text(),"Next")]/@href').extract_first()
        if next_page:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
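
Before running the full crawl, the XPath expressions above can be checked interactively with scrapy shell; a minimal sketch (outputs elided):

$ scrapy shell 'http://quotes.toscrape.com/page/1'
>>> quote = response.xpath('//div[@class="quote"]')[0]
>>> quote.xpath('./span[@class="text"]/text()').extract_first()
...
>>> quote.xpath('.//small[@class="author"]/text()').extract_first()
...
>>> quote.xpath('.//a[@class="tag"]//text()').extract()
...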

4. Crawl Results

Ran scrapy crawl quotes -o quotes.json (the argument to crawl is the spider's name attribute, not the file name), and the scraped results were saved in the quotes.json file. Part of the scraped results:
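
Each record in the exported file follows the item's fields; this is a sketch of the general shape of quotes.json, with placeholder values standing in for the actual scraped quotes:

[
    {"content": "...", "author": "...", "tags": ["...", "..."]},
    {"content": "...", "author": "...", "tags": ["..."]},
    ...
]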
