scrapy - Building a Distributed Crawler Framework

1 Distributed usage

The scrapy_redis component

pip install scrapy_redis

1. The difference between scrapy and scrapy_redis

scrapy is a general-purpose crawler framework; it does not support distributed crawling on its own.

scrapy_redis exists precisely to make scrapy distributed: it provides redis-based components, and with these components a distributed crawl can be built.

2. Official examples

[http://github.com/rmax/scrapy-redis](http://github.com/rmax/scrapy-redis)

Three sample spiders

dmoz.py: a traditional CrawlSpider whose only twist is that it saves its data into redis; run it with the command scrapy crawl dmoz

myspider_redis.py: inherits from RedisSpider; start_urls is replaced by redis_key, everything else stays the same

mycrawler_redis.py: inherits from RedisCrawlSpider; the same replacement of start_urls with redis_key, but with CrawlSpider-style rules

2 Steps for developing a distributed crawler:

Taking an ordinary scrapy crawler and deploying it to run on a distributed system is what turns it into a distributed crawler.

1. Environment setup

scrapy: the framework the crawler runs on (the server side can skip it if it does not take part in the crawling itself)

scrapy_redis: the component (think of it as middleware between scrapy and redis) that wires the scrapy crawler up to the redis database

redis server: stores the data scraped by the distributed crawler. It is usually deployed on Linux; it can also be installed on Windows (but that build is not suitable for production). Either way, it must be configured to allow remote access; see the redis.conf sketch below.
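A minimal sketch of the redis.conf directives typically involved in allowing remote access (these are standard redis configuration directives; the exact security choices are up to you, and the password is a placeholder):

# redis.conf (sketch): allow connections from other hosts
bind 0.0.0.0              # listen on all interfaces instead of only 127.0.0.1
protected-mode no         # or leave protected mode on and set a password instead
# requirepass yourpassword   # if set, pair it with REDIS_PARAMS in settings.py

Restart the redis service after editing the file so the changes take effect.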

2. Test that the redis server is reachable

If PING does not come back with PONG, the usual causes are:

1) the server is not configured for remote connections

2) the server has crashed

3) the server is refusing writes (e.g. after a failed background save); config set stop-writes-on-bgsave-error no works around it

4) anything else unexpected: search the error message, e.g. [http://www.baidu.com](http://www.baidu.com)
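A quick connectivity check against the redis host used later in this article's settings.py (10.36.133.159; substitute your own address, and add the password if requirepass is set):

redis-cli -h 10.36.133.159 -p 6379 ping    # should print PONG

or, from Python with the redis package:

import redis

# ping the redis server that the distributed crawler will use
r = redis.Redis(host="10.36.133.159", port=6379)  # password="..." if the server requires one
print(r.ping())  # True means the server is reachable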

3. Crawl under plain scrapy first; only deploy to the distributed system once the scraped data is known to be correct

Test whether the data format is correct:

First test with json output, mainly to check that the code itself is right [e.g. a mistyped xpath]; see the command below

Then test against redis, mainly to catch system-level errors [e.g. the redis server being down]
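For the json check, scrapy's built-in feed export is enough (dushu is the spider name used later in this article):

scrapy crawl dushu -o items.json    # dump the scraped items to a local json file for inspection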

4. Deploy the distributed setup

Server side (master):

One host runs the redis server (the server side); this host is also called the master.

Client side (slave):

1) Turn the ordinary spider into a distributed one: remove start_urls (so the slaves cannot start crawling on their own) and replace it with redis_key (so the master controls when the slaves crawl)

2) Modify the project settings, i.e. add the scrapy_redis components to the configuration file
5. Crawl
Slave side: start first and wait for the master to publish the start addresses
Master side: at the right moment, push a list of start addresses into the redis server (see the example below)
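For example, pushing the first start address under the redis_key used by the spider in this article (dushu:start_urls):

# run on the master (or any machine that can reach the redis server)
redis-cli -h 10.36.133.159 -p 6379 lpush dushu:start_urls https://www.dushu.com/book/1002.html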

Note: turning scrapy into a distributed crawler means two problems must be solved:
① centralized management of the request queue
② centralized management of deduplication
Both are handled by the scrapy_redis scheduler and dupefilter settings shown below.

Setting up the scrapy-redis framework:

1 settings.py (project configuration)

# -*- coding: utf-8 -*-

# Scrapy settings for CrawlSpiderDemo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'CrawlSpiderDemo'

SPIDER_MODULES = ['CrawlSpiderDemo.spiders']
NEWSPIDER_MODULE = 'CrawlSpiderDemo.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'CrawlSpiderDemo (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'CrawlSpiderDemo.middlewares.CrawlspiderdemoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'CrawlSpiderDemo.middlewares.CrawlspiderdemoDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'CrawlSpiderDemo.pipelines.CrawlspiderdemoPipeline': 300,
    # A distributed crawler's data does not have to go through the local pipeline (nothing needs to be stored locally);
    # the data is stored in the redis database instead, so the redis pipeline component is added here
    "scrapy_redis.pipelines.RedisPipeline": 400,
}

# Redis database settings
# Redis host address
REDIS_HOST = "10.36.133.159"
# Port
REDIS_PORT = 6379
# Password (if the server has requirepass set)
# REDIS_PARAMS = {"password": 'xxxx'}


# 1. Switch the scheduler to the scrapy_redis scheduler (a rewrite of scrapy's native scheduler that adds the distributed scheduling logic)
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# 2. Add the scrapy_redis deduplication component
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# 3. Whether the crawl can be paused and resumed (keep the redis queues when the spider closes)
SCHEDULER_PERSIST = True



# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
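
If the redis server needs a password or a non-default port, scrapy_redis also accepts a single connection URL in place of REDIS_HOST/REDIS_PORT; a sketch (the password is a placeholder):

# alternative to REDIS_HOST / REDIS_PORT / REDIS_PARAMS
# REDIS_URL = "redis://:yourpassword@10.36.133.159:6379"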


2 middlewares.py (middleware)

    ##   no changes needed for now

3 items.py (where the item structure is defined)

    ##   no changes needed for now
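The spider below fills in the fields book_name, price and mulu, so a minimal items.py would look roughly like this (a sketch; the article does not show the original file):

import scrapy

class CrawlspiderdemoItem(scrapy.Item):
    # fields used by the dushu spider below
    book_name = scrapy.Field()  # book title
    price = scrapy.Field()      # price from the detail page
    mulu = scrapy.Field()       # table-of-contents text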

4 pipelines.py (pipeline file)

    ##   no changes needed for now

5 myspider.py (the spider file; this is where the crawler code goes)

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
# import LinkExtractor, used for extracting links
from scrapy.spiders import CrawlSpider, Rule
# Rule defines a rule; LinkExtractor then extracts urls according to these rules

# import the distributed spider base class
from scrapy_redis.spiders import RedisCrawlSpider

from CrawlSpiderDemo.items import CrawlspiderdemoItem

# scrapy ships with two kinds of spiders: Spider (the basic spider) and CrawlSpider (the rule/template spider)
# CrawlSpider is a subclass of Spider. Spider only crawls what is listed in start_urls, while CrawlSpider defines rules that follow links, so every link on a page that matches the rules is extracted and pushed into the scheduler
# as the urls keep being visited, the spider matches more and more urls

class DushuSpider(RedisCrawlSpider):
    name = 'dushu'
    allowed_domains = ['dushu.com']
    # start_urls = ['https://www.dushu.com/book/1002.html'] # in the distributed version all urls are pulled from the corresponding key in the redis database

    # redis_key tells the distributed spider which key to pull its start urls from
    redis_key = "dushu:start_urls"

    rules = (
        Rule(LinkExtractor(allow=r'/book/100\d_\d+\.html'), callback='parse_item', follow=True),
    )
    # rules: a set of Rule objects, each applying specific handling to the crawl. The LinkExtractor's pattern extracts all matching links, which the engine pushes into the scheduler's queue; the scheduler has them downloaded and then calls back parse_item (note the callback is given as a string). The pages fetched by those requests are matched against the LinkExtractor rules again (duplicates are dropped automatically), and so on until every matching page has been visited.

    # LinkExtractor matching rules:
    # with a regular expression: LinkExtractor(allow="some regex")  # e.g. /book/1002_\d\.html
    # with xpath: LinkExtractor(restrict_xpaths="some xpath")
    # with a css selector: LinkExtractor(restrict_css="some css selector")

    def parse_item(self, response):
        print(response.url)
        # parse the listing page
        book_list = response.xpath("//div[@class='bookslist']//li")
        for book in book_list:
            item = CrawlspiderdemoItem()
            item["book_name"] = book.xpath(".//h3/a/text()").extract_first()

            # parse the remaining fields yourself

            # get the url of the detail (second-level) page
            next_url = "https://www.dushu.com" + book.xpath(".//h3/a/@href").extract_first()

            yield scrapy.Request(url=next_url, callback=self.parse_next, meta={"item": item})

    def parse_next(self, response):
        item = response.meta["item"]
        item["price"] = response.xpath("//span[@class='num']/text()").extract_first()
        m = response.xpath("//div[@class='text txtsummary']")[2]
        item["mulu"] = m.xpath(".//text()").extract()

        yield item
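
Once everything is running, the items scraped by the slaves end up in redis via RedisPipeline (by default under the key <spider name>:items, i.e. dushu:items here), so a quick sanity check from redis-cli looks like this:

redis-cli -h 10.36.133.159 -p 6379 llen dushu:items         # how many items have been stored
redis-cli -h 10.36.133.159 -p 6379 lrange dushu:items 0 0   # peek at the first serialized item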


