Fixing Scrapy-Redis Idle-Looping After the Crawl Is Finished

1. Background

By the design of a scrapy-redis distributed crawler, multiple crawler hosts share a single crawl queue. When the queue contains requests, each crawler pops them and crawls them; when the queue holds no requests, the crawlers simply sit and wait. The log then looks like this:

    E:\Miniconda\python.exe E:/PyCharmCode/redisClawerSlaver/redisClawerSlaver/spiders/main.py
    2017-12-12 15:54:18 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
    2017-12-12 15:54:18 [scrapy.utils.log] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
    2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.logstats.LogStats']
    2017-12-12 15:54:18 [myspider_redis] INFO: Reading start URLs from redis key 'myspider:start_urls' (batch size: 110, encoding: utf-8
    2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
    ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
     'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
     'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
     'redisClawerSlaver.middlewares.ProxiesMiddleware',
     'redisClawerSlaver.middlewares.HeadersMiddleware',
     'scrapy.downloadermiddlewares.retry.RetryMiddleware',
     'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
     'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
     'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
     'scrapy.downloadermiddlewares.stats.DownloaderStats']
    2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled spider middlewares:
    ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
     'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
     'scrapy.spidermiddlewares.referer.RefererMiddleware',
     'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
     'scrapy.spidermiddlewares.depth.DepthMiddleware']
    2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled item pipelines:
    ['redisClawerSlaver.pipelines.ExamplePipeline',
     'scrapy_redis.pipelines.RedisPipeline']
    2017-12-12 15:54:18 [scrapy.core.engine] INFO: Spider opened
    2017-12-12 15:54:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2017-12-12 15:55:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2017-12-12 15:56:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
  • But what if every request has already been crawled? The spider has no way of knowing this: it cannot distinguish "finished" from a temporary lull in the queue, so it stays in the waiting state above indefinitely. This is what we call idle-looping. (The slaves only wake up when something feeds the queue again; a sketch of doing so follows this list.)
  • So is there a way for the spider to recognize this situation and shut itself down automatically?
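To make the waiting state concrete: the slaves block on the Redis key named in the log ('myspider:start_urls') until something pushes work into it. Below is a minimal sketch of feeding that key from the master, assuming redis-py and the default list-based key layout (REDIS_START_URLS_AS_SET unset); it is illustrative, not part of the original setup.

    # feed_start_urls.py -- illustrative sketch, not from the original project.
    # Assumes redis-py and the default list-based start-URLs key seen in the log.
    import redis

    server = redis.StrictRedis(host='127.0.0.1', port=6379, db=0)

    # Each pushed URL becomes one start request on some idle slave;
    # until a push happens, the slaves keep logging "0 pages/min" as above.
    server.lpush('myspider:start_urls', 'https://example.com/page/1')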

2. Environment

  • System: Windows 7
  • scrapy-redis
  • redis 3.0.5
  • python 3.6.1

3. Solution

  • As the background suggests, under the scrapy-redis model "the crawl is finished" is an inherently fuzzy notion. The crawl queue changes dynamically throughout the run: as requests are consumed, newly discovered requests are pushed back in. If consumption outpaces refilling, the queue will have empty windows (stretches of time during which it holds no request at all); if refilling keeps up, no such window appears. So "finished" can only be defined approximately; there is no exact criterion.
  • Accordingly, both approaches below are heuristics along those lines.

3.1. Using Scrapy's CloseSpider extension

  • See the official documentation: http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/extensions.html
    # CloseSpider extension
    class scrapy.contrib.closespider.CloseSpider
    Closes a spider automatically when certain conditions are met, using a specific close reason for each condition.

    The closing conditions can be configured through the following settings:

    CLOSESPIDER_TIMEOUT
    CLOSESPIDER_ITEMCOUNT
    CLOSESPIDER_PAGECOUNT
    CLOSESPIDER_ERRORCOUNT
  • CLOSESPIDER_TIMEOUT
    CLOSESPIDER_TIMEOUT
    Default: 0

    An integer, in seconds. If the spider is still running after this many seconds, it is closed automatically with the reason closespider_timeout. If set to 0 (or left unset), spiders are not closed for running too long.
  • CLOSESPIDER_ITEMCOUNT
    CLOSESPIDER_ITEMCOUNT
    Default: 0

    An integer specifying a number of items. If the spider scrapes more than this many items, and they pass through the item pipeline, it is closed with the reason closespider_itemcount.
  • CLOSESPIDER_PAGECOUNT
    CLOSESPIDER_PAGECOUNT
    New in version 0.11.

    Default: 0

    An integer specifying the maximum number of responses to crawl. If the spider crawls more than that, it is closed with the reason closespider_pagecount. If set to 0 (or left unset), spiders are not closed by number of crawled responses.
  • CLOSESPIDER_ERRORCOUNT
    CLOSESPIDER_ERRORCOUNT
    New in version 0.11.

    Default: 0

    An integer specifying the maximum number of errors the spider may tolerate. If the spider generates more than this many errors, it is closed with the reason closespider_errorcount. If set to 0 (or left unset), spiders are not closed by error count.
  • Example: open settings.py and add one configuration entry, like so
    # If the crawl runs past 23.5 hours without finishing, close the spider automatically
    CLOSESPIDER_TIMEOUT = 84600
  • Important caveat: if the spider has not worked through all the requests within the time limit and is force-stopped this way, the crawl queue will still hold the leftover requests. Before the next run, remember to flush the crawl queue on the master; a sketch of that cleanup follows.
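A minimal sketch of that cleanup, assuming redis-py and the default scrapy-redis key names ('<spider>:requests' for the crawl queue, '<spider>:dupefilter' for the fingerprint set); if your settings override SCHEDULER_QUEUE_KEY or SCHEDULER_DUPEFILTER_KEY, adjust the names accordingly.

    # flush_queue.py -- illustrative sketch, not part of the original article.
    import redis

    server = redis.StrictRedis(host='127.0.0.1', port=6379, db=0)

    # Default scrapy-redis keys for a spider named 'myspider' (assumed names).
    server.delete('myspider:requests')    # drop the leftover crawl queue
    server.delete('myspider:dupefilter')  # also drop fingerprints for a full re-crawl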

3.2. Modifying the scrapy-redis source

    # -------- When modifying the scrapy-redis source, take special care: --------
    # First, keep a backup of the original code.
    # Second, when the project is migrated to another machine, the modified
    # scrapy-redis source must be migrated along with it. It usually lives
    # under \Lib\site-packages\scrapy_redis\
  • Think about what characterizes a finished crawl: the crawl queue is empty, and no request can be fetched from it. So the place to intervene is where requests are fetched from the queue and scheduled. Reading the scrapy-redis source, we find two candidate spots:

3.2.1. Details

    # .\Lib\site-packages\scrapy_redis\scheduler.py
    def next_request(self):
        block_pop_timeout = self.idle_before_close
        # Pop a request off the crawl queue.
        # I have not yet figured out exactly what this block_pop_timeout does,
        # but it is definitely not a plain timeout...... (see section 3.2.2)
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        return request
    # .\Lib\site-packages\scrapy_redis\spiders.py
    def next_requests(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET', defaults.START_URLS_AS_SET)
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        # TODO: Use redis pipeline execution.
        while found < self.redis_batch_size:
            data = fetch_one(self.redis_key)
            if not data:
                # The crawl queue is empty here -- but it may be empty for
                # good, or only empty for the moment.
                # Queue empty.
                break
            req = self.make_request_from_data(data)
            if req:
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)
  • As the comments suggest, these two spots are the only places to hook in. But during a crawl the queue can go empty at any moment, and only temporarily. To conclude that the queue is empty for good, the usual trick is to pick a time window: if the queue stays empty throughout that whole window, we can reasonably declare the crawl finished. Hence the following change:
    # .\Lib\site-packages\scrapy_redis\scheduler.py

    # Original code
    def next_request(self):
        block_pop_timeout = self.idle_before_close
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        return request

    # Modified code
    import datetime  # added at module level: next_request() below uses it

    def __init__(self, server,
                 persist=False,
                 flush_on_start=False,
                 queue_key=defaults.SCHEDULER_QUEUE_KEY,
                 queue_cls=defaults.SCHEDULER_QUEUE_CLASS,
                 dupefilter_key=defaults.SCHEDULER_DUPEFILTER_KEY,
                 dupefilter_cls=defaults.SCHEDULER_DUPEFILTER_CLASS,
                 idle_before_close=0,
                 serializer=None):
        # ......
        # Add a counter for consecutive failed pops
        self.lostGetRequest = 0

    def next_request(self):
        block_pop_timeout = self.idle_before_close
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            # Got a request back, so reset the counter
            self.lostGetRequest = 0
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        if request is None:
            self.lostGetRequest += 1
            print(f"request is None, lostGetRequest = {self.lostGetRequest}, time = {datetime.datetime.now()}")
            # 100 empty pops take roughly 8 minutes
            if self.lostGetRequest > 200:
                print("request is None, close spider.")
                # Shut the spider down
                self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
        return request
  • Relevant log output:
    2017-12-14 16:18:06 [scrapy.middleware] INFO: Enabled item pipelines:
    ['redisClawerSlaver.pipelines.beforeRedisPipeline',
     'redisClawerSlaver.pipelines.amazonRedisPipeline',
     'scrapy_redis.pipelines.RedisPipeline']
    2017-12-14 16:18:06 [scrapy.core.engine] INFO: Spider opened
    2017-12-14 16:18:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    request is None, lostGetRequest = 1, time = 2017-12-14 16:18:06.370400
    request is None, lostGetRequest = 2, time = 2017-12-14 16:18:11.363400
    request is None, lostGetRequest = 3, time = 2017-12-14 16:18:16.363400
    request is None, lostGetRequest = 4, time = 2017-12-14 16:18:21.362400
    request is None, lostGetRequest = 5, time = 2017-12-14 16:18:26.363400
    request is None, lostGetRequest = 6, time = 2017-12-14 16:18:31.362400
    request is None, lostGetRequest = 7, time = 2017-12-14 16:18:36.363400
    request is None, lostGetRequest = 8, time = 2017-12-14 16:18:41.362400
    request is None, lostGetRequest = 9, time = 2017-12-14 16:18:46.363400
    request is None, lostGetRequest = 10, time = 2017-12-14 16:18:51.362400
    2017-12-14 16:18:56 [scrapy.core.engine] INFO: Closing spider (queue is empty)
    request is None, lostGetRequest = 11, time = 2017-12-14 16:18:56.363400
    request is None, close spider.
    Login result: loginRes = (235, b'Authentication successful')
    Login succeeded, code = 235
    mail has been send successfully. message:Content-Type: text/plain; charset="utf-8"
    MIME-Version: 1.0
    Content-Transfer-Encoding: base64
    From: 548516910@qq.com
    To: 548516910@qq.com
    Subject: =?utf-8?b?54is6Jmr57uT5p2f54q25oCB5rGH5oql77yabmFtZSA9IHJlZGlzQ2xhd2VyU2xhdmVyLCByZWFzb24gPSBxdWV1ZSBpcyBlbXB0eSwgZmluaXNoZWRUaW1lID0gMjAxNy0xMi0xNCAxNjoxODo1Ni4zNjQ0MDA=?=

    57uG6IqC77yacmVhc29uID0gcXVldWUgaXMgZW1wdHksIHN1Y2Nlc3NzISBhdDoyMDE3LTEyLTE0
    IDE2OjE4OjU2LjM2NDQwMA==

    2017-12-14 16:18:56 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'finish_reason': 'queue is empty',
     'finish_time': datetime.datetime(2017, 12, 14, 8, 18, 56, 364400),
     'log_count/INFO': 8,
     'start_time': datetime.datetime(2017, 12, 14, 8, 18, 6, 362400)}
    2017-12-14 16:18:56 [scrapy.core.engine] INFO: Spider closed (queue is empty)
    Unhandled Error
    Traceback (most recent call last):
      File "E:\Miniconda\lib\site-packages\scrapy\commands\runspider.py", line 89, in run
        self.crawler_process.start()
      File "E:\Miniconda\lib\site-packages\scrapy\crawler.py", line 285, in start
        reactor.run(installSignalHandlers=False)  # blocking call
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1243, in run
        self.mainLoop()
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
        self.runUntilCurrent()
    --- <exception caught here> ---
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "E:\Miniconda\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
        return self._func(*self._a, **self._kw)
      File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
        if self.spider_is_idle(spider) and slot.close_if_idle:
      File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
        if self.slot.start_requests is not None:
    builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'

    2017-12-14 16:18:56 [twisted] CRITICAL: Unhandled Error
    Traceback (most recent call last):
      File "E:\Miniconda\lib\site-packages\scrapy\commands\runspider.py", line 89, in run
        self.crawler_process.start()
      File "E:\Miniconda\lib\site-packages\scrapy\crawler.py", line 285, in start
        reactor.run(installSignalHandlers=False)  # blocking call
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1243, in run
        self.mainLoop()
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
        self.runUntilCurrent()
    --- <exception caught here> ---
      File "E:\Miniconda\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "E:\Miniconda\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
        return self._func(*self._a, **self._kw)
      File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
        if self.spider_is_idle(spider) and slot.close_if_idle:
      File "E:\Miniconda\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
        if self.slot.start_requests is not None:
    builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'

    Process finished with exit code 0
  • One issue remains: as the log shows, closing the spider via engine.close_spider(spider, 'reason') sometimes raises a few errors of this kind before the process actually exits. A plausible explanation is that Scrapy drives many fetches concurrently: once one code path has closed the spider, the engine's slot is already torn down when the remaining callbacks run, and they fail with the AttributeError above. An alternative that avoids patching scrapy-redis altogether is sketched below.
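For completeness, a commonly used alternative reaches the same goal without editing the scrapy-redis sources: a small Scrapy extension that counts consecutive spider_idle signals (with scrapy-redis the spider is kept alive while idle, so the signal keeps firing) and closes the spider once the queue has looked empty for long enough. This is a hedged sketch, not the author's patch; the class name and the MAX_IDLE_NUMBER setting are invented for illustration.

    # extensions.py -- alternative sketch, not the original author's method.
    from scrapy import signals


    class CloseOnEmptyRedisQueue:
        """Close the spider after `max_idle` consecutive idle heartbeats."""

        def __init__(self, crawler, max_idle):
            self.crawler = crawler
            self.max_idle = max_idle
            self.idle_count = 0
            crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)
            crawler.signals.connect(self.request_scheduled, signal=signals.request_scheduled)

        @classmethod
        def from_crawler(cls, crawler):
            # MAX_IDLE_NUMBER is a made-up setting name, not a Scrapy builtin.
            return cls(crawler, crawler.settings.getint('MAX_IDLE_NUMBER', 100))

        def request_scheduled(self, request, spider):
            # New work arrived, so the queue is not empty: reset the counter.
            self.idle_count = 0

        def spider_idle(self, spider):
            self.idle_count += 1
            if self.idle_count > self.max_idle:
                self.crawler.engine.close_spider(spider, 'queue is empty')

Enable it with something like EXTENSIONS = {'yourproject.extensions.CloseOnEmptyRedisQueue': 500} in settings.py (the module path is assumed).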

3.2.2. Notes

The overall scheduling flow is as follows:

  • scheduler.py
    [figure: the scheduling flow in scheduler.py]

  • queue.py
    [figure: the pop() implementations in queue.py]

  • The upshot: PriorityQueue behaves differently from the other two queue classes, FifoQueue and LifoQueue, and this needs special attention. If you intend to rely on the timeout parameter, the crawl queue configured in settings must be FifoQueue or LifoQueue (a sketch of the underlying queue.py code follows the settings below).
    # Which queue class to use when scheduling requests to crawl.
    # Default: priority order (Scrapy's default), backed by a Redis sorted set -- neither FIFO nor LIFO.
    # 'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderPriorityQueue',
    # Option: first in, first out (FIFO)
    'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderQueue',
    # Option: last in, first out (LIFO)
    # SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'
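For reference, here is roughly how the two pop() variants differ; this is a paraphrased excerpt of scrapy_redis/queue.py from around that era (verify against the version you have installed), not a runnable module on its own. FifoQueue honors timeout with a blocking Redis BRPOP, while PriorityQueue accepts the parameter and then ignores it:

    # Paraphrased excerpt from scrapy_redis/queue.py (assumed from the
    # contemporary source; check your installed copy before relying on it).

    class FifoQueue(Base):
        def pop(self, timeout=0):
            if timeout > 0:
                # BRPOP blocks for up to `timeout` seconds -- here the
                # parameter really is a timeout.
                data = self.server.brpop(self.key, timeout)
                if isinstance(data, tuple):
                    data = data[1]
            else:
                data = self.server.rpop(self.key)
            if data:
                return self._decode_request(data)

    class PriorityQueue(Base):
        def pop(self, timeout=0):
            # `timeout` is accepted but never used: this is a non-blocking
            # ZRANGE + ZREMRANGEBYRANK transaction on a sorted set.
            pipe = self.server.pipeline()
            pipe.multi()
            pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
            results, count = pipe.execute()
            if results:
                return self._decode_request(results[0])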
     
     

Reposted from: https://www.cnblogs.com/du-jun/p/11434113.html
