Setting up a scrapy-redis distributed crawler (code part)

1. Background

  • For the environment setup and the theory, please refer to the earlier articles:
  • Setting up a scrapy-redis distributed crawler (theory part): http://blog.csdn.net/zwq912318834/article/details/78854571
  • Installing, configuring and using the Redis database on Windows: http://blog.csdn.net/zwq912318834/article/details/78770209

2. Environment

  • OS: Windows 7
  • scrapy-redis
  • redis 3.0.5
  • python 3.6.1

3. Code structure

3.1. Host layout.

(Figure: host layout of the Master and Slaver machines)

3.2. The Master machine.

3.3. The Slaver machine.
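Judging from the module paths that show up in the slaver log below (redisClawerSlaver.middlewares.ProxiesMiddleware, redisClawerSlaver.pipelines.ExamplePipeline), the slaver side is an ordinary Scrapy project, while the master side only needs the two small scripts from steps 3 and 5. A rough sketch of the slaver layout, with only the project name and the two custom modules taken from the log and everything else assumed:

redisClawerSlaver/
    scrapy.cfg
    redisClawerSlaver/
        __init__.py
        items.py
        middlewares.py      # ProxiesMiddleware, HeadersMiddleware (seen in the log)
        pipelines.py        # ExamplePipeline (seen in the log)
        settings.py         # scrapy-redis configuration, see step 1
        spiders/
            __init__.py
            myspider.py     # the RedisSpider subclass shown in step 1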

4. Execution steps

  • Step 1: In the spider on the slaver side, specify the redis_key and the connection parameters of the Redis database, for example (the remaining scrapy-redis settings are sketched right after this snippet):
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    """Spider that reads urls from redis queue (myspider:start_urls)."""
    name = 'amazon'
    redis_key = 'amazonCategory:start_urls'

# Redis connection parameters, set in the slaver project's settings.py
REDIS_HOST = '172.16.1.99'
REDIS_PORT = 6379
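Besides the spider itself, scrapy-redis also needs its scheduler, duplicate filter and (for collecting items centrally) the RedisPipeline enabled in the slaver project's settings.py. The pipeline names below are taken from the step 2 log; the remaining values are the standard scrapy-redis settings, given here as a sketch rather than the author's exact configuration:

# settings.py of the slaver project (sketch)
# Queue and de-duplicate requests in Redis instead of in local memory,
# so that several slaver machines can share one crawl.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Keep the request queue and the dupefilter set in Redis between runs.
SCHEDULER_PERSIST = True

# Pipelines as they appear in the step 2 log: ExamplePipeline plus the
# scrapy-redis RedisPipeline, which pushes serialized items back into Redis.
ITEM_PIPELINES = {
    'redisClawerSlaver.pipelines.ExamplePipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 400,
}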
  • Step 2: Start the spider on the slaver side (e.g. with scrapy crawl amazon). The spider enters a waiting state, **waiting for the redis_key to appear in Redis**. The log looks like this:
2017-12-12 15:54:18 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-12-12 15:54:18 [scrapy.utils.log] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-12-12 15:54:18 [myspider_redis] INFO: Reading start URLs from redis key 'myspider:start_urls' (batch size: 110, encoding: utf-8
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'redisClawerSlaver.middlewares.ProxiesMiddleware',
 'redisClawerSlaver.middlewares.HeadersMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-12 15:54:18 [scrapy.middleware] INFO: Enabled item pipelines:
['redisClawerSlaver.pipelines.ExamplePipeline',
 'scrapy_redis.pipelines.RedisPipeline']
2017-12-12 15:54:18 [scrapy.core.engine] INFO: Spider opened
2017-12-12 15:54:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-12 15:55:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-12 15:56:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
  • Step 3: Run a script that pushes the start URL into Redis under the redis_key (start_urls); a quick way to verify the push is shown after the script:
#!/usr/bin/env python
# -*- coding:utf-8 -*-

import redis

# Push the start_url into the redis_key in Redis for the spiders to consume
redis_Host = "172.16.1.99"
redis_key = 'amazonCategory:start_urls'

# Create the Redis connection
rediscli = redis.Redis(host=redis_Host, port=6379, db=0)

# First clear all requests left over in this Redis db
flushdbRes = rediscli.flushdb()
print(f"flushdbRes = {flushdbRes}")
rediscli.lpush(redis_key, "https://www.baidu.com")

(Figure: the start URL stored under the redis_key in Redis)
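To confirm that the URL actually landed in Redis before the spiders pick it up, the same list can be read back with redis-py, reusing the host and key from the script above:

import redis

# The start URLs pushed above sit in a Redis list under the redis_key
rediscli = redis.Redis(host="172.16.1.99", port=6379, db=0)
print(rediscli.lrange('amazonCategory:start_urls', 0, -1))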

  • Step 4: The spider on the slaver side starts crawling the data. The log looks like this:
2017-12-12 15:56:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
parse url = https://www.baidu.com, status = 200, meta = {'download_timeout': 25.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.baidu.com', 'download_latency': 0.2569999694824219, 'depth': 7}
parse url = https://www.baidu.com, status = 200, meta = {'download_timeout': 25.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.baidu.com', 'download_latency': 0.8840000629425049, 'depth': 8}
2017-12-12 15:57:18 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 2 pages/min), scraped 1 items (at 1 items/min)
  • Step 5: Run a Python script to transfer the items from Redis into MongoDB (a rough sketch of the idea follows below).
For the author's version of this code, please refer to: Setting up a scrapy-redis distributed crawler (theory part), linked in the Background section.
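As a rough, assumed sketch of the idea: the RedisPipeline enabled above serializes each item to JSON and pushes it onto the Redis list '<spider name>:items' by default, so the transfer script only has to pop from that list and insert into MongoDB. The MongoDB address, database and collection names below are placeholders:

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import json

import pymongo
import redis

# Redis host as used above; the MongoDB connection is a placeholder.
rediscli = redis.Redis(host="172.16.1.99", port=6379, db=0)
mongocli = pymongo.MongoClient("mongodb://localhost:27017/")
collection = mongocli["amazon"]["items"]

# scrapy_redis.pipelines.RedisPipeline pushes items onto '<spider name>:items'
items_key = "amazon:items"

while True:
    # blpop blocks until an item is available and returns (key, raw_json)
    _, raw = rediscli.blpop(items_key)
    collection.insert_one(json.loads(raw))
    print(f"transferred 1 item, {rediscli.llen(items_key)} left in {items_key}")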
