Running multiple spiders at the same time with the Scrapy framework: deploying and using scrapyd

Scrapy is a crawling framework; scrapyd is a web-based tool for managing Scrapy. Once a spider is written it can be run from the command line, but it is far more convenient to operate it from a web page. That is the problem scrapyd solves: from its web console you can see the jobs that are currently running, and you can create and cancel crawl jobs. It is quite powerful.

Scrapyd usage in detail:

1. Install scrapyd

pip install scrapyd

2. Install scrapyd-client

pip install scrapyd-client
Note: on Windows, pip installs a plain scrapyd-deploy script (no .exe) into c:\python27\Scripts, so scrapyd-deploy cannot be run directly from the command line.
Workaround:
Create a file named scrapyd-deploy.bat in c:\python27\Scripts with the following contents:

@echo off
C:\Python27\python C:\Python27\Scripts\scrapyd-deploy %*

Then add C:\Python27\Scripts to the PATH environment variable.

3. Run scrapyd

First switch the command line to the root directory of the Scrapy project, then start scrapyd:
scrapyd

2018-09-04T16:16:12+0800 [-] Loading f:\anaconda\lib\site-packages\scrapyd\txapp.py...
2018-09-04T16:16:14+0800 [-] Scrapyd web console available at http://127.0.0.1:6800/
2018-09-04T16:16:14+0800 [-] Loaded.
2018-09-04T16:16:14+0800 [twisted.application.app.AppLogger#info] twistd 18.7.0 (f:\anaconda\python.exe 3.6.2) starting up.
2018-09-04T16:16:14+0800 [twisted.application.app.AppLogger#info] reactor class: twisted.internet.selectreactor.SelectReactor.
2018-09-04T16:16:14+0800 [-] Site starting on 6800
2018-09-04T16:16:14+0800 [twisted.web.server.Site#info] Starting factory 
2018-09-04T16:16:14+0800 [Launcher] Scrapyd 1.2.0 started: max_proc=16, runner='scrapyd.runner'
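
Once the console reports the web UI address, you can also confirm the daemon is reachable from code. A minimal sketch against the JSON API, assuming the default port 6800 and that the requests package is installed (daemonstatus.json is part of scrapyd 1.2, the version shown in the log):

import requests

# Poll scrapyd's daemonstatus.json endpoint to confirm the daemon
# is reachable and see its current job counts.
resp = requests.get("http://localhost:6800/daemonstatus.json")
resp.raise_for_status()

# A healthy daemon replies with something like:
# {"status": "ok", "pending": 0, "running": 0, "finished": 0, "node_name": "..."}
print(resp.json())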

4. Deploy the project to scrapyd

4.1 Edit the project's scrapy.cfg file
Uncomment the url = http://localhost:6800/ line by removing the leading #, so the file looks like this:

# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html

[settings]
default = CZBK.settings
[deploy]
url = http://localhost:6800/
project = CZBK
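
If you deploy to more than one scrapyd instance, scrapyd-client also lets you name the [deploy] section; the target names local and production below are hypothetical examples:

[deploy:local]
url = http://localhost:6800/
project = CZBK

[deploy:production]
url = http://example.com:6800/
project = CZBK

A named section is then selected with scrapyd-deploy local -p CZBK.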

Then switch the command line to the Scrapy project root and run the command below, where <target> is the name of a deploy target from scrapy.cfg (a bare [deploy] section defines the default target) and <project> is the project name:

scrapyd-deploy <target> -p <project>

Example:

scrapyd-deploy -p Newsspider

Next, verify whether the deployment succeeded:

scrapyd-deploy -l

default              http://localhost:6800/
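
Note that scrapyd-deploy -l only lists the targets configured in scrapy.cfg. To confirm the project was actually uploaded, ask the daemon itself; a minimal sketch using the listprojects.json and listspiders.json endpoints (both appear in the service table in section 4.4), assuming the Newsspider project from the example:

import requests

# Every project that has been deployed to this scrapyd instance.
projects = requests.get("http://localhost:6800/listprojects.json").json()
print(projects)  # e.g. {"status": "ok", "projects": ["Newsspider"]}

# The spiders scrapyd can discover inside one deployed project.
spiders = requests.get(
    "http://localhost:6800/listspiders.json",
    params={"project": "Newsspider"},
).json()
print(spiders)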

4.2 Create a crawl job

curl http://localhost:6800/schedule.json -d project=Newsspider -d spider=news
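
schedule.json can just as well be called from Python, which is the easy way to get several spiders running at the same time, the point of this whole setup. A minimal sketch; the spider names in the list are hypothetical stand-ins for the spiders your own project defines:

import requests

# Hypothetical spider names; replace with the spiders in your project.
for spider in ["news", "sports", "finance"]:
    # Each call to schedule.json queues one job; scrapyd runs up to
    # max_proc of the queued jobs in parallel (see section 4.4).
    resp = requests.post(
        "http://localhost:6800/schedule.json",
        data={"project": "Newsspider", "spider": spider},
    ).json()
    print(spider, resp)  # {"status": "ok", "jobid": "..."}

A queued or running job can be stopped through the matching cancel.json endpoint by posting the project name and the jobid returned above.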

To fix the situation where some of the project's spiders fail to show up and scrapyd reports only 0 spiders, comment out the following lines in settings.py (log redirection can interfere with how scrapyd discovers a project's spiders):

#LOG_LEVEL = 'ERROR'
#LOG_STDOUT = True
#LOG_FILE = "/tmp/spider.log"
#LOG_FORMAT = "%(asctime)s[%(name)s:%(message)s]"

4.3 View crawl jobs
Open http://localhost:6800 in a browser.
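
The same job lists shown in the web UI are exposed through listjobs.json, which is handier for scripting; a minimal sketch, again assuming the example project name:

import requests

# Jobs are reported in three buckets: pending, running, finished.
jobs = requests.get(
    "http://localhost:6800/listjobs.json",
    params={"project": "Newsspider"},
).json()

for state in ("pending", "running", "finished"):
    print(state, jobs.get(state, []))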

4.4 Runtime configuration
Configuration file: C:\Python27\Lib\site-packages\scrapyd-1.1.0-py2.7.egg\scrapyd\default_scrapyd.conf

[scrapyd]
eggs_dir = eggs
items_dir = items
jobs_to_keep = 50 
max_proc = 0 
max_proc_per_cpu = 4 
finished_to_keep = 100 
poll_interval = 5 
http_port = 6800 
debug = off 
runner = scrapyd.runner 
application = scrapyd.app.application 
launcher = scrapyd.launcher.Launcher 

[services] 
schedule.json = scrapyd.webservice.Schedule 
cancel.json = scrapyd.webservice.Cancel 
addversion.json = scrapyd.webservice.AddVersion 
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs 
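
For running many spiders at once, the settings that matter most are max_proc and max_proc_per_cpu: with max_proc = 0 (the default above), scrapyd caps the number of parallel Scrapy processes at CPU count × max_proc_per_cpu, which is where the max_proc=16 in the startup log came from. Rather than editing default_scrapyd.conf inside site-packages, you can override values in a scrapyd.conf placed in the directory scrapyd is started from (one of the locations it checks at startup); for example, to allow 8 processes per CPU:

[scrapyd]
max_proc = 0
max_proc_per_cpu = 8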
