scrapyd Deployment

To verify the installation, enter the following at the command line:

scrapyd

Output like the following indicates that scrapyd started successfully:

 bdccl@bdccl-virtual-machine:~$ scrapyd 

Removing stale pidfile /home/bdccl/twistd.pid 

2017-12-15T19:01:09+0800 [-] Removing stale pidfile /home/bdccl/twistd.pid 

2017-12-15T19:01:09+0800 [-] Loading /usr/local/lib/python2.7/dist-packages/scrapyd/txapp.py... 

2017-12-15T19:01:10+0800 [-] Scrapyd web console available at http://127.0.0.1:6800/ 

2017-12-15T19:01:10+0800 [-] Loaded. 

2017-12-15T19:01:10+0800 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 17.9.0 (/usr/bin/python 2.7.12) starting up. 

2017-12-15T19:01:10+0800 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor. 

2017-12-15T19:01:10+0800 [-] Site starting on 6800 

2017-12-15T19:01:10+0800 [twisted.web.server.Site#info] Starting factory  

2017-12-15T19:01:10+0800 [Launcher] Scrapyd 1.2.0 started: max_proc=4, runner=u'scrapyd.runner'

Publishing spiders


Common commands:

Deploy a spider project to scrapyd:

First, switch to the root directory of the spider project and edit scrapy.cfg, uncommenting the following line:

 url = http://localhost:6800/ 
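For reference, a minimal [deploy] section in scrapy.cfg might look like the sketch below; the target name `mytarget` and project name `myproject` are placeholders, not values from this tutorial:

```ini
# scrapy.cfg (project root)
[deploy:mytarget]
url = http://localhost:6800/
project = myproject
```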

Then run the following command in a terminal:

scrapyd-deploy <target> -p PROJECT_NAME (target is the deploy label corresponding to a [deploy] section in scrapy.cfg; optional)

Then open http://localhost:6800/ (or http://127.0.0.1:6800/) in a browser to view the execution status of spider jobs and each spider's job_id.

List available deploy targets: scrapyd-deploy -l

Start a spider: curl http://localhost:6800/schedule.json -d project=PROJECT_NAME -d spider=SPIDER_NAME
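The curl call above is just an HTTP POST with form data, so the same request can be issued from Python 3 with the standard library alone. A minimal sketch — PROJECT_NAME and SPIDER_NAME are placeholders, and the commented-out request assumes a scrapyd instance is listening on localhost:6800:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def schedule_body(project, spider):
    # Build the same form-encoded body that curl's -d flags send.
    return urlencode({"project": project, "spider": spider}).encode()

body = schedule_body("PROJECT_NAME", "SPIDER_NAME")
print(body)  # b'project=PROJECT_NAME&spider=SPIDER_NAME'

# With scrapyd running, POSTing this body schedules the spider:
# resp = urlopen("http://localhost:6800/schedule.json", data=body)
# print(resp.read())  # scrapyd replies with JSON containing a jobid
```

The cancel.json and delproject.json endpoints below follow the same POST-with-form-data pattern, just with different field names.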

Stop a spider: curl http://localhost:6800/cancel.json -d project=PROJECT_NAME -d job=JOB_ID

Delete a project: curl http://localhost:6800/delproject.json -d project=PROJECT_NAME 

List deployed projects: curl http://localhost:6800/listprojects.json 

List the spiders in a project: curl http://localhost:6800/listspiders.json?project=PROJECT_NAME

List a project's jobs: curl http://localhost:6800/listjobs.json?project=PROJECT_NAME
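These list endpoints return JSON, which the standard json module handles directly. A small Python 3 sketch — the sample response below is illustrative (the job id and spider name are made up), not output from a live server:

```python
import json

# Illustrative listjobs.json response; field names follow scrapyd's API.
sample = '''{"status": "ok",
             "pending": [],
             "running": [{"id": "422e608f9f28ce", "spider": "myspider"}],
             "finished": []}'''

jobs = json.loads(sample)
for state in ("pending", "running", "finished"):
    print(state, len(jobs[state]))
```

Running this prints the number of jobs in each state (here: 0 pending, 1 running, 0 finished), which is the same information the web console at port 6800 displays.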
