Overview of Scrapy commands

I. Listing all commands

1. Run scrapy -h outside a project

(scrapy_env) frange@ubuntu:~/workspace/spider$ scrapy -h
Scrapy 1.5.1 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

  [ more ]      More commands available when run from project directory

Use "scrapy  -h" to see more info about a command

2. Run scrapy -h inside a project

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy -h
Scrapy 1.5.1 - project: spider_lago

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  check         Check spider contracts
  crawl         Run a spider
  edit          Edit spider
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  list          List available spiders
  parse         Parse URL (using its spider) and print the results
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

Use "scrapy  -h" to see more info about a command


II. Individual commands

1. bench

  Runs a quick crawl benchmark; its main use is gauging how fast the local hardware and Scrapy installation can crawl.

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago$ scrapy bench http://baidu.com
2018-08-14 02:12:01 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: spider_lago)
2018-08-14 02:12:01 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h  27 Mar 2018), cryptography 2.3, Platform Linux-4.15.0-29-generic-x86_64-with-Ubuntu-16.04-xenial
2018-08-14 02:12:02 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'spider_lago.spiders', 'LOG_LEVEL': 'INFO', 'BOT_NAME': 'spider_lago', 'LOGSTATS_INTERVAL': 1, 'SPIDER_MODULES': ['spider_lago.spiders'], 'CLOSESPIDER_TIMEOUT': 10}
2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.closespider.CloseSpider']
2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-08-14 02:12:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-08-14 02:12:03 [scrapy.core.engine] INFO: Spider opened
2018-08-14 02:12:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:04 [scrapy.extensions.logstats] INFO: Crawled 53 pages (at 3180 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:05 [scrapy.extensions.logstats] INFO: Crawled 117 pages (at 3840 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:06 [scrapy.extensions.logstats] INFO: Crawled 173 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:07 [scrapy.extensions.logstats] INFO: Crawled 229 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:08 [scrapy.extensions.logstats] INFO: Crawled 269 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:09 [scrapy.extensions.logstats] INFO: Crawled 325 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:10 [scrapy.extensions.logstats] INFO: Crawled 365 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:11 [scrapy.extensions.logstats] INFO: Crawled 373 pages (at 480 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:12 [scrapy.extensions.logstats] INFO: Crawled 421 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:13 [scrapy.extensions.logstats] INFO: Crawled 461 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-08-14 02:12:13 [scrapy.core.engine] INFO: Closing spider (closespider_timeout)
2018-08-14 02:12:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 199285,
 'downloader/request_count': 477,
 'downloader/request_method_count/GET': 477,
 'downloader/response_bytes': 1332429,
 'downloader/response_count': 477,
 'downloader/response_status_count/200': 477,
 'finish_reason': 'closespider_timeout',
 'finish_time': datetime.datetime(2018, 8, 14, 9, 12, 14, 485240),
 'log_count/INFO': 17,
 'memusage/max': 53321728,
 'memusage/startup': 53321728,
 'request_depth_max': 17,
 'response_received_count': 477,
 'scheduler/dequeued': 477,
 'scheduler/dequeued/memory': 477,
 'scheduler/enqueued': 9541,
 'scheduler/enqueued/memory': 9541,
 'start_time': datetime.datetime(2018, 8, 14, 9, 12, 3, 671156)}
2018-08-14 02:12:14 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)

2. fetch

  Downloads the given URL with the Scrapy downloader and prints the response body to standard output.

(scrapy_env) frange@ubuntu:~/workspace/spider$ scrapy fetch --nolog http://baidu.com

(fetch prints the raw HTML of the page to stdout; the blog engine stripped the tags here, leaving only the visible text: "百度一下,你就知道", "关于百度 About Baidu", "©2017 Baidu 使用百度前必读 意见反馈 京ICP证030173号")
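  fetch also takes a couple of useful options in the Scrapy 1.5-era CLI (see scrapy fetch -h):

scrapy fetch --headers http://baidu.com
scrapy fetch --nolog --no-redirect http://baidu.com

  The first prints the response headers instead of the body; the second prints the response as-is without following HTTP 3xx redirects.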


3. genspider

  Generates a new spider file from a pre-defined template. Note that it creates a spider, not a whole project; it is covered in detail in part III below.

4. runspider

  Runs a self-contained spider file directly, without creating a project.
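  A minimal sketch of such a self-contained spider (the file name, spider name and the yielded field are made up for illustration):

# standalone_spider.py -- runnable without any project around it
import scrapy


class StandaloneSpider(scrapy.Spider):
    name = 'standalone'
    start_urls = ['http://baidu.com']

    def parse(self, response):
        # extract the page title and emit it as a dict item
        yield {'title': response.xpath('//title/text()').extract_first()}

  Run it with: scrapy runspider standalone_spider.py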

5. settings

  Prints the value of a Scrapy setting: the project's settings when run inside a project, the Scrapy defaults otherwise.

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy settings --get ROBOTSTXT_OBEY
False
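  Any setting can also be overridden from the command line with the global -s option, and settings reflects the override, which makes it a quick sanity check (behavior sketched for the 1.5-era CLI):

scrapy settings --get ROBOTSTXT_OBEY -s ROBOTSTXT_OBEY=True
True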

6. shell

  Starts the interactive Scrapy scraping console.

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy shell
2018-08-14 03:01:30 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: spider_lago)
2018-08-14 03:01:30 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h  27 Mar 2018), cryptography 2.3, Platform Linux-4.15.0-29-generic-x86_64-with-Ubuntu-16.04-xenial
2018-08-14 03:01:30 [scrapy.crawler] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'SPIDER_MODULES': ['spider_lago.spiders'], 'NEWSPIDER_MODULE': 'spider_lago.spiders', 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'BOT_NAME': 'spider_lago'}
2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage']
2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-08-14 03:01:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-08-14 03:01:30 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x...>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x...>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
>>> 
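  Once inside, you can fetch a page and inspect the response; a hypothetical session (debug output omitted, exact result depends on the page):

>>> fetch('http://baidu.com')
>>> response.status
200
>>> response.xpath('//title/text()').extract_first()
'百度一下,你就知道'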

7. startproject

  Creates a new Scrapy project.
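  For example, the project used throughout this post would have been created with:

scrapy startproject spider_lago

  which generates a layout like this (as of Scrapy 1.5):

spider_lago/
    scrapy.cfg            # deploy configuration file
    spider_lago/          # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spiders go here
            __init__.py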

8. version

  Prints Scrapy's version.
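  With -v it also prints the versions of Python, lxml, Twisted and the other dependencies (the same block that appears at the top of the bench log above):

scrapy version -v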

9. view

  Downloads a page and opens it in the browser, showing it exactly as Scrapy's downloader received it; this is handy for spotting content injected by JavaScript, which the spider will not see.
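  For example, to view a page the way Scrapy sees it:

scrapy view http://baidu.com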

III. In-project commands

Since Scrapy's global commands can be used both outside and inside a project, the in-project command list also contains the global commands; this part covers the commands that require a project.

1. genspider

  Inside a project directory, generates a Scrapy spider file directly from one of the built-in spider templates.

  List the available templates:

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

  Show a template's content:

(scrapy_env) frange@ubuntu:~/workspace/spider/spider_lago/spider_lago$ scrapy genspider -d csvfeed
# -*- coding: utf-8 -*-
from scrapy.spiders import CSVFeedSpider


class $classname(CSVFeedSpider):
    name = '$name'
    allowed_domains = ['$domain']
    start_urls = ['http://$domain/feed.csv']
    # headers = ['id', 'name', 'description', 'image_link']
    # delimiter = '\t'

    # Do any adaptations you need here
    #def adapt_response(self, response):
    #    return response

    def parse_row(self, response, row):
        i = {}
        #i['url'] = row['url']
        #i['name'] = row['name']
        #i['description'] = row['description']
        return i

  Create a spider:

scrapy genspider -t basic weisuen baidu.com
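  This drops a weisuen.py into the project's spiders/ directory; with the basic template its content is roughly the following (reproduced from the 1.5-era template, so treat it as a sketch):

# -*- coding: utf-8 -*-
import scrapy


class WeisuenSpider(scrapy.Spider):
    name = 'weisuen'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass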

2. check

  Runs contract checks on a spider. Contracts are assertions written in the docstrings of spider callbacks; check fetches each callback's sample URL and verifies them.

scrapy check <spider_name>
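  A sketch of what such a contract looks like, using the built-in @url, @returns and @scrapes contracts (spider name, URL and field are illustrative):

import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'

    def parse(self, response):
        """ This callback is verified by `scrapy check demo`.

        @url http://www.example.com/
        @returns items 1 1
        @returns requests 0 0
        @scrapes title
        """
        # one item carrying a 'title' field satisfies the contracts above
        yield {'title': response.xpath('//title/text()').extract_first()}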

3. crawl

  Starts the named spider from the current project.

scrapy crawl <spider_name> --loglevel=INFO
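  Scraped items can be written straight to a file through the feed exports, the format being inferred from the extension (spider name taken from the genspider example above):

scrapy crawl weisuen -o items.json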

4. list

  Lists the spiders available in the current project.

5. edit

  Opens a spider file for editing, using the editor configured via Scrapy's EDITOR setting or the EDITOR environment variable.

6. parse

  Fetches the given URL and parses it with the spider that handles it; if no spider or callback is specified, a matching spider and its default parse callback are used.
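  A typical invocation, choosing the spider and callback explicitly (spider name again from the genspider example above):

scrapy parse --spider=weisuen -c parse http://baidu.com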


Reposted from: https://www.cnblogs.com/Frange/p/9476029.html
