As the title says, one possible cause is the version of Scrapy you are using:
Under 0.16:
Reference: http://doc.scrapy.org/en/0.16/intro/tutorial.html
scrapy shell http://www.dmoz.org/Computers/Programming/Languages/Python/Books/
[ ... Scrapy log here ... ]

[s] Available Scrapy objects:
[s]   2010-08-19 21:45:59-0300 [default] INFO: Spider closed (finished)
[s]   hxs        <HtmlXPathSelector (http://www.dmoz.org/Computers/Programming/Languages/Python/Books/) xpath=None>
[s]   item       Item()
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   spider     <BaseSpider 'default' at 0x1b6c2d0>
[s]   xxs        <XmlXPathSelector (http://www.dmoz.org/Computers/Programming/Languages/Python/Books/) xpath=None>
[s] Useful shortcuts:
[s]   shelp()           Print this help
[s]   fetch(req_or_url) Fetch a new request or URL and update shell objects
[s]   view(response)    View response in a browser

In [1]:
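For comparison, here is a minimal sketch of how the shell objects listed above are typically queried in 0.16; the XPath expression is only an illustration and is not part of the original output:

In [1]: hxs.select('//title/text()').extract()   # hxs is the HtmlXPathSelector shown in the shell banner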
Whereas under 0.24:
Reference: http://doc.scrapy.org/en/0.24/intro/tutorial.html
# Remember: always enclose the URL in double quotes, otherwise the argument may not work
scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
[ ... Scrapy log here ... ]

2014-01-23 17:11:42-0400 [default] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x3636b50>
[s]   item       {}
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   settings   <scrapy.settings.Settings object at 0x3fadc50>
[s]   spider     <Spider 'default' at 0x3cebf50>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]:
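Note that the 0.24 shell no longer exposes an hxs object. A rough equivalent of the 0.16 query above is to build a Selector from the response explicitly (the XPath is again only illustrative):

In [1]: from scrapy.selector import Selector
In [2]: Selector(response).xpath('//title/text()').extract()   # replaces hxs.select(...) from the 0.16 shell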