First we need to install the Scrapy framework. If you have not installed it yet, see the earlier post on installing the Scrapy web-crawling framework on Ubuntu.
Creating a project
1. Go to the directory where you want to create the project and run: scrapy startproject tutorial
This creates a new Scrapy project named tutorial.
2. Take a look at the project's directory tree:
    tutorial/
        scrapy.cfg
        tutorial/
            __init__.py
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                ...
scrapy.cfg is the project's top-level configuration (deploy) file
tutorial/ is the project's Python package; the code we write lives here
items.py is where the project's data fields (Items) are defined
pipelines.py holds the project's item pipelines
settings.py holds the project's settings (a sketch of its typical generated contents follows this list)
spiders/ is the directory where the project's spiders live
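For reference, a sketch of what the generated tutorial/settings.py typically contains (the exact template varies between Scrapy versions):

    # tutorial/settings.py -- created by "scrapy startproject tutorial"
    BOT_NAME = 'tutorial'

    # tell Scrapy where to look for spiders and where to put new ones
    SPIDER_MODULES = ['tutorial.spiders']
    NEWSPIDER_MODULE = 'tutorial.spiders'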
Defining our Item
1. The Item defines the data fields we want to extract from the pages we crawl.
2. Fields are declared by defining an Item subclass in items.py.
3. Here we define three fields in items.py: title, link, and desc.
    from scrapy.item import Item, Field

    class DmozItem(Item):
        title = Field()
        link = Field()
        desc = Field()
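Items behave much like Python dictionaries. A quick sketch of how DmozItem can be used (the values below are made up purely for illustration):

    from tutorial.items import DmozItem

    item = DmozItem(title='Example site')   # fields can be set at construction time
    item['link'] = 'http://example.com/'    # or by key, like a dict
    print item['title'], item['link']
    # assigning to a field that was not declared, e.g. item['author'], raises a KeyError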
Our first Spider
1. A spider crawls data from a user-defined set of domains.
2. To create a spider, we add a file under the spiders/ directory.
3. We create our first spider and save it as dmoz_spider.py:
    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)
The parse() method receives the Response object for each crawled page and is responsible for parsing the response data.
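Besides writing the page to a file, parse() may also return (or yield) new Request objects, which Scrapy will then crawl as well. A minimal sketch, assuming scrapy.http.Request; the parse_category callback and the follow-up URL are hypothetical:

    from scrapy.http import Request
    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

        def parse(self, response):
            # queue a follow-up page; the URL here is only illustrative
            yield Request("http://www.dmoz.org/Computers/Programming/Languages/Python/",
                          callback=self.parse_category)

        def parse_category(self, response):
            # hypothetical second-level callback
            print response.url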
Crawling
1. Go back to the project's top-level directory and run: scrapy crawl dmoz
2. You should see output like the following:
    2014-01-23 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
    2014-01-23 18:13:07-0400 [scrapy] INFO: Optional features available: ...
    2014-01-23 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
    2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
    2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
    2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
    2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
    2014-01-23 18:13:07-0400 [dmoz] INFO: Spider opened
    2014-01-23 18:13:08-0400 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
    2014-01-23 18:13:09-0400 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
    2014-01-23 18:13:09-0400 [dmoz] INFO: Closing spider (finished)
3. After the run finishes, two new files, Books and Resources, appear in the tutorial project directory: parse() writes each response body to a file named after the second-to-last segment of its URL.
What just happened under the hood?
For each URL listed in the spider's start_urls, Scrapy creates an HTTP Request and schedules it; each downloaded Response is then handed to the parse() method, which acts as the callback.
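In other words, the default behaviour is roughly equivalent to overriding start_requests() yourself. A sketch, assuming scrapy.http.Request:

    from scrapy.http import Request
    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        name = "dmoz"
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def start_requests(self):
            # roughly what Scrapy does by default: one Request per start URL,
            # with parse() registered as the callback
            for url in self.start_urls:
                yield Request(url, callback=self.parse)

        def parse(self, response):
            pass  # handle the response here, as in the spider above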
Extracting Items
1. There are several ways to extract data from a web page, for example XPath and CSS selectors.
2. A few XPath examples and what they select:
/html/head/title: selects the <title> element inside <head>
/html/head/title/text(): selects the text inside that <title> element
//td: selects all <td> elements
//div[@class="mine"]: selects all <div> elements whose class attribute is "mine"
3. Selectors expose four basic methods:
xpath(): returns a list of selectors, one per node matched by the given XPath expression
css(): returns a list of selectors, one per node matched by the given CSS expression (a short sketch follows the shell session below)
extract(): returns the selected data as unicode strings
re(): returns a list of unicode strings extracted by applying the given regular expression
4. To see the selectors in action, we use the Scrapy shell.
Go back to the project's top-level directory and run: scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
5. You should see something like this:
    [ ... Scrapy log here ... ]
    2014-01-23 17:11:42-0400 [default] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
    [s] Available Scrapy objects:
    [s]   crawler    <scrapy.crawler.Crawler object at 0x3636b50>
    [s]   item       {}
    [s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
    [s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
    [s]   sel        <Selector xpath=None data=u'<html>\r\n<head>\r\n<meta http-equiv="Conten'>
    [s]   settings   <CrawlerSettings module=None>
    [s]   spider     <Spider 'default' at 0x3cebf50>
    [s] Useful shortcuts:
    [s]   shelp()           Shell help (print this help)
    [s]   fetch(req_or_url) Fetch request (or URL) and update local objects
    [s]   view(response)    View response in a browser

    In [1]:
    In [1]: sel.xpath('//title')
    Out[1]: [<Selector xpath='//title' data=u'<title>Open Directory - Computers: Progr'>]

    In [2]: sel.xpath('//title').extract()
    Out[2]: [u'<title>Open Directory - Computers: Programming: Languages: Python: Books</title>']

    In [3]: sel.xpath('//title/text()')
    Out[3]: [<Selector xpath='//title/text()' data=u'Open Directory - Computers: Programming:'>]

    In [4]: sel.xpath('//title/text()').extract()
    Out[4]: [u'Open Directory - Computers: Programming: Languages: Python: Books']

    In [5]: sel.xpath('//title/text()').re('(\w+):')
    Out[5]: [u'Computers', u'Programming', u'Languages', u'Python']
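The session above exercises xpath(), extract() and re(); css() works in the same way. A small sketch, assuming a Scrapy release recent enough to support the ::text pseudo-element:

    # the CSS equivalent of the //title/text() query above
    sel.css('title::text').extract()
    # [u'Open Directory - Computers: Programming: Languages: Python: Books']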
Using the selectors inside the spider, dmoz_spider.py now looks like this (for the moment it just prints what it finds):

    from scrapy.spider import BaseSpider
    from scrapy.selector import Selector

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            sel = Selector(response)
            sites = sel.xpath('//ul/li')
            for site in sites:
                title = site.xpath('a/text()').extract()
                link = site.xpath('a/@href').extract()
                desc = site.xpath('text()').extract()
                print title, link, desc
Finally, we use the DmozItem defined earlier to hold the extracted data:

    from scrapy.spider import BaseSpider
    from scrapy.selector import Selector
    from tutorial.items import DmozItem

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            sel = Selector(response)
            sites = sel.xpath('//ul/li')
            items = []
            for site in sites:
                item = DmozItem()
                item['title'] = site.xpath('a/text()').extract()
                item['link'] = site.xpath('a/@href').extract()
                item['desc'] = site.xpath('text()').extract()
                items.append(item)
            return items
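Instead of collecting the items in a list, parse() can just as well be written as a generator that yields each item as it is built. A sketch of a drop-in replacement for the parse() method above:

    def parse(self, response):
        sel = Selector(response)
        for site in sel.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = site.xpath('a/text()').extract()
            item['link'] = site.xpath('a/@href').extract()
            item['desc'] = site.xpath('text()').extract()
            yield item   # Scrapy accepts an iterable of items/requests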
Storing the scraped data
To store all the scraped items, go back to the project's top-level directory and run:

    scrapy crawl dmoz -o items.json -t json

This command generates an items.json file containing every item that was scraped.
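The exported file is a plain JSON list, so the scraped data can be loaded back with the standard library. A small sketch:

    import json

    with open('items.json') as f:
        items = json.load(f)

    print len(items)          # number of scraped items
    print items[0]['title']   # keys match the fields declared in DmozItem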