Python-based crawlers

Given the importance of vertical crawling and site search, it is time to rethink the technical architecture and implementation plan for the project's crawler. For vertical crawling I have previously used heritrix, htmlparser, and nutch, each with its own strengths and weaknesses. In particular, for targeted crawling of vertical sites none of them offered a really good solution beyond targeted parsing of specified pages, so I mainly relied on the htmlparser approach.

    A few criteria for evaluating a vertical crawler:

  • High performance: good support for multi-threaded concurrency; support for asynchronous, non-blocking sockets; support for distributed crawling; an efficient crawl-scheduling algorithm; efficient memory usage that does not constantly run into out-of-memory problems
  • Elegant architecture: a component-based design that is easy to extend; an architecture crafted well enough to be worth spending time studying for its design ideas
  • Easy to extend: integrates well with existing frameworks; since this is a vertical crawler, crawl rules and logic must be customized per site and must be easy to test without constant recompilation, so support for a scripting language such as Python is preferred
  • Comprehensive features: built-in support for AJAX/JavaScript crawling, login authentication, crawl-depth settings, heritrix-style crawl filters, page compression handling, etc.
  • Management features: a crawler management interface with real-time monitoring and control of crawls
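As a concrete illustration of the first criterion, concurrent fetching in Python can be as simple as a thread pool. This is a minimal sketch, not a real crawler: the `fetch` stub stands in for an actual HTTP request so the example runs offline.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Placeholder for a real HTTP fetch (e.g. via urllib);
    returns a fake page body so the sketch runs without a network."""
    return "<html>%s</html>" % url

def crawl(urls, workers=4):
    """Fetch a batch of URLs concurrently with a thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))

pages = crawl(["http://example.com/a", "http://example.com/b"])
```

For truly asynchronous, non-blocking I/O at scale, an event-driven framework (such as Twisted, which Scrapy is built on) is the more appropriate tool than threads.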

   Tired of Java-based crawler solutions, and given Python's ease of use for network programming, I decided to evaluate the feasibility of building the next version of the crawler in Python. It is also a good excuse to pick up my long-unused Python again.

    A survey of current Python-based crawlers turns up the following existing projects worth consulting:

    Mechanize: http://wwwsearch.sourceforge.net/mechanize/

    Twill: http://twill.idyll.org/

    Scrapy: http://scrapy.org

    HarvestMan: http://www.harvestmanontheweb.com/

    Ruya: http://ruya.sourceforge.net/

    psilib: http://pypi.python.org/pypi/spider.py/0.5

    BeautifulSoup + urllib2: http://www.crummy.com/software/BeautifulSoup/

    After comparing them, I chose Scrapy as the main candidate to study. Although it is less mature than Mechanize or HarvestMan, its architecture looks very promising, and being built on the high-performance Twisted framework is particularly attractive.

    A look at Scrapy's architecture:

    [Scrapy architecture diagram omitted]

Components

  • Scrapy Engine

    The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section below for more details.

  • Scheduler

    The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later when the engine asks for them.

  • Downloader

    The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.

  • Spiders

    Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them, or additional URLs (requests) to follow. Each spider is able to handle a specific domain (or group of domains). For more information see Spiders.

  • Item Pipeline

    The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.

  • Downloader middlewares

    Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the downloader, and responses that pass from Downloader to the Engine. They provide a convenient mechanism for extending Scrapy functionality by plugging custom code. For more information see Downloader Middleware.

  • Spider middlewares

    Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests). They provide a convenient mechanism for extending Scrapy functionality by plugging custom code. For more information see Spider Middleware.

  • Scheduler middlewares

    Scheduler middlewares are specific hooks that sit between the Engine and the Scheduler and process requests when they pass from the Engine to the Scheduler and vice-versa. They provide a convenient mechanism for extending Scrapy functionality by plugging custom code.
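The component descriptions above can be condensed into a toy data-flow sketch in plain Python. This has no Scrapy dependency at all; every class and function name here is illustrative only, not Scrapy's actual API, and the downloader/spider/pipeline bodies are stubs.

```python
from collections import deque

class Scheduler:
    """Toy Scheduler: enqueues requests from the engine and hands them
    back on demand, deduplicating URLs along the way."""
    def __init__(self):
        self.queue, self.seen = deque(), set()
    def enqueue(self, url):
        if url not in self.seen:
            self.seen.add(url)
            self.queue.append(url)
    def next(self):
        return self.queue.popleft() if self.queue else None

def download(url):
    """Stand-in for the Downloader: returns a fake response body."""
    return "page at %s" % url

def spider_parse(response):
    """Stand-in for a Spider: yields scraped items; a real spider
    would also yield follow-up URLs extracted from the page."""
    yield {"item": response}

def pipeline(item):
    """Stand-in for the Item Pipeline: cleanse, validate, persist."""
    item["clean"] = True
    return item

def engine(start_urls):
    """The engine's loop: scheduler -> downloader -> spider -> pipeline."""
    sched, items = Scheduler(), []
    for u in start_urls:
        sched.enqueue(u)
    while True:
        url = sched.next()
        if url is None:
            break
        for out in spider_parse(download(url)):
            if isinstance(out, dict):
                items.append(pipeline(out))   # a scraped item
            else:
                sched.enqueue(out)            # a follow-up request
    return items

results = engine(["http://example.com/"])
```

In real Scrapy the engine drives all of this asynchronously on top of Twisted, and the middleware layers hook into each of the arrows in this loop; the sketch only shows the direction of the data flow.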

References:

    http://doc.scrapy.org

    http://stackoverflow.com/questions/419235/anyone-know-of-a-good-python-based-web-crawler-that-i-could-use

    http://en.wikipedia.org/wiki/Web_crawler#Open-source_crawlers

    http://java-source.net/open-source/crawlers

    http://chuanliang2007.spaces.live.com/blog/cns!E5B7AB2851A4C9D2!795.entry
