An Overview of Open-Source Web Spiders

A spider is an essential module of a search engine; the quality of the data it collects directly affects the search engine's evaluation metrics.

The first spider program, the World Wide Web Wanderer, was written by Matthew K. Gray of MIT; its purpose was to count the number of hosts on the Internet.

Spider definitions (there are two definitions of a spider, a broad one and a narrow one):

  • Narrow: a software program that traverses the information space of the World Wide Web over the standard HTTP protocol by following hyperlinks and retrieving web documents (see the sketch after this list).
  • Broad: any software that can retrieve web documents via the HTTP protocol can be called a spider.
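
To make the narrow definition concrete, here is a minimal breadth-first crawler sketch in Java (the language most of the projects below use). It relies only on the standard java.net.http client shipped with Java 11+; the seed URL, the ten-page limit, and the naive href regex are illustrative assumptions, not code taken from any of the listed projects.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal breadth-first spider: fetch a page over HTTP, extract hyperlinks
// with a naive regex, and enqueue unseen absolute URLs until a page limit.
public class MiniSpider {
    private static final Pattern HREF =
            Pattern.compile("href=[\"'](https?://[^\"'#]+)[\"']", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        String seed = args.length > 0 ? args[0] : "https://example.com/"; // illustrative seed
        int maxPages = 10;                         // keep the sketch small and polite

        HttpClient client = HttpClient.newHttpClient();
        Queue<String> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        frontier.add(seed);

        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;       // already fetched, skip it
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode() + "  " + url);

                // Follow every absolute link found in the HTML body.
                Matcher m = HREF.matcher(response.body());
                while (m.find()) {
                    frontier.add(m.group(1));
                }
            } catch (Exception e) {
                System.err.println("skipping " + url + ": " + e.getMessage());
            }
        }
    }
}
```

Real spiders such as those listed below layer politeness delays, robots.txt checks, proper HTML parsing, and multithreading on top of this basic fetch-extract-enqueue loop.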

The robots exclusion protocol (see "Protocol Gives Sites Way To Keep Out The 'Bots", Jeremy Carl, Web Week, Volume 1, Issue 7, November 1995) is closely related to spiders; see robotstxt.org for details.
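
As a rough illustration of that protocol, the sketch below (an assumption-laden example, not code from any of the projects listed here) downloads /robots.txt and tests a placeholder path against the Disallow rules in the generic "User-agent: *" section. A production crawler would use a complete parser that also honours per-agent sections, Allow rules, and Crawl-delay.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

// Download /robots.txt and test a path against the Disallow rules that apply
// to the generic "User-agent: *" section.
public class RobotsCheck {
    public static void main(String[] args) throws Exception {
        String site = "https://example.com";      // placeholder site
        String path = "/private/page.html";       // path we would like to crawl

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(site + "/robots.txt")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        List<String> disallowed = new ArrayList<>();
        boolean wildcardSection = false;
        for (String line : response.body().split("\r?\n")) {
            String l = line.trim();
            if (l.regionMatches(true, 0, "User-agent:", 0, 11)) {
                wildcardSection = l.substring(11).trim().equals("*");
            } else if (wildcardSection && l.regionMatches(true, 0, "Disallow:", 0, 9)) {
                String prefix = l.substring(9).trim();
                if (!prefix.isEmpty()) disallowed.add(prefix);   // empty Disallow means "allow all"
            }
        }

        boolean allowed = disallowed.stream().noneMatch(path::startsWith);
        System.out.println(path + " allowed for a generic robot? " + allowed);
    }
}
```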

Heritrix

Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.

Heritrix (sometimes spelled heretrix, or misspelled or missaid as heratrix/heritix/heretix/heratix) is an archaic word for heiress (woman who inherits). Since our crawler seeks to collect and preserve the digital artifacts of our culture for the benefit of future researchers and generations, this name seemed apt.

Language: Java (download link)

WebLech URL Spider 

WebLech is a fully featured web site download/mirror tool in Java, which supports many features required to download websites and emulate standard web-browser behaviour as much as possible. WebLech is multithreaded and comes with a GUI console.

Language: Java (download link)

JSpider

A Java implementation of a flexible and extensible web spider engine. Optional modules allow functionality to be added (searching for dead links, testing the performance and scalability of a site, creating a sitemap, etc.).

Language: Java (download link)

WebSPHINX

WebSPHINX is a web crawler (robot, spider) Java class library, originally developed by Robert Miller of Carnegie Mellon University. Multithreaded, with tolerant HTML parsing, URL filtering and page classification, pattern matching, mirroring, and more.

Language: Java (download link)

PySolitaire

PySolitaire is a fork of PySol Solitaire that runs correctly on Windows and has a nice clean installer. PySolitaire (Python Solitaire) is a collection of more than 300 solitaire and Mahjongg games like Klondike and Spider.

Language: Python (download link)

The Spider Web Network Xoops Mod Team     

The Spider Web Network Xoops Module Team provides modules for the Xoops community written in PHP. We develop mods and/or take existing PHP scripts and port them into the Xoops format. High-quality mods are our goal.

Language: PHP (download link)

Fetchgals

A multi-threaded web spider that finds free porn thumbnail galleries by visiting a list of known TGPs (Thumbnail Gallery Posts). It optionally downloads the located pictures and movies. TGP list is included. Public domain perl script running on Linux.

Language: Perl (download link)

Where Spider

The purpose of the Where Spider software is to provide a database system for storing URL addresses. The software is used for both ripping links and browsing them offline. The software uses a pure XML database which is easy to export and import.

Language: XML (download link)

Sperowider

Sperowider Website Archiving Suite is a set of Java applications, the primary purpose of which is to spider dynamic websites, and to create static distributable archives with a full text search index usable by an associated Java applet.

Language: Java (download link)

SpiderPy

SpiderPy is a web crawling spider program written in Python that allows users to collect files and search web sites through a configurable interface.

Language: Python (download link)

Spidered Data Retrieval

Spider is a complete standalone Java application designed to easily integrate varied data sources.

  • XML-driven framework
  • Scheduled pulling
  • Highly extensible
  • Provides hooks for custom post-processing and configuration

Language: Java (download link)

WebLoupe

WebLoupe is a java-based tool for analysis, interactive visualization (sitemap), and exploration of the information architecture and specific properties of local or publicly accessible websites. Based on web spider (or web crawler) technology.

Language: Java (download link)

ASpider

A robust, featureful, multi-threaded CLI web spider written in Java using Apache Commons HttpClient v3.0. ASpider downloads any files matching your given MIME types from a website. By default it also tries to match e-mail addresses with regular expressions, logging all results with log4j (a minimal sketch of these two behaviours follows this entry).

Language: Java (download link)
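
The two behaviours that the ASpider blurb mentions, MIME-type filtering and regex e-mail extraction, can be illustrated with the short sketch below. It is only an approximation using the standard java.net.http client rather than Apache Commons HttpClient, and the URL, MIME list, and e-mail pattern are placeholder assumptions, not ASpider's actual configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Keep only responses whose Content-Type is in a configured MIME list, then
// scan the body for e-mail addresses with a regular expression.
public class MimeAndEmailFilter {
    private static final Set<String> WANTED_TYPES = Set.of("text/html", "text/plain");
    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    public static void main(String[] args) throws Exception {
        String url = "https://example.com/contact.html";   // placeholder page
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The header may carry a charset suffix, e.g. "text/html; charset=utf-8".
        String mime = response.headers().firstValue("Content-Type")
                .orElse("").split(";")[0].trim().toLowerCase();
        if (!WANTED_TYPES.contains(mime)) {
            System.out.println("skipping " + url + " (" + mime + ")");
            return;
        }

        Matcher m = EMAIL.matcher(response.body());
        while (m.find()) {
            System.out.println("found e-mail: " + m.group());
        }
    }
}
```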

larbin

Larbin is an HTTP Web crawler with an easy interface that runs under Linux. It can fetch more than 5 million pages a day on a standard PC (with a good network).

Language: C++ (download link)
