Analyzing Nutch's Workflow from Its Output Log

http://blog.csdn.net/amuseme_lu/article/details/5993916

I. Introduction

1. Nutch is a web crawler built on Hadoop and Lucene, used to collect web page information.

2. Features: a plugin mechanism for extensibility; multi-protocol, multi-threaded, distributed fetching; plugin-based content parsing and analysis; fine-grained control over crawl preprocessing; a scalable data processing model (MapReduce); a full-text indexer and search engine (Lucene or Solr) with support for distributed queries; and a rich API with integrated configuration.

 

II. Required Configuration

1. Environment: Nutch 1.2, Ubuntu 10.01 with Xfce.

2. You can download Nutch from http://www.apache.org/dyn/closer.cgi/nutch/, or check out the latest version from its Subversion repository.

3. After unpacking, the root directory contains a bin/nutch script that launches Nutch; running it also prints the help information.

4. Some configuration is needed before crawling: write a urls seed file, from which the CrawlDB will be built; edit conf/crawl-urlfilter.txt, replacing the MY.DOMAIN.NAME placeholder with the domain you want to crawl; modify conf/nutch-site.xml and add <property><name>http.agent.name</name><value>test-nutch</value></property>; and set the JAVA_HOME environment variable (a Java installation is usually found under /usr/lib/jvm). These steps are sketched as commands below.
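A minimal sketch of these preparation steps as shell commands, assuming a seed of http://www.baidu.com/, the domain baidu.com, and a JDK under /usr/lib/jvm (all three are illustrative; substitute your own values):

# create the seed list: one URL per line; the inject step reads it into the CrawlDB
mkdir urls
echo 'http://www.baidu.com/' > urls/seed.txt

# restrict the crawl to one domain by replacing the MY.DOMAIN.NAME placeholder
sed -i 's/MY.DOMAIN.NAME/baidu.com/' conf/crawl-urlfilter.txt

# add the http.agent.name property shown above to conf/nutch-site.xml by hand

# point Nutch at a JDK (the exact path depends on your installation)
export JAVA_HOME=/usr/lib/jvm/java-6-sun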

 

III. Output Log Analysis

1. I used the following command, where -dir specifies the crawl output directory and -depth the crawl depth; the output log is shown below:

 

lemo@lemo-laptop:~/Workspace/java/Apache/Nutch/nutch-1.2$ bin/nutch crawl urls -dir crawl.test -depth 1 

crawl started in: crawl.test <--- the crawl directory; after the crawl it contains crawldb, index, indexes, linkdb and segments

rootUrlDir = urls <--- the seed urls directory

threads = 10 <--- number of fetch threads

depth = 1 <--- crawl depth

indexer=lucene <--- name of the indexer

<<<<<<<<<<<<<<<<<<< Injector starts: merge the urls directory into the crawl db

Injector: starting at 2010-11-07 19:52:45

Injector: crawlDb: crawl.test/crawldb <--- output directory of the inject step

Injector: urlDir: urls <--- input directory

Injector: Converting injected urls to crawl db entries. <--- the urls are converted into the CrawlDB entry data model

Injector: Merging injected urls into crawl db. <--- the inject step runs as a MapReduce job

Injector: finished at 2010-11-07 19:52:48, elapsed: 00:00:03

<<<<<<<<<<<<<<<<<<< Generator starts: produce the set of urls due for fetching

Generator: starting at 2010-11-07 19:52:48

Generator: Selecting best-scoring urls due for fetch. <--- scores the urls and selects the topN of them for fetching

Generator: filtering: true <--- url filtering is applied, configured in conf/regex-urlfilter.txt

Generator: normalizing: true <--- whether urls are normalized, configured in conf/regex-normalize.txt

Generator: jobtracker is 'local', generating exactly one partition. <--- running in local mode, so only a single partition is generated rather than a distributed MapReduce job

Generator: Partitioning selected urls for politeness. <--- the selected urls are grouped by host so that no single host is hit by many threads at once

Generator: segment: crawl.test/segments/20101107195251

Generator: finished at 2010-11-07 19:52:52, elapsed: 00:00:03

<<<<<<<<<<<<<<<<<<< Fetcher starts: fetch the urls produced by the Generator

Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.

Fetcher: starting at 2010-11-07 19:52:52

Fetcher: segment: crawl.test/segments/20101107195251 <--- directory where the fetched data is stored

Fetcher: threads: 10 <--- number of fetch threads; Nutch uses a modified work-crew thread model for fetching pages

QueueFeeder finished: total 1 records + hit by time limit :0

fetching http://www.baidu.com/ <--- the url currently being fetched

-finishing thread FetcherThread, activeThreads=8 <--- printed as each fetcher thread finishes

-finishing thread FetcherThread, activeThreads=7

-finishing thread FetcherThread, activeThreads=6

-finishing thread FetcherThread, activeThreads=5

-finishing thread FetcherThread, activeThreads=4

-finishing thread FetcherThread, activeThreads=3

-finishing thread FetcherThread, activeThreads=2

-finishing thread FetcherThread, activeThreads=1

-finishing thread FetcherThread, activeThreads=1

-finishing thread FetcherThread, activeThreads=0

-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0 <--- summary of thread status and the state of the fetch queues

-activeThreads=0

Fetcher: finished at 2010-11-07 19:52:54, elapsed: 00:00:02

<<<<<<<<<<<< CrawlDb update starts: merge the newly discovered outlinks and fetch status back into the original crawldb

CrawlDb update: starting at 2010-11-07 19:52:55

CrawlDb update: db: crawl.test/crawldb

CrawlDb update: segments: [crawl.test/segments/20101107195251]

CrawlDb update: additions allowed: true

CrawlDb update: URL normalizing: true

CrawlDb update: URL filtering: true

CrawlDb update: Merging segment data into db.

CrawlDb update: finished at 2010-11-07 19:52:56, elapsed: 00:00:01

<<<<<<<<<<<<< Update the LinkDb database

LinkDb: starting at 2010-11-07 19:52:56

LinkDb: linkdb: crawl.test/linkdb

LinkDb: URL normalize: true

LinkDb: URL filter: true

LinkDb: adding segment: file:/home/lemo/Workspace/java/Apache/Nutch/nutch-1.2/crawl.test/segments/20101107195251

LinkDb: finished at 2010-11-07 19:52:58, elapsed: 00:00:01

<<<<<<<<<<<<< Indexer starts: index the fetched data

Indexer: starting at 2010-11-07 19:52:58

Indexer: finished at 2010-11-07 19:53:01, elapsed: 00:00:03

<<<<<<<<<<<<< Dedup starts: remove duplicate documents

Dedup: starting at 2010-11-07 19:53:01

Dedup: adding indexes in: crawl.test/indexes

Dedup: finished at 2010-11-07 19:53:06, elapsed: 00:00:04

<<<<<<<<<<<<< Merge the new indexes with the old index

IndexMerger: starting at 2010-11-07 19:53:06

IndexMerger: merging indexes to: crawl.test/index

Adding file:/home/lemo/Workspace/java/Apache/Nutch/nutch-1.2/crawl.test/indexes/part-00000

IndexMerger: finished at 2010-11-07 19:53:06, elapsed: 00:00:00

crawl finished: crawl.test
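Once the crawl finishes, the index can be smoke-tested from the command line. A hedged sketch, assuming searcher.dir in conf/nutch-site.xml has been pointed at crawl.test and using baidu as an example query:

bin/nutch org.apache.nutch.searcher.NutchBean baidu   # prints the number of hits and the top-ranked results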

 

From the log output above we can see Nutch's crawl workflow: Inject -> Generate -> Fetch -> Parse -> Update CrawlDB -> Update LinkDB -> Index (the index shards are then deduplicated and merged),

and the data flow model is: inject: urls -> CrawlDB;

    generate: CrawlDB -> segment (crawl_generate). Which urls are selected for generation? Nutch uses both static and dynamic selection mechanisms. Static selection picks urls that fit the bandwidth budget, whose fetch time has expired, that carry a high priority (e.g. PageRank), or that were newly added. The fetchlist is usually built in topN fashion, choosing the best candidates (researchers have even tried genetic algorithms for this); the priority is determined by a mix of factors and is generally computed through plugins. Dynamic selection automatically tracks how often and when a page changes to decide its generate priority.

    fetch: crawl_generate -> crawl_fetch + content

    crawldb update: parse_data -> crawldb; the extracted outlinks are merged back into the crawldb for use in the next round of crawling

    linkdb: parse_data -> linkdb; the extracted anchor text, inbound links and related data are stored in the linkdb

    indexer: CrawlDB, LinkDB, Segment -> indexes; builds a full-text index over the fetched data

    dedup: indexes -> indexes; duplicate pages are detected using page fingerprints (signatures) and marked as deleted in the index

    IndexMerger: indexes -> index; merges the newly built part indexes into the final index (the equivalent step-by-step commands are sketched below)
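A rough sketch of driving the same pipeline step by step with individual bin/nutch commands instead of the one-shot crawl command (the segment name 20101107195251 stands for whichever segment your own generate step creates, and -topN 1000 is only an example value):

bin/nutch inject crawl.test/crawldb urls                           # urls -> CrawlDB
bin/nutch generate crawl.test/crawldb crawl.test/segments -topN 1000
s=crawl.test/segments/20101107195251                               # the segment generate just created
bin/nutch fetch $s                                                 # crawl_generate -> crawl_fetch + content
bin/nutch parse $s                                                 # only needed when the fetcher did not parse (fetcher.parse=false)
bin/nutch updatedb crawl.test/crawldb $s                           # merge new outlinks back into the CrawlDB
bin/nutch invertlinks crawl.test/linkdb -dir crawl.test/segments   # build the LinkDB (inlinks + anchor text)
bin/nutch index crawl.test/indexes crawl.test/crawldb crawl.test/linkdb $s
bin/nutch dedup crawl.test/indexes                                 # mark duplicate pages as deleted
bin/nutch merge crawl.test/index crawl.test/indexes                # merge the part indexes into one index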

IV. Crawl Data Model

1. CrawlDB: stores information about every known url, including its crawl scheduling data, fetch status, page fingerprint (signature) and metadata.

2. LinkDB: stores, for each url, its inbound anchor links and their anchor text.

3. Segment: the raw page content; the parsed pages; metadata; outlinks; and the extracted text used for indexing (the inspection commands are sketched below).
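These three structures can be inspected with Nutch's read tools; a small sketch (the dump paths and the segment name are examples):

bin/nutch readdb crawl.test/crawldb -stats                  # url counts grouped by fetch status
bin/nutch readdb crawl.test/crawldb -dump dump/crawldb      # dump every CrawlDB entry as text
bin/nutch readlinkdb crawl.test/linkdb -dump dump/linkdb    # inbound links and anchor text per url
bin/nutch readseg -dump crawl.test/segments/20101107195251 dump/segment   # content, parse data and parse text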
