Crawling news with Nutch: how to do scheduled updates

Applies to Nutch 1.7, running in a Linux environment.

Three things need attention when crawling news:

     1. Always refresh the entry (seed) URL list.     2. News pages that have already been fetched should not be fetched again.     3. Control how Nutch re-checks URLs it has already fetched.


Edit nutch-site.xml and add the two properties below. The stock Nutch defaults are 2592000 s (30 days) and 7776000 s (90 days); here the values are deliberately set to several years so that a page, once fetched, never comes due for re-fetching (point 2 above).



<property>
	<name>db.fetch.interval.default</name>
	<value>420480000</value>
	<description>The default number of seconds between re-fetches of a page (30 days).</description>
</property>

<property>
	<name>db.fetch.interval.max</name>
	<value>630720000</value>
	<description>The maximum number of seconds between re-fetches of a page
	(90 days). After this period every page in the db will be re-tried, no
	matter what is its status.</description>
</property>

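A quick sanity check shows how far these values are from the 30/90-day wording in the stock descriptions (plain shell arithmetic, nothing Nutch-specific):

```shell
# Convert the configured re-fetch intervals from seconds to days.
default_days=$((420480000 / 86400))   # integer division
max_days=$((630720000 / 86400))
echo "db.fetch.interval.default: ${default_days} days"   # ~13 years
echo "db.fetch.interval.max:     ${max_days} days"       # 20 years
```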

Add an entry to crontab so that the script below runs on your desired schedule.
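For reference, a sketch of such a cron entry; the schedule, script name (`crawl_news.sh`) and log path are assumptions, not from the original post — adjust them to your setup. This runs the crawl every day at 02:00:

```
# m  h  dom mon dow  command
0  2  *   *   *   /home/nutch/SearchEngine/nutch-Test/crawl_news.sh >> /var/log/nutch_crawl.log 2>&1
```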

The shell script below is the key to the whole control flow:

#!/bin/bash
# bash (not plain sh) is required for the (( )) arithmetic used below

export JAVA_HOME=/usr/java/jdk1.6.0_45
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/jre/lib/dt.jar:$JAVA_HOME/jre/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

#Set Workspace

nutch_work=/home/nutch/SearchEngine/nutch-Test
tmp_dir=$nutch_work/out_tmp
save_dir=$nutch_work/out
solrurl=http://192.168.123.205:8080/solr-4.6.1/core-nutch

# Set Parameter
depth=2
threads=200

#-------Start--------
# Inject the seed URLs into a temporary crawldb; round 0 of the loop fetches
# them there, and updatedb carries the results into the persistent crawldb.
$nutch_work/bin/nutch inject $tmp_dir/crawldb $nutch_work/urls

#----- Loop over the crawl cycle; the number of iterations is set by $depth -----
for((i=0;i<$depth;i++)) 
do
	#-----step 1: generate a fetch list and pick the newest segment;
	#             round 0 reads the temporary crawldb (fresh seeds only),
	#             later rounds read the persistent one-----
	if ((i==0))
	then
		$nutch_work/bin/nutch generate $tmp_dir/crawldb $tmp_dir/segments
		segment=`ls -d  $tmp_dir/segments/* | tail -1`
	else
		$nutch_work/bin/nutch generate $save_dir/crawldb $save_dir/segments
		segment=`ls -d  $save_dir/segments/* | tail -1`
	fi
	#-----step 2-----
	$nutch_work/bin/nutch fetch $segment -threads $threads
	#-----step 3-----
	$nutch_work/bin/nutch parse $segment
	#-----step 4: merge fetch results into the persistent crawldb. From round 1
	#             on, -noAdditions keeps newly discovered links out of the db;
	#             together with the huge fetch intervals above, this means URLs
	#             fetched in an earlier run are never generated again-----
	if ((i==0))
	then
		$nutch_work/bin/nutch updatedb $save_dir/crawldb $segment
	else
		$nutch_work/bin/nutch updatedb $save_dir/crawldb $segment -noAdditions
	fi
	#-----step 5-----
	$nutch_work/bin/nutch invertlinks $save_dir/linkdb $segment
done

#-----step 6: index into Solr; $segment still points at the last segment
#             produced by the loop, so only that segment is indexed-----
$nutch_work/bin/nutch solrindex $solrurl $save_dir/crawldb -linkdb $save_dir/linkdb $segment

#-----step 7: remove duplicate documents from the Solr index-----
$nutch_work/bin/nutch solrdedup $solrurl

#-----step 8: clear the temporary work area so the next run starts clean-----
rm -rf $tmp_dir/*

#-----Finished-----
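Point 1 above (always refresh the entry URL list) has to happen before the inject step. A minimal sketch of such a refresh, where mktemp stands in for the real $nutch_work and the example.com URLs stand in for real news sections (all names here are assumptions, not part of the original script):

```shell
#!/bin/sh
# Rebuild the seed list before each crawl run. mktemp makes this sketch
# self-contained; swap in your workspace path and real news-section URLs.
nutch_work="$(mktemp -d)"
mkdir -p "$nutch_work/urls"
cat > "$nutch_work/urls/seed.txt" <<'EOF'
http://news.example.com/latest
http://news.example.com/tech
EOF
# The inject step would then read this directory:
#   $nutch_work/bin/nutch inject $tmp_dir/crawldb $nutch_work/urls
wc -l < "$nutch_work/urls/seed.txt"
```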

