In the age of big data, search is everywhere. This post shows how to build a search engine quickly from open-source software. It took me several days of trial and error to get everything working, so I am sharing the whole process to save you the same detours.
1. Required software and versions
hadoop 1.2.1
hbase 0.94.27
nutch 2.3
solr 4.9.1
Reference download links (the first is the CentOS 7 install image used as the operating system):
http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1503-01.iso
https://www.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz
http://mirror.bit.edu.cn/apache/hbase/hbase-0.94.27/hbase-0.94.27.tar.gz
http://www.apache.org/dyn/closer.cgi/nutch/2.3/apache-nutch-2.3-src.tar.gz
http://archive.apache.org/dist/lucene/solr/4.9.1/solr-4.9.1.tgz
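For reference, a download-and-unpack sketch; installing under /usr/local matches the paths used in later sections, and the directory renames are an assumption based on those paths:
cd /usr/local
wget https://www.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz
tar zxf hadoop-1.2.1.tar.gz && mv hadoop-1.2.1 hadoop   # becomes /usr/local/hadoop
# repeat for hbase-0.94.27.tar.gz, apache-nutch-2.3-src.tar.gz and solr-4.9.1.tgz,
# renaming the unpacked directories to hbase, nutch and solr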
2. Preparing the system environment
Create a dedicated hadoop user: useradd hadoop
Set its password: passwd hadoop
Grant it administrator rights: vi /etc/sudoers and add the line hadoop ALL=(ALL) ALL
Make sure localhost maps to 127.0.0.1: vi /etc/hosts
Set up passwordless SSH login (see the sketch below), until ssh localhost gets you in without a password
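A minimal sketch of the usual key-based setup, assuming RSA keys and the default paths:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # authorize the key for local logins
chmod 600 ~/.ssh/authorized_keys                # sshd ignores keys with loose permissions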
Make sure java-1.7 is installed and the JAVA_HOME environment variable is configured, for example:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/
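A quick way to confirm the Java setup took effect:
java -version      # should report a 1.7.x runtime
echo $JAVA_HOME    # should print the path exported above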
3. Installing hadoop 1.2.1
Extract the hadoop-1.2.1 tarball to /usr/local/hadoop and hand the directory over to the hadoop user:
sudo chown -R hadoop:hadoop hadoop
Edit ./conf/core-site.xml as follows:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-data/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Edit ./conf/hadoop-env.sh and set JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/
Edit ./conf/hdfs-site.xml as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Edit ./conf/mapred-site.xml as follows:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Put the Hadoop binaries on the PATH (e.g., append to ~/.bashrc):
export HADOOP_PREFIX=/usr/local/hadoop/
export PATH=${HADOOP_PREFIX}/bin/:${PATH}
Format the namenode:
hadoop namenode -format
Start all the Hadoop daemons:
start-all.sh
Check that HDFS is reachable:
hadoop fs -ls /
which returns something like:
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2015-07-31 16:53 /data
Check that the JobTracker is reachable:
hadoop job -list
which returns:
0 jobs currently running
JobId State StartTime UserName Priority SchedulingInfo
JobTracker status page: http://localhost:50030/jobtracker.jsp
DataNode status page: http://localhost:50075
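As an extra sanity check, you can round-trip a file through HDFS (the file names here are only illustrative):
echo hello > /tmp/hello.txt                # create a small local file
hadoop fs -put /tmp/hello.txt /hello.txt   # upload it to HDFS
hadoop fs -cat /hello.txt                  # read it back; should print "hello"
hadoop fs -rm /hello.txt                   # clean up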
4. Installing hbase 0.94.27
Create the directory /data/hbase/zookeeper/
Edit the JAVA_HOME setting in ./conf/hbase-env.sh as follows:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/
and enable HBase's managed ZooKeeper:
export HBASE_MANAGES_ZK=true
Edit ./conf/hbase-site.xml as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/hbase/zookeeper</value>
  </property>
</configuration>
Replace the hadoop jar bundled with HBase by copying in the one from our Hadoop installation:
cp /usr/local/hadoop/hadoop-core-1.2.1.jar ./lib/
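The old hadoop-core jar that ships in HBase's lib/ should not sit next to the copied one; remove it (the bundled version number is an assumption and may differ on your copy):
rm ./lib/hadoop-core-1.0.*.jar   # drop HBase's bundled hadoop-core jar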
Start HBase:
./bin/start-hbase.sh
Open the HBase shell with ./bin/hbase shell and run list to confirm HBase is up:
hbase(main):002:0> list
TABLE
0 row(s) in 0.0170 seconds
hbase(main):003:0>
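A slightly stronger check is to create and drop a throwaway table in the same shell (the table and column-family names are arbitrary):
hbase(main):001:0> create 'test', 'cf'   # table with one column family
hbase(main):002:0> list                  # 'test' should now be listed
hbase(main):003:0> disable 'test'        # a table must be disabled before dropping
hbase(main):004:0> drop 'test'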
5. Installing nutch 2.3
Edit ./conf/gora.properties and add the following line:
gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
Edit ./conf/nutch-site.xml as follows:
<configuration>
  <property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.hbase.store.HBaseStore</value>
    <description>Default class for storing data</description>
  </property>
  <property>
    <name>http.agent.name</name>
    <value>My Nutch Spider</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <value>protocol-httpclient|urlfilter-regex|index-(basic|more)|query-(basic|site|url|lang)|indexer-solr|nutch-extensionpoints|protocol-httpclient|urlfilter-regex|parse-(text|html|msexcel|msword|mspowerpoint|pdf)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)|protocol-http|urlfilter-regex|parse-(html|tika|metatags)|index-(basic|anchor|more|metadata)</value>
  </property>
</configuration>
In ivy/ivy.xml, change the hadoop-core and hadoop-test dependency versions from 1.2.0 to 1.2.1,
and uncomment the gora-hbase dependency:
<dependency org="org.apache.gora" name="gora-hbase" rev="0.5" conf="*->default" />
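Nutch 2.3 ships as source only, so build it before moving on; the build creates the runtime/local directory used in the next steps. From the nutch source root (assuming ant is installed):
ant runtime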
6. Installing solr 4.9.1
Extract the solr-4.9.1 tarball and hand the directory over to the hadoop user:
sudo chown -R hadoop:hadoop solr
Replace Solr's default schema with the one Nutch provides (run from Solr's example directory, where the collection1 core lives):
cp /usr/local/nutch/runtime/local/conf/schema.xml solr/collection1/conf/schema.xml
Start Solr:
java -jar start.jar
The Solr admin UI is then reachable at http://localhost:8983/solr
7. Starting a crawl and testing the search
Create a seed directory ./myUrls/ and put the seed URL in a text file inside it, e.g.:
http://nutch.apache.org/
Then, from nutch's runtime/local directory, crawl the seeds with crawl ID TestCrawl, indexing into the Solr instance at http://localhost:8983/solr, for 2 rounds:
./bin/crawl ./myUrls/ TestCrawl http://localhost:8983/solr 2
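Once the crawl finishes, the indexed pages can be queried straight over HTTP (the query term here is just an example):
curl "http://localhost:8983/solr/collection1/select?q=nutch&wt=json"   # search the collection1 core for "nutch"
The same query can also be run interactively from the Solr admin UI at http://localhost:8983/solr.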