Solr is a full-text search server built on Lucene 3.5, and we can do customized development on top of it.
Before doing any Solr development, we first need to set up a Solr server.
1. First, download Solr 3.5 and Lucene 3.5 from:
Solr: http://apache.etoak.com//lucene/solr/3.5.0
Lucene: http://apache.etoak.com//lucene/java/3.5.0/
2. Copy apache-solr-3.5.0\apache-solr-3.5.0\dist\apache-solr-3.5.0.war into the apache-tomcat-6.0.29\webapps directory and rename it to solr.war.
Start Tomcat and browse to http://localhost:8080/solr/; if no error appears, the deployment is fine. Then stop Tomcat.
3. Copy apache-solr-3.5.0\apache-solr-3.5.0\example\multicore into apache-tomcat\webapps\solr\conf.
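For reference, the example multicore/solr.xml registers the two demo cores roughly like this (your copy may contain extra comments):
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>
These two cores are what produce the Admin core0 / Admin core1 links you will see in step 6.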
4. Configure the context for the deployed war: create a file named solr.xml under apache-tomcat\conf\Catalina\localhost with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<Context docBase="${catalina.home}/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="${catalina.home}/webapps/solr/conf/multicore" override="true" />
</Context>
5. Alternatively, this setting can be hard-coded in Tomcat's conf\web.xml by adding the following entry:
<!-- solr -->
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-value>${catalina.home}/webapps/solr/conf/multicore</env-entry-value>
  <env-entry-type>java.lang.String</env-entry-type>
</env-entry>
6. Start Tomcat and browse to http://localhost:8080/solr. If the two links Admin core0 and Admin core1 appear, the setup succeeded.
7. Switch to the single-core Solr home that a default Solr installation uses.
In the Tomcat conf\web.xml configuration above, change <env-entry-value>${catalina.home}/webapps/solr/conf/multicore</env-entry-value>
to <env-entry-value>${catalina.home}/webapps/solr/conf/solr</env-entry-value>,
then copy the solr folder from apache-solr-3.5.0\apache-solr-3.5.0\example\ into /webapps/solr/conf/ (so the Solr home becomes /webapps/solr/conf/solr).
8. Edit solrconfig.xml. Find:
<!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not
loaded (causing an error if contrib/velocity has not been built fully) -->
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" enable="${solr.velocity.enabled:true}"/>
As the comment says, this is a response writer used by Solr's test cases; it causes an error because contrib/velocity has not been built fully. Since we have not built it, comment this line out first, otherwise Solr will not start.
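After commenting it out, the declaration simply sits inside an XML comment:
<!--
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" enable="${solr.velocity.enabled:true}"/>
-->
Changing the default in the enable attribute to ${solr.velocity.enabled:false} would have the same effect, but commenting the line out is the simplest fix.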
Also pay attention to the <lib> directives below:
<lib dir="../../contrib/extraction/lib" />
<lib dir="../../contrib/clustering/lib/" />
<lib dir="../../contrib/velocity/lib" />
<!-- When a regex is specified in addition to a directory, only the
files in that directory which completely match the regex
(anchored on both ends) will be included.
-->
<lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar" />
<lib dir="../../dist/" regex="apache-solr-clustering-\d.*\.jar" />
<lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
<lib dir="../../dist/" regex="apache-solr-langid-\d.*\.jar" />
<lib dir="../../dist/" regex="apache-solr-velocity-\d.*\.jar" />
<!-- If a dir option (with or without a regex) is used and nothing
is found that matches, it will be ignored
-->
<lib dir="../../contrib/clustering/lib/" />
Make sure these lib load paths resolve correctly: take the jars from these directories in the unpacked distribution, place them under a directory inside the deployed solr webapp, and point the <lib> entries at that location. The paths I configured are:
<lib dir="solr/WEB-INF/lib/contrib/extraction/lib" />
<lib dir="solr/WEB-INF/lib/contrib/clustering/lib/" />
<lib dir="solr/WEB-INF/lib/contrib/velocity/lib" />
<!-- When a regex is specified in addition to a directory, only the
files in that directory which completely match the regex
(anchored on both ends) will be included.
-->
<lib dir="solr/WEB-INF/lib/dist/" regex="apache-solr-cell-\d.*\.jar" />
<lib dir="solr/WEB-INF/lib/dist/" regex="apache-solr-clustering-\d.*\.jar" />
<lib dir="solr/WEB-INF/lib/dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
<lib dir="solr/WEB-INF/lib/dist/" regex="apache-solr-langid-\d.*\.jar" />
<lib dir="solr/WEB-INF/lib/dist/" regex="apache-solr-velocity-\d.*\.jar" />
<!-- If a dir option (with or without a regex) is used and nothing
is found that matches, it will be ignored
-->
<lib dir="solr/WEB-INF/lib/contrib/clustering/lib/" />
After these changes, Tomcat starts without problems.
9. Configure the IK Analyzer; I used IKAnalyzer 3.2.8.
Copy the IKAnalyzer jar into the lib directory of the deployed webapp (solr/WEB-INF/lib) and add the following to the schema.xml under the Solr home:
<!-- IKAnalyzer3.2.8 Chinese word segmentation -->
<fieldType name="text" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="false"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
If we use the definition above, we have effectively introduced a new fieldType, so in the <fields> section further down we would have to change every field that uses text_general to text.
To avoid that hassle, we can instead modify the existing text_general type:
find the fieldType named "text_general" and change its <tokenizer class="solr.StandardTokenizerFactory"/> to
<tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="true"/>
so that nothing else in the schema needs to change.
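Assuming the stock example schema, the modified text_general type ends up looking roughly like this; only the tokenizer lines change (here I mirror the index/query isMaxWordLength split used in the text definition above) and the existing filters stay untouched:
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="false"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>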
10. Test the word segmentation at http://localhost:8080/solr/admin/analysis.jsp.
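Beyond analysis.jsp, you can also verify end to end by indexing a small test document and searching for part of the Chinese text. This is only a sketch assuming the stock example schema (with its id and text fields); the update message below would be posted to http://localhost:8080/solr/update, for example with curl or the distribution's example/exampledocs/post.jar:
<add>
  <doc>
    <field name="id">ik-test-1</field>
    <field name="text">Solr是一个基于Lucene的全文检索服务器</field>
  </doc>
</add>
After posting a <commit/> to the same URL, a query such as text:检索 against http://localhost:8080/solr/select should return the document if the IK tokenizer is in effect.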