I. Install JDK 6 and Tomcat 5.5.
II. Unpack solr1.3 and create the following directories:
/usr/local/solr/solrApps: holds solr.war
/usr/local/solr/multicore: holds the multicore Solr configuration files
Copy dist/apache-solr-1.3.0.war from the unpacked solr1.3 distribution into solrApps (as solr.war, which the Context below points to), and copy the contents of example/multicore into multicore.
III. Configure as follows:
1. Add a file named solr.xml under tomcat/conf/Catalina/localhost with the following content:
<?xml version="1.0" encoding="UTF-8" ?>
<Context docBase="/usr/local/solr/solrApps/solr.war" debug="0" crossContext="true" >
<Environment name="solr/home" type="java.lang.String" value="/usr/local/solr/multicore" override="true" />
</Context>
2. Edit /usr/local/solr/multicore/solr.xml to set the multicore parameters:
<!-- sharedLib adds a shared lib directory (here multicore/paodingLib) that will hold the paoding Chinese tokenizer jars -->
<solr persistent="false" sharedLib="paodingLib">
<!--
adminPath: RequestHandler path to manage cores.
If 'null' (or absent), cores will not be manageable via REST
-->
<cores adminPath="/admin/cores">
<core name="core0" instanceDir="core0" />
<core name="core1" instanceDir="core1" />
</cores>
</solr>
3. Configure multicore/core0/conf/solrconfig.xml with the following content:
<config>
<updateHandler class="solr.DirectUpdateHandler2" />
<!-- dataDir points each core at its own index directory; an index subdirectory is created there automatically to hold the index files -->
<dataDir>/usr/local/solr/multicore/sodao</dataDir>
<requestDispatcher handleSelect="true" >
<requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="2048" />
</requestDispatcher>
<requestHandler name="standard" class="solr.StandardRequestHandler" default="true" />
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler" />
<requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" />
<!-- config for the admin interface -->
<admin>
<defaultQuery>solr</defaultQuery>
</admin>
</config>
The schema.xml in the same conf directory defines how the index fields are built.
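That schema.xml is not reproduced here; a minimal sketch could look like the following (the field names id and title and the type text_ws are illustrative assumptions, not taken from the original setup):

<schema name="core0" version="1.1">
  <types>
    <!-- a raw string type and a simple whitespace-tokenized text type -->
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="text_ws" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>
  </types>
  <fields>
    <!-- illustrative fields; adjust to the documents being indexed -->
    <field name="id" type="string" indexed="true" stored="true" required="true"/>
    <field name="title" type="text_ws" indexed="true" stored="true"/>
  </fields>
  <uniqueKey>id</uniqueKey>
  <defaultSearchField>title</defaultSearchField>
</schema>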
IV. Start Tomcat and open http://localhost:8080/solr in a browser to verify the installation.
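A further check is to POST an XML document to a core's /update handler (for example to http://localhost:8080/solr/core0/update with curl or a similar tool); the field names below assume the sketch schema above:

<add>
  <doc>
    <field name="id">1</field>
    <field name="title">hello solr multicore</field>
  </doc>
</add>

After POSTing a separate <commit/>, the document should come back from http://localhost:8080/solr/core0/select?q=title:solr.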
V. Configure the paoding Chinese tokenizer
1. Download the paoding tokenizer from the web, copy its jars into the paodingLib directory under /usr/local/solr/multicore (the sharedLib configured above), and write the following class:
package net.qhsoft.analyzer;

import java.io.Reader;
import java.util.Map;

import net.paoding.analysis.analyzer.PaodingTokenizer;
import net.paoding.analysis.analyzer.TokenCollector;
import net.paoding.analysis.analyzer.impl.MaxWordLengthTokenCollector;
import net.paoding.analysis.analyzer.impl.MostWordsTokenCollector;
import net.paoding.analysis.knife.PaodingMaker;

import org.apache.lucene.analysis.TokenStream;
import org.apache.solr.analysis.BaseTokenizerFactory;

public class ChineseTokenizerFactory extends BaseTokenizerFactory {

    /** most-words segmentation, the default mode */
    public static final String MOST_WORDS_MODE = "most-words";

    /** max-word-length segmentation */
    public static final String MAX_WORD_LENGTH_MODE = "max-word-length";

    private String mode = null;

    public void setMode(String mode) {
        if (mode == null || MOST_WORDS_MODE.equalsIgnoreCase(mode)
                || "default".equalsIgnoreCase(mode)) {
            this.mode = MOST_WORDS_MODE;
        } else if (MAX_WORD_LENGTH_MODE.equalsIgnoreCase(mode)) {
            this.mode = MAX_WORD_LENGTH_MODE;
        } else {
            throw new IllegalArgumentException("Illegal analyzer mode parameter: " + mode);
        }
    }

    @Override
    public void init(Map<String, String> args) {
        super.init(args);
        // read the "mode" attribute from the <tokenizer .../> element in schema.xml
        setMode(args.get("mode"));
    }

    public TokenStream create(Reader input) {
        return new PaodingTokenizer(input, PaodingMaker.make(), createTokenCollector());
    }

    private TokenCollector createTokenCollector() {
        if (MOST_WORDS_MODE.equals(mode))
            return new MostWordsTokenCollector();
        if (MAX_WORD_LENGTH_MODE.equals(mode))
            return new MaxWordLengthTokenCollector();
        throw new Error("never happened");
    }
}
Package this class into a jar (e.g. net.jar) and put it into the same shared lib directory.
2. Put the paoding dictionaries into the dic directory under /usr/local/solr/multicore, and edit /etc/profile to add:
export PAODING_DIC_HOME=/usr/local/solr/multicore/dic
3. Copy the two files paoding-analysis.properties and paoding-dic-home.properties from the paoding lib directory into /usr/local/solr.
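Finally, each core's schema.xml needs a field type that references the new factory before Chinese text is actually segmented by paoding. A sketch, assuming a field type named text_cn and a field named content (both illustrative names):

<fieldType name="text_cn" class="solr.TextField">
  <!-- most-words mode at index time, max-word-length mode at query time; both are handled by ChineseTokenizerFactory -->
  <analyzer type="index">
    <tokenizer class="net.qhsoft.analyzer.ChineseTokenizerFactory" mode="most-words"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="net.qhsoft.analyzer.ChineseTokenizerFactory" mode="max-word-length"/>
  </analyzer>
</fieldType>
<field name="content" type="text_cn" indexed="true" stored="true"/>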