Reference links:
https://github.com/richardwilly98/elasticsearch-river-mongodb
https://github.com/mallocator/Elasticsearch-MySQL-River
https://github.com/BioMedCentralLtd/spring-data-elasticsearch-sample-application/blob/master/src/test/resources/springContext-book-test.xml (example)
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/search.html
http://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/#repositories.create-instances.spring
To replace SegmentFault's current search, which offers a fairly poor experience, I started investigating search engines. My current first choice is Elasticsearch.
Elasticsearch requires a Java environment.
Install Java
sudo aptitude install openjdk-7-jre
Download Elasticsearch
http://www.elasticsearch.org/overview/elkdownloads/
https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.deb
Install
Since my environment is Ubuntu, I can use the deb package directly.
sudo dpkg -i elasticsearch-1.1.0.deb
Start
sudo /etc/init.d/elasticsearch start
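To check that the node came up, query port 9200 (a standard Elasticsearch check, nothing specific to this setup):

curl -XGET 'http://localhost:9200/'

It should answer with a small JSON blob containing the node name, cluster name, and version.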
jdbc river
The river imports the data to be searched, either periodically or in near real time.
Our database is MySQL, so we use jprante's elasticsearch-river-jdbc (a community plugin rather than an official one).
https://github.com/jprante/elasticsearch-river-jdbc
river jdbc quickstart
https://github.com/jprante/elasticsearch-river-jdbc/wiki/Quickstart
Install
cd /usr/share/elasticsearch
sudo ./bin/plugin --install river-jdbc --url http://bit.ly/1jyXrR9
If the installation fails, you can download the zip manually and install it from a local file:
sudo ./bin/plugin --install river-jdbc --url file:///tmp/elasticsearch-river-jdbc-1.0.0.1.zip
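One thing to watch: as far as I can tell, the plugin zip does not bundle the MySQL JDBC driver itself. If the river later complains about a missing driver, download MySQL Connector/J and copy its jar into the plugin directory (the jar name below is a placeholder; use whichever version you downloaded):

# Placeholder jar name; substitute the Connector/J version you actually have.
sudo cp /tmp/mysql-connector-java-5.1.30-bin.jar /usr/share/elasticsearch/plugins/river-jdbc/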
Create a JDBC river
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://localhost:3306/test",
        "user" : "root",
        "password" : "",
        "sql" : "select * from question",
        "index" : "question",
        "type" : "question"
    }
}'
Test the import:
curl -XGET 'localhost:9200/question/_search?pretty&q=*'
or
localhost:9200/question/_search?pretty&q=*
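If you change the SQL or want to re-import from scratch, you can drop both the river and the target index with the standard delete APIs and then create the river again as above:

curl -XDELETE 'localhost:9200/_river/my_jdbc_river'
curl -XDELETE 'localhost:9200/question'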
Elasticsearch has official Chinese word-segmentation support, but it is not very accurate, so here I use medcl's ik analyzer instead.
Install elasticsearch-analysis-ik
cd /tmp
wget https://github.com/medcl/elasticsearch-analysis-ik/archive/master.zip
unzip master.zip
cd elasticsearch-analysis-ik-master/
Here we need to package the plugin into elasticsearch-analysis-ik-1.2.6.jar with mvn package:
mvn package
If you don't have Maven, install it first:
sudo aptitude install maven
Copy elasticsearch-analysis-ik-1.2.6.jar (mvn package leaves it in target/) to ES_HOME/plugins/analysis-ik:
sudo mkdir -p /usr/share/elasticsearch/plugins/analysis-ik
sudo cp target/elasticsearch-analysis-ik-1.2.6.jar /usr/share/elasticsearch/plugins/analysis-ik
Copy ik's configuration and dictionaries into the Elasticsearch config directory (/etc/elasticsearch for the deb install):
sudo cp -R config/ik /etc/elasticsearch
Configure Elasticsearch to enable ik
sudo vim /etc/elasticsearch/elasticsearch.yml
Add one line at the bottom:
index.analysis.analyzer.ik.type : 'ik'
Restart the service to load the new configuration:
sudo service elasticsearch restart
Test the analyzer
localhost:9200/question/_analyze?analyzer=ik&pretty=true&text=杭州堆栈科技有限公司
It returns:
{
  "tokens" : [ {
    "token" : "杭州",
    "start_offset" : 0,
    "end_offset" : 2,
    "type" : "CN_WORD",
    "position" : 1
  }, {
    "token" : "堆栈",
    "start_offset" : 2,
    "end_offset" : 4,
    "type" : "CN_WORD",
    "position" : 2
  }, {
    "token" : "科技",
    "start_offset" : 4,
    "end_offset" : 6,
    "type" : "CN_WORD",
    "position" : 3
  }, {
    "token" : "有限公司",
    "start_offset" : 6,
    "end_offset" : 10,
    "type" : "CN_WORD",
    "position" : 4
  } ]
}
Testing the official PHP client
The official PHP client is installed via Composer.
Install Composer first:
curl -s http://getcomposer.org/installer | php
sudo mv composer.phar /usr/bin/composer
Create a composer.json containing:
{
    "require": {
        "elasticsearch/elasticsearch": "~1.0"
    }
}
Then install:
composer install --no-dev
After requiring the autoloader in your project, the client is ready to use:
<?php
require 'vendor/autoload.php';
$client = new Elasticsearch\Client();
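By default the client connects to localhost:9200. If Elasticsearch lives on another host, the 1.x client accepts a hosts list in its constructor; a minimal sketch (the address below is a placeholder):

<?php
require 'vendor/autoload.php';

// Placeholder address; point this at your real Elasticsearch node.
$params = array();
$params['hosts'] = array('192.168.1.100:9200');
$client = new Elasticsearch\Client($params);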
A mapping that uses the ik Chinese analyzer:
$params['index'] = 'question';
$params['type'] = 'question';
$myTypeMapping = array(
    '_source' => array(
        'enabled' => true
    ),
    '_all' => array(
        'indexAnalyzer' => 'ik',
        'searchAnalyzer' => 'ik',
        'term_vector' => 'no',
        'store' => 'false'
    ),
    'properties' => array(
        'text' => array(
            'type' => 'string',
            'term_vector' => 'with_positions_offsets',
            'indexAnalyzer' => 'ik',
            'searchAnalyzer' => 'ik',
            'include_in_all' => 'true',
            'boost' => 8
        ),
        'title' => array(
            'type' => 'string',
            'term_vector' => 'with_positions_offsets',
            'indexAnalyzer' => 'ik',
            'searchAnalyzer' => 'ik',
            'include_in_all' => 'true',
            'boost' => 8
        )
    )
);
$params['body']['question'] = $myTypeMapping;
$response = $client->indices()->putMapping($params);
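A caveat: Elasticsearch generally cannot change the mapping of an already-indexed field in place, and in our case the river has already created the question index with default mappings, so putMapping may be rejected. A sketch of one workaround, assuming the river can simply re-import the data: delete the index, recreate it with the ik mapping baked in, then recreate the river.

// Assumes $client and $myTypeMapping from the snippets above.
// Dropping the index loses its data; the river will re-import it.
$client->indices()->delete(array('index' => 'question'));

// Recreate the index with the ik mapping applied from the start.
$createParams = array(
    'index' => 'question',
    'body' => array(
        'mappings' => array(
            'question' => $myTypeMapping
        )
    )
);
$client->indices()->create($createParams);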
Test the effect by searching the question text for 'php框架' (PHP framework):
$searchParams = array();
$searchParams['index'] = 'question';
$searchParams['type'] = 'question';
$searchParams['body']['query']['match']['text'] = 'php框架';
$result = $client->search($searchParams);
The expected questions came back; I've trimmed the response body here.
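The 1.x client returns plain PHP arrays, so pulling a few fields out of the hits looks roughly like this (field names follow the mapping above):

// Walk the hits; each one carries the original document under _source.
foreach ($result['hits']['hits'] as $hit) {
    echo $hit['_score'], "\t", $hit['_source']['title'], "\n";
}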
Summary
Installing and using Elasticsearch turned out to be quite simple, and even without any tuning the results already look better than our current search.
The one drawback is that the documentation is still sparse compared to Solr; for many features only the most basic examples are given.
Tuning, compound queries, and the like all have to be figured out and researched on your own.