[yong's blog: http://blog.csdn.net/qjyong]
Using Lucene 2.9.1, an open-source full-text search toolkit.
1. Setting up the Lucene development environment: add lucene-core-2.9.1.jar to the classpath.
2. Full-text search involves two tasks: building the index, and searching the index.
3. Logical structure of a Lucene index
1) An index (Index) consists of several segments (Segment).
★2) A segment consists of several documents (Document): a file maps to a document; a row in a database table maps to a document.
★3) A document consists of several fields (Field): a file attribute (file path, file contents) maps to a field; a column of a record maps to a field.
☆4) A field consists of several terms (keywords) (Term): a string within a field's content maps to a term.
4. Lucene package structure
1) analysis module: performs lexical analysis and language processing to produce Terms. Provides several built-in analyzers; the most commonly used is StandardAnalyzer.
2) index module: reads and writes the index. The IndexWriter class writes, merges, and optimizes index segments; the IndexReader class reads and deletes from the index.
3) store module: stores the index. Provides storage classes such as FSDirectory and RAMDirectory.
4) document module: encapsulates the basic storage structures inside index files, e.g. the Document and Field classes.
5) search module: searches the index. Provides the IndexSearcher class and the various Query classes, such as TermQuery and BooleanQuery.
6) queryParser module: parses query strings. Provides the QueryParser class.
7) util module: common utility classes.
5. Creating an index
1) IndexWriter: the index writer.
a) Constructors:
IndexWriter(Directory d, Analyzer a, IndexWriter.MaxFieldLength mfl)
If the index does not exist it is created; if it exists, new documents are appended.
IndexWriter(Directory d, Analyzer a, boolean create, IndexWriter.MaxFieldLength mfl)
When create is true, the index is created if absent and overwritten if present.
When create is false, an error is raised if the index is absent; if present, new documents are appended.
b) Common methods:
void addDocument(Document doc); // add the given document to the index writer
void close(); // close the index writer; only then is the index written out to its target storage
2) Directory: where the index is stored.
a) File system: FSDirectory: FSDirectory.open(File file);
b) Memory: RAMDirectory: new RAMDirectory();
3) Analyzer: the analyzer (tokenizer).
a) StandardAnalyzer: the standard analyzer. Splits English text on whitespace and punctuation; splits Chinese text character by character.
b) SmartChineseAnalyzer: an intelligent Chinese analyzer. (LUCENE_HOME/contrib/analyzers/smartcn/lucene-smartcn-2.9.1.jar)
c) Third-party Chinese analyzers, such as PaodingAnalyzer and IKAnalyzer.
4) IndexWriter.MaxFieldLength: the maximum length of a field value.
a) UNLIMITED: no limit.
b) LIMITED: limited; the value is 10000.
5) Document: the unit of indexing; a collection of Fields.
a) Constructor: Document();
b) Common method: void add(Field f); // add the given field to this document
6) Field: a field; represents one indexed attribute of a document.
a) Constructor: Field(String name, String value, Field.Store store, Field.Index index)
name: the field name; must be a String.
value: the field value; must be a String.
Field.Store: whether/how the field value is stored: NO (not stored), YES (stored), COMPRESS (stored compressed).
Field.Index: whether/how the field is indexed: NO (not indexed), ANALYZED (analyzed, then indexed), NOT_ANALYZED (indexed without analysis).
7) Sample code:
// src: the file to index; destDir: the directory where the index is stored
public static void createIndex(File src, File destDir) {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT); // create an analyzer
    IndexWriter iwriter = null;
    Directory directory = null;
    try {
        directory = FSDirectory.open(destDir); // store the index files in a disk directory
        // create an IndexWriter(index directory, analyzer, create flag, max field length)
        iwriter = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
        // iwriter.setUseCompoundFile(true); // use the compound file format

        Document doc = new Document(); // create a Document
        // the file path becomes the "path" field: not analyzed, indexed, stored
        doc.add(new Field("path", src.getCanonicalPath(), Field.Store.YES, Field.Index.NOT_ANALYZED));

        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(new FileReader(src));
        for (String str = null; (str = br.readLine()) != null;) {
            sb.append(str).append(System.getProperty("line.separator"));
        }
        br.close();
        // the file contents become the "contents" field: analyzed, indexed, stored
        doc.add(new Field("contents", sb.toString(), Field.Store.YES, Field.Index.ANALYZED));

        iwriter.addDocument(doc); // add the Document to the IndexWriter
        iwriter.optimize();       // optimize the index
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (iwriter != null) {
            try {
                iwriter.close(); // only on close is the buffered data written to the index files
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (directory != null) {
            try {
                directory.close(); // close the index directory
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
6. Searching the index
1) IndexSearcher: the index searcher.
a) Constructor: IndexSearcher(Directory path, boolean readOnly)
b) Common methods:
TopDocs search(Query query, Filter filter, int n); // run the query; n is the maximum number of Documents returned
Document doc(int docID); // fetch the Document with the given internal document number
void close(); // close the searcher
2) Query: the query object. Wraps the user's query string into a Query that Lucene can process.
3) Filter: filters the search results.
4) TopDocs: holds information about the result set. It has two fields:
a) totalHits: the number of hits.
b) scoreDocs: the hit details; each entry holds a matching Document's internal number (doc) and its score (score).
5) Sample code:
// keyword: the keyword to search for; indexDir: the directory where the index is stored
public static void searcher(String keyword, File indexDir) {
    IndexSearcher isearcher = null;
    Directory directory = null;
    try {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
        directory = FSDirectory.open(indexDir);

        // create a parser and build the query object
        QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "contents", analyzer);
        Query query = parser.parse(keyword);

        // Query query1 = new TermQuery(new Term("contents", keyword));
        // Query query2 = new TermQuery(new Term("contents", keyword2));
        // BooleanQuery query = new BooleanQuery();
        // query.add(query1, Occur.SHOULD);
        // query.add(query2, Occur.SHOULD);

        // QueryParser parser = new MultiFieldQueryParser(Version.LUCENE_CURRENT,
        //         new String[]{"path", "contents"}, analyzer);
        // Query query = parser.parse(keyword);

        isearcher = new IndexSearcher(directory, true);  // create the index searcher
        TopDocs ts = isearcher.search(query, null, 100); // run the search; get the result set object

        int totalHits = ts.totalHits; // the number of hits
        System.out.println("hits: " + totalHits);

        ScoreDoc[] hits = ts.scoreDocs; // the hit details
        for (int i = 0; i < hits.length; i++) {
            Document hitDoc = isearcher.doc(hits[i].doc); // fetch the document by its internal number
            System.out.println(hitDoc.getField("contents").stringValue()); // print the given field's value
        }
    } catch (IOException e) {
        e.printStackTrace();
    } catch (ParseException e) {
        e.printStackTrace();
    } finally {
        if (isearcher != null) {
            try {
                isearcher.close(); // close the searcher
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (directory != null) {
            try {
                directory.close(); // close the index directory
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
7. Deleting from the index
IndexWriter provides deleteDocuments(Term term); // deletes every Document in the index that contains the given Term.
IndexReader also provides deleteDocuments(Term term);
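A term-based delete with IndexWriter might look like the following minimal sketch (the index directory, the field name "path", and the term value are illustrative assumptions, not part of the original notes):

```java
// delete every Document whose "path" field contains the given term;
// indexDir, the field name, and the value are illustrative assumptions
IndexWriter iwriter = new IndexWriter(FSDirectory.open(indexDir),
        new StandardAnalyzer(Version.LUCENE_CURRENT), false,
        IndexWriter.MaxFieldLength.UNLIMITED);
try {
    iwriter.deleteDocuments(new Term("path", "/data/old.txt"));
} finally {
    iwriter.close(); // the deletions take effect when the writer is closed
}
```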
8. Updating the index
IndexWriter provides updateDocument(Term term, Document doc); // internally, a delete followed by a re-index.
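As a sketch of this delete-then-add behavior (the field names "path" and "contents" and all values are illustrative assumptions):

```java
// build the replacement document; the fields and values are illustrative
IndexWriter iwriter = new IndexWriter(FSDirectory.open(indexDir),
        new StandardAnalyzer(Version.LUCENE_CURRENT), false,
        IndexWriter.MaxFieldLength.UNLIMITED);
Document newDoc = new Document();
newDoc.add(new Field("path", "/data/a.txt", Field.Store.YES, Field.Index.NOT_ANALYZED));
newDoc.add(new Field("contents", "the new contents", Field.Store.YES, Field.Index.ANALYZED));
// equivalent to deleteDocuments(term) followed by addDocument(newDoc)
iwriter.updateDocument(new Term("path", "/data/a.txt"), newDoc);
iwriter.close();
```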
9. Common query types
1) TermQuery: query by a single Term (keyword). Constructor: TermQuery(Term t)
Query query = new TermQuery(new Term("contents", keyword));
isearcher = new IndexSearcher(FSDirectory.open(indexDir), true);
TopDocs ts = isearcher.search(query, null, 100);
2) BooleanQuery: boolean query; combines several queries.
Query query1 = new TermQuery(new Term("contents", keyword));
Query query2 = new TermQuery(new Term("contents", keyword2));
BooleanQuery query = new BooleanQuery();
query.add(query1, Occur.SHOULD);
query.add(query2, Occur.SHOULD);
isearcher = new IndexSearcher(directory, true);
TopDocs ts = isearcher.search(query, null, 100);
3) MultiFieldQueryParser: query across multiple Fields.
QueryParser parser = new MultiFieldQueryParser(Version.LUCENE_CURRENT, new String[]{"path", "contents"}, analyzer);
Query query = parser.parse(keyword);
isearcher = new IndexSearcher(FSDirectory.open(indexDir), true);
TopDocs ts = isearcher.search(query, null, 100);
10. Highlighter: highlights the search hits in a web page.
1) Add contrib/highlighter/lucene-highlighter-2.9.1.jar to the classpath.
2) Sample pseudocode:
// the default markup is <b>..</b>
SimpleHTMLFormatter shf = new SimpleHTMLFormatter("<span style=\"color:red\">", "</span>");
// build the highlighter: the highlight format plus a query scorer
Highlighter highlighter = new Highlighter(shf, new QueryScorer(query));
// set the fragmenter
highlighter.setTextFragmenter(new SimpleFragmenter(Integer.MAX_VALUE));
String content = highlighter.getBestFragment(analyzer, "fieldName", "fieldValue");
11. Optimization
1) Notes on using IndexWriter:
Changes to the index take effect only after flush() or close().
Not thread-safe: at most one thread may operate on it at any time.
2) Notes on using IndexSearcher:
Once opened, it does not see index entries added afterwards.
Thread-safe: multiple threads need only a single instance.
3) Best practices:
Share one IndexSearcher across threads, and reopen it only after the index has been modified.
Share one IndexWriter across threads with strict synchronization.
Modify the index asynchronously (e.g. via JMS) to improve performance.
Create a separate index directory for each kind of Document.
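The shared-searcher practice above can be sketched as a small holder class (a sketch against the Lucene 2.9.x API; the class name and structure are assumptions, not part of the original notes):

```java
// holds one IndexSearcher shared by all searching threads; reopened only
// after the index has been modified and committed
public class SearcherHolder {
    private volatile IndexSearcher searcher;
    private final Directory directory;

    public SearcherHolder(Directory directory) throws IOException {
        this.directory = directory;
        this.searcher = new IndexSearcher(directory, true); // read-only searcher
    }

    // called by every searching thread; IndexSearcher is thread-safe
    public IndexSearcher get() {
        return searcher;
    }

    // called once after the index changes; swaps in a fresh searcher
    public synchronized void reopen() throws IOException {
        IndexSearcher old = searcher;
        searcher = new IndexSearcher(directory, true);
        old.close(); // note: in production, defer closing until in-flight searches finish
    }
}
```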
12. Integrating Lucene in the emall project for full-text search over product IDs, names, and descriptions.
13. Using Compass to simplify Lucene operations. (To be continued.)