Query has many subclasses, each handling a different type of query task:
Instantiable subclasses are:
TermQuery
MultiTermQuery
BooleanQuery
WildcardQuery
PhraseQuery
PrefixQuery
MultiPhraseQuery
FuzzyQuery
TermRangeQuery
NumericRangeQuery
SpanQuery
However, the users of our system are hardly willing to learn these seemingly complicated details.
What we want is a tool that understands the user's search intent and turns it into a suitable Lucene Query subclass, which is then handed to Lucene for retrieval;
ideally the user only needs to learn a little of the standard query syntax (which is relatively simple) to perform advanced searches.
Good news...
That is exactly what Lucene's QueryParser does: it understands the user's input and, through a fairly involved process, builds a suitable Query for the Searcher.
QueryParser is a parser generated with JavaCC (which arguably makes it a kind of compiler).
Its most commonly used method is, of course, the one that parses a query string:
Query parse(String query): Parses a query string, returning a Query.
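For example, a minimal sketch of parse() in action. It assumes the same pre-2.9 two-argument QueryParser constructor used in the listing at the end of this post; the class name, the default field "content" and the query string are only illustrative.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class ParseDemo {
    public static void main(String[] args) throws Exception {
        // "content" is the default field used when the query string names no field.
        QueryParser parser = new QueryParser("content", new StandardAnalyzer());
        Query query = parser.parse("lucene AND \"query parser\"");
        // Print the concrete Query subclass the parser built, and its string form.
        System.out.println(query.getClass().getName());
        System.out.println(query.toString());
    }
}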
To perform more advanced searches, we have to understand QueryParser's syntax rules.
The official syntax documentation is reproduced below:
Although Lucene provides the ability to create your own queries through its API, it also provides a rich query language through the Query Parser, a lexer which interprets a string into a Lucene Query using JavaCC.
Generally, the query parser syntax may change from release to release. This page describes the syntax as of the current release. If you are using a different version of Lucene, please consult the copy of docs/queryparsersyntax.html that was distributed with the version you are using.
Before choosing to use the provided Query Parser, please consider the following: if you are programmatically generating a query string and then parsing it with the query parser, you should seriously consider building your queries directly with the query API; the query parser is designed for human-entered text, not for program-generated text.
A query is broken up into terms and operators. There are two types of terms: Single Terms and Phrases.
A Single Term is a single word such as "test" or "hello".
A Phrase is a group of words surrounded by double quotes such as "hello dolly".
Multiple terms can be combined together with Boolean operators to form a more complex query (see below).
Note: The analyzer used to create the index will be used on the terms and phrases in the query string. So it is important to choose an analyzer that will not interfere with the terms used in the query string.
Lucene supports fielded data. When performing a search you can either specify a field, or use the default field. The field names and default field are implementation specific.
You can search any field by typing the field name followed by a colon ":" and then the term you are looking for.
As an example, let's assume a Lucene index contains two fields, title and text and text is the default field. If you want to find the document entitled "The Right Way" which contains the text "don't go this way", you can enter:
title:"The Right Way" AND text:go
or
title:"Do it right" AND right
Since text is the default field, the field indicator is not required.
Note: The field is only valid for the term that it directly precedes, so the query
title:Do it right
Will only find "Do" in the title field. It will find "it" and "right" in the default field (in this case the text field).
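A small sketch under the same assumptions (default field "text", illustrative class name) that makes the field scoping visible by printing the parsed queries:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;

public class FieldScopeDemo {
    public static void main(String[] args) throws Exception {
        // "text" is the default field, as in the example above.
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        // Only the term directly after "title:" is scoped to the title field.
        System.out.println(parser.parse("title:Do it right"));
        // Quoting keeps the whole phrase in the title field.
        System.out.println(parser.parse("title:\"Do it right\""));
    }
}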
Lucene supports modifying query terms to provide a wide range of searching options.
Lucene supports single and multiple character wildcard searches within single terms (not within phrase queries).
To perform a single character wildcard search use the "?" symbol.
To perform a multiple character wildcard search use the "*" symbol.
The single character wildcard search looks for terms that match the term with the single character replaced. For example, to search for "text" or "test" you can use the search:
te?t
Multiple character wildcard searches look for 0 or more characters. For example, to search for test, tests or tester, you can use the search:
test*
You can also use the wildcard searches in the middle of a term.
te*t
Note: You cannot use a * or ? symbol as the first character of a search.
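A sketch of how the parser turns wildcard syntax into Query subclasses (same assumed setup); printing the class name shows what was built:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class WildcardDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        for (String q : new String[] {"te?t", "te*t", "test*"}) {
            Query query = parser.parse(q);
            // e.g. "te?t" and "te*t" become wildcard queries, "test*" a prefix query.
            System.out.println(q + " -> " + query.getClass().getSimpleName() + " : " + query);
        }
    }
}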
Lucene supports fuzzy searches based on the Levenshtein Distance, or Edit Distance algorithm. To do a fuzzy search use the tilde, "~", symbol at the end of a Single word Term. For example to search for a term similar in spelling to "roam" use the fuzzy search:
roam~
This search will find terms like foam and roams.
Starting with Lucene 1.9, an additional (optional) parameter can specify the required similarity. The value is between 0 and 1; with a value closer to 1, only terms with a higher similarity will be matched. For example:
roam~0.8
The default that is used if the parameter is not given is 0.5.
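A sketch of the fuzzy syntax under the same assumptions:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class FuzzyDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        // "~" alone uses the default minimum similarity (0.5); "~0.8" raises the bar.
        Query loose = parser.parse("roam~");
        Query strict = parser.parse("roam~0.8");
        System.out.println(loose.getClass().getSimpleName() + " : " + loose);
        System.out.println(strict.getClass().getSimpleName() + " : " + strict);
    }
}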
Lucene supports finding words that are within a specific distance of one another. To do a proximity search use the tilde, "~", symbol at the end of a Phrase. For example, to search for "apache" and "jakarta" within 10 words of each other in a document use the search:
"jakarta apache"~10
Range Queries allow one to match documents whose field(s) values are between the lower and upper bound specified by the Range Query. Range Queries can be inclusive or exclusive of the upper and lower bounds. Sorting is done lexicographically.
mod_date:[20020101 TO 20030101]
This will find documents whose mod_date fields have values between 20020101 and 20030101, inclusive. Note that Range Queries are not reserved for date fields. You could also use range queries with non-date fields:
title:{Aida TO Carmen}
This will find all documents whose titles are between Aida and Carmen, but not including Aida and Carmen.
Inclusive range queries are denoted by square brackets. Exclusive range queries are denoted by curly brackets.
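A sketch of range syntax under the same assumptions; which Query subclass comes back (RangeQuery, ConstantScoreRangeQuery or TermRangeQuery) depends on the Lucene version:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class RangeDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        // Square brackets: inclusive; curly brackets: exclusive. Comparison is lexicographic.
        Query inclusive = parser.parse("mod_date:[20020101 TO 20030101]");
        Query exclusive = parser.parse("title:{Aida TO Carmen}");
        System.out.println(inclusive.getClass().getSimpleName() + " : " + inclusive);
        System.out.println(exclusive.getClass().getSimpleName() + " : " + exclusive);
    }
}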
Lucene provides the relevance level of matching documents based on the terms found. To boost a term use the caret, "^", symbol with a boost factor (a number) at the end of the term you are searching. The higher the boost factor, the more relevant the term will be.
Boosting allows you to control the relevance of a document by boosting its term. For example, if you are searching for
jakarta apache
and you want the term "jakarta" to be more relevant boost it using the ^ symbol along with the boost factor next to the term. You would type:
jakarta^4 apache
This will make documents with the term jakarta appear more relevant. You can also boost Phrase Terms as in the example:
"jakarta apache"^4 "Apache Lucene"
By default, the boost factor is 1. Although the boost factor must be positive, it can be less than 1 (e.g. 0.2).
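A sketch under the same assumptions showing that the caret ends up as the boost of the parsed clause:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class BoostDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        Query query = parser.parse("jakarta^4 apache");
        // The parser builds a BooleanQuery; each clause keeps its own boost.
        if (query instanceof BooleanQuery) {
            for (BooleanClause clause : ((BooleanQuery) query).getClauses()) {
                System.out.println(clause.getQuery() + " boost=" + clause.getQuery().getBoost());
            }
        }
    }
}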
Boolean operators allow terms to be combined through logic operators. Lucene supports AND, "+", OR, NOT and "-" as Boolean operators (Note: Boolean operators must be ALL CAPS).
The OR operator is the default conjunction operator. This means that if there is no Boolean operator between two terms, the OR operator is used. The OR operator links two terms and finds a matching document if either of the terms exist in a document. This is equivalent to a union using sets. The symbol || can be used in place of the word OR.
To search for documents that contain either "jakarta apache" or just "jakarta" use the query:
"jakarta apache" jakarta
or
"jakarta apache" OR jakarta
The AND operator matches documents where both terms exist anywhere in the text of a single document. This is equivalent to an intersection using sets. The symbol && can be used in place of the word AND.
To search for documents that contain "jakarta apache" and "Apache Lucene" use the query:
"jakarta apache" AND "Apache Lucene"
The "+" or required operator requires that the term after the "+" symbol exist somewhere in a the field of a single document.
To search for documents that must contain "jakarta" and may contain "lucene" use the query:
+jakarta lucene
The NOT operator excludes documents that contain the term after NOT. This is equivalent to a difference using sets. The symbol ! can be used in place of the word NOT.
To search for documents that contain "jakarta apache" but not "Apache Lucene" use the query:
"jakarta apache" NOT "Apache Lucene"
Note: The NOT operator cannot be used with just one term. For example, the following search will return no results:
NOT "jakarta apache"
The "-" or prohibit operator excludes documents that contain the term after the "-" symbol.
To search for documents that contain "jakarta apache" but not "Apache Lucene" use the query:
"jakarta apache" -"Apache Lucene"
Lucene supports using parentheses to group clauses to form sub queries. This can be very useful if you want to control the boolean logic for a query.
To search for either "jakarta" or "apache" and "website" use the query:
(jakarta OR apache) AND website
This eliminates any confusion and makes sure that website must exist and that either of the terms jakarta or apache may exist.
Lucene supports using parentheses to group multiple clauses to a single field.
To search for a title that contains both the word "return" and the phrase "pink panther" use the query:
title:(+return +"pink panther")
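A sketch of both kinds of grouping under the same assumptions:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;

public class GroupingDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        // Parentheses control the boolean structure of the sub-query.
        System.out.println(parser.parse("(jakarta OR apache) AND website"));
        // Parentheses after a field name apply that field to every clause in the group.
        System.out.println(parser.parse("title:(+return +\"pink panther\")"));
    }
}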
Lucene supports escaping special characters that are part of the query syntax. The current list of special characters is:
+ - && || ! ( ) { } [ ] ^ " ~ * ? : \
To escape these characters, use the \ before the character. For example, to search for (1+1):2 use the query:
\(1\+1\)\:2
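QueryParser also provides a static escape(String) helper that inserts these backslashes for you; a sketch under the same assumptions:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        String raw = "(1+1):2";
        // Escapes every character that is special in the query syntax.
        String escaped = QueryParser.escape(raw);
        System.out.println(escaped); // \(1\+1\)\:2
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        System.out.println(parser.parse(escaped));
    }
}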
A translated version should be easy to find online.
Notes:
1. Changing QueryParser's default boolean logic.
void setDefaultOperator(QueryParser.Operator op): Sets the boolean operator of the QueryParser.
QueryParser.Operator enum constants: AND, OR
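For example, a sketch (same assumed setup as the earlier snippets) that switches the implicit operator to AND:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;

public class DefaultOperatorDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("text", new StandardAnalyzer());
        System.out.println(parser.parse("jakarta apache")); // OR is the default
        parser.setDefaultOperator(QueryParser.Operator.AND);
        System.out.println(parser.parse("jakarta apache")); // now both terms are required
    }
}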
2. Searching date fields.
Although some references stress using DateTools, I took a different approach in my project:
date fields are indexed uniformly as yyyyMMdd strings,
and queries use the same representation; whether I search a single date or a date range, no problems have come up.
(Precisely because I represented dates as strings in the project, I never looked closely into what DateTools is for; pointers from anyone with hands-on experience are welcome!)
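A sketch of that approach under the same assumed pre-2.9 API; the field name "pubdate" and the values are only illustrative:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.queryParser.QueryParser;

public class DateAsStringDemo {
    public static void main(String[] args) throws Exception {
        // Index the date as a plain yyyyMMdd keyword (no tokenization) so it sorts and ranges lexicographically.
        Document doc = new Document();
        doc.add(new Field("pubdate", "20101019", Field.Store.YES, Field.Index.UN_TOKENIZED));

        // Query it in exactly the same form, either as a single value or as a range.
        QueryParser parser = new QueryParser("pubdate", new StandardAnalyzer());
        System.out.println(parser.parse("pubdate:20101019"));
        System.out.println(parser.parse("pubdate:[20100101 TO 20101231]"));
    }
}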
http://www.cnblogs.com/huangfox/archive/2010/10/19/1855371.html
package com.jiepu.lucene_23;

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.queryParser.QueryParser.Operator;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryFilter;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocCollector;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

// http://my.oschina.net/lsw90/blog/186732
public class Example {

    public static void main(String[] args) throws Exception {
        testIndexAndSearchold();
    }

    public static void testIndexAndSearchold() throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        Directory directory = new RAMDirectory();
        // Directory directory = FSDirectory.getDirectory("/tmp/testindex");
        IndexWriter iwriter = new IndexWriter(directory, analyzer, true);

        Document doc = new Document();
        // Contains-style matching: entering "Add" finds entries containing "Add";
        // entering "dd" should also find entries containing "Add" (see the wildcard query below).
        doc.add(new Field("name", "text 麻痹的 addd dd ", Field.Store.YES, Field.Index.TOKENIZED));
        doc.add(new Field("content", "我是内容1号 content text 麻痹的 addddd 你是猪 ", Field.Store.YES, Field.Index.TOKENIZED));
        iwriter.addDocument(doc);

        Document doc2 = new Document();
        String text = "This is the text add to be indexed. 你好啊 呵呵 内存索引";
        doc2.add(new Field("name", text, Field.Store.YES, Field.Index.TOKENIZED));
        doc2.add(new Field("content", "我是内容2号 content 麻痹的 addddd 你是猪 ", Field.Store.YES, Field.Index.TOKENIZED));
        iwriter.addDocument(doc2);

        Document doc3 = new Document();
        doc3.add(new Field("name", "add hello 测试的数据", Field.Store.YES, Field.Index.TOKENIZED));
        doc3.add(new Field("content", "我是内容3号 content word addddd 你是猪 ", Field.Store.YES, Field.Index.TOKENIZED));
        iwriter.addDocument(doc3);

        iwriter.optimize();
        iwriter.close();

        IndexSearcher isearcher = new IndexSearcher(directory);

        // Notes on features added in later Lucene releases:
        // MultiFieldQueryParser added and bugs fixed; QueryParser changed to support the ? wildcard;
        // CachingWrapperFilter and PerFieldAnalyzerWrapper; ParallelMultiSearcher; MMapDirectory;
        // FieldSelector; BoostingTermQuery; SpanQueryFilter; QueryAutoStopWordAnalyzer;
        // FieldCacheRangeFilter; AttributeFactory; ReusableAnalyzerBase and FieldValueFilter;
        // RegexpQuery and BloomFilteringPostingsFormat; AnalyzingSuggester and FuzzySuggester;
        // AutomatonQuery, which returns the documents whose terms match a finite-state automaton.
        // Lucene 4.1 implements all the mainstream retrieval models: TF-IDF, Okapi BM25,
        // DFR (divergence from randomness), Dirichlet and Jelinek-Mercer, and IB similarities.

        // Wildcard query, extends MultiTermQuery; supports the wildcards * ? ~,
        // where * matches any sequence of characters and ? matches a single character.
        // WildcardQuery query = new WildcardQuery(new Term("name", "*dd*"));

        // Fuzzy query: term + minimum similarity; the "~" suffix in the query syntax does the same.
        // FuzzyQuery query = new FuzzyQuery(new Term("name", "add"), 0.9f);

        /*
         * SpanQuery matches on the positions of words in a document. Its subclasses include:
         * SpanTermQuery: the building block of span queries; like TermQuery, but results carry position information.
         * SpanFirstQuery: matches when the word occurs within a given distance from the start of the field.
         * SpanNearQuery: several clauses must occur within a given distance of each other.
         * SpanOrQuery: the union of several span queries.
         * SpanNotQuery: removes the matches of one span query from another.
         */
        // Span (positional) query:
        // SpanTermQuery query = new SpanTermQuery(new Term("name", "add*"));

        // Boosted term query:
        // BoostingTermQuery query = new BoostingTermQuery(new Term("name", "dd"));

        // Term query = exact match on a single term:
        // TermQuery query = new TermQuery(new Term("name", "add"));

        // Multi-field query:
        // QueryParser parser = new MultiFieldQueryParser(new String[]{"content", "name"}, analyzer);
        // Query query = parser.parse("add");

        // QueryParser parser = new QueryParser("name", analyzer);
        // Query query = parser.parse("add");

        // Another use of the query parser: the query syntax that Lucene supports.
        QueryParser parser = new QueryParser("", analyzer);
        parser.setDefaultOperator(Operator.OR); // operator applied between whitespace-separated terms (OR is the default)
        // Query query = parser.parse("name:add AND content:3号");
        // Query query = parser.parse("name:add - content:3号");
        // Query query = parser.parse("name:add AND content:我*");
        // Query query = parser.parse("name:add AND content:wo?d");
        // Query query = parser.parse("name:{ad TO adddd}");
        Query query = parser.parse("+name:add content:3号");

        // Phrase query:
        // PhraseQuery phraseQuery = new PhraseQuery();
        // phraseQuery.add(new Term("name", "hello"));

        // Multi-phrase query:
        // MultiPhraseQuery query = new MultiPhraseQuery();
        // query.add(new Term("name", "hello"));

        QueryFilter queryFilter = new QueryFilter(query);
        // CachingWrapperFilter filter = new CachingWrapperFilter(queryFilter);
        // Wrap the query and the filter together:
        FilteredQuery filteredQuery = new FilteredQuery(query, queryFilter);
        // RangeQuery query = new RangeQuery(new Term("name", "dd"), new Term("name", "add"), false);

        // Combining several clauses:
        /*
         * BooleanQuery booleanQuery = new BooleanQuery();
         * // BooleanClause clause = new BooleanClause();
         * booleanQuery.add(phraseQuery, BooleanClause.Occur.MUST);
         * booleanQuery.add(query, BooleanClause.Occur.MUST);
         * System.out.println(booleanQuery.toString());
         */

        // Prefix query, equivalent to add*:
        // PrefixQuery query = new PrefixQuery(new Term("name", "dd"));

        System.out.println(filteredQuery.toString());
        TopDocCollector hits = new TopDocCollector(100);
        isearcher.search(filteredQuery, hits);
        // Hits hits2 = isearcher.search(query);
        // hits2.doc(0).get("content");
        System.out.println("TotalHits:" + hits.getTotalHits());
        System.out.println("Results:");
        ScoreDoc[] scoreDocs = hits.topDocs().scoreDocs; // topDocs() determines the length of scoreDocs
        /*
         * for (int i = 0; i < scoreDocs.length; i++) {
         *     int doc_id = scoreDocs[i].doc;
         *     System.out.println("matched doc id=" + doc_id);
         *     Document hitDoc = isearcher.doc(doc_id);
         *     System.out.println("name=" + hitDoc.get("name") + "-content=" + hitDoc.get("content"));
         * }
         */

        // Highlight format: wrap every highlighted term with this prefix and suffix.
        SimpleHTMLFormatter simpleHtmlFormatter = new SimpleHTMLFormatter("<font color='red'>", "</font>");
        Highlighter highlighter = new Highlighter(simpleHtmlFormatter, new QueryScorer(query));
        // Limit each returned fragment to 150 characters; like a search engine, only a snippet is shown.
        highlighter.setTextFragmenter(new SimpleFragmenter(150));
        for (int i = 0; i < scoreDocs.length; i++) {
            Document document = isearcher.doc(scoreDocs[i].doc);
            TokenStream tokenStream = analyzer.tokenStream("", new StringReader(document.get("name")));
            try {
                String str = highlighter.getBestFragment(tokenStream, document.get("name"));
                System.out.println(str + "-" + document.get("content"));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        isearcher.close();
        directory.close();
    }
}