It has been a while since my last update. Today I'd like to talk about a new feature I recently added to this project: full-text search with Lucene. As for what Lucene actually is, you can study that on your own; here I'll only describe how I wired it into my project. If you aren't familiar with this project yet, you may want to read the earlier posts first.
To be honest, this was my first time working with Lucene; I had heard of it before but never used it. It took me two days: I started with video tutorials, but after watching them I still couldn't get Lucene wired into my S2SH stack. In the end I found a sample project, followed it alongside the videos, and finally got it working. Lucene mainly operates on its own index, so it has very little to do with the database, but to stay consistent with the rest of my project I still followed the dao --> service --> action pattern. Strictly speaking that isn't necessary; personally I think a single service layer would be enough. Enough talk, let's get to the code.
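The SearchDao interface itself isn't pasted in this post. As a minimal sketch, inferred purely from the method signatures of the implementation below (so it may differ slightly from my real interface), it would look something like this:

// Sketch of the SearchDao interface, inferred from SearchDaoImpl below; not the original source.
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public interface SearchDao {
    void save(Document doc, IndexWriter indexWriter);
    void delete(Term term, IndexWriter indexWriter);
    void update(Term term, Document doc, IndexWriter indexWriter);
    TopDocs search(Query query, IndexSearcher indexSearcher) throws Exception;
}

And here is the implementation: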
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class SearchDaoImpl implements SearchDao {

    // Adds one document to the index.
    public void save(Document doc, IndexWriter indexWriter) {
        try {
            indexWriter.addDocument(doc);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                // Merge index segments; the writer itself is closed by the caller after the batch.
                indexWriter.optimize();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Deletes all documents matching the given term.
    public void delete(Term term, IndexWriter indexWriter) {
        try {
            indexWriter.deleteDocuments(term);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                indexWriter.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Replaces the documents matching the term with the new document.
    public void update(Term term, Document doc, IndexWriter indexWriter) {
        try {
            indexWriter.updateDocument(term, doc);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                indexWriter.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Runs the query and returns at most 10000 hits; no filter is applied.
    public TopDocs search(Query query, IndexSearcher indexSearcher) throws Exception {
        Filter filter = null;
        return indexSearcher.search(query, filter, 10000);
    }
}
As you can see, I didn't even write a filter; the point here is simply to get familiar with integrating this framework into an S2SH project. (A rough idea of what a filter might look like is sketched right below.)
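If you did want to narrow the results, a Filter could be passed to the search instead of null. The snippet below is only a hypothetical sketch, not part of my project: it wraps a TermQuery in a QueryWrapperFilter to restrict hits to a single board. Note that it assumes the bid field is indexed; in my indexing code further down, bid is stored with Field.Index.NO, so it would have to be changed to something like Field.Index.NOT_ANALYZED for this to actually work.

// Hypothetical sketch: restrict search results to one board; not part of the project code.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

public class FilteredSearchSketch {

    // Same call as SearchDaoImpl.search(), but with a filter that only lets through
    // documents whose "bid" field equals the given board id.
    // Assumes "bid" is indexed as NOT_ANALYZED, unlike the indexing code shown below.
    public TopDocs searchInBoard(Query query, IndexSearcher indexSearcher, String boardId) throws Exception {
        Filter filter = new QueryWrapperFilter(new TermQuery(new Term("bid", boardId)));
        return indexSearcher.search(query, filter, 10000);
    }
}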
import java.io.File;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriter.MaxFieldLength;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.htmlparser.Parser;
// PaodingAnalyzer and project-specific imports (Forum, ForumSearch, Pages, PageList, QueryResult, BBSNSUtil, Constant, SearchDao) omitted.

public class ForumSearchServiceImpl implements ForumSearchService {

    private SearchDao searchDao;

    // Builds the index for all forum posts under D:\index.
    public void saveForumIndex(List forumList) throws Exception {
        File indexFile = new File("D:\\index");
        Analyzer analyzer = new PaodingAnalyzer();
        IndexWriter indexWriter = new IndexWriter(indexFile, analyzer, true, MaxFieldLength.LIMITED);
        List<Forum> list = forumList;

        for (Forum forum : list) {
            Document doc = new Document();
            String id = forum.getId();
            Long bid = forum.getBoard().getId();
            String title = forum.getTitle();
            String detail = forum.getDetail();
            String postTime = BBSNSUtil.formatDateTime(new Date(forum.getPostTime()), Constant.formatDate);

            doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));

            // Strip the HTML tags from the post body before indexing it.
            Parser parser = new Parser();
            parser.setInputHTML(detail);
            String strings = parser.parse(null).elementAt(0).toPlainTextString().trim();
            System.out.println("-------" + strings);
            if (strings.length() != 0) {
                doc.add(new Field("detail", strings, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            } else {
                // Fallback text: "the content is a video or an image and contains no text".
                String str = "内容为视频或者图片,不包含文字";
                doc.add(new Field("detail", str, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
                System.out.println(str);
            }

            doc.add(new Field("postTime", postTime, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.NO));

            // id and bid are only stored, not indexed; they are used to link back to the post.
            doc.add(new Field("id", id, Field.Store.YES, Field.Index.NO, Field.TermVector.NO));
            doc.add(new Field("bid", bid.toString(), Field.Store.YES, Field.Index.NO, Field.TermVector.NO));

            searchDao.save(doc, indexWriter);
        }
        indexWriter.close();
    }

    // Paged search; the Pages object mimics the Hibernate-style paging used elsewhere in the project.
    public PageList searchFourm(String which, String keyWord, Pages pages) throws Exception {
        System.out.println("pages.getSpage()=" + pages.getSpage());

        QueryResult queryResult = this.searchFourm(which, keyWord, pages.getSpage(), pages.getPerPageNum());
        PageList pl = new PageList();

        if (pages.getTotalNum() == -1) {
            pages.setTotalNum(queryResult.getRecordCount());
        }
        pages.executeCount();
        queryResult = this.searchFourm(which, keyWord, pages.getSpage(), pages.getPerPageNum());
        System.out.println("queryResult.getRecordList()=" + queryResult.getRecordList());
        pl.setObjectList(queryResult.getRecordList());
        pl.setPages(pages);
        return pl;
    }

    // Runs the query against the index and highlights the matched field.
    public QueryResult searchFourm(String which, String keyWord, int firstResult, int maxResult) throws Exception {
        File indexFile = new File("D:\\index");
        IndexReader reader = IndexReader.open(indexFile);

        Analyzer analyzer = new PaodingAnalyzer();

        QueryParser queryParser = new QueryParser(which, analyzer);
        IndexSearcher indexSearcher = new IndexSearcher(reader);

        Query query = queryParser.parse(keyWord);

        TopDocs topDocs = searchDao.search(query, indexSearcher);
        int recordCount = topDocs.totalHits;

        // Wrap matched terms in a red <font> tag for display.
        SimpleHTMLFormatter sHtmlF = new SimpleHTMLFormatter("<font color='red'>", "</font>");
        Highlighter highlighter = new Highlighter(sHtmlF, new QueryScorer(query));
        highlighter.setTextFragmenter(new SimpleFragmenter(100));

        List<ForumSearch> recordList = new ArrayList<ForumSearch>();

        // Only convert the documents that fall inside the requested page.
        int end = Math.min(firstResult + maxResult, topDocs.totalHits);
        for (int i = firstResult; i < end; i++) {
            ScoreDoc scoreDoc = topDocs.scoreDocs[i];
            int docSn = scoreDoc.doc;
            Document doc = indexSearcher.doc(docSn);

            ForumSearch fs = new ForumSearch();
            String title = doc.get("title");
            String detail = doc.get("detail");
            String id = doc.get("id");
            String bid = doc.get("bid");
            String postTime = doc.get("postTime");

            if (which.equals("title")) {
                // Highlight the title and show at most the first 100 characters of the body.
                String bestFragment = highlighter.getBestFragment(analyzer, which, title);
                fs.setTitle(bestFragment);

                if (detail.length() < 100) {
                    fs.setDetail(detail);
                } else {
                    fs.setDetail(detail.substring(0, 100));
                }
            } else {
                // Highlight the body and keep the title as is.
                String bestFragment = highlighter.getBestFragment(analyzer, which, detail);
                fs.setDetail(bestFragment);
                fs.setTitle(title);
            }

            fs.setPostTime(postTime);
            fs.setId(id);
            fs.setBid(bid);
            recordList.add(fs);
        }

        return new QueryResult(recordCount, recordList);
    }

    public SearchDao getSearchDao() {
        return searchDao;
    }

    public void setSearchDao(SearchDao searchDao) {
        this.searchDao = searchDao;
    }
}
ForumSearchServiceImpl really only does two things: one method builds the index, and the other performs the search. To make returning data easier, I defined a small QueryResult structure that holds just two fields; I won't paste its code here, since anyone following along can tell what those two fields are (a rough sketch is shown right below anyway). Note that these methods imitate Hibernate-style paging.
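For completeness, here is a minimal sketch of QueryResult, reconstructed purely from how it is used above; the real class may differ slightly:

// Minimal sketch of QueryResult, reconstructed from its usage above; not the original source.
import java.util.List;

public class QueryResult {

    private int recordCount;   // total number of hits for the query
    private List recordList;   // the hits belonging to the current page

    public QueryResult(int recordCount, List recordList) {
        this.recordCount = recordCount;
        this.recordList = recordList;
    }

    public int getRecordCount() {
        return recordCount;
    }

    public List getRecordList() {
        return recordList;
    }
}

Next, here is my Action-layer code: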
public String createIndex() throws Exception {
    // Re-index every forum post.
    List<Forum> forumList = forumService.listForums();
    forumSearchService.saveForumIndex(forumList);
    return SUCCESS;
}

public String search() throws Exception {
    // Paged search; the page links carry the keyword and the field being searched.
    Pages pages = new Pages();
    pages.setPerPageNum(10);
    pages.setPage(this.getPage());
    pages.setFileName(basePath + "forumSearch.bbsns?action=search" + "&keyWord=" + keyWord + "&which=" + which);
    this.setPageList(forumSearchService.searchFourm(which, keyWord, pages));
    return "result";
}
As for searching and index building, there is plenty of room to add features, for example multi-keyword or multi-field queries; as long as the methods are implemented cleanly and the index is built properly, you can layer things like an advanced search on top (see the sketch below). I've been a bit lazy lately, so this is about all I've written, alas...
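To give an idea of what such an advanced search could look like, here is a purely hypothetical sketch (not part of my project) that uses the same old-style Lucene API as above: it searches the title and detail fields at once and requires every keyword to match. The resulting Query could then be handed to searchDao.search() just like the single-field query above.

// Hypothetical sketch of a multi-keyword, multi-field query; not part of the project code.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class AdvancedQuerySketch {

    // Builds a query that matches documents containing every keyword
    // in either the "title" or the "detail" field.
    public Query buildQuery(String[] keyWords, Analyzer analyzer) throws Exception {
        String[] fields = {"title", "detail"};
        BooleanQuery result = new BooleanQuery();
        for (String keyWord : keyWords) {
            Query q = new MultiFieldQueryParser(fields, analyzer).parse(keyWord);
            result.add(q, BooleanClause.Occur.MUST);  // every keyword must match
        }
        return result;
    }
}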
Alright, that's about it; I'll stop here. If you have any questions, feel free to leave a comment and we can discuss.
This is an original post, first published here. Thanks for your support!