hdfs-indexer / hbase-indexer: fixing the error when building a Solr index

When I used hdfs-indexer to push a full set of index data into Solr, the MapReduce job itself completed, but the run failed at the very end. The stack trace below is actually copied from the web, but it matches the error I got almost exactly.

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://172.20.32.48:21101/solr: no segments* file found in NRTCachingDirectory(MMapDirectory@/opt/huawei/Bigdata/nodeagent/hdfs:/hacluster/user/solr/test/results/part-00000/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@537fd381; maxCacheMB=48.0 maxMergeSizeMB=4.0): files: []
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:566)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:237)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:152)
    at org.apache.solr.hadoop.ForkedGoLive$1.call(ForkedGoLive.java:125)
    at org.apache.solr.hadoop.ForkedGoLive$1.call(ForkedGoLive.java:98)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The root cause of this error is that Solr was storing its indexes on local disk. You can even see it in the stack trace: Solr treated the HDFS output path as a local one and prepended its own data directory to it (MMapDirectory@/opt/huawei/Bigdata/nodeagent/hdfs:/hacluster/...), so it found no segments file there.

The fix is simple. In the Huawei cluster management console, open the Solr service configuration, set the parameter index_stored_on_hdfs to true, synchronize the configuration, and restart the service. That's all it is: an index-storage setting, and Solr needs to be allowed to keep its indexes on HDFS.
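For reference, what index_stored_on_hdfs=true effectively does is switch Solr from the default local directory factory to the HDFS-backed one. In stock Apache Solr the equivalent change is made in solrconfig.xml; a minimal sketch (the solr.hdfs.home path below is a placeholder for your cluster, not a value taken from the Huawei platform):

```xml
<!-- Sketch: store the index on HDFS instead of local disk.
     The hdfs://hacluster/user/solr path is a placeholder. -->
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://hacluster/user/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
</directoryFactory>

<!-- HDFS has no native file locking, so the lock type must change too. -->
<lockType>hdfs</lockType>
```

With the directory factory on HDFS, the go-live merge step that failed above can read the MapReduce output directly instead of looking for it under Solr's local data directory.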
