Logstash writes logs but Elasticsearch does not respond

  When parsing a large volume of logs and writing them to Elasticsearch, the cluster may stop responding, limited by factors such as the number of backend data nodes and disk performance.

Problem description:

[2018-04-12T17:02:16,861][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://x.x.x.x:9200/, :error_message=>"Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Attempted to send a bulk request  to elasticsearch, but no there are no living connections in the connection pool

Solutions:

  1) Run Logstash separately from the ES cluster, since both can use a lot of CPU resources. Log parsing in Logstash is CPU-intensive, so never co-locate it with ES on the same machine.
  2) Run more than one node (data nodes) in the ES cluster, so that Logstash can fail over to the other ES nodes when one node is not accessible.

  3) Switch the log buffer to a Kafka queue and control the consumption rate, so that Elasticsearch can ingest writes at a steady pace.
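Points 2) and 3) above can be sketched as a single Logstash pipeline: a Kafka input that throttles consumption, feeding an elasticsearch output configured with several data nodes for failover. This is a minimal sketch; the hostnames, topic name, group id, and index pattern are placeholder assumptions, not values from the original setup.

```conf
input {
  kafka {
    # placeholder broker addresses and topic (assumptions)
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics            => ["app-logs"]
    group_id          => "logstash-consumers"
    # limit consumer threads to throttle how fast events reach ES
    consumer_threads  => 2
  }
}

output {
  elasticsearch {
    # list multiple ES data nodes so the client marks a dead node
    # and fails over to the others instead of losing all connections
    hosts => ["http://es-data1:9200", "http://es-data2:9200", "http://es-data3:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

With Kafka in front, a slow or briefly unresponsive Elasticsearch only causes consumer lag in Kafka rather than dropped events, and the multi-host list keeps the output's connection pool alive when a single node times out.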

