failure in bulk execution: reason=[parent] Data too large, data for [indices:data/write/bulk[s]]

Problem scenario:

The following exception occurs when bulk-inserting data into Elasticsearch:

failure in bulk execution:
[4]: index [*******], type [_doc], id [61890005], message [ElasticsearchException[Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [indices:data/write/bulk[s]] would be [988794778/942.9mb], which is larger than the limit of [986932838/941.2mb], real usage: [982212384/936.7mb], new bytes reserved: [6582394/6.2mb], usages [request=0/0b, fielddata=9899078/9.4mb, in_flight_requests=20259132/19.3mb, accounting=452320/441.7kb]]]]
[6]: index [*******], type [_doc], id [61890007], message [ElasticsearchException[Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [indices:data/write/bulk[s]] would be [988794778/942.9mb], which is larger than the limit of [986932838/941.2mb], real usage: [982212384/936.7mb], new bytes reserved: [6582394/6.2mb], usages [request=0/0b, fielddata=9899078/9.4mb, in_flight_requests=20259132/19.3mb, accounting=452320/441.7kb]]]]

The key phrases are Data too large and which is larger than the limit of [986932838/941.2mb], real usage: [982212384/936.7mb]: the bulk request would push heap usage past the parent circuit breaker's limit. The breaker rejects the request precisely to keep the JVM heap from running out of memory (OOM).
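The figures in the message are self-consistent: the "would be" value is the current real usage plus the bytes this bulk request reserves, and the breaker trips because that sum exceeds the limit. A quick check with shell arithmetic, using the numbers from the log above:

```shell
# Numbers copied from the circuit_breaking_exception message above.
real_usage=982212384    # current heap usage seen by the parent breaker (936.7mb)
new_reserved=6582394    # bytes this bulk request would reserve (6.2mb)
limit=986932838         # parent breaker limit (941.2mb)

would_be=$((real_usage + new_reserved))
echo "would be: ${would_be}"              # 988794778, i.e. the reported 942.9mb
echo "over limit: $((would_be - limit))"  # 1861940 bytes over -> request rejected
```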

Solutions:

① Clear the caches whenever this happens (not realistic: nobody wants to do that by hand every time)
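For completeness, this is what a manual cache clear looks like: a single call to the clear-cache API. The host and port below assume a local node; adjust for your cluster (these commands need a running cluster, so they are shown as a sketch):

```shell
# Clear all caches on every index (assumes a node at localhost:9200):
curl -X POST "localhost:9200/_cache/clear"

# Or clear only the fielddata cache, which is what the fielddata breaker tracks:
curl -X POST "localhost:9200/_cache/clear?fielddata=true"
```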

② Increase the heap Elasticsearch starts with:

1. Startup before version 6:

./elasticsearch -Xms10g -Xmx10g (best kept at no more than 50% of the machine's total RAM)

  -Xms10g sets the minimum JVM heap, i.e. the initial heap size at startup

  -Xmx10g sets the maximum heap the JVM may allocate, i.e. the most memory the process can use

These two are best set to the same value, which avoids resizing the heap after every GC.

If that still is not enough, set indices.breaker.fielddata.limit in elasticsearch.yml. It defaults to 60% of the heap in 6.x (40% from 7.0 onward); adjust it to fit your situation, then restart the cluster for the change to take effect.

You can also set indices.fielddata.cache.size so that fielddata held by old data is evicted and new data can be loaded, avoiding queries that cannot see newly inserted documents.
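Both settings live in elasticsearch.yml. The values below are only an illustration, not recommendations; Elastic's docs advise keeping indices.fielddata.cache.size below the breaker limit so eviction kicks in before the breaker trips:

```yaml
# elasticsearch.yml -- example values, tune for your heap and workload
indices.breaker.fielddata.limit: 60%   # fielddata circuit breaker ceiling
indices.fielddata.cache.size: 40%      # evict oldest fielddata beyond this size
```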

2. Startup from version 6 onward:

ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch
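If the node runs as a service rather than from the command line, recent versions (7.7+) also pick up heap settings from a drop-in options file; the path below is the conventional location, and ES_JAVA_OPTS still overrides it:

```properties
# config/jvm.options.d/heap.options
-Xms2g
-Xmx2g
```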

Official reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

