ELK 5.x -> 6.x upgrade pitfall notes

Background

Our ELK stack is built on Docker, with Filebeat collecting logs and shipping them to Redis. After upgrading the images from 5.x to 6.x, all of the indices previously created in Kibana stopped working. During troubleshooting it turned out that Filebeat's `document_type` option was removed in 6.x, so the Logstash pipeline could no longer build the index name dynamically from that field, and the index patterns in Kibana went stale.
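The breakage is easiest to see from the event shape. In Filebeat 5.x, `document_type` populated the event's `type` field; in 6.x the option is gone, so anything downstream that keyed off that field stops matching. A rough sketch of the difference (values abbreviated, not actual captured events):

```
// Filebeat 5.x event (excerpt): document_type fed the "type" field
{
  "@timestamp": "...",
  "type": "nginx-error",
  "message": "..."
}

// Filebeat 6.x event (excerpt): no per-prospector "type" any more,
// so Logstash references to it resolve to nothing
{
  "@timestamp": "...",
  "message": "..."
}
```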

Solution

Change the original Filebeat output configuration

// filebeat.yml
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true
  fields_under_root: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /mnt/docker-data/logstash/nginx/error.log
  document_type: nginx-error

to the following (i.e., move the field under a `fields` section; `fields_under_root: true` promotes everything under `fields` to the root of the event):

// filebeat.yml update
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true
  fields_under_root: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /mnt/docker-data/logstash/nginx/error.log
  fields:
    document_type: nginx-error
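With this configuration the custom field travels with every event. Because `fields_under_root` is `true`, `document_type` sits at the root of the event rather than nested under `fields`, so Logstash can reference it directly as `%{[document_type]}`; without that option the reference would have to be `%{[fields][document_type]}`. A rough sketch of the resulting event:

```
{
  "@timestamp": "...",
  "document_type": "nginx-error",
  "message": "..."
}
```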



The Logstash pipeline then uses this field to build the index name dynamically:

// logstash pipeline conf
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        manage_template => true
        # note: dd is day of month; DD in this date format means day of year
        index => "logstash-test-%{[document_type]}-%{+YYYY-MM-dd}"
    }
}

In the pipeline above, index assignment used to be handled entirely with `if` statements, which made the configuration heavily redundant; with this approach the configuration is much cleaner.
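For contrast, the pre-upgrade approach looked roughly like this (a reconstruction, not the original file; `nginx-access` is an assumed second log type for illustration): one `elasticsearch` block per log type, selected by an `if` on the `type` field and duplicated for every new log source:

```
output {
    if [type] == "nginx-error" {
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "logstash-test-nginx-error-%{+YYYY-MM-dd}"
        }
    }
    if [type] == "nginx-access" {
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "logstash-test-nginx-access-%{+YYYY-MM-dd}"
        }
    }
    # ...one more block for every additional log type
}
```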
