Parsing nginx logs with Logstash

Logstash parses, filters, and transforms the nginx logs; in this example the nginx log is pre-configured in JSON format.
The configuration is suitable for production. The architecture: Filebeat reads the logs and pushes them into Redis, and Logstash consumes them from Redis for processing.
The user_agent string and the client IP are also parsed, which makes statistics easier.
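For reference, a minimal sketch of what the JSON log_format on the nginx side might look like. This is an assumption reconstructed to match the field names the filters below expect (access_time, remote_addr, agent, body_bytes_sent, request_time, up_response_time); it is not from the original post:

# escape=json requires nginx 1.11.8+
log_format json escape=json '{"access_time":"$time_local",'
                            '"remote_addr":"$remote_addr",'
                            '"request":"$request",'
                            '"status":"$status",'
                            '"body_bytes_sent":"$body_bytes_sent",'
                            '"request_time":"$request_time",'
                            '"up_response_time":"$upstream_response_time",'
                            '"agent":"$http_user_agent"}';
access_log /var/log/nginx/access.log json;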

input {
    redis {
        host => "192.168.1.109"
        port => 6379
        db => "0"
        data_type => "list"    # must match "datatype" in the filebeat redis output
        key => "test"          # must match "key" in the filebeat redis output
    }
}
filter {
    # parse the JSON log line shipped by filebeat, then drop the raw string
    json {
        source => "message"
        remove_field => "message"
    }
    # expand the raw user-agent string in place, keeping only the useful sub-fields
    useragent {
        source => "agent"
        target => "agent"
        remove_field => ["[agent][build]","[agent][os_name]","[agent][device]","[agent][minor]","[agent][patch]"]
    }
    # use the nginx access time as @timestamp (access_time itself is dropped below)
    date {
        match => ["access_time", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
        # drop filebeat metadata we do not need, plus access_time (already copied to @timestamp)
        remove_field => ["beat","host","prospector","@version","offset","input","source","access_time"]
        # repeated convert hashes are merged; a single hash is the idiomatic form
        convert => {
            "body_bytes_sent"  => "integer"
            "up_response_time" => "float"
            "request_time"     => "float"
        }
    }
    geoip {
        source => "remote_addr"
        target => "geoip"
        remove_field => ["[geoip][country_code3]","[geoip][location]","[geoip][longitude]","[geoip][latitude]","[geoip][region_code]"]
        # add_field is applied before remove_field, so coordinates is built as the
        # [lon, lat] array Kibana maps expect while longitude/latitude still exist
        add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
        # the coordinate pair must be numeric, not strings
        convert => ["[geoip][coordinates]","float"]
    }
}
output {
    if "newvp" in [tags] {    # membership test is safer than [tags][0] if more tags are ever added
        elasticsearch {
                hosts  => ["192.168.1.110:9200","192.168.1.111:9200","192.168.1.112:9200"]
                index  => "%{type}-%{+YYYY.MM.dd}"    # "type" is set by filebeat (fields_under_root: true)
        }
        # stdout is for debugging; remove it for production use
        stdout {
                codec => rubydebug
        }
    }
}
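Before (re)starting Logstash, the pipeline can be syntax-checked. A usage sketch; the install path and config filename below are hypothetical:

# exit after validating the configuration instead of starting the pipeline
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit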

The filebeat side that reads the logs and ships them to Redis:

filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["newvp"]            # matched by the conditional in the logstash output
  fields:
    type: newvp              # becomes the "type" field used in the index name
  fields_under_root: true    # put "type" at the event root instead of under "fields"
output.redis:
  hosts: ["192.168.1.109"]
  key: "test"                # must match the logstash redis input
  datatype: list
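Once filebeat is running, a quick sanity check confirms events are reaching Redis (host and key taken from the configs above):

redis-cli -h 192.168.1.109 llen test         # number of events waiting in the list
redis-cli -h 192.168.1.109 lrange test 0 0   # peek at the first queued JSON event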

Kibana display (the original post includes a screenshot here):
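With the parsed fields indexed, per-country or per-browser statistics are a single aggregation away. An example query as a sketch; the newvp-* index pattern follows the output config above, and the .keyword sub-fields assume Elasticsearch's default dynamic mapping:

curl -s 'http://192.168.1.110:9200/newvp-*/_search?size=0' -H 'Content-Type: application/json' -d '
{
  "aggs": {
    "top_countries": { "terms": { "field": "geoip.country_name.keyword", "size": 10 } },
    "top_browsers":  { "terms": { "field": "agent.name.keyword", "size": 10 } }
  }
}'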

Reposted from: https://blog.51cto.com/liuzhengwei521/2141244
