Collecting nginx error logs with ELK

I. Filebeat collection configuration

1. Install Filebeat on the nginx server

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.1-x86_64.rpm
yum localinstall filebeat-6.3.1-x86_64.rpm

2. Configure the Filebeat collection file

vim /etc/filebeat/filebeat.yml

logging.level: info
logging.to_files: true
logging.files:
  path: /data/logs/filebeat
  name: filebeat.log
  keepfiles: 7
  permissions: 0644

filebeat.inputs:
- type: log
  enabled: true
  exclude_lines: ['\\x']
  fields:
    log-type: nginx-access-logs
  paths:
    - /data/logs_nginx/*.json.log

- type: log
  enabled: true
  fields:
    log-type: nginx-error-logs
  paths:
    - /data/logs_nginx/error.log
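The `exclude_lines` option takes regular expressions. In YAML single quotes, `'\\x'` is the regex `\\x`, i.e. a literal backslash followed by `x`, so lines containing `\xNN` escape sequences (how nginx writes unprintable bytes into JSON logs) are dropped before shipping. A rough Python sketch of that filtering, with made-up sample lines:

```python
import re

# exclude_lines: ['\\x'] is the regex \\x — a literal backslash then 'x';
# matching lines are dropped before the event is shipped.
pattern = re.compile(r"\\x")

lines = [
    '{"remote_addr":"1.2.3.4","request":"GET / HTTP/1.1"}',  # kept
    '{"request_body":"\\x16\\x03\\x01"}',                    # dropped
]
kept = [line for line in lines if not pattern.search(line)]
print(len(kept))  # 1
```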

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[fields][log-type]}'
  partition.hash:
    reachable_only: false

  required_acks: 1
  compression: snappy
  max_message_bytes: 1000000
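The `topic` setting is expanded per event from the custom field added under `fields`, so access and error logs land in separate Kafka topics without separate outputs. A minimal sketch of that routing (the event dicts are illustrative):

```python
# '%{[fields][log-type]}' resolves to each input's custom field,
# routing every event to a topic named after its log type.
def kafka_topic(event):
    return event["fields"]["log-type"]

access_event = {"fields": {"log-type": "nginx-access-logs"}}
error_event = {"fields": {"log-type": "nginx-error-logs"}}

print(kafka_topic(access_event))  # nginx-access-logs
print(kafka_topic(error_event))   # nginx-error-logs
```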

3. Start Filebeat

systemctl start filebeat

II. Configure Logstash filtering rules and store the results in Elasticsearch

1. Add a grok pattern for nginx error logs

cd /usr/share/logstash/patterns/

vim nginx

NGINX_ERROR_LOG (?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<clientip>%{IP}|%{HOSTNAME}))?(?:, server: %{IPORHOST:server})?(?:, request: %{QS:request})?(?:, upstream: (?:\"%{URI:upstream}\"|%{QS:upstream}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?

2. Configure the Logstash rules that filter the nginx logs

cd /etc/logstash/conf.d
vim nginx-error.conf
input {
    kafka {
        bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
        client_id => "nginx-error-logs"
        group_id => "logstash"
        auto_offset_reset => "latest"
        consumer_threads => 10
        decorate_events => true
        topics => ["nginx-error-logs"]
        type => "nginx-error-logs"
        codec => json { charset => "UTF-8" }
    }
}


filter {
  if [fields][log-type] == "nginx-error-logs" {
    grok {
      match => [ "message", "%{NGINX_ERROR_LOG}" ]
    }
    geoip {
      database => "/usr/share/logstash/GeoLite2-City/GeoLite2-City.mmdb"
      source => "clientip"
    }
    date {
      timezone => "Asia/Shanghai"
      match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
    }
  }
}
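The `date` filter parses the grok-extracted `timestamp` as Asia/Shanghai local time and stores the event's `@timestamp` in UTC. The same conversion sketched in Python (the sample timestamp is illustrative; Shanghai is fixed at UTC+8 with no DST):

```python
from datetime import datetime, timedelta, timezone

# Parse the nginx-format timestamp as Asia/Shanghai local time,
# then normalize to UTC, as Logstash does for @timestamp.
shanghai = timezone(timedelta(hours=8))
local = datetime.strptime("2019/07/15 10:23:45", "%Y/%m/%d %H:%M:%S").replace(tzinfo=shanghai)
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2019-07-15T02:23:45+00:00
```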



output {
  if [fields][log-type] == "nginx-error-logs" {
    elasticsearch {
      hosts => ["http://es1:9200","http://es2:9200","http://es3:9200"]
      index => "nginx-error-%{+YYYY.MM.dd}"
    }
  }
}
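The `%{+YYYY.MM.dd}` in the index name is expanded from each event's `@timestamp` (in UTC), producing one index per day. An equivalent expansion in Python, with a hypothetical timestamp:

```python
from datetime import datetime, timezone

# Logstash expands %{+YYYY.MM.dd} from the event's UTC @timestamp,
# yielding a daily index such as nginx-error-2019.07.15.
def index_name(ts):
    return ts.strftime("nginx-error-%Y.%m.%d")

print(index_name(datetime(2019, 7, 15, tzinfo=timezone.utc)))  # nginx-error-2019.07.15
```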

3. Restart Logstash

systemctl restart logstash
