Analyzing nginx logs with the FEK stack (Fluentd + Elasticsearch + Kibana)

Table of contents

      • Creating the containers separately
        • Container startup steps
        • Fluentd configuration
        • Formatting nginx logs as JSON
        • Fluentd key points
      • Starting with docker-compose
        • troubleshooting

Creating the containers separately

Container startup steps

# elasticsearch
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.1
# kibana
docker run -d --name kibana -p 5601:5601 docker.elastic.co/kibana/kibana:6.6.1
# fluentd (docker -v needs an absolute host path; the image reads /fluentd/etc/fluent.conf)
docker run -p 24224:24224 -v $PWD/fluent.conf:/fluentd/etc/fluent.conf forkdelta/fluentd-elasticsearch
# test nginx container
docker run -d --log-driver fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag="nginx" --log-opt fluentd-async-connect --name nginx-test -p 8088:80 -v $PWD/nginx.conf:/etc/nginx/nginx.conf nginx

Fluentd configuration

vi fluent.conf

<source>
  @type forward
  port 24224
</source>

<filter nginx>
  @type parser
  format json
  key_name log
</filter>

<match **>
  @type elasticsearch
  host 192.168.71.128
  port 9200
  logstash_format true
  include_timestamp true
  logstash_prefix fluentd-${tag}
  <buffer tag>
    flush_interval 10s # for testing
  </buffer>
</match>
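With logstash_format true, the elasticsearch output plugin writes each event into a date-stamped index named from logstash_prefix, and the ${tag} placeholder is resolved from the event's tag (which is why the buffer is chunked by tag). A rough Python sketch of the resulting index name, assuming the plugin's default YYYY.MM.DD date suffix (this simulates the naming, it is not the plugin's code):

```python
from datetime import date

def logstash_index(prefix: str, tag: str, day: date) -> str:
    """Approximate the index name fluent-plugin-elasticsearch derives
    when logstash_format is enabled and logstash_prefix contains ${tag}."""
    # ${tag} in logstash_prefix is replaced by the event's tag
    resolved = prefix.replace("${tag}", tag)
    # logstash_format appends the event date as YYYY.MM.DD
    return f"{resolved}-{day.strftime('%Y.%m.%d')}"

print(logstash_index("fluentd-${tag}", "nginx", date(2019, 3, 1)))
# fluentd-nginx-2019.03.01
```

So the nginx container's logs end up in daily indices such as fluentd-nginx-2019.03.01, which is the pattern to use when creating the index pattern in Kibana.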

Start the Fluentd daemon before using this logging driver. By default the driver connects to it at localhost:24224; the address is resolved on the host, not inside the container's network. Use the fluentd-address option to connect to a different address, for example:

fluentdhost:24224 or unix:///path/to/fluentd.sock

Formatting nginx logs as JSON

log_format access_json '{"@timestamp":"$time_iso8601",'
                           '"host":"$server_addr",'
                           '"clientip":"$remote_addr",'
                           '"size":$body_bytes_sent,'
                           '"responsetime":$request_time,' 
                           '"upstreamtime":"$upstream_response_time",'
                           '"upstreamhost":"$upstream_addr",'
                           '"http_host":"$host",'
                           '"url":"$uri",'
                           '"domain":"$host",'
                           '"xff":"$http_x_forwarded_for",'
                           '"referer":"$http_referer",'
                           '"status":"$status"}';
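log_format only defines the format; it still has to be enabled with an access_log directive. A minimal nginx.conf sketch (the /dev/stdout target is an assumption: writing there sends the JSON lines to the container's stdout, which is what the docker fluentd log driver captures):

```nginx
http {
    # log_format access_json '...' as defined above goes here

    server {
        listen 80;
        # emit JSON access logs on stdout so the fluentd
        # log driver can forward them
        access_log /dev/stdout access_json;
    }
}
```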

Fluentd key points

  • Every record Fluentd collects stores the original log line in the log field; that field still has to be parsed as JSON so the log can be analyzed field by field

  • Fluentd routes logs by tag: every record sent to the Fluentd server carries the tag set by the log driver, and filter/match sections select records by matching that tag
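The two points above can be sketched in Python: the docker fluentd log driver emits a record whose log field holds the raw nginx line, and the parser filter (@type parser, key_name log, format json) effectively replaces the event with the parsed JSON. This simulates the filter's effect; it is not fluentd's actual code, and the sample record fields are illustrative:

```python
import json

# Roughly what the docker fluentd log driver sends for one access-log line,
# tagged "nginx" (field names here are illustrative)
record = {
    "container_name": "/nginx-test",
    "source": "stdout",
    "log": '{"status":"200","url":"/index.html","size":612}',
}

# What the parser filter does in effect: parse the "log" field
# and use the result as the event's top-level fields
parsed_event = json.loads(record["log"])
print(parsed_event["url"])   # /index.html
```

Without the filter, Kibana would only see a single opaque log string; after parsing, status, url, size, and the other fields become individually queryable.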

Starting with docker-compose

The docker-compose file is shown below; everything else is configured as above:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    container_name: es
    ports:
      - "9200:9200" # port mapping: host port on the left, container port on the right
      - "9300:9300"
    networks:
      - efk
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - efk
  fluentd:
    image: forkdelta/fluentd-elasticsearch
    container_name: fluentd
    ports:
      - "24224:24224"
    depends_on:
      - elasticsearch
      - kibana
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf
    networks:
      - efk
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: nginx
    depends_on:
      - fluentd
    networks:
      - efk
networks:
  efk:

troubleshooting

Problem: value conflicts in the elasticsearch index; the client returns a 400 error
Error: dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch"
Solution: set logstash_format true

Problem: when the containers are created with docker-compose, nginx fails to reach Fluentd over the internal docker network
Error: failed to initialize logging driver: dial tcp: i/o timeout
Solution: map Fluentd's port to the host and have nginx reach it through the host address

Official documentation: installing Elasticsearch with Docker
