Collecting Docker Logs with EFK

Environment setup

Create the es directory and its configuration file

mkdir -p es/data
mkdir -p es/logs
cd es && vim es.yml
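
One common startup failure worth heading off: the official Elasticsearch image runs as uid 1000, so the bind-mounted data and logs directories must be writable by that user. A minimal sketch, assuming you are in the directory where the folders were just created:

```shell
# The elasticsearch user inside the official image is uid 1000 (group 0);
# make the bind-mounted directories writable for it, or ES will fail to start
# with a permission error on /usr/share/elasticsearch/data.
chown -R 1000:0 es/data es/logs
```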

es.yml

network.host: 0.0.0.0
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
# xpack.security.transport.ssl.enabled: true
# xpack.security.transport.ssl.keystore.type: PKCS12
# xpack.security.transport.ssl.verification_mode: certificate
# xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
# xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
# xpack.security.transport.ssl.truststore.type: PKCS12

# xpack.security.audit.enabled: true

Create the kibana directory and its configuration file

mkdir kibana
cd kibana && vim kibana.yml

kibana.yml

server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
elasticsearch.username: elastic
elasticsearch.password: WQabc123++

Create the filebeat directory and its configuration file

mkdir filebeat
cd filebeat && vim filebeat.docker.yml

filebeat.docker.yml

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.collectLog: "true"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:elastic}'
  password: '${ELASTICSEARCH_PASSWORD:WQabc123++}'
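
With the autodiscover condition above, only containers carrying the label collectLog=true are picked up. As a sketch, a container started like this would match (demo-app and nginx:alpine are just placeholder names):

```shell
# The collectLog=true label satisfies the autodiscover condition in
# filebeat.docker.yml, so this container's stdout/stderr is shipped to ES;
# containers without the label are ignored.
docker run -d --name demo-app --label collectLog=true nginx:alpine
```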

Write docker-compose.yml

vim docker-compose.yml

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es
    environment:
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - bootstrap.memory_lock=true
    - ELASTIC_PASSWORD=WQabc123++
    - network.publish_host=0.0.0.0
    ports:
    - 9200:9200
    - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
    - ./es/data:/usr/share/elasticsearch/data
    - ./es/logs:/usr/share/elasticsearch/logs
    - ./es/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    networks:
      - efk

  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    ports:
    - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    volumes:
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    depends_on: 
    - elasticsearch
    networks:
      - efk

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.8.0
    container_name: filebeat
    environment:
    - ELASTICSEARCH_HOSTS=elasticsearch:9200
    user: root
    volumes:
    - "./filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro"
    - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
    - "/var/run/docker.sock:/var/run/docker.sock:ro"
    depends_on: 
    - elasticsearch
    networks:
      - efk
networks:
  efk:
    driver: bridge

If you are running many services, Filebeat can be split out and run separately.
Run docker-compose up -d. If everything starts cleanly, visit http://ip:5601 and log in with the elastic username and password.
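
A quick sanity check after startup (assuming the ports above are published on localhost and the password from docker-compose.yml is unchanged):

```shell
# Bring up the stack defined in docker-compose.yml.
docker-compose up -d

# Cluster health should report green or yellow; the credentials come from
# ELASTIC_PASSWORD in docker-compose.yml.
curl -u elastic:WQabc123++ 'http://localhost:9200/_cluster/health?pretty'

# Once a labeled container has produced output, a filebeat-* index appears.
curl -u elastic:WQabc123++ 'http://localhost:9200/_cat/indices/filebeat-*?v'
```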

Configuration
  • Click Stack Management -> Index patterns
  • Click Create....
  • When finished, go back to the home page and open the Discover page to see the log data you want.
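
The index pattern can also be created without the UI, through Kibana's saved objects API. A sketch, assuming the defaults used in this setup (filebeat-* indices with an @timestamp time field):

```shell
# Create a filebeat-* index pattern via the Kibana saved objects API;
# the kbn-xsrf header is required for write requests.
curl -u elastic:WQabc123++ -X POST \
  'http://localhost:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"filebeat-*","timeFieldName":"@timestamp"}}'
```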
