Setting up an ELK logging stack with docker-compose

1. docker-compose.yml


version: "2"

services:
  elasticsearch:
    image: docker.io/elasticsearch:5.6.5
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    container_name: elasticsearch565
    hostname: elasticsearch
    restart: always
    ports:
      - "9200:9200"
      - "9300:9300"

  kibana:
    image: docker.io/kibana:5.6.5
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    container_name: kibana565
    hostname: kibana
    depends_on:
      - elasticsearch
    restart: always
    ports:
      - "5601:5601"

  filebeat:
    image: docker.elastic.co/beats/filebeat:5.6.5
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./log:/tmp
    container_name: filebeat565
    hostname: filebeat
    restart: always
    privileged: true
    depends_on:
      - elasticsearch
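Before the first start, the host-side directories referenced by the volume mounts should exist next to docker-compose.yml. A minimal sketch (the docker-compose and curl lines are commented out because they need a running Docker daemon):

```shell
# Create the host directories used by the volume mounts above
mkdir -p elasticsearch/data filebeat log

# Then bring the stack up and check that Elasticsearch answers
# on the published port:
# docker-compose up -d
# curl -s http://localhost:9200

# The layout docker-compose expects next to docker-compose.yml:
ls -d elasticsearch/data filebeat log
```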

2. In the current directory, create a filebeat directory and write the file filebeat.yml


filebeat:
  prospectors:
    - input_type: log
      paths:  # paths inside the container
        - /tmp/*
  registry_file: /usr/share/filebeat/data/registry/registry  # records how far each log has been read, so Filebeat resumes from that position after a container restart
output:
  elasticsearch:  # outputting to Elasticsearch here; Logstash is also an option
    index: "test_filebeat"  # the index to look for in Kibana
    hosts: ["elasticsearch:9200"]  # Elasticsearch address
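The output section above ships directly to Elasticsearch; as the comment notes, Filebeat can ship to Logstash instead. A sketch of that variant — the host name `logstash` is an assumption, 5044 is the default Beats port, and a matching logstash service would also need to be added to docker-compose.yml:

```yaml
output:
  logstash:
    hosts: ["logstash:5044"]  # assumed service name; 5044 is the default Beats input port
```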

3. Enter the kibana container to adjust the configuration and apply the Chinese localization
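One way to get a shell inside the container is docker exec with the container_name from the compose file; a sketch that assumes the stack from step 1 is running:

```shell
docker exec -it kibana565 /bin/bash
```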

Uncomment (remove the leading #) the following setting in the configuration file kibana.yml:

kibana.index: ".kibana"

To apply the Chinese localization:

wget https://github.com/anbai-inc/Kibana_Hanization/archive/master.zip

unzip master.zip

cd Kibana_Hanization-master

Check readme.md, follow the steps for your Kibana version, then restart Kibana.
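Before creating the index pattern in Kibana, it can help to confirm that Filebeat has actually shipped documents. A sketch that assumes the stack from step 1 is running on localhost:

```shell
# Count documents in the test_filebeat index from filebeat.yml;
# a non-zero count means lines from ./log have arrived
curl -s "http://localhost:9200/test_filebeat/_count?pretty"
```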
