Setting up distributed ELK log collection with Docker

1. Install Logstash, the log collector, on the application server.

    1.1 Pull the image: docker pull docker.elastic.co/logstash/logstash:7.16.3

    1.2 Create the /opt/software directory. Inside it, create a logstash directory and a docker-compose.yaml file; inside the logstash directory, create conf and pipeline folders to hold logstash.yml and logstash.conf respectively (see the sketch below).
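
    A minimal sketch of the corresponding commands, assuming the layout described above (the file contents are filled in by the following steps):

mkdir -p /opt/software/logstash/conf /opt/software/logstash/pipeline
touch /opt/software/docker-compose.yaml
touch /opt/software/logstash/conf/logstash.yml
touch /opt/software/logstash/pipeline/logstash.conf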

    1.2.1 docker-compose.yaml contents:

version: "3.5"

networks:
  mynet:
    driver: bridge

services:
  njydzq-logstash:
    image: docker.elastic.co/logstash/logstash:7.16.3
    container_name: njydzq-logstash
    restart: always
    ports:
      - 5044:5044
    environment:
      - LS_JAVA_OPTS=-Xms512m -Xmx512m
      - TZ=Asia/Shanghai
    command: bash -c "bin/logstash-plugin install logstash-filter-multiline && logstash -f /usr/share/logstash/pipeline/logstash.conf"
    volumes:
      - ./logstash/conf/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - /opt/application/jar/logs:/logs

    1.2.2 logstash.yml contents:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://36.134.146.187:9200" ]

    1.2.3 logstash.conf contents:

input {
  file {
    path => "/logs/*.log"
    start_position => "end"
  }
}

filter {
  multiline {
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
  }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{DATA:thread} %{DATA:clazz} %{GREEDYDATA:err_msg}" }
    remove_field => ["message","_source","host","@version","path"]
  }
  mutate { add_field => { "[@metadata][target_index]" => "test-default-log" } }
}

output {
  elasticsearch {
    hosts => ["36.134.146.187:9200"]
    index => "%{[@metadata][target_index]}"
  }
  stdout { codec => rubydebug }
}
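
As a hedged illustration of what the grok pattern above extracts, assume the application writes log lines in a "timestamp level thread class message" layout (the exact format depends on your logging pattern). A line like the one below would be split as shown; lines that do not start with a timestamp, such as stack-trace lines, are appended to the previous event by the multiline filter:

# Hypothetical input line:
2024-01-15 10:23:45.123 ERROR [http-nio-8080-exec-1] com.example.OrderService order 42 failed: timeout
# Resulting fields:
#   timestamp = 2024-01-15 10:23:45.123
#   level     = ERROR
#   thread    = [http-nio-8080-exec-1]
#   clazz     = com.example.OrderService
#   err_msg   = order 42 failed: timeout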

    1.3 Start Logstash by running docker-compose up -d from /opt/software (verification sketch below).
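
    A few commands that may help verify the collector; the container name and paths assume the compose file above, and the echoed line is purely a test entry:

# Run from /opt/software after docker-compose up -d
docker-compose ps                 # njydzq-logstash should show as "Up"
docker logs -f njydzq-logstash    # plugin install, pipeline startup, then rubydebug events
# Append a test line to the collected directory; it should show up in the container output and in ES:
echo "2024-01-15 10:23:45.123 INFO [main] com.example.Demo logstash smoke test" >> /opt/application/jar/logs/test.log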

2. Install Elasticsearch and Kibana.

2.1 Pull the Elasticsearch image: docker pull elasticsearch:7.16.3

2.2 Pull the Kibana image: docker pull kibana:7.16.3

2.3 Create the /opt/software directory, and inside it create a docker-compose.yaml file plus kibana and elasticsearch directories (setup sketch below).
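
A minimal setup sketch for this host; the chown is an assumption based on the official Elasticsearch image running as uid 1000 and needing write access to the bind-mounted data directory:

mkdir -p /opt/software/elasticsearch/config /opt/software/elasticsearch/data /opt/software/elasticsearch/plugins
mkdir -p /opt/software/kibana/config
touch /opt/software/docker-compose.yaml
touch /opt/software/elasticsearch/config/elasticsearch.yml
touch /opt/software/kibana/config/kibana.yml
chown -R 1000:1000 /opt/software/elasticsearch/data   # container user must be able to write here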

2.3.1 docker-compose.yaml contents:

version: "3.5"

networks:
  mynet:
    driver: bridge

services:
  elasticsearch-log:
    image: elasticsearch:7.16.3
    container_name: elasticsearch-log
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins

  kibana:
    image: kibana:7.16.3
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
#    environment:
#      - NODE_OPTIONS=--max-old-space-size=512
    links:
      - elasticsearch-log:elasticsearch
    depends_on:
      - elasticsearch-log
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml

2.3.2 kibana.yml contents:

elasticsearch.hosts: http://elasticsearch:9200
server.host: "0.0.0.0"
server.name: kibana
xpack.monitoring.ui.container.elasticsearch.enabled: true
#node.max_old_space_size: 512
i18n.locale: zh-CN  # Chinese UI

2.3.3 elasticsearch.yml contents:

http.host: 0.0.0.0

2.4 Switch to the /opt/software directory and run docker-compose up -d to start Elasticsearch and Kibana (verification commands below).
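
A quick way to confirm both services are up; the host and index names follow the configuration above:

curl http://127.0.0.1:9200                      # should return the Elasticsearch version JSON
curl "http://127.0.0.1:9200/_cat/indices?v"     # test-default-log appears once Logstash ships events
# Kibana: open http://<server-ip>:5601 and create an index pattern for test-default-log
# (under Stack Management) to browse the collected logs in Discover.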
