Setting up ELK with docker-compose (Original)

I have been learning Docker recently and set up an ELK environment on CentOS 7 as an exercise, trying out two ways of collecting logs with ELK + Filebeat.

The first method follows the reference links below; just work through them:
https://elk-docker.readthedocs.io/#installation
https://juejin.im/post/5ba4c8ef6fb9a05d082a1f53

The second method is to write a simple docker-compose file yourself and modify the configuration files.

1. Write docker-compose.yml


```yaml
version: '3'  # compose file format version; image tags are listed at https://www.docker.elastic.co/#
services:
  elasticsearch01:  # service name (not the container name; avoid special characters, underscores caused errors for me)
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}  # image to use
    container_name: elasticsearch01  # container name
    volumes:  # mounted files
      - ./elasticsearch/logs/:/usr/share/logs/
      # - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"  # published ports, same as docker run -d -p 80:80
      - "9300:9300"
    # restart: "always"  # restart policy; keeps the service running, recommended in production
    environment:  # environment variables passed into the container
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:  # join the named network
      - elk
  logstash_test:
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    container_name: logstash01
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:  # expresses dependencies between containers and controls startup order
      - elasticsearch01
  kibana_test:
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    container_name: kibana01
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch01
networks:
  elk:
    driver: bridge
```

2. Create a .env file specifying ELK_VERSION; the latest release at the time of writing was 6.6.1 (2019-02-22)


```
ELK_VERSION=6.6.1
```
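With docker-compose.yml and .env in place, the stack can be started and smoke-tested. A minimal sketch, assuming the default ports published above (the actual output depends on your environment and Docker setup):

```shell
# Start all three services in the background; ELK_VERSION is read from .env
docker-compose up -d

# Check that elasticsearch01, logstash01 and kibana01 are running
docker-compose ps

# Elasticsearch should answer on the published port
curl -s http://localhost:9200/_cluster/health

# Kibana serves its UI on port 5601
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601
```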

3. Modify the configuration files

     The kibana.yml configuration:


```yaml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch01:9200
```

     The logstash.yml configuration:


```yaml
http.host: "0.0.0.0"
```


     The logstash.conf configuration is below (following method 1, Filebeat sends logs to Logstash). One problem I hit: when the elasticsearch hosts entry used an IP address, Logstash reported [Manticore::SocketException] No route to host (Host unreachable); using the service name works.


```
input {
  beats {
    port => 5044
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch01:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
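The index option above builds one index per beat per day from event metadata. As a sketch of what the pattern expands to (the metadata values here are illustrative; in practice Filebeat sets [@metadata][beat] to its own name):

```python
from datetime import datetime, timezone

# Illustrative event metadata as shipped by Filebeat (hypothetical values)
metadata = {"beat": "filebeat", "type": "doc"}
event_time = datetime(2019, 2, 22, 8, 4, 23, tzinfo=timezone.utc)

# "%{[@metadata][beat]}-%{+YYYY.MM.dd}" -> beat name plus the event date
index = "{}-{}".format(metadata["beat"], event_time.strftime("%Y.%m.%d"))
print(index)  # filebeat-2019.02.22
```

Daily indices like this make it cheap to expire old logs by deleting whole indices.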

      The elasticsearch.yml configuration:


```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
```

4. Install Filebeat

I did not run Filebeat in Docker. Following the link from method 1, I checked the official download page (www.elastic.co/downloads/b…) for the latest version and installed it directly.

The filebeat.yml configuration is below. After editing it, restart Filebeat; tailing its log with `tail -f /var/log/filebeat/filebeat` should show entries being shipped.

```yaml
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/lib/docker/containers/*/*.log

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  multiline.pattern: ^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  multiline.match: after
  multiline.max_lines: 1000
  multiline.timeout: 30s

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["172.28.104.235:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.28.104.235:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
```
     

enabled: since Filebeat 6.0 this defaults to false; it must be set to true.
paths: the paths of the logs you want to collect and analyze.
multiline: without this merging step, long entries or multi-line output such as XML or Java stack traces are collected incompletely or split into several events.
pattern: the regular expression that marks the first line of an event (here it matches lines starting with a timestamp such as 2017-11-15 08:04:23:889); lines that do not match are merged into the previous event.
Comment out the Elasticsearch output and enable the Logstash output.
hosts: the IP address of the machine running the ELK stack.
To ship logs directly to Elasticsearch, edit the Elasticsearch output section; to ship them to Logstash, edit the Logstash output section.
Only one output can be enabled at a time; comment out the others.
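The multiline behaviour above can be illustrated with a small Python sketch of how the lines are grouped: with negate: true and match: after, any line that does not start with a timestamp is appended to the event opened by the last line that did (the pattern is copied from the filebeat.yml above; the log lines are made up for the example):

```python
import re

# multiline.pattern from filebeat.yml: a new event starts at a line
# beginning with a date such as "2017-11-15" or "15-Nov-2017"
pattern = re.compile(r"^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})")

lines = [
    "2017-11-15 08:04:23:889 ERROR request failed",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2017-11-15 08:04:24:001 INFO recovered",
]

# negate: true, match: after -> non-matching lines are appended to the
# preceding matching line, so a stack trace stays inside one event
events = []
for line in lines:
    if pattern.match(line) or not events:
        events.append(line)
    else:
        events[-1] += "\n" + line

print(len(events))  # 2: the stack trace folds into the first event
```

Without these settings each stack-trace line would arrive in Elasticsearch as a separate document.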
