1. Introduction to the Kubernetes log management solution:

  In a Kubernetes cluster the usual logging solution is the EFK stack, where E, F, and K stand for Elasticsearch, Fluentd, and Kibana respectively.

  Fluentd is the log collection client. Its main job is to collect the logs of every Pod, and it also collects OS-level logs such as those of the kubelet, the docker service, the apiserver, and other components.

For this reason fluentd normally runs on Kubernetes as a DaemonSet, so that every node, including the Kubernetes master nodes, runs a fluentd container.

  Elasticsearch provides the log storage: it stores the collected logs and builds indices over them, and it is normally deployed as a cluster. A standard ES cluster is divided into client, master, and data roles. The client nodes expose the interface to the outside world, so accessing the ES cluster means accessing the client nodes' Service; the clients act as coordinating nodes that route indexing and search requests to the data nodes, while the master nodes manage cluster state and membership.

  Kibana is used for log display, management, and search. It presents the log indices in the ES cluster through a graphical interface, where you can view log content in real time and search logs by keyword.

  Some solutions take the form fluentd ---> logstash (log filtering, format conversion, etc.) ---> Redis cache ---> ES cluster storage ---> Kibana display. Logstash handles filtering and format conversion, and Redis acts as a buffer: Elasticsearch ingestion is relatively slow while fluentd ships logs quickly, so without a buffer in between, data may be delayed or lost.

  There are also several ways to collect logs from containers in Kubernetes. Normally the fluentd collector can only pick up STDOUT (standard output) logs. What is standard output? It is whatever you can see with kubectl logs -f Pod_name. Some Java applications, however, do not write their business logs to STDOUT but to a file instead.
As you probably know, files written inside a container are ephemeral: once the container is recreated the content is gone, unless the log files live on a mounted shared volume. But what if fluentd cannot see these file-based logs? This is where a sidecar container comes in: it reads the business log file and re-emits its content to standard output. Alternatively, you can modify the log4j configuration so that the application ships its logs directly to ES.
The sidecar approach has one obvious drawback: the logs remain in the original file inside the container and are written out again via stdout, so they effectively consume twice the disk space.
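
  A minimal sketch of the sidecar pattern, assuming the application writes its log to /var/log/app/app.log (the image name and log path are illustrative, not from the deployment described below):

apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
  - name: app
    image: my-java-app:latest            # illustrative application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # re-emit the file-based log to STDOUT so fluentd can pick it up
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}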

2. Installing fluentd, ES, and kibana with Helm:

  Since previous posts have already described in detail how to install Redis, RabbitMQ, MySQL, Jenkins, and so on with Helm, I will not repeat how to use Helm here.
  Just keep a few points in mind:

  1. The official stable charts repository is https://github.com/helm/charts/tree/master/stable/ ; every application you need is under that directory;
  2. The usual flow is: helm search the chart name ---> helm fetch the chart ---> configure values.yaml according to the chart's documentation ---> install the Helm release, for example as sketched below.
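
  A rough sketch of that flow for the fluentd-elasticsearch chart used below (Helm 2 syntax; the edited values.yaml lives in the unpacked chart directory):

  helm search fluentd-elasticsearch
  helm fetch stable/fluentd-elasticsearch --untar
  vim fluentd-elasticsearch/values.yaml
  helm install --name fluentd-elasticsearch --namespace kube-system ./fluentd-elasticsearch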

  Next, I will walk through the key points of each application's values.yaml, starting with the fluentd-elasticsearch chart:

image:
  repository: k8s.harbor.maimaiti.site/system/fluentd
## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
  tag: v2.5.1
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistrKeySecretName

## If using AWS Elasticsearch, all requests to ES need to be signed regardless of whether
## one is using Cognito or not. By setting this to true, this chart will install a sidecar
## proxy that takes care of signing all requests being sent to the AWS ES Domain.
awsSigningSidecar:
  enabled: false
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9

# Specify to use specific priorityClass for pods
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority
# Pods to make scheduling of the pending Pod possible.
priorityClassName: ""

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 500Mi
  # requests:
  #   cpu: 100m
  #   memory: 200Mi

elasticsearch:
  host: 'elasticsearch-client.kube-system'
  port: 9200
  scheme: 'http'
  ssl_version: TLSv1_2
  user: ""
  password: ""
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  logstash_prefix: 'logstash'

# If you want to add custom environment variables, use the env dict
# You can then reference these in your config file e.g.:
#     user "#{ENV['OUTPUT_USER']}"
env:
  # OUTPUT_USER: my_user
  # LIVENESS_THRESHOLD_SECONDS: 300
  # STUCK_THRESHOLD_SECONDS: 900

# If you want to add custom environment variables from secrets, use the secret list
secret:
# - name: ELASTICSEARCH_PASSWORD
#   secret_name: elasticsearch
#   secret_key: password

rbac:
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Specify if a Pod Security Policy for node-exporter must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
  enabled: false
  annotations: {}
    ## Specify pod annotations
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
    ##
    # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'

livenessProbe:
  enabled: true

annotations: {}

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "24231"

## DaemonSet update strategy
## Ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
updateStrategy:
  type: RollingUpdate

tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/master
  #         operator: DoesNotExist

nodeSelector: {}

service:
  type: ClusterIP
  ports:
    - name: "monitor-agent"
      port: 24231

serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  interval: 10s
  path: /metrics
  labels: {}

prometheusRule:
  ## If true, a PrometheusRule CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  prometheusNamespace: monitoring
  labels: {}
  #  role: alert-rules

configMaps:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    
    <source>
      @id nginxtest1.log
      @type tail
      path /var/log/containers/nginxtest1-*.log
      pos_file /var/log/nginxtest1.log.pos
      tag nginxtest1
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    
    <source>
      @id httpdtest1.log
      @type tail
      path /var/log/containers/httpdtest1-*.log
      pos_file /var/log/httpdtest1.log.pos
      tag httpdtest1
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

  The fluentd configuration is the most critical and the most complex part; whether log collection succeeds depends mainly on whether fluentd is configured correctly. The key points are as follows:

  1. As usual, change all images to the private registry;
  2. The block that connects to the ES cluster; the key point is that it connects to the client Service (elasticsearch-client);
  3. A Service is defined for fluentd;
  4. The core is the configMaps section, i.e. fluentd's own configuration files. A buffer directory is defined because fluentd first writes logs to a local buffer and then ships them to ES; if the buffer is sized too small it can easily become a bottleneck;
  5. The default configuration already collects container logs from /var/log/containers/*.log. As long as our containers write their logs to STDOUT, the logs appear on each host under /var/log/containers/ in files whose names start with the Pod name; simply mount this host directory into the fluentd container and fluentd will collect the logs of every container running on that host;
  6. With the default configuration all logs are collected into a single index, named logstash-<date> by default. After the index pattern is created in Kibana, all logs are mixed together; to view the logs of a single Pod you have to enter your own query conditions, for example filtering by Pod name or container name.
  7. So we ran a test: the cluster runs an nginx application and an Apache httpd application, and the two applications are given two separate indices by using different tags; this is the key point of this EFK solution;
  8. Besides Pod logs, the fluentd configuration also collects the logs of kube-controller-manager, kube-scheduler, kube-apiserver, kube-proxy, kubelet, etcd, docker and other services by default;
  9. output.conf assigns a different index prefix (logstash_prefix) to each source (for example the nginx and httpd sources); the default logstash_prefix is logstash;
  10. When defining outputs, a few parameters controlling fluentd's local buffering may also need tuning, such as flush_thread_count, flush_interval, and chunk_limit_size; tuned properly, the logs will not back up or be lost. A sketch of such an output block follows.
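
  The output.conf section is truncated above; as a rough sketch, a per-source block could look like the following (the host and port come from the elasticsearch settings earlier, while the buffer numbers are illustrative assumptions — the point being demonstrated is the per-tag logstash_prefix):

    <match nginxtest1>
      @id elasticsearch_nginxtest1
      @type elasticsearch
      host elasticsearch-client.kube-system
      port 9200
      logstash_format true
      # writes to indices named nginxtest1-YYYY.MM.DD instead of logstash-YYYY.MM.DD
      logstash_prefix nginxtest1
      <buffer>
        flush_thread_count 8
        flush_interval 5s
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>

  After the chart is deployed, helm status for the fluentd-elasticsearch release reports the following: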
LAST DEPLOYED: Tue Apr 30 17:55:30 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
fluentd-elasticsearch  6     2d

==> v1/ServiceAccount
NAME                   SECRETS  AGE
fluentd-elasticsearch  1        2d

==> v1/ClusterRole
NAME                   AGE
fluentd-elasticsearch  2d

==> v1/ClusterRoleBinding
NAME                   AGE
fluentd-elasticsearch  2d

==> v1/Service
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
fluentd-elasticsearch  ClusterIP  10.200.108.50  <none>       24231/TCP  2d

==> v1/DaemonSet
NAME                   DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
fluentd-elasticsearch  8        8        8      8           8          <none>         2d

==> v1/Pod(related)
NAME                         READY  STATUS   RESTARTS  AGE
fluentd-elasticsearch-2trp8  1/1    Running  0         2d
fluentd-elasticsearch-2xgtb  1/1    Running  0         2d
fluentd-elasticsearch-589jc  1/1    Running  0         2d
fluentd-elasticsearch-ctkv8  1/1    Running  0         2d
fluentd-elasticsearch-d5dvz  1/1    Running  0         2d
fluentd-elasticsearch-kgdxp  1/1    Running  0         2d
fluentd-elasticsearch-r2c8h  1/1    Running  0         2d
fluentd-elasticsearch-z8p7b  1/1    Running  0         2d

NOTES:
1. To verify that Fluentd has started, run:

  kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=fluentd-elasticsearch"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.
2. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace kube-system -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=fluentd-elasticsearch" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
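
  Since the Pods expose the monitor-agent port 24231 and carry the prometheus.io annotations configured above, a more direct sanity check (a sketch; the /metrics path matches the serviceMonitor setting) is:

  kubectl --namespace kube-system port-forward $POD_NAME 24231:24231
  curl http://127.0.0.1:24231/metrics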

  Below is the values.yaml configuration for the ES cluster:

# Default values for elasticsearch.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
appVersion: "6.7.0"

## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
  client:
    create: true
    name:
  master:
    create: true
    name:
  data:
    create: true
    name:

## Specify if a Pod Security Policy for node-exporter must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
  enabled: false
  annotations: {}
    ## Specify pod annotations
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
    ##
    # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'

image:
  # repository: "k8s.harbor.maimaiti.site/system/elasticsearch-oss"
  repository: "k8s.harbor.maimaiti.site/system/elasticsearch"
  tag: "6.7.0"
  pullPolicy: "IfNotPresent"
  # If specified, use these secrets to access the image
  # pullSecrets:
  #   - registry-secret

testFramework:
  image: "dduportal/bats"
  tag: "0.4.0"

initImage:
  repository: "busybox"
  tag: "latest"
  pullPolicy: "Always"

cluster:
  name: "elasticsearch"
  # If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want
  # enabled in the environment variables outlined in the README
  xpackEnable: true
  # Some settings must be placed in a keystore, so they need to be mounted in from a secret.
  # Use this setting to specify the name of the secret
  # keystoreSecret: eskeystore
  config: {}
  # Custom parameters, as string, to be added to ES_JAVA_OPTS environment variable
  additionalJavaOpts: ""
  # Command to run at the end of deployment
  bootstrapShellCommand: ""
  env:
    # IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes
    # To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible
    # node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
    MINIMUM_MASTER_NODES: "2"
  # List of plugins to install via dedicated init container
  plugins: []
    # - ingest-attachment
    # - mapper-size

client:
  name: client
  replicas: 2
  serviceType: ClusterIP
  ## If coupled with serviceType = "NodePort", this will set a specific nodePort to the client HTTP port
  # httpNodePort: 30920
  loadBalancerIP: {}
  loadBalancerSourceRanges: {}
## (dict) If specified, apply these annotations to the client service
#  serviceAnnotations:
#    example: client-svc-foo
  heapSize: "512m"
  # additionalJavaOpts: "-XX:MaxRAM=512m"
  antiAffinity: "soft"
  nodeAffinity: {}
  nodeSelector: {}
  tolerations: []
  initResources: {}
    # limits:
    #   cpu: "25m"
    #   # memory: "128Mi"
    # requests:
    #   cpu: "25m"
    #   memory: "128Mi"
  resources:
    limits:
      cpu: "1"
      # memory: "1024Mi"
    requests:
      cpu: "25m"
      memory: "512Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each client Pod
  # podAnnotations:
  #   example: client-foo
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    # maxUnavailable: 1
  ingress:
    enabled: false
    # user: NAME
    # password: PASSWORD
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - chart-example.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

master:
  name: master
  exposeHttp: false
  replicas: 3
  heapSize: "512m"
  # additionalJavaOpts: "-XX:MaxRAM=512m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
    storageClass: "dynamic"
  readinessProbe:
    httpGet:
      path: /_cluster/health?local=true
      port: 9200
    initialDelaySeconds: 5
  antiAffinity: "soft"
  nodeAffinity: {}
  nodeSelector: {}
  tolerations: []
  initResources: {}
    # limits:
    #   cpu: "25m"
    #   # memory: "128Mi"
    # requests:
    #   cpu: "25m"
    #   memory: "128Mi"
  resources:
    limits:
      cpu: "1"
      # memory: "1024Mi"
    requests:
      cpu: "25m"
      memory: "512Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each master Pod
  # podAnnotations:
  #   example: master-foo
  podManagementPolicy: OrderedReady
  podDisruptionBudget:
    enabled: false
    minAvailable: 2  # Same as `cluster.env.MINIMUM_MASTER_NODES`
    # maxUnavailable: 1
  updateStrategy:
    type: OnDelete

data:
  name: data
  exposeHttp: false
  replicas: 2
  heapSize: "1536m"
  # additionalJavaOpts: "-XX:MaxRAM=1536m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "30Gi"
    storageClass: "dynamic"
  readinessProbe:
    httpGet:
      path: /_cluster/health?local=true
      port: 9200
    initialDelaySeconds: 5
  terminationGracePeriodSeconds: 3600
  antiAffinity: "soft"
  nodeAffinity: {}
  nodeSelector: {}
  tolerations: []
  initResources: {}
    # limits:
    #   cpu: "25m"
    #   # memory: "128Mi"
    # requests:
    #   cpu: "25m"
    #   memory: "128Mi"
  resources:
    limits:
      cpu: "1"
      # memory: "2048Mi"
    requests:
      cpu: "25m"
      memory: "1536Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each data Pod
  # podAnnotations:
  #   example: data-foo
  podDisruptionBudget:
    enabled: false
    # minAvailable: 1
    maxUnavailable: 1
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: OnDelete
  hooks:  # post-start and pre-stop hooks
    drain:  # drain the node before stopping it and re-integrate it into the cluster after start
      enabled: true

## Sysctl init container to setup vm.max_map_count
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
sysctlInitContainer:
  enabled: true
## Additional init containers
extraInitContainers: |

  The main points of the ES cluster configuration are as follows:

  1. Change all images to the private registry, as usual;
  2. Define the name of the ES cluster;
  3. Define the client Pods' name, replica count, and JVM heap size;
  4. Define the master Pods' name and replica count; the number of masters should be odd, 3 or 5, so that MINIMUM_MASTER_NODES can be set to a majority (2 out of 3 here); it is best to enable persistence, using the Ceph RBD StorageClass;
  5. Define the data Pods' name and replica count; persistence is best enabled here as well. Note that the data nodes enable the drain hooks: data nodes must be stopped and started in order, drained before they leave the cluster and re-integrated after they come back up;
  6. Both Kibana and fluentd connect to the elasticsearch-client Service; 9200 is the service port and 9300 is the cluster transport port. There is also a headless Service, elasticsearch-discovery, whose purpose is to give each Pod a fixed name, so the name stays the same even when an ES node is restarted.
[root@master-01 fluentd-elasticsearch]# helm status elasticsearch
LAST DEPLOYED: Tue Apr 30 17:17:13 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                DATA  AGE
elasticsearch       4     2d
elasticsearch-test  1     2d

==> v1/ServiceAccount
NAME                  SECRETS  AGE
elasticsearch-client  1        2d
elasticsearch-data    1        2d
elasticsearch-master  1        2d

==> v1/Service
NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
elasticsearch-client     ClusterIP  10.200.180.10  <none>       9200/TCP  2d
elasticsearch-discovery  ClusterIP  None           <none>       9300/TCP  2d

==> v1beta1/Deployment
NAME                  DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
elasticsearch-client  2        2        2           2          2d

==> v1beta1/StatefulSet
NAME                  DESIRED  CURRENT  AGE
elasticsearch-data    2        2        2d
elasticsearch-master  3        3        2d

==> v1/Pod(related)
NAME                                   READY  STATUS   RESTARTS  AGE
elasticsearch-client-6bb89766f9-wfbxh  1/1    Running  0         2d
elasticsearch-client-6bb89766f9-xvz6c  1/1    Running  0         2d
elasticsearch-data-0                   1/1    Running  0         2d
elasticsearch-data-1                   1/1    Running  0         2d
elasticsearch-master-0                 1/1    Running  0         2d
elasticsearch-master-1                 1/1    Running  0         2d
elasticsearch-master-2                 1/1    Running  0         2d

NOTES:
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    elasticsearch-client.kube-system.svc

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace kube-system -l "app=elasticsearch,component=client,release=elasticsearch" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace kube-system $POD_NAME 9200:9200
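
  With the port-forward above in place, the cluster state and the log indices can be checked with the standard Elasticsearch APIs, for example:

    curl http://127.0.0.1:9200/_cluster/health?pretty
    curl http://127.0.0.1:9200/_cat/indices?v

  If the per-source prefixes described earlier are in effect, indices such as nginxtest1-<date> and httpdtest1-<date> should show up alongside the default logstash-<date> indices once fluentd starts shipping logs.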

  Below is the values.yaml configuration for Kibana:

image:
  # repository: "k8s.harbor.maimaiti.site/system/kibana-oss"
  repository: "k8s.harbor.maimaiti.site/system/kibana"
  tag: "6.7.0"
  pullPolicy: "IfNotPresent"

testFramework:
  image: "dduportal/bats"
  tag: "0.4.0"

commandline:
  args: []

env: {}
  # All Kibana configuration options are adjustable via env vars.
  # To adjust a config option to an env var uppercase + replace `.` with `_`
  # Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
  #
  # ELASTICSEARCH_URL: http://elasticsearch-client:9200
  # SERVER_PORT: 5601
  # LOGGING_VERBOSE: "true"
  # SERVER_DEFAULTROUTE: "/app/kibana"

files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch-client.kube-system:9200

    ## Custom config properties below
    ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
    # server.port: 5601
    # logging.verbose: "true"
    # server.defaultRoute: "/app/kibana"

deployment:
  annotations: {}

service:
  type: NodePort
  nodePort: 30001
  # clusterIP: None
  # portName: kibana-svc
  externalPort: 443
  internalPort: 5601
  # authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
  ## External IP addresses of service
  ## Default: nil
  ##
  # externalIPs:
  # - 192.168.0.1
  #
  ## LoadBalancer IP if service.type is LoadBalancer
  ## Default: nil
  ##
  # loadBalancerIP: 10.2.2.2
  annotations: {}
    # Annotation example: setup ssl with aws cert when service.type is LoadBalancer
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
  labels: {}
    ## Label example: show service URL in `kubectl cluster-info`
    # kubernetes.io/cluster-service: "true"
  ## Limit load balancer source ips to list of CIDRs (where available)
  # loadBalancerSourceRanges: []
  selector: {}

ingress:
  enabled: false
  # hosts:
    # - kibana.localhost.localdomain
    # - localhost.localdomain/kibana
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # If set and create is false, the service account must be existing
  name:

livenessProbe:
  enabled: false
  path: /status
  initialDelaySeconds: 30
  timeoutSeconds: 10

readinessProbe:
  enabled: false
  path: /status
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false

extraContainers: |
# - name: proxy
#   image: quay.io/gambol99/keycloak-proxy:latest
#   args:
#     - --resource=uri=/*
#     - --discovery-url=https://discovery-url
#     - --client-id=client
#     - --client-secret=secret
#     - --listen=0.0.0.0:5602
#     - --upstream-url=http://127.0.0.1:5601
#   ports:
#     - name: web
#       containerPort: 9090

extraVolumeMounts: []

extraVolumes: []

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 300Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3

# Custom labels for pod assignment
podLabels: {}

# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
  enabled: false
  timeout: 60
  xpackauth:
    enabled: true
    username: myuser
    password: mypass
  dashboards: {}
    # k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json

# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
  # set to true to enable plugins installation
  enabled: false
  # set to true to remove all kibana plugins before installation
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
    # - logtrail,0.1.31,https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-6.6.0-0.1.31.zip
    # - other_plugin

persistentVolumeClaim:
  # set to true to use pvc
  enabled: false
  # set to true to use you own pvc
  existingClaim: false
  annotations: {}

  accessModes:
    - ReadWriteOnce
  size: "5Gi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

# default security context
securityContext:
  enabled: false
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000

extraConfigMapMounts: []
  # - name: logtrail-configs
  #   configMap: kibana-logtrail
  #   mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
  #   subPath: logtrail.json

# Add your own init container or uncomment and modify the given example.
initContainers: {}
  ## Don't start kibana till Elasticsearch is reachable.
  ## Ensure that it is available at http://elasticsearch:9200
  ##
  # es-check:  # <- will be used as container name
  #   image: "appropriate/curl:latest"
  #   imagePullPolicy: "IfNotPresent"
  #   command:
  #     - "/bin/sh"
  #     - "-c"
  #     - |
  #       is_down=true
  #       while "$is_down"; do
  #         if curl -sSf --fail-early --connect-timeout 5 http://elasticsearch:9200; then
  #           is_down=false
  #         else
  #           sleep 5
  #         fi
  #       done

  The Kibana configuration is comparatively simple; the main points are:

  1. Switch the image to the private registry address;
  2. In the Kibana configuration file, set the Elasticsearch address; note that it connects to elasticsearch-client on port 9200;
  3. Kibana only needs to be deployed as a stateless Deployment, whereas, as shown earlier, the Elasticsearch master and data nodes are StatefulSets and fluentd must be a DaemonSet;
  4. An Ingress can be defined, since Kibana is the component that users actually access; a sketch follows below.
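
  A minimal sketch of the chart's ingress section with Ingress enabled (the hostname is an assumed example, not taken from the deployment above):

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - kibana.maimaiti.site      # assumed hostname; replace with your own domain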

You are welcome to follow my personal WeChat official account, "云时代IT运维", which is updated periodically with the latest application operations articles, covering virtualization and container technology, CI/CD, automated operations, and other cutting-edge operations topics and trends.
