Building a persistent, highly available elasticsearch + fluentd + filebeat + kibana stack on Kubernetes

1. Environment

kubernetes:v1.14.3-tke.4

elasticsearch:7.4.2 

fluentd:2.7.0

filebeat:7.4.2

kibana:7.4.2

Reference (the Kubernetes EFK addon): https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

 

 

2. Create tke-StorageClass.yaml

The StorageClass provides dynamically provisioned persistent storage for Elasticsearch.

tke-StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # annotations:
  #   storageclass.beta.kubernetes.io/is-default-class: "true"
  #   With this annotation the class becomes the default; PVCs that do not specify a StorageClass will use it automatically
  name: cloud-efk-data
provisioner: cloud.tencent.com/qcloud-cbs ## provisioner shipped with TKE clusters
parameters:
  type: CLOUD_PREMIUM
  # Supported values: CLOUD_BASIC, CLOUD_PREMIUM, CLOUD_SSD; unrecognized values are treated as CLOUD_BASIC
  paymode: POSTPAID
  # paymode is the billing mode of the cloud disk. PREPAID (monthly subscription) supports only the Retain reclaim policy; the default POSTPAID (pay-as-you-go) supports both Retain and Delete, and Retain only takes effect on cluster versions above 1.8
  # aspid: asp-123
  # Optionally specify a snapshot policy; the disk is bound to it after creation, and a failed binding does not block disk creation
reclaimPolicy: Retain

Create tke-StorageClass.yaml:

kubectl create -f tke-StorageClass.yaml 


This creates the StorageClass in the TKE cluster; Elasticsearch will consume it when the StatefulSet is created.
After Elasticsearch is up, check the StorageClass, PVC, and PV again (see section 4).
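
Before moving on, it can be worth confirming that the StorageClass was registered with the expected provisioner and reclaim policy (the name cloud-efk-data matches the manifest above):

kubectl get sc cloud-efk-data
kubectl describe sc cloud-efk-data | grep -E 'Provisioner|ReclaimPolicy|Parameters'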

 

 

Kubernetes documentation on persistent volumes: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#
Volumes: https://kubernetes.io/zh/docs/concepts/storage/volumes/

Common volume plugins differ in which access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany) they support; see the access-mode table in the Kubernetes documentation linked above.

 

Phases

During its lifecycle, a PV can be in one of four phases (you can inspect them with kubectl, as shown after this list):

  • Available: the volume is free and has not yet been bound to any PVC
  • Bound: the volume has been bound to a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
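
A quick way to see the phase, claim, and reclaim policy of the PVs that get provisioned for Elasticsearch (the PV names are generated by the provisioner, so the exact names will differ):

kubectl get pv
kubectl get pv -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,CLAIM:.spec.claimRef.name,RECLAIM:.spec.persistentVolumeReclaimPolicy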

 

 

3. Create es-service.yaml

es-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: default
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  publishNotReadyAddresses: true
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
  selector:
    k8s-app: elasticsearch-logging
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: default
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

Create es-service.yaml:

kubectl create -f es-service.yaml 
kubectl get svc | egrep   'elasticsearch|fluentd|filebeat|kibana'
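
The elasticsearch-discovery Service exposes the transport port (9300) and publishes not-ready addresses so the Elasticsearch pods can find each other while the cluster is still forming; elasticsearch-logging fronts the HTTP port (9200). A quick sanity check that both Services exist and resolve (the busybox image and version here are just an assumption for a throwaway DNS test pod):

kubectl get endpoints elasticsearch-discovery elasticsearch-logging
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup elasticsearch-discovery.default.svc.cluster.local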

 


 

 

4. Create es-statefulset.yaml

es-statefulset.yaml

# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: default
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  namespace: default
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: default
  labels:
    k8s-app: elasticsearch-logging
    version: v7.4.2
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 3
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v7.4.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v7.4.2
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: ccr.ccs.tencentyun.com/lvvimage/elasticsearch:7.4.2
        name: elasticsearch-logging
        imagePullPolicy: Always
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 500m
            memory: 1024Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging-data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: TZ
          value: Asia/Shanghai
        - name: cluster.name
          value: "es_cluster"
        - name: node.master
          value: "true"
        - name: discovery.seed_hosts # older versions use discovery.zen.ping.unicast.hosts
          value: "elasticsearch-discovery" # the discovery Service defined above
        - name: cluster.initial_master_nodes # initial master nodes; the related setting in older versions is discovery.zen.minimum_master_nodes
          value: "elasticsearch-logging-0,elasticsearch-logging-1,elasticsearch-logging-2"
        - name: ES_JAVA_OPTS
          value: -Xmx1000m -Xms1000m
        - name: xpack.security.enabled
          value: "false"
      initContainers:
      # Elasticsearch runs as user 1000, so the data directory must be owned by
      # that user. If your storage already provisions volumes with the correct
      # ownership, feel free to remove this init container.
      - name: fix-permissions
        image: alpine:3.6
        command: ["/bin/sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elasticsearch-logging-data
          mountPath: /usr/share/elasticsearch/data
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init-vm
        securityContext:
          privileged: true
      # Elasticsearch requires ulimit to be at least 65536.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      - name: elasticsearch-logging-init-ulimit
        image: alpine:3.6
        command: ["/bin/sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging-data
      labels:
        app: elasticsearch-logging
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: cloud-efk-data
      resources:
        requests:
          storage: 10Gi

Notes on the bootstrap configuration:
Elasticsearch reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html

Configuring Elasticsearch security: https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-security.html
If Elasticsearch security is enabled, matching credentials must also be configured in fluentd, filebeat, and kibana.

 

Create es-statefulset.yaml:

kubectl create -f es-statefulset.yaml
kubectl get pods | egrep   'elasticsearch|fluentd|filebeat|kibana'
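
The StatefulSet brings the three Elasticsearch nodes up one at a time, so it can take a few minutes before all pods are Running. One way to wait for and watch the rollout:

kubectl rollout status statefulset/elasticsearch-logging
kubectl get pods -l k8s-app=elasticsearch-logging -o wide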

 


Check the StorageClass again, together with the PVs and PVCs created by the StatefulSet's volumeClaimTemplates:

kubectl get sc
kubectl get pv
kubectl get pvc

 


 

Check whether the Elasticsearch cluster is healthy:

curl $(kubectl get pods -o wide | grep elasticsearch-logging-0 | awk '{print $6}'):9200/_cat/health?v
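
Beyond _cat/health, it can be useful to confirm that all three nodes joined the cluster and that a master was elected; this reuses the same pod-IP lookup as the command above:

ES_IP=$(kubectl get pods -o wide | grep elasticsearch-logging-0 | awk '{print $6}')
curl $ES_IP:9200/_cat/nodes?v
curl $ES_IP:9200/_cat/master?v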

 

 

5. Create fluentd-es-configmap.yaml

fluentd-es-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: default
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).

    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

Note: the ConfigMap is shown abridged here; the remaining system.input.conf sources, monitoring.conf, and the output.conf section that forwards records to Elasticsearch follow the upstream fluentd-es-configmap linked in section 1.

Create fluentd-es-configmap.yaml:

kubectl create -f fluentd-es-configmap.yaml
kubectl get cm | egrep   'elasticsearch|fluentd|filebeat|kibana'
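
To double-check that the ConfigMap was created with the expected configuration keys before the DaemonSet mounts it:

kubectl get configmap fluentd-es-config-v0.2.0 -o yaml | grep '.conf: |-'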

 

 

 

6. Create fluentd-es-ds.yaml

fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: default
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.7.0
  namespace: default
  labels:
    k8s-app: fluentd-es
    version: v2.7.0
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.7.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v2.7.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: ccr.ccs.tencentyun.com/lvvimage/fluentd:v2.7.0
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0

Create fluentd-es-ds.yaml:

kubectl create -f fluentd-es-ds.yaml
kubectl get pods | egrep   'elasticsearch|fluentd|filebeat|kibana'
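
Once the DaemonSet pods are Running on every node, fluentd should start creating daily indices in Elasticsearch (logstash-YYYY.MM.DD, assuming the upstream output.conf defaults with logstash_format enabled). A quick verification:

kubectl get pods -l k8s-app=fluentd-es -o wide
ES_IP=$(kubectl get pods -o wide | grep elasticsearch-logging-0 | awk '{print $6}')
curl $ES_IP:9200/_cat/indices?v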

 


 

 

7. Create filebeat-kubernetes.yaml

 

filebeat-kubernetes.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username:
      password:
    #  username: ${ELASTICSEARCH_USERNAME}
    #  password: ${ELASTICSEARCH_PASSWORD}

    setup.kibana:
      host: ['${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}']
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: ccr.ccs.tencentyun.com/lvvimage/filebeat:7.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
#        - name: ELASTICSEARCH_USERNAME
#          value: elastic
#        - name: ELASTICSEARCH_PASSWORD
#          value: 
        - name: KIBANA_HOST
          value: kibana
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
---

Create filebeat-kubernetes.yaml:

 

kubectl create -f filebeat-kubernetes.yaml 
kubectl get pods | egrep   'elasticsearch|fluentd|filebeat|kibana'
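
Filebeat writes to its own indices (filebeat-7.4.2-* by default in 7.x); to confirm that the Filebeat pipeline is indexing and that its pods are healthy:

ES_IP=$(kubectl get pods -o wide | grep elasticsearch-logging-0 | awk '{print $6}')
curl "$ES_IP:9200/_cat/indices/filebeat-*?v"
kubectl logs -l k8s-app=filebeat --tail=20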

 


 

 

8. Create kibana-service.yaml

kibana-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: default
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 32369
  selector:
    k8s-app: kibana-logging

Create kibana-service.yaml:

kubectl create -f kibana-service.yaml
kubectl get svc | egrep   'elasticsearch|fluentd|filebeat|kibana'
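
The Service pins the NodePort to 32369; if that port is already taken on the cluster, Service creation fails, so it is worth confirming the port that was actually assigned:

kubectl get svc kibana-logging -o jsonpath='{.spec.ports[0].nodePort}'; echo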

 


 

 

9. Create kibana-configmap.yaml

kibana-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-configmap-7.4.2
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0"
    elasticsearch.hosts: ["http://elasticsearch-logging.default.svc.cluster.local:9200"]
    i18n.locale: "zh-CN"
    #elasticsearch.username: "elastic"
    #elasticsearch.password: "nafd0IdsajmVO3cL1dsazc32a"

Create kibana-configmap.yaml:

kubectl create -f kibana-configmap.yaml
kubectl get cm | egrep   'elasticsearch|fluentd|filebeat|kibana'
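
Since the Deployment in the next step mounts this ConfigMap over the whole /usr/share/kibana/config directory, check that kibana.yml contains what you expect before rolling Kibana out:

kubectl get configmap kibana-configmap-7.4.2 -o jsonpath='{.data.kibana\.yml}'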

 

 

 

10. Create kibana-deployment.yaml

kibana-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: default
  labels:
    k8s-app: kibana-logging
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: ccr.ccs.tencentyun.com/lvvimage/kibana:7.4.2
        imagePullPolicy: IfNotPresent
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 100m
            memory: 1024Mi
        env:
        - name: TZ
          value: Asia/Shanghai
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/kibana/config
      volumes:
      - name: config-volume
        configMap:
          name: kibana-configmap-7.4.2
          items:
            - key: "kibana.yml"
              path: "kibana.yml"

Create kibana-deployment.yaml:

kubectl create -f kibana-deployment.yaml
kubectl get pods | egrep   'elasticsearch|fluentd|filebeat|kibana'

 


Check whether Kibana started successfully (with server.host: "0" it logs that it is listening on http://0:5601):

kubectl logs $(kubectl get pods | grep 'kibana' | awk '{print $1}') | grep http://0:5601
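
Another way to confirm that Kibana is up and can reach Elasticsearch is its status API; here it is queried through a temporary port-forward (the short sleep just gives the forward time to start):

kubectl port-forward svc/kibana-logging 5601:5601 >/dev/null 2>&1 &
sleep 5
curl -s http://127.0.0.1:5601/api/status | grep -o '"state":"[a-z]*"' | head -n 1
kill %1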

 

 

11. Access Kibana

Access Kibana at <node IP>:32369, the NodePort defined in the Service above.
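
To find a node IP to use (on TKE the INTERNAL-IP is reachable from inside the VPC; using a node's public IP or a load balancer for external access is an assumption about your network setup):

kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'; echo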

(Screenshot: the Kibana start page.)

 

 

12. Create an index pattern

(Screenshots: creating the index pattern in Kibana's Management UI.)
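
The same index pattern can also be created from the command line through Kibana's saved-objects API; the pattern name logstash-* below assumes fluentd's default logstash_format output, so adjust it (or add filebeat-*) to match the indices you saw in _cat/indices:

curl -s -X POST "http://<node IP>:32369/api/saved_objects/index-pattern/logstash-star" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'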

 

13. Use Logs

(Screenshot: log entries displayed in Kibana.)

Kibana documentation: https://www.elastic.co/guide/en/kibana/current/index.html
