Kubernetes (Part 2): Log Collection

Overview

In the previous post we set up a highly available Kubernetes cluster; now we build a log-collection architecture on top of it. The overall architecture is shown below:

(figure: overall log-collection architecture, 3.png)

Tools and Versions

Tool                                          Version
docker                                        18.03.1.ce-1.el7.centos
CentOS                                        7.x
Kubernetes                                    v1.18.0
kubeadm / kubelet / kubectl                   1.18.3-0
quay.io/coreos/flannel                        v0.14.0
kubernetesui/dashboard                        v2.0.0-rc7
registry.aliyuncs.com/google_containers/etcd  3.4.3-0
k8s.gcr.io/coredns                            1.6.7
k8s.gcr.io/pause                              3.2
Filebeat                                      7.2.0
Elasticsearch / Kibana                        7.2.0

Installation

  • Install Filebeat in the K8S cluster

    • We designated /data/work/nfs-share as the unified log directory and exposed it to the K8S environment as an NFS share.
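
Setting up the NFS share itself is outside the scope of the manifests below. A minimal, hypothetical sketch of mounting it on a K8S node (the server address 192.168.3.230 is a placeholder, not from the original article):

```shell
# Hypothetical sketch: mount the shared log directory on every K8S node so that
# Filebeat sees the same /data/work/nfs-share paths everywhere.
NFS_SERVER=192.168.3.230   # placeholder: replace with your NFS server address
mkdir -p /data/work/nfs-share
mount -t nfs "${NFS_SERVER}:/data/work/nfs-share" /data/work/nfs-share
# Persist the mount across reboots:
echo "${NFS_SERVER}:/data/work/nfs-share /data/work/nfs-share nfs defaults 0 0" >> /etc/fstab
```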

    • Create a YAML file named filebeat.yml with the following content:

      ---
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: filebeat-config
        namespace: auto-paas
        labels:
          k8s-app: filebeat
      data:
        filebeat.yml: |-
          #====================== input =================
          filebeat.inputs:
          # auto
          - type: log
            enabled: true
            paths:
              - /data/work/nfs-share/auto/logs/*/info.log
            tags: ["auto-info"]
            multiline:
              pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # match lines beginning with a timestamp such as 2017-11-15 08:04:23
              negate: true                                # lines that do NOT match the pattern are continuations
              match: after                                # append continuation lines to the end of the previous line
              max_lines: 1000                             # maximum number of lines merged into one event
              timeout: 30s                                # flush the pending event if no new line arrives within this time
            fields:
              index: "auto-info"
          - type: log
            enabled: true
            paths:
              - /data/work/nfs-share/auto/logs/*/sql.log
            tags: ["auto-sql"]
            multiline:
              pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # match lines beginning with a timestamp such as 2017-11-15 08:04:23
              negate: true                                # lines that do NOT match the pattern are continuations
              match: after                                # append continuation lines to the end of the previous line
              max_lines: 1000                             # maximum number of lines merged into one event
              timeout: 30s                                # flush the pending event if no new line arrives within this time
            fields:
              index: "auto-sql"
          - type: log
            enabled: true
            paths:
              - /data/work/nfs-share/auto/logs/*/monitor-*.log
            tags: ["auto-monitor"]
            multiline:
              pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # match lines beginning with a timestamp such as 2017-11-15 08:04:23
              negate: true                                # lines that do NOT match the pattern are continuations
              match: after                                # append continuation lines to the end of the previous line
              max_lines: 1000                             # maximum number of lines merged into one event
              timeout: 30s                                # flush the pending event if no new line arrives within this time
            fields:
              index: "auto-monitor"
          #================ output =====================
          output.elasticsearch:
            hosts: ["http://es:9200", "http://es:9500", "http://es:9600"]
            indices:
              - index: "auto-info-%{+yyyy.MM.dd}"
                when.contains:
                  fields:
                    index: "auto-info"
              - index: "auto-sql-%{+yyyy.MM.dd}"
                when.contains:
                  fields:
                    index: "auto-sql"
              - index: "auto-monitor-%{+yyyy.MM.dd}"
                when.contains:
                  fields:
                    index: "auto-monitor"
          #============== Elasticsearch template setting ==========
          setup.ilm.enabled: false
          setup.template.name: 'k8s-logs'
          setup.template.pattern: 'k8s-logs-*'
          processors:
            - drop_fields:
                fields: ["agent","kubernetes.labels","input.type","log","ecs.version","host.name","kubernetes.replicaset.name","kubernetes.pod.uid","tags","stream","kubernetes.container.name"]
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: filebeat
        namespace: auto-paas
        labels:
          k8s-app: filebeat
      spec:
        selector:
          matchLabels:
            k8s-app: filebeat
        template:
          metadata:
            labels:
              k8s-app: filebeat
          spec:
            serviceAccountName: filebeat
            terminationGracePeriodSeconds: 30
            hostNetwork: true
            dnsPolicy: ClusterFirstWithHostNet
            tolerations:
            - effect: NoSchedule
              operator: Exists
            containers:
            - name: filebeat
              image: 192.168.3.234:8089/component/filebeat:7.2.0
              args: [
                "-c", "/etc/filebeat.yml",
                "-e",
              ]
              env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              securityContext:
                runAsUser: 0
                # If using Red Hat OpenShift uncomment this:
                #privileged: true
              resources:
                limits:
                  memory: 200Mi
                requests:
                  cpu: 100m
                  memory: 100Mi
              volumeMounts:
              - name: config
                mountPath: /etc/filebeat.yml
                readOnly: true
                subPath: filebeat.yml
              - name: data
                mountPath: /data/work/nfs-share/
              - name: varlibdockercontainers
                mountPath: /data/docker/containers
                readOnly: true
              - name: varlog
                mountPath: /var/log
                readOnly: true
            volumes:
            - name: config
              configMap:
                defaultMode: 0600
                name: filebeat-config
            - name: varlibdockercontainers
              hostPath:
                path: /data/work/docker/containers
            - name: varlog
              hostPath:
                path: /var/log
            # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
            - name: data
              hostPath:
                path: /data/work/nfs-share/
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: filebeat
      subjects:
      - kind: ServiceAccount
        name: filebeat
        namespace: auto-paas
      roleRef:
        kind: ClusterRole
        name: filebeat
        apiGroup: rbac.authorization.k8s.io
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: filebeat
        labels:
          k8s-app: filebeat
      rules:
      - apiGroups: [""] # "" indicates the core API group
        resources:
        - namespaces
        - pods
        verbs:
        - get
        - watch
        - list
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: filebeat
        namespace: auto-paas
        labels:
          k8s-app: filebeat
      ---
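
The multiline pattern above can be sanity-checked outside the cluster. This sketch assumes GNU grep with PCRE support (`-P`); Filebeat itself uses Go regular expressions, but this pattern behaves the same under both:

```shell
# The same pattern Filebeat uses to detect the start of a new log event.
pattern='^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'
# A line starting with a timestamp begins a new event...
printf '2017-11-15 08:04:23 INFO starting up\n' | grep -qP "$pattern" && echo "new event"
# ...while an indented stack-trace line does not match, so (with negate: true,
# match: after) Filebeat merges it into the previous event.
printf '    at com.example.Foo.bar(Foo.java:42)\n' | grep -qP "$pattern" || echo "continuation"
```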
      
    • Apply the manifest on the K8S cluster; the result is shown below:

      kubectl apply -f filebeat.yml
      
      (figure: kubectl apply output, 5.png)
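
After applying, the rollout can be verified with kubectl; these commands are a sketch and assume a working cluster (the namespace auto-paas matches the manifests above):

```shell
# Check that the Filebeat Deployment and its pod are running.
kubectl -n auto-paas get deploy,pods -l k8s-app=filebeat
# Tail Filebeat's own logs to confirm harvesters started on the NFS log paths.
kubectl -n auto-paas logs deploy/filebeat --tail=20
```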
  • Install Elasticsearch and Kibana to store, display, and query the logs.

    • I deployed this stack outside the K8S cluster using docker-compose.

    • Create a docker-compose.yaml file with the following content:

      version: '2.2'
      services:
        kibana:
          image: 192.168.3.234:8089/component/kibana:7.2.0
          container_name: kibana7
          restart: always
          environment:
            - I18N_LOCALE=zh-CN
            - XPACK_GRAPH_ENABLED=true
            - TIMELION_ENABLED=true
            - XPACK_MONITORING_COLLECTION_ENABLED="true"
          volumes:
            - /etc/localtime:/etc/localtime
          ports:
            - "5601:5601"
          networks:
            - efkuation_network
        elasticsearch:
          image: 192.168.3.234:8089/component/elasticsearch:7.2.0
          container_name: es01
          restart: always
          environment:
            - cluster.name=efk
            - node.name=es01
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
            - discovery.seed_hosts=es01,es02,es03
            - cluster.initial_master_nodes=es01,es02,es03
          ulimits:
            memlock:
              soft: -1
              hard: -1
          volumes:
            - /etc/localtime:/etc/localtime
          ports:
            - 9200:9200
          networks:
            - efkuation_network
        elasticsearch2:
          image: 192.168.3.234:8089/component/elasticsearch:7.2.0
          container_name: es02
          restart: always
          environment:
            - cluster.name=efk
            - node.name=es02
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
            - discovery.seed_hosts=es01,es02,es03
            - cluster.initial_master_nodes=es01,es02,es03
          ulimits:
            memlock:
              soft: -1
              hard: -1
          volumes:
            - /etc/localtime:/etc/localtime
          ports:
            - 9600:9200
          networks:
            - efkuation_network
        elasticsearch3:
          image: 192.168.3.234:8089/component/elasticsearch:7.2.0
          container_name: es03
          restart: always
          environment:
            - cluster.name=efk
            - node.name=es03
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
            - discovery.seed_hosts=es01,es02,es03
            - cluster.initial_master_nodes=es01,es02,es03
          ulimits:
            memlock:
              soft: -1
              hard: -1
          volumes:
            - /etc/localtime:/etc/localtime
          ports:
            - 9500:9200
          networks:
            - efkuation_network
      networks:
        efkuation_network:
          external: true
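
The compose file declares efkuation_network as external, so it must exist before the stack starts. A sketch of the bring-up and a basic health check (run on the docker-compose host; ports as published above):

```shell
# Create the external network once (ignore the error if it already exists).
docker network create efkuation_network || true
# Start Kibana and the three-node Elasticsearch cluster.
docker-compose up -d
# Give the cluster time to form, then check its health (expect "green" or "yellow").
sleep 30
curl -s http://localhost:9200/_cluster/health?pretty
```

Note that with bootstrap.memory_lock enabled, Elasticsearch in Docker typically also requires vm.max_map_count=262144 on the host (set via sysctl).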
      
    • Log in to the Kibana UI at http://192.../ and confirm that logs are being collected successfully, as shown below:

      (figure: collected logs visible in Kibana, 6.png)
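
Besides the Kibana UI, the daily indices Filebeat creates can be listed directly from Elasticsearch; the endpoint below assumes you are on the docker-compose host:

```shell
# Filebeat writes one index per tag per day, e.g. auto-info-2021.06.15
# (from the output pattern auto-info-%{+yyyy.MM.dd} in the ConfigMap).
today=$(date +%Y.%m.%d)
echo "expecting indices like: auto-info-${today}"
curl -s "http://localhost:9200/_cat/indices/auto-*?v"
```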
