Log Collection (11)

Logging is one of the major challenges in any large deployment on platforms like Kubernetes, but configuring and maintaining a central repository for log collection can simplify day-to-day operations. To that end, the combination of Fluentd, Elasticsearch, and Kibana can create a powerful logging layer on top of a Kubernetes cluster.

1. Deploy the Elasticsearch cluster

Most of the applications deployed earlier already run inside Kubernetes, and running Elasticsearch in the cluster as well is more than my machine can handle, so here I bring up three separate virtual machines for the Elasticsearch cluster (in practice it is usually deployed separately anyway).

  1. Preparation
    1. VM spec: 1 CPU core, 1 GB RAM, 20 GB disk

    2. Network:

       192.168.241.150 es-node1
       192.168.241.151 es-node2
       192.168.241.152 es-node3
      
    3. Create a new user; Elasticsearch cannot be started as root

       adduser hadoop 
      
  2. Base configuration (Linux limit settings)
    1. Raise the vm.max_map_count kernel parameter

       sudo vi /etc/sysctl.conf
      
       # elasticsearch config start
       vm.max_map_count=262144
       # elasticsearch config end
      
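To make the new setting take effect without a reboot, reload sysctl and check the live value (a quick sketch; run it on every node):

```shell
# Reload kernel parameters from /etc/sysctl.conf so the change applies
# immediately, then print the active value to verify it.
sudo sysctl -p
cat /proc/sys/vm/max_map_count   # should print 262144 after the reload
```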
    2. Raise the open-file and thread-count limits

       sudo vi /etc/security/limits.conf
       
       # elasticsearch config start
       * soft nofile 65536
       * hard nofile 131072
       * soft nproc 2048
       * hard nproc 4096
       # elasticsearch config end
      
  3. Download and install Elasticsearch
    1. Download and extract

       curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
       sudo tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz -C /opt/
      
    2. Configure Elasticsearch

       vi config/elasticsearch.yml
      
       # ---------------------------------- Cluster -----------------------------------
       #
       # Use a descriptive name for your cluster: set the cluster name
       #
       cluster.name: cluster
       #
       # ------------------------------------ Node ------------------------------------
       #
       # Use a descriptive name for the node: set this node's name; change it on the other nodes
       #
       node.name: node-1
       # ----------------------------------- Paths ------------------------------------
       #
       # Path to directory where to store the data (separate multiple locations by comma): the data directory
       #
       path.data: /opt/elasticsearch/data
       #
       # Path to log files: the log directory
       #
       path.logs: /opt/elasticsearch/logs
       # ---------------------------------- Network -----------------------------------
       #
       # Set the bind address to a specific IP (IPv4 or IPv6): this machine's own IP address; change it on the other nodes
       #
       network.host: 192.168.241.150
       #
       # Set a custom port for HTTP:
       #
       #http.port: 9200
       #
       # For more information, consult the network module documentation.
       #
       # --------------------------------- Discovery ----------------------------------
       #
       # Pass an initial list of hosts to perform discovery when this node is started: the other nodes of the cluster (relative to this one); a unicast list that does not need to contain every node
       # The default list of hosts is ["127.0.0.1", "[::1]"]
       #
       discovery.seed_hosts: ["es-node2", "es-node3"]
       #
       # Bootstrap the cluster using an initial set of master-eligible nodes: entries must match the node.name values
       #
       cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
       #
       # For more information, consult the discovery and cluster formation module documentation.
      
    3. Configure the JVM heap

       vi config/jvm.options
       # Xms represents the initial size of total heap space
       # Xmx represents the maximum size of total heap space
       # Set Xmx and Xms to no more than 50% of your physical RAM.
       
       -Xms512m
       -Xmx512m
      
    4. Start

       bin/elasticsearch -d # start in the background
       # record the PID and start in the background
       ./bin/elasticsearch -p /tmp/elasticsearch-pid -d
      
    5. Write a start/stop script for day-to-day use

       #!/bin/bash
      
       ES_HOME=/opt/elasticsearch
       ES_PID_FILE=/tmp/elasticsearch-pid
       
       action=$1
       
       case $action in
          'start')
               if [ -e "$ES_PID_FILE" ]
               then
                   echo 'es already started!'
               else
                   echo 'starting es begin......'
       
                   "$ES_HOME"/bin/elasticsearch -p "$ES_PID_FILE" -d
                   sleep 2
                   echo 'start es success.......'
               fi
          ;;
          'stop')
               if [ -e "$ES_PID_FILE" ]
               then
                   echo 'stopping es begin......'
       
                   es_pid=$(cat "$ES_PID_FILE")
                   # SIGTERM lets Elasticsearch shut down cleanly; kill -9 would not
                   kill "$es_pid"
                   sleep 3
                   rm -f "$ES_PID_FILE"
                   echo 'stop es success........'
               else
                   echo "es is not running!!"
               fi
          ;;
          *)
               echo 'not support!!'
          ;;
       esac
      
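Saved as, say, es.sh under the install directory (the name and path are assumptions) and made executable, the script is used like this:

```shell
# One-time: make the script executable
chmod +x /opt/elasticsearch/es.sh
# Daily use: start and stop via the pid file
/opt/elasticsearch/es.sh start   # starts es in the background, writes the pid file
/opt/elasticsearch/es.sh stop    # kills the recorded pid, removes the pid file
```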

2. Deploy Kibana

  1. Download and extract

     curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
     sudo tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz -C /opt/
    
  2. Configure

     vi kibana/config/kibana.yml
    
     # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
     # The default is 'localhost', which usually means remote machines will not be able to connect.
     # To allow connections from remote users, set this parameter to a non-loopback address.
     server.host: "192.168.241.150"
    
     # The URLs of the Elasticsearch instances to use for all your queries.
     elasticsearch.hosts: ["http://es-node1:9200","http://es-node2:9200","http://es-node3:9200"]
    
  3. Start

     ./bin/kibana
    
  4. Stop

      # Kibana runs on Node.js, so find its PID first
      ps -ef | grep node
      # then stop it; plain SIGTERM is enough
      kill PID
    
  5. Visit: http://kibana.tlh.com:5601

    (screenshot: 1.png)

3. Deploy Fluentd in Kubernetes

  1. The options for log collection in Kubernetes are described in the official documentation; here the node logging agent approach is used.

    (figure: 2.png)
  2. Fluentd notes

    1. Logging recommendations for containerized applications

      1. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
      2. Logs should have a separate storage and lifecycle independent of nodes, pods, or containers.
    2. Log collection pipeline

       in_tail -> filter_grep -> out_stdout
      
    3. Event structure:

      1. tag: where the event came from
      2. time: when the event occurred
      3. record: the log content
  3. Download

     git clone https://github.com/fluent/fluentd-kubernetes-daemonset
    
  4. Point Fluentd at Elasticsearch: edit the DaemonSet spec in fluentd-daemonset-elasticsearch-rbac.yaml inside the fluentd-kubernetes-daemonset folder

     spec:
       selector:
         matchLabels:
           k8s-app: fluentd-logging
           version: v1
       template:
         metadata:
           labels:
             k8s-app: fluentd-logging
             version: v1
         spec:
           serviceAccount: fluentd
           serviceAccountName: fluentd
           tolerations:
           - key: node-role.kubernetes.io/master
             effect: NoSchedule
           containers:
           - name: fluentd
             image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
             env:
               - name:  FLUENT_ELASTICSEARCH_HOST
                 value: "192.168.241.150"            # 修改为es的主机名或者IP地址
               - name:  FLUENT_ELASTICSEARCH_PORT
                 value: "9200"
               - name: FLUENT_ELASTICSEARCH_SCHEME
                 value: "http"
               # X-Pack Authentication
               # =====================
               - name: FLUENT_ELASTICSEARCH_USER
                 value: "elastic"
               - name: FLUENT_ELASTICSEARCH_PASSWORD
                 value: "changeme"
    
  5. Install

     kubectl apply -f fluentd-daemonset-elasticsearch-rbac.yaml
    
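The rbac manifest creates a DaemonSet named fluentd in the kube-system namespace, so one pod should appear per node; a quick check (sketch):

```shell
# One fluentd pod per node, all Running, means collection is active.
kubectl -n kube-system get daemonset fluentd
kubectl -n kube-system get pods -l k8s-app=fluentd-logging -o wide
# Tail a pod's log to confirm the connection to Elasticsearch succeeded.
kubectl -n kube-system logs -l k8s-app=fluentd-logging --tail=20
```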

4. Create an index pattern in Kibana to index the collected logs

  1. Log in to Kibana ---> Management ---> Index Patterns and click Create

    (screenshot: 3.png)
  2. View the collected logs

    (screenshot: 4.png)
