Kubernetes Notes

Environment

  • 7 VMs: three masters, three nodes, and one host serving as the private Docker registry
  • VIP
    • 192.168.11.222
  • master
    • 192.168.11.31
    • 192.168.11.32
    • 192.168.11.33
  • node
    • 192.168.11.34
    • 192.168.11.35
    • 192.168.11.36
  • Harbor registry
    • 192.168.11.200

System Configuration

  • Start SSH and enable it at boot
service sshd start
systemctl enable sshd.service
  • Disable the firewall and SELinux
systemctl stop firewalld  && systemctl disable firewalld
vim /etc/selinux/config
# set SELINUX=disabled in the file above
setenforce 0
getenforce
  • Disable swap
swapoff -a && sed -i '/swap/d' /etc/fstab
  • Configure the bridge/netfilter sysctl parameters so kubeadm does not raise routing warnings
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
cat /etc/sysctl.conf   # confirm the entries are present
sysctl -p
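If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a small extra step, assuming a stock CentOS 7 image:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load it on boot as well
sysctl -p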
  • Set the hostnames
hostnamectl set-hostname k8s1
hostnamectl set-hostname k8s2
hostnamectl set-hostname k8s3
hostnamectl set-hostname k8s4
hostnamectl set-hostname k8s5
hostnamectl set-hostname k8s6
hostnamectl set-hostname k8s-deploy
  • Edit /etc/hosts
192.168.11.31   k8s1
192.168.11.32   k8s2
192.168.11.33   k8s3
192.168.11.34   k8s4
192.168.11.35   k8s5
192.168.11.36   k8s6
192.168.11.200  k8s-deploy

192.30.253.113 github.com
192.30.252.131 github.com
185.31.16.185 github.global.ssl.fastly.net
74.125.237.1 dl-ssl.google.com
173.194.127.200 groups.google.com
74.125.128.95 ajax.googleapis.com
  • Set up passwordless SSH login

Run on every node:

ssh-keygen
ssh-copy-id k8s1
ssh-copy-id k8s2
ssh-copy-id k8s3
ssh-copy-id k8s4
ssh-copy-id k8s5
ssh-copy-id k8s6
ssh-copy-id k8s-deploy

Install keepalived (master nodes only)

Install

yum install -y keepalived

Edit the configuration file /etc/keepalived/keepalived.conf

Back up the original file, then adjust the following on each master node:

global_defs {
   router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.11.222:6443"        //VIP的IP
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736                   //the interface carrying this node's IP (check with ifconfig)
    virtual_router_id 61
    priority 120
    advert_int 1
    mcast_src_ip 192.168.11.31              //this node's own IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        #192.168.11.31                      //the other masters; comment out this node's own IP
        192.168.11.32
        192.168.11.33
    }
    virtual_ipaddress {
        192.168.11.222/24                       //the VIP
    }
    track_script {
        CheckK8sMaster
    }
}

Start keepalived on each master in turn:

systemctl enable keepalived && systemctl restart keepalived
systemctl status keepalived

After adjusting the IP settings on each master and starting keepalived, two of the three nodes should report BACKUP state (the third holds the VIP as MASTER).
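To confirm which master currently holds the VIP, check the interface and the keepalived log; a quick sketch, assuming the interface name from the config above:

ip addr show eno16777736 | grep 192.168.11.222    # the VIP only appears on the MASTER node
journalctl -u keepalived -n 20                    # look for "Entering MASTER STATE" / "Entering BACKUP STATE"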

Create an install directory and copy all prepared files into it; the installation steps below assume the files live in this directory (see the sketch below).
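A minimal sketch of this step, assuming /root/install as used in the commands further below:

mkdir -p /root/install
# copy the prepared packages here (etcd tarball, the docker/, k8s/ and docker_images/ directories)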

Install etcd with HTTPS (master nodes only)

  • On k8s1:
export NODE_NAME=k8s1
export NODE_IP=192.168.11.31
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380
  • On k8s2:
export NODE_NAME=k8s2
export NODE_IP=192.168.11.32
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380
  • On k8s3:
export NODE_NAME=k8s3
export NODE_IP=192.168.11.33
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380

Etcd certificates

  • Create the CA certificate and key

Install cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) certificate and key files.
If you prefer not to install cfssl on the deployment host, run this step on any other machine and copy the generated certificates to the etcd hosts afterwards.

chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
  • Generate the etcd TLS key and certificate

ca-config.json: can define multiple profiles with different expiry times, usage scenarios and so on; a specific profile is chosen later when signing a certificate;
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client may use this CA to verify the certificate presented by a server;
client auth: a server may use this CA to verify the certificate presented by a client.

To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, as well as between etcd members, must be encrypted with TLS. This section creates the certificates and private keys that etcd needs.
Create the CA configuration files:

cat >  ca-config.json <
cat >  ca-csr.json <
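The heredoc bodies above were cut off in these notes; a typical cfssl CA config and CA CSR for this setup look roughly like the following. The 87600h expiry and the names fields are assumptions, and the "kubernetes" profile name must match the -profile flag used when signing below.

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF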

"CN":Common Name,kube-apiserver 从证书中提取该字段作为请求的用户名 (User Name);浏览器使用该字段验证网站是否合法;
"O":Organization,kube-apiserver 从证书中提取该字段作为请求用户所属的组 (Group);

  • Generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
  • Create the etcd certificate signing request:
cat > etcd-csr.json <

hosts: the etcd node IPs authorized to use this certificate. Either list every node IP here, or issue a per-node certificate for each IP; here all IPs are placed in one certificate (see the sketch below).
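The etcd-csr.json heredoc above was also truncated; a sketch consistent with the hosts note (the names fields are placeholders, and 127.0.0.1 is included on the assumption that etcdctl may talk to the local endpoint):

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.11.31",
    "192.168.11.32",
    "192.168.11.33"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF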

  • Generate the etcd certificate and private key:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/

Next, copy the files to the other master nodes:

# on the other masters, clear the directory first, then copy
rm -rf /etc/etcd/ssl/*
scp -r /etc/etcd/ssl/ 192.168.11.31:/etc/etcd/
scp -r /etc/etcd/ssl/ 192.168.11.32:/etc/etcd/

Run on all three masters:

cd /root/install
tar -xvf etcd-v3.1.10-linux-amd64.tar.gz
mv etcd-v3.1.10-linux-amd64/etcd* /usr/local/bin

Create the etcd systemd unit file

mkdir -p /var/lib/etcd
cd /var/lib/etcd

cat > etcd.service <
  • etcd's working directory and data directory are both /var/lib/etcd, which must exist before the service starts;
  • To secure communication, the unit specifies etcd's own certificate and key (cert-file and key-file), the certificate, key and CA used for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list; see the unit file sketch after this list.
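The etcd.service heredoc above was truncated as well; a sketch that matches the flags described in this list, reusing the NODE_NAME/NODE_IP/ETCD_NODES variables exported earlier (etcd v3.1 flags; the exact original unit file is unknown):

cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF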
  • Start the etcd service
mv etcd.service /etc/systemd/system/
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

# if it fails, inspect:
systemctl status etcd.service
journalctl -xe
  • Verify the service
 etcdctl \
  --endpoints=https://${NODE_IP}:2379  \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health

Install Docker

# upload the docker directory, then remove any existing Docker packages first
cd /root/install
yum -y remove docker docker-common

cd docker
yum -y localinstall docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum -y localinstall docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

# start docker
systemctl start docker && systemctl enable docker
# check status
docker ps

# configure the registry mirror and insecure private registries: edit /etc/docker/daemon.json and add the entries below (the recommended way to change Docker settings)
{
"max-concurrent-downloads": 3,
"max-concurrent-uploads": 5,
"registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
"insecure-registries": ["192.168.11.200","192.168.11.218","192.168.11.188","192.168.11.230","192.168.11.112"]
}
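The daemon.json changes only take effect after Docker is restarted; a quick follow-up:

systemctl restart docker
docker info | grep -i -A6 "insecure registries"   # the Insecure Registries section should list the private registries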

Install kubelet, kubectl, kubeadm, and kubernetes-cni

Switch to the k8s directory and run this on both masters and nodes; the nodes need kubeadm installed so they can later join the cluster with kubeadm join.

cd ../k8s
yum -y install *.rpm
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
systemctl enable docker && systemctl restart docker
systemctl daemon-reload && systemctl restart kubelet
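The cgroup-driver edit above has to match what Docker actually uses; a quick check, assuming Docker is already running:

docker info 2>/dev/null | grep -i "cgroup driver"   # should report cgroupfs to match KUBELET_CGROUP_ARGS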

# switch to the docker_images directory and load the prepared images
cd ../docker_images
for i in `ls`;do docker load -i $i;done

Create the config.yaml file on all master nodes

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.11.31:2379
  - https://192.168.11.32:2379
  - https://192.168.11.33:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.9.0
api:
  advertiseAddress: "192.168.11.222"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- "k8s1"
- "k8s2"
- "k8s3"
- 192.168.11.31
- 192.168.11.32
- 192.168.11.33
- 192.168.11.222
featureGates:
  CoreDNS: true
EOF

On the master node that currently holds the VIP, run:

kubeadm init --config config.yaml
# record the join command that kubeadm prints
kubeadm join --token b99a00.a144ef80536d4344 192.168.11.222:6443 --discovery-token-ca-cert-hash sha256:cf68966ae386e10c0233e008f21597d1d54a60ea9202d0c360a4b19fa8443328
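Before kubectl can be used against the new cluster, copy the admin kubeconfig into place; this is the standard step kubeadm prints after init, repeated here for completeness:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config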

kubectl apply -f kubeadm-kuberouter.yaml
kubectl get pod --all-namespaces

On the other master nodes, run:

systemctl enable kubelet && systemctl start kubelet

After kubelet is started, copy the certificates from the VIP-holding master to the other masters:

# first create the directory on the targets: mkdir -p /etc/kubernetes/pki/
scp /etc/kubernetes/pki/* 192.168.11.31:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/* 192.168.11.32:/etc/kubernetes/pki/

# systemctl status kubelet should show activating (auto-restart); kubelet is not running yet, but it will be once init completes
kubeadm init --config config.yaml # after this, all three masters should be up

# allow pods to be scheduled on the masters (optional)
kubectl taint nodes --all node-role.kubernetes.io/master-
# forbid scheduling pods on a master (optional)
kubectl taint nodes centos-master-1 node-role.kubernetes.io/master=true:NoSchedule

# verify cluster state
kubectl get nodes
kubectl get cs
kubectl get pods --all-namespaces

On the worker nodes, run:

kubeadm join --token b99a00.a144ef80536d4344 192.168.11.222:6443 --discovery-token-ca-cert-hash sha256:cf68966ae386e10c0233e008f21597d1d54a60ea9202d0c360a4b19fa8443328

Add username/password authentication to the cluster

The default authentication methods are kubeconfig and token; here basic auth is used for the apiserver instead.

# edit /etc/kubernetes/pki/basic_auth_file and add the following line holding the user name and password
#user,password,userid
admin,admin,2

Edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the basic auth flag to kube-apiserver:

- --basic_auth_file=/etc/kubernetes/pki/basic_auth_file

Restart kubelet; without the restart you will see "The connection to the server 192.168.11.223:6443 was refused - did you specify the right host or port?"

systemctl daemon-reload && systemctl restart kubelet.service
  • Grant the admin user permissions

The cluster-admin role has full permissions by default; binding the admin user to cluster-admin gives admin those permissions.

kubectl get clusterrole/cluster-admin -o yaml
kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
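A quick way to confirm that basic auth and the role binding work, assuming the VIP and the admin/admin credentials above (-k skips TLS verification for this test only):

curl -k -u admin:admin https://192.168.11.222:6443/api/v1/namespaces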

k8s notes and gotchas:

  1. The connection to the server 192.168.11.160:6443 was refused - did you specify the right host or port?

    # check whether the status is Active: inactive (dead)
    systemctl status kubelet
    systemctl daemon-reload && systemctl restart kubelet
    
  2. Commands for viewing logs

    kubectl logs kube-apiserver --namespace=kube-system
    
  3. Frequently used commands

    kubectl get componentstatuses //check component status
    kubectl get svc -n kube-system //list services in kube-system
    kubectl cluster-info //show cluster info
    kubectl describe --namespace kube-system service kubernetes-dashboard //detailed service info
    kubectl apply -f kube-apiserver.yaml   //update the kube-apiserver container
    kubectl delete -f /root/k8s/k8s_images/kubernetes-dashboard.yaml //delete an application
    kubectl  delete service example-server //delete a service
    systemctl  start kube-apiserver.service //start the service
    kubectl get deployment --all-namespaces //list running deployments
    kubectl get pod  -o wide  --all-namespaces //see what runs in each pod
    kubectl get pod -o wide -n kube-system //see which node a pod runs on
    kubectl describe pod --namespace=kube-system //show pod details and events
    kubectl describe deploy kubernetes-dashboard -n kube-system
    kubectl get deploy kubernetes-dashboard -n kube-system -o yaml
    kubectl get service kubernetes-dashboard -n kube-system //inspect a service
    kubectl delete -f kubernetes-dashboard.yaml //delete an application
    kubectl get events //list events
    kubectl get rc / kubectl get svc
    kubectl get namespace //list namespaces
    find -type f -print -exec grep hello {} \;
    
    kubeadm reset
    netstat -lutpn | grep 6443
    

Harbor installation and configuration

Harbor requires docker-compose, which can be installed via pip.

Install pip and docker-compose

# install pip first
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py

# install docker-compose
pip install -U docker-compose

On CentOS 7, pip install docker-compose may fail with:
Cannot uninstall 'requests'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Workaround: pip install docker-compose --ignore-installed requests

Install Harbor

Download Harbor v1.6.0, set hostname = 192.168.11.200 in harbor.cfg, then run ./install.sh from the harbor directory to complete the installation (a sketch of these steps follows):
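A minimal sketch of those steps; the offline-installer file name is an assumption:

tar xvf harbor-offline-installer-v1.6.0.tgz
cd harbor
sed -i 's/^hostname = .*/hostname = 192.168.11.200/' harbor.cfg   # set the registry hostname
./install.sh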

# check Harbor component status
docker-compose ps
# start/stop/restart Harbor (run from the harbor directory)
docker-compose start/stop/restart

K8s application log collection with EFK (Elasticsearch/Filebeat/Kibana)

Environment

  • 3 VMs: one master and two nodes
  • master
    • 192.168.11.218 k8s-master1-test
  • node
    • 192.168.11.219 k8s-node1-test
    • 192.168.11.221 k8s-node2-test
  • Other notes
    • JDK 1.8 is installed on all 3 machines, since Elasticsearch is written in Java
    • Elasticsearch is installed on all 3 machines
    • 192.168.11.218 acts as the master node
    • 192.168.11.219 and 192.168.11.221 act as data nodes
    • Kibana is installed on the master node 192.168.11.218
  • ELK versions:
    • Elasticsearch-6.0.0
    • kibana-6.0.0
    • filebeat-5.4.0

Install Elasticsearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm

### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service

Configure Elasticsearch

Elasticsearch has two configuration files, in these locations:

  • /etc/elasticsearch/elasticsearch.yml
  • /etc/sysconfig/elasticsearch

elasticsearch.yml holds cluster and node settings, while /etc/sysconfig/elasticsearch configures the service itself, such as file paths and Java-related settings.

On the master node 192.168.11.218, edit the configuration file:

cluster.name: es-master-node  # cluster name (must be the same on every node)
node.name: k8s-master1-test  # this node's name
node.master: true  # this node is master-eligible
node.data: false  # this node does not store data
network.host: 0.0.0.0  # listen on all IPs; in a real environment restrict this to a safe IP
http.port: 9200  # Elasticsearch HTTP port
discovery.zen.ping.unicast.hosts: ["192.168.11.218", "192.168.11.219", "192.168.11.221"] # unicast discovery hosts

Edit the configuration file on the other two nodes in the same way:

cluster.name: es-master-node  # cluster name (must be the same on every node)
node.name: k8s-node1-test  # this node's name (k8s-node2-test on the other data node)
node.master: false  # this node is not master-eligible
node.data: true  # this node stores data
network.host: 0.0.0.0  # listen on all IPs; in a real environment restrict this to a safe IP
http.port: 9200  # Elasticsearch HTTP port
discovery.zen.ping.unicast.hosts: ["192.168.11.218", "192.168.11.219", "192.168.11.221"] # unicast discovery hosts

Then start the service:

systemctl start elasticsearch.service

Check the cluster health with curl:

curl 'localhost:9200/_cluster/health?pretty'
# example response
{
  "cluster_name" : "es-master-node",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

View detailed cluster state:

curl '192.168.11.218:9200/_cluster/state?pretty'

Install Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
rpm -ivh kibana-6.0.0-x86_64.rpm

Configure Kibana

Edit /etc/kibana/kibana.yml and add the following:

server.port: 5601  # Kibana port
server.host: 192.168.11.218  # listen address
elasticsearch.url: "http://192.168.11.218:9200"  # Elasticsearch address; for a cluster, point at the master node
logging.dest: /var/log/kibana.log  # Kibana log file path; otherwise logs end up in /var/log/messages

Start it:

systemctl start kibana

Then open http://192.168.11.218:5601/ in a browser.

  • Frequently used:
# list ES indices:
curl '192.168.11.218:9200/_cat/indices?v'
# get details of a specific index:
curl -XGET '192.168.11.218:9200/system-syslog-2018.03?pretty'
# delete a specific index if needed:
curl -XDELETE 'localhost:9200/logcollection-test'

Log collection

Build the filebeat image:

First, prepare filebeat-5.4.0-linux-x86_64.tar.gz.

  • docker-entrypoint.sh
#!/bin/bash
config=/etc/filebeat/filebeat.yml
env
echo 'Filebeat init process done. Ready for start up.'
echo "Using the following configuration:"
cat /etc/filebeat/filebeat.yml
exec "$@"
  • Dockerfile
FROM centos
MAINTAINER YangLiangWei 

# Install Filebeat
WORKDIR /usr/local
COPY filebeat-5.4.0-linux-x86_64.tar.gz  /usr/local
RUN cd /usr/local && \
    tar xvf filebeat-5.4.0-linux-x86_64.tar.gz && \
    rm -f filebeat-5.4.0-linux-x86_64.tar.gz && \
    ln -s /usr/local/filebeat-5.4.0-linux-x86_64 /usr/local/filebeat && \
    chmod +x /usr/local/filebeat/filebeat && \
    mkdir -p /etc/filebeat

ADD ./docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh

ENTRYPOINT ["docker-entrypoint.sh"]

CMD ["/usr/local/filebeat/filebeat","-e","-c","/etc/filebeat/filebeat.yml"]
  • Build the Docker image
docker build -t filebeat:v5.4.0 .
docker tag filebeat:v5.4.0 192.168.11.218/hlg_web/filebeat:v5.4.0
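For the cluster to pull this image it presumably still needs to be pushed to the Harbor registry; a sketch, assuming the hlg_web project exists and the login succeeds:

docker login 192.168.11.218
docker push 192.168.11.218/hlg_web/filebeat:v5.4.0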

Configure the k8s Secret for private-registry authentication

# the .dockerconfigjson value can be obtained after logging in to the registry with: cat /root/.docker/config.json | base64 -w 0
kind: Secret
apiVersion: v1
metadata:
  name: regsecret
type: kubernetes.io/dockerconfigjson
data:
  ".dockerconfigjson": ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjExLjIxOCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0KfQ==

From now on, any pod that needs to pull a private image from the registry only has to add an imagePullSecrets field:

      containers:
      - image: 192.168.11.200/hlg_web/datalower:1.1
        name: app
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-logs
          mountPath: /app/log
      - image: 192.168.11.200/hlg_web/filebeat:v5.4.0
        name: filebeat
        volumeMounts:
        - name: app-logs
          mountPath: /log
        - name: filebeat-config
          mountPath: /etc/filebeat/
      volumes:
      - name: app-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
      imagePullSecrets:
      - name: regsecret

Deployment configuration

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logcollection-test
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: logcollection-test
    spec:
      containers:
      - image: 192.168.11.200/hlg_web/datalower:1.1
        name: app
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-logs
          mountPath: /app/log
      - image: 192.168.11.200/hlg_web/filebeat:v5.4.0
        name: filebeat
        volumeMounts:
        - name: app-logs
          mountPath: /log
        - name: filebeat-config
          mountPath: /etc/filebeat/
      volumes:
      - name: app-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.prospectors:
    - input_type: log
      paths:
        - "/log/*"
    output.elasticsearch:
      hosts: ["192.168.11.218:9200"]
      index: "logcollection-test"

References:

  • kubeadm快速部署kubernetes(HA)
  • [K8s 1.9实践]Kubeadm 1.9 HA 高可用 集群 本地离线镜像部署
  • k8s(一)、 1.9.0高可用集群本地离线部署记录
  • Harbor的搭建(vmware企业级docker镜像私服)
  • 搭建ELK日志分析平台(上)—— ELK介绍及搭建 Elasticsearch 分布式集群
  • 搭建ELK日志分析平台(下)—— 搭建kibana和logstash服务器
  • K8S使用filebeat统一收集应用日志
  • 应用日志收集· Kubernetes Handbook - Kubernetes中文指南
