The kubelet runs on every worker node: it accepts requests sent by kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, the kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
For security, this document enables only the secure port that accepts HTTPS requests; requests (e.g. from apiserver or heapster) are authenticated and authorized, and unauthorized access is rejected.
Create the kubelet bootstrap kubeconfig files
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"

    # Create token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
- The kubeconfig file embeds a Token rather than a certificate; the certificate is created later by kube-controller-manager.
View the tokens created by kubeadm for each node:
[root@k8s-master1 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
5bmk75.3kmttw7ff5ppxli8   23h   2019-05-16T16:15:31+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node3
d9d9gq.1ty3yoyc9czgvhbo   23h   2019-05-16T16:15:31+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node2
j9ae3c.ngrn78qr8cw8eeo6   23h   2019-05-16T16:15:30+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node1
- The created tokens are valid for 1 day; once expired they can no longer be used and will be cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
- After kube-apiserver accepts a kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token id> and the group to system:bootstrappers;
View the Secret associated with each token:
[root@k8s-master1 work]# kubectl get secrets -n kube-system|grep bootstrap-token
bootstrap-token-5bmk75   bootstrap.kubernetes.io/token   7   77s
bootstrap-token-d9d9gq   bootstrap.kubernetes.io/token   7   77s
bootstrap-token-j9ae3c   bootstrap.kubernetes.io/token   7   78s
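If needed, the token id and token secret stored in one of these Secrets can be inspected as follows (a minimal sketch; the Secret name is taken from the listing above, and both fields are base64-encoded):

# Decode the token-id and token-secret fields of a bootstrap-token Secret
kubectl get secret bootstrap-token-5bmk75 -n kube-system -o jsonpath='{.data.token-id}' | base64 -d; echo
kubectl get secret bootstrap-token-5bmk75 -n kube-system -o jsonpath='{.data.token-secret}' | base64 -d; echo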
Distribute the bootstrap kubeconfig files to all worker nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done
Create and distribute the kubelet configuration file
Since v1.10, some kubelet parameters must be set in a configuration file; for such parameters, kubelet --help prompts:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration template file:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
podCIDR: "${POD_CIDR}"
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "##NODE_IP##"
EOF
- address: the API listening address; it must not be 127.0.0.1, otherwise kube-apiserver, heapster and other components cannot call the kubelet API;
- readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unspecified;
- authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
- authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
- requests (from kube-apiserver or other clients) that pass neither x509 certificate nor webhook authentication are rejected with Unauthorized;
- authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group has permission to operate on a resource (RBAC);
- featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration parameter;
- the kubelet must be run as the root account;
Create and distribute a kubelet configuration file for each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done
The kubelet-config.yaml file after substitution: kubelet-config.yaml
Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit file template:
cd /opt/k8s/work
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --root-dir=${K8S_DIR}/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --allow-privileged=true \\
  --event-qps=0 \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --registry-qps=0 \\
  --image-pull-progress-deadline=30m \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
- If the --hostname-override option is set, kube-proxy must also be started with the same option, otherwise the Node may not be found;
- --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in this file to send the TLS Bootstrapping request to kube-apiserver;
- After K8S approves the kubelet's CSR request, the certificate and private key files are created in the --cert-dir directory and then written into the file specified by --kubeconfig;
- --pod-infra-container-image: do not use RedHat's pod-infrastructure:latest image, as it cannot reap zombie processes in containers;
The unit file after substitution: kubelet.service
Create and distribute the kubelet systemd unit file for each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done
Bootstrap Token Auth and granting permissions
On startup, the kubelet checks whether the file configured by --kubeconfig exists; if it does not, the kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR request, it authenticates the Token in it (the token created earlier with kubeadm); after authentication succeeds, the request's user is set to system:bootstrap:<token id> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.
By default, this user and group have no permission to create CSRs, so the kubelet fails to start with an error log like the following:
$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 m7-autocv-gpu01 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 m7-autocv-gpu01 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 m7-autocv-gpu01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Start the kubelet service
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
- The working directory must be created beforehand;
- The swap partition must be turned off, otherwise the kubelet fails to start (an optional sketch for disabling swap permanently follows this list);
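swapoff -a only disables swap until the next reboot. As an optional extra step (a minimal sketch not part of the original procedure; the sed expression assumes a standard /etc/fstab layout), the swap entries can also be commented out so the setting survives reboots:

# Optional: comment out swap entries in /etc/fstab so swap stays off after reboot
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab"
  done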
$ journalctl -u kubelet |tail
8月 15 12:16:49 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:49.578598    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:49 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:49.578698    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.205871    7807 mount_linux.go:214] Detected OS with systemd
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.205939    7807 server.go:408] Version: v1.11.2
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206013    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206101    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletServerCertificate:true RotateKubeletClientCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206217    7807 plugins.go:97] No cloud provider specified.
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206237    7807 server.go:524] No cloud provider specified: "" from the config file: ""
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206264    7807 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.208628    7807 bootstrap.go:86] No valid private key and/or certificate found, reusing existing private key or creating a new one
After starting, the kubelet uses --bootstrap-kubeconfig to send a CSR request to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for the kubelet and writes the file specified by --kubeconfig.
Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters, otherwise it will not create certificates and private keys for TLS Bootstrap.
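For reference, the relevant kube-controller-manager flags look roughly like the excerpt below (an illustrative sketch assuming the CA files used throughout this document; not a complete command line):

# Excerpt of kube-controller-manager flags needed for TLS Bootstrap certificate signing
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h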
[root@k8s-master1 work]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-6jnhf   13s   system:bootstrap:3up3ds   Pending
csr-kdqt9   14s   system:bootstrap:xoaem5   Pending
csr-kmjtf   13s   system:bootstrap:qb0a7h   Pending
[root@k8s-master1 work]# kubectl get nodes
No resources found.
- The CSRs of all three worker nodes are in Pending state; they will be approved automatically in the next step, or can be approved by hand as sketched below.
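If you do not want to wait for the automatic approval configured below, a Pending CSR can also be approved manually (a minimal sketch; the CSR names come from the listing above):

# Manually approve the pending bootstrap CSRs (optional; automatic approval is configured next)
kubectl certificate approve csr-6jnhf csr-kdqt9 csr-kmjtf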
Automatically approve CSR requests
Create three ClusterRoleBindings, used respectively to automatically approve client certificates, renew client certificates, and renew server certificates:
cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
- auto-approve-csrs-for-group: automatically approves a node's first CSR; note that on the first CSR the requesting Group is system:bootstrappers;
- node-client-cert-renewal: automatically approves renewal of a node's expiring client certificate; the Group of automatically generated certificates is system:nodes;
- node-server-cert-renewal: automatically approves renewal of a node's expiring server certificate; the Group of automatically generated certificates is system:nodes;
Apply the configuration:
kubectl apply -f csr-crb.yaml
Check the kubelet status
After waiting a while (1-10 minutes), the CSRs of all three nodes are automatically approved:
[root@k8s-master1 work]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-6jnhf   26m   system:bootstrap:3up3ds   Approved,Issued
csr-85ncq   19m   system:node:k8s-node2     Approved,Issued
csr-jf7m6   15m   system:node:k8s-node1     Approved,Issued
csr-kdqt9   26m   system:bootstrap:xoaem5   Approved,Issued
csr-kmjtf   26m   system:bootstrap:qb0a7h   Approved,Issued
csr-l4n7g   19m   system:node:k8s-node3     Approved,Issued
csr-rg7sg   19m   system:node:k8s-node1     Approved,Issued
csr-rmq8r   15m   system:node:k8s-node2     Approved,Issued
csr-rsg8j   15m   system:node:k8s-node3     Approved,Issued
All nodes are Ready:
[root@k8s-master1 work]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   19m   v1.14.1
k8s-node2   Ready    <none>   19m   v1.14.1
k8s-node3   Ready    <none>   19m   v1.14.1
kube-controller-manager generated a kubeconfig file and a public/private key pair for each node:
[root@k8s-node2 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2312 5月 15 17:16 /etc/kubernetes/kubelet.kubeconfig
- The kubelet server certificate was not generated automatically; see the note below.
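This is expected with serverTLSBootstrap: true: the kubelet requests its serving certificate through a CSR, and kube-controller-manager does not auto-approve kubelet serving-certificate CSRs (my reading of the behaviour; not stated in the original). If the serving certificate is needed (e.g. by metrics-server), the Pending server CSRs can be approved by hand, roughly as follows (a sketch; the CSR name is a placeholder):

# Find and approve the kubelet's pending serving-certificate CSR (<csr-name> is a placeholder)
kubectl get csr | grep Pending
kubectl certificate approve <csr-name>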
APIs exposed by the kubelet
After starting, the kubelet listens on several ports to receive requests sent by kube-apiserver or other components:
[root@k8s-node2 ~]# sudo netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*      LISTEN      2071/kubelet
tcp        0      0 192.168.161.171:10250   0.0.0.0:*      LISTEN      2071/kubelet
tcp        0      0 127.0.0.1:39792         0.0.0.0:*      LISTEN      2071/kubelet
- 10248: healthz HTTP service;
- 10250: HTTPS API service; note that the read-only port 10255 is not enabled (a quick healthz check is sketched below);
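The healthz endpoint can be checked locally on a node, for example (a minimal sketch; it should normally return ok):

# Query the kubelet healthz endpoint on the loopback-only port 10248
curl http://127.0.0.1:10248/healthz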
For example, when the command kubectl exec -it nginx-ds-5rmws -- sh is executed, kube-apiserver sends the following request to the kubelet:
POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1
The kubelet accepts HTTPS requests on port 10250 for endpoints such as:
- /pods, /runningpods
- /metrics, /metrics/cadvisor, /metrics/probes
- /spec
- /stats, /stats/container
- /logs
- management endpoints such as /run/, /exec/, /attach/, /portForward/, /containerLogs/;
For details, see: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3
Because anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the kubernetes certificate User used by kube-apiserver has been granted this permission):
[root@k8s-master1 work]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]
kubelet API authentication and authorization
The kubelet is configured with the following authentication parameters:
- authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;
- authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
It is also configured with the following authorization parameter:
- authorization.mode=Webhook: enables RBAC authorization;
When the kubelet receives a request, it uses clientCAFile to authenticate the certificate signature, or checks whether the bearer token is valid. If neither check passes, the request is rejected with Unauthorized:
[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.161.170:10250/metrics
Unauthorized

curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.161.170:10250/metrics
Unauthorized
After authentication succeeds, the kubelet sends a SubjectAccessReview request to kube-apiserver, asking whether the user/group behind the certificate or token has permission to operate on the requested resource (RBAC).
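The same kind of check can be reproduced by posting a SubjectAccessReview yourself (an illustrative sketch; the user and resource attributes below are examples mirroring the Forbidden message shown later, not captured from a real kubelet request):

# Ask the apiserver whether a user may GET nodes/metrics, as the kubelet's webhook authorizer does
cat <<EOF | kubectl create -f - -o yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:kube-controller-manager
  resourceAttributes:
    resource: nodes
    subresource: metrics
    verb: get
EOF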
Certificate authentication and authorization
# A certificate with insufficient permissions;
sudo curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.161.170:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

# The admin certificate with the highest privileges, created when deploying the kubectl command-line tool;
[root@k8s-master1 work]# sudo curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.161.170:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
- The values of --cacert, --cert, and --key must be file paths; a relative path such as ./admin.pem must keep the leading ./, otherwise 401 Unauthorized is returned;
Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:
[root@k8s-master1 work]# kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@k8s-master1 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master1 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master1 work]# echo ${TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tMmZ0ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIzYzg4NTI5LTc2ZjctMTFlOS1iMDc3LTAwMGMyOWFhMGE2OCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.qCs7cgQYsvkxr2QsOklzFlsf14_0vpEfKX3NSf5sOuwHBixMq4HGBoLRCLRnGiJ35W7zKIGVMwpIXFDkOc7TDFFjUe8rBY6uCcZKBmchIyaqsu-vi4T-U2kRd3ST011eQdYJZHcbOHYM1dYt_5pOfDlA-wS-DobfpFpTt3BkOtrdoO1taRLr-u5I3Xun4wzKS5a9GN8S-Ap0Xn9UEGuSMqB6CplomPmi-P8WOTnz3D-D1WLtJRKkJt8RvIFDwRDGPf6ytStDhaQYtOQ8nbdfuzOLuiIApzwvrK-bCInM70dfu-iHGrLZQW2zNwPU1Y9-f6MFw4hoQ13AeZRFS5FI-Q
[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.161.170:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
cAdvisor and metrics
cAdvisor collects the resource usage (CPU, memory, disk, network) of the containers on its node and exposes it both on its own HTTP web page (port 4194) and on port 10250 in the form of Prometheus metrics.
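The cAdvisor metrics can be pulled through the kubelet's secure port in the same way as /metrics above, using the admin certificate (a minimal sketch):

# Fetch per-container cAdvisor metrics via the kubelet secure port
sudo curl -s --cacert /etc/kubernetes/cert/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://192.168.161.170:10250/metrics/cadvisor | head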