Table of Contents
Deploying HPA
Deploying metrics-server
Deploying HPA
Resource Limits - Pod
Resource Limits - Namespace
1. Compute resource quotas
2. Object count quotas
Deploying HPA
HPA (Horizontal Pod Autoscaling) scales Pods horizontally. Kubernetes provides an HPA resource that can automatically scale the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization.
(1) HPA periodically checks Pod CPU utilization at the interval defined by the kube-controller-manager startup parameter horizontal-pod-autoscaler-sync-period on the Master (default 30 seconds).
(2) Like RC and Deployment, HPA is itself a Kubernetes resource object. It works by tracking and analyzing the load of all the target Pods controlled by an RC (or other controller) and deciding whether the target's replica count needs to be adjusted; this is how HPA is implemented.
(3) metrics-server also needs to be deployed in the cluster; it exposes metrics through the resource metrics API.
●metrics-server: the aggregator of resource usage data for the Kubernetes cluster; it collects metrics for in-cluster consumers such as kubectl, HPA, and the scheduler.
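Once metrics-server is running (deployment steps below), the resource metrics API it registers can be queried directly; this is a quick way to confirm the aggregation layer is wired up:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"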
Deploying metrics-server
Upload the metrics-server.tar image archive to /opt on every Node
cd /opt/
docker load -i metrics-server.tar
Install metrics-server via Helm
mkdir /opt/metrics
cd /opt/metrics
helm repo remove stable
helm repo add stable https://charts.helm.sh/stable
or
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo update
helm pull stable/metrics-server
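helm pull saves the chart as a .tgz archive in the current directory; it can be unpacked to inspect the chart's default values (the metrics-server.yaml edited below is a custom values file that overrides those defaults at install time):
tar -xf metrics-server-*.tgz
less metrics-server/values.yaml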
vim metrics-server.yaml
args:
- --logtostderr
- --kubelet-insecure-tls                        #skip verification of the kubelet's serving certificate (acceptable for test clusters)
- --kubelet-preferred-address-types=InternalIP  #reach kubelets via the node InternalIP
image:
  repository: k8s.gcr.io/metrics-server-amd64
  tag: v0.3.2
使用 helm install 安装 metrics-server
helm install metrics-server stable/metrics-server -n kube-system -f metrics-server.yaml
or
Alternatively, I used a components.yaml file and ran it directly with kubectl apply -f components.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
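After applying the manifest, confirm that the APIService registered by the last document has become available (its AVAILABLE column should show True):
kubectl get apiservice v1beta1.metrics.k8s.io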
kubectl get pods -n kube-system | grep metrics-server
metrics-server-64996ddc6d-bfrg7 1/1 Running 0 13m
#it may take a little while for metrics to appear
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master01 234m 11% 2170Mi 56%
node01 176m 8% 1421Mi 37%
node02 108m 5% 1518Mi 39%
kubectl top pods --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
cattle-prometheus exporter-kube-state-cluster-monitoring-79c667fdc9-cswrs 2m 11Mi
cattle-prometheus exporter-node-cluster-monitoring-6qqd2 5m 12Mi
cattle-prometheus exporter-node-cluster-monitoring-c6btb 0m 11Mi
cattle-prometheus exporter-node-cluster-monitoring-zw2fn 7m 8Mi
cattle-prometheus grafana-cluster-monitoring-575d64fcf-t4jf6 6m 33Mi
cattle-prometheus prometheus-cluster-monitoring-0 48m 185Mi
cattle-prometheus prometheus-operator-monitoring-operator-6dd84ddd49-ttsl5 1m 16Mi
cattle-system cattle-cluster-agent-8d9ccb78f-rml2p 8m 397Mi
default hostpath-yaml 0m 3Mi
default hostpath2-yaml 0m 1Mi
dev nginx-dev-76bc986bf6-g8tjm 0m 1Mi
dev nginx-dev-76bc986bf6-ktscw 0m 1Mi
dev nginx-dev-76bc986bf6-l44sx 0m 1Mi
fleet-system fleet-agent-55bfc495bd-whks8 3m 33Mi
ingress-nginx ingress-nginx-controller-clcr7 3m 105Mi
ingress-nginx ingress-nginx-controller-kpmh7 3m 70Mi
kube-flannel kube-flannel-ds-mfgt2 8m 26Mi
kube-flannel kube-flannel-ds-nkqws 6m 23Mi
kube-flannel kube-flannel-ds-xbsht 12m 25Mi
kube-system coredns-54d67798b7-jl2hn 2m 13Mi
kube-system coredns-54d67798b7-k2js8 3m 21Mi
kube-system etcd-master01 27m 124Mi
kube-system kube-apiserver-master01 106m 489Mi
kube-system kube-controller-manager-master01 21m 51Mi
kube-system kube-proxy-c488k 9m 24Mi
kube-system kube-proxy-sx9tj 8m 22Mi
kube-system kube-proxy-tkx9v 7m 26Mi
kube-system kube-scheduler-master01 5m 15Mi
kube-system metrics-server-64996ddc6d-bfrg7 4m
Deploying HPA
//Upload the hpa-example.tar image file to /opt on every node
hpa-example.tar is an image developed by Google in PHP for testing HPA; it contains code that runs CPU-intensive computations.
cd /opt
docker load -i hpa-example.tar
docker images | grep hpa-example
gcr.io/google_containers/hpa-example latest 4ca4c13a6d7c 5 years ago 481MB
Create a Pod resource for testing, with a requested CPU of cpu=200m
vim hpa-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: php-apache
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - image: mirrorgooglecontainers/hpa-example
        name: php-apache
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: php-apache
kubectl apply -f hpa-pod.yaml
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/hostpath-yaml 1/1 Running 1 2d
pod/hostpath2-yaml 1/1 Running 1 2d
pod/php-apache-7695f5688f-k9bsn 1/1 Running 0 2m22s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d1h
service/php-apache ClusterIP 10.96.55.210 <none> 80/TCP 2m22s
Use the kubectl autoscale command to create the HPA controller: set the CPU load threshold to 50% of the requested resources, with a minimum of 1 replica and a maximum of 10 replicas
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
It takes a little while before the TARGETS metrics are populated
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 75s
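The kubectl autoscale command above is equivalent to creating the HPA object declaratively; a minimal sketch using the autoscaling/v1 API:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  targetCPUUtilizationPercentage: 50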
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
php-apache-7695f5688f-k9bsn 1m 5Mi
Create a test client container
kubectl run -it load-generator --image=busybox /bin/sh
Generate load
while true; do /bin/wget -q -O- 10.96.55.210 ; done
Open a new terminal window and watch the replica count
kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 31m
php-apache Deployment/php-apache 490%/50% 1 10 1 31m
php-apache Deployment/php-apache 490%/50% 1 10 4 31m
#Under the load test the replica count climbed to the maximum of 10, after which the CPU load per Pod dropped back down.
//If your CPUs are fast enough that the replica count does not reach 10, start another test client and generate load in parallel:
kubectl run -i --tty load-generator1 --image=busybox /bin/sh
# while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
Checking Pod status also shows that 10 Pod resources have been created
kubectl get pods
NAME READY STATUS RESTARTS AGE
load-generator 1/1 Running 0 15m
myapp 1/1 Running 1 28m
myapp2 1/1 Running 0 14m
php-apache-7695f5688f-4p6xv 0/1 Pending 0 63s
php-apache-7695f5688f-gmm75 0/1 Pending 0 78s
php-apache-7695f5688f-k5msc 0/1 Pending 0 78s
php-apache-7695f5688f-k9bsn 1/1 Running 0 38m
php-apache-7695f5688f-kcv47 0/1 Pending 0 78s
php-apache-7695f5688f-lj2l6 0/1 Pending 0 47s
php-apache-7695f5688f-n5wt9 0/1 Pending 0 63s
php-apache-7695f5688f-nf6nj 0/1 Pending 0 63s
php-apache-7695f5688f-wlbvv 0/1 Pending 0 47s
php-apache-7695f5688f-xqpts 0/1 Pending 0 63s
When HPA scales out, the replica count rises quickly; when it scales back in, the replica count drops much more slowly.
This is deliberate: during business peaks, transient events such as network jitter can make traffic appear to drop. If the scale-in policy were aggressive, the cluster would rapidly shrink the replica count on that false signal, and the few remaining replicas could then be overwhelmed by the real load and crash, impacting the business.
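The scale-in delay can be tuned on the kube-controller-manager via the --horizontal-pod-autoscaler-downscale-stabilization flag (default 5 minutes, available since v1.12); the 10m0s value below is only an illustration and the "..." stands for the controller-manager's other arguments:
kube-controller-manager ... --horizontal-pod-autoscaler-downscale-stabilization=10m0s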
Resource Limits - Pod
Kubernetes actually enforces resource limits through cgroups. A cgroup is a set of related properties that control how the kernel runs a group of processes; there are corresponding cgroups for memory, CPU, and various devices.
By default, a Pod runs with no CPU or memory limits, which means any Pod in the system can consume as much CPU and memory as the Node it runs on. Resource limits are usually applied to the Pods of specific applications through the requests and limits fields under resources: requests is the amount of resources initially allocated when the Pod is created, and limits is the maximum amount of resources the Pod may use.
Example:
spec:
  containers:
  - image: xxxx
    imagePullPolicy: IfNotPresent
    name: auth
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 250m
        memory: 250Mi
Resource Limits - Namespace
1. Compute resource quotas
apiVersion: v1
kind: ResourceQuota         #use the ResourceQuota resource type
metadata:
  name: compute-resources
  namespace: spark-cluster  #specify the namespace
spec:
  hard:
    pods: "20"              #maximum number of Pods
    requests.cpu: "2"
    requests.memory: 1Gi
    limits.cpu: "4"
    limits.memory: 2Gi
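The spark-cluster namespace must exist before the quota can be created; assuming it does not yet, and saving the manifest above as compute-resources.yaml (filename assumed):
kubectl create namespace spark-cluster
kubectl apply -f compute-resources.yaml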
2. Object count quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"  #maximum number of PVCs
    replicationcontrollers: "20" #maximum number of RCs
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
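Current consumption against both quotas can be checked at any time; kubectl lists Used versus Hard for every quota in the namespace:
kubectl describe resourcequota -n spark-cluster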
#If a Pod sets no requests and limits, it can use up to the current namespace's maximum resources; if the namespace sets no maximum either, it can use up to the cluster's maximum.
K8S restricts a Pod's resource usage according to limits; when memory usage exceeds limits, cgroups triggers an OOM kill.
This is where the LimitRange resource comes in: it sets the default maximum values for the resources that a Pod, or the Containers in it, may use.
apiVersion: v1
kind: LimitRange       #use the LimitRange resource type
metadata:
  name: mem-limit-range
  namespace: test      #adds a resource limit to the specified namespace
spec:
  limits:
  - default:           #default is the limits value
      memory: 512Mi
      cpu: 500m
    defaultRequest:    #defaultRequest is the requests value
      memory: 256Mi
      cpu: 100m
    type: Container    #supported types: Container, Pod, PVC
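After creating the LimitRange, any new container in the test namespace that omits requests/limits gets these defaults injected; the configured values can be verified with:
kubectl describe limitrange mem-limit-range -n test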