k8s Elastic Scaling: HPA

Node scale-out: uncomment the host in the [newnode] group at the bottom of the Ansible inventory, then run the add-node playbook.

[root@aghj-11 ~]# cat   ansible-install-k8s/hosts 
[master]
# For a single-Master deployment, keep only one Master node
# By default the Master nodes also run the Node components
10.1.1.31 node_name=k8s-master1
10.1.1.32 node_name=k8s-master2
10.1.1.33 node_name=k8s-master3

[node]
10.1.1.34 node_name=k8s-node1
10.1.1.35 node_name=k8s-node2
10.1.1.36 node_name=k8s-node3

[etcd]
10.1.1.31 etcd_name=etcd-1
10.1.1.32 etcd_name=etcd-2
10.1.1.33 etcd_name=etcd-3

[lb]
# For a single-Master deployment, ignore this section
10.1.1.25 lb_name=lb-master
10.1.1.26 lb_name=lb-backup

[k8s:children]
master
node

[newnode]
#10.1.1.37 node_name=k8s-node4
After uncommenting the 10.1.1.37 entry, run the add-node playbook:

ansible-playbook -i hosts add-node.yml
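Once the playbook completes, the new node should register with the cluster. A quick check, plus CSR approval in case the playbook leaves the kubelet certificate request pending (an assumption; whether this step is needed depends on the playbook version):

```
# Verify the new node has joined (name per the inventory entry above):
kubectl get node

# If the node is missing, look for a pending kubelet CSR and approve it
# (assumption: the playbook may not auto-approve it):
kubectl get csr
kubectl certificate approve <csr-name>
```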

###############################

Node-level scale-in

To remove a node from a Kubernetes cluster, the correct procedure is as follows:

**1. List the nodes**

kubectl get node

**2. Mark the node unschedulable**

kubectl cordon $node_name

**3. Evict the Pods on the node**

kubectl drain $node_name --ignore-daemonsets

**4. Remove the node**

Once nothing is left running on the node, it can be removed directly:

kubectl delete node $node_name

With that, the node has been removed from the k8s cluster gracefully.
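For convenience, the same four steps as a small script (the node name is an example; `--delete-local-data` is the flag name in kubectl of this era, later renamed `--delete-emptydir-data`):

```
#!/bin/bash
# Gracefully remove a node: cordon, drain, delete.
node_name=k8s-node3   # example name

kubectl cordon "$node_name"
kubectl drain "$node_name" --ignore-daemonsets --delete-local-data
kubectl delete node "$node_name"
```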

##################################################################################

If you deployed with kubeadm, API aggregation is already enabled by default. If you deployed from binaries, add the following flags to the kube-apiserver configuration; after saving, restart the kube-apiserver service and the API aggregation layer is enabled:

```
# vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
...
```
systemctl restart kube-apiserver

Install metrics-server

[root@k8s-master1 metrics-server]# ll
total 28
-rw-r--r--. 1 root root  397 Mar 15 21:04 aggregated-metrics-reader.yaml
-rw-r--r--. 1 root root  303 Mar 15 21:04 auth-delegator.yaml
-rw-r--r--. 1 root root  324 Mar 15 21:04 auth-reader.yaml
-rw-r--r--. 1 root root  298 Mar 15 21:04 metrics-apiservice.yaml
-rw-r--r--. 1 root root 1277 Mar 27 15:57 metrics-server-deployment.yaml
-rw-r--r--. 1 root root  297 Mar 15 21:04 metrics-server-service.yaml
-rw-r--r--. 1 root root  532 Mar 15 21:04 resource-reader.yaml
[root@k8s-master1 metrics-server]# kubectl apply -f .

Verify:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl top node
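If `kubectl top node` errors out, it is worth confirming the APIService registration directly; these checks use only standard kubectl subcommands:

```
# The Metrics API must show AVAILABLE=True:
kubectl get apiservices v1beta1.metrics.k8s.io

# Per-Pod metrics across all namespaces:
kubectl top pod --all-namespaces
```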

 

#################################################################################

Pod horizontal scaling with HPA

The Pod spec must declare resource requests/limits; without requests the HPA has no baseline to compute CPU utilization against.

[root@k8s-master1 MengDe]# cat  educationcrmvue.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: educationcrmvue
  name: educationcrmvue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: educationcrmvue
  template:
    metadata:
      labels:
        app: educationcrmvue
    spec:
      containers:
        - name: educationcrmvue
          image: 10.1.1.11/library/educationcrmvue:5
          ports:
          - containerPort: 80
          resources:
            limits:
              cpu: 0.5
              memory: 500Mi
            requests:
              cpu: 250m 
              memory: 300Mi
 
 
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: educationcrmvue
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: educationcrmvue
  targetCPUUtilizationPercentage: 60
---
apiVersion: v1
kind: Service
metadata:
  name: educationcrmvue
spec:
  type: NodePort
  selector:
    app: educationcrmvue 
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
 
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: educationcrmvue
  namespace: default
spec:
  tls:
  - hosts:
    - educationcrmvue.rpdns.com
    secretName: educationcrmvue-ingress-secret
  rules:
  - host: educationcrmvue.rpdns.com
    http:
      paths:
      - path: /
        backend:
          serviceName: educationcrmvue
          servicePort: 80
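As a side note, the same CPU-based policy can also be created imperatively instead of via the manifest; a one-line equivalent of the HPA object above:

```
kubectl autoscale deployment educationcrmvue --min=1 --max=5 --cpu-percent=60
```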

Check the resource usage:

[root@k8s-master1 MengDe]# kubectl get hpa
NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%    1         5         1          3h48m

Install ab for load testing:

yum -y install httpd-tools

Start the load test:

[root@aghj-11 ~]# ab -n 100000 -c 1000 http://10.1.1.31:32257/index.html

The Pod count has grown to 4. The replicas do not all come up at once; the HPA scales up in steps based on how long the load is sustained (around 3 minutes here):

[root@k8s-master1 MengDe]# kubectl get hpa
NAME              REFERENCE                    TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   192%/60%   1         5         4          4h25m
[root@k8s-master1 MengDe]# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
educationcrmgo-56ccc7dcf8-qp4dj    1/1     Running   0          19h
educationcrmgw-74dcd57fbd-858rz    1/1     Running   0          19h
educationcrmvue-765b79b9bd-4b4p7   1/1     Running   0          4h28m
educationcrmvue-765b79b9bd-8tkk5   1/1     Running   0          28s
educationcrmvue-765b79b9bd-hnv5g   1/1     Running   0          28s
educationcrmvue-765b79b9bd-j7sgk   1/1     Running   0          28s

Below you can see that once the load drops, after a cooldown the HPA scales back down, removing Pods one at a time until the replica count returns to the expected value; this took about 5 minutes:

[root@k8s-master1 MengDe]# kubectl get hpa
NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%    1         5         4          4h29m
[root@k8s-master1 MengDe]# kubectl get hpa
NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%    1         5         2          4h31m
[root@k8s-master1 MengDe]# kubectl get hpa
NAME              REFERENCE                    TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%    1         5         1          4h32m
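The roughly 5-minute delay is not configured in the HPA object itself; it comes from the controller manager's scale-down stabilization window, which defaults to 5 minutes. If you need a different cooldown, it can be tuned there (config path assumed to match the binary layout used earlier):

```
# vi /opt/kubernetes/cfg/kube-controller-manager.conf   (path assumed)
--horizontal-pod-autoscaler-downscale-stabilization=5m0s
```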

####################################################################

Custom metrics based on Prometheus

The installation needs PV provisioning, so prepare an NFS dynamic provisioner first.

Deploy an NFS server first; I covered this in an earlier article, so it is skipped here.

[root@k8s-master1 nfs-client]# ll
total 12
-rw-r--r--. 1 root root  225 Nov 30 20:50 class.yaml
-rw-r--r--. 1 root root 1054 May 15 13:38 deployment.yaml
-rw-r--r--. 1 root root 1526 Nov 30 20:50 rbac.yaml
[root@k8s-master1 nfs-client]# kubectl apply -f .

Deploy Prometheus:

[root@k8s-master1 prometheus]# ll
total 68
-rw-r--r--. 1 root root  657 Dec 23 06:50 alertmanager-configmap.yaml
-rw-r--r--. 1 root root 2183 Dec 23 06:50 alertmanager-deployment.yaml
-rw-r--r--. 1 root root  331 Dec 23 06:50 alertmanager-pvc.yaml
-rw-r--r--. 1 root root  392 Dec 23 06:50 alertmanager-service.yaml
drwxr-xr-x. 2 root root  141 Dec 23 06:50 dashboard
-rw-r--r--. 1 root root 1198 Dec 23 06:50 grafana.yaml
-rw-r--r--. 1 root root 2378 Dec 23 06:50 kube-state-metrics-deployment.yaml
-rw-r--r--. 1 root root 2576 Dec 23 06:50 kube-state-metrics-rbac.yaml
-rw-r--r--. 1 root root  506 Dec 23 06:50 kube-state-metrics-service.yaml
-rw-r--r--. 1 root root 1641 Dec 23 06:50 node-exporter-ds.yml
-rw-r--r--. 1 root root  425 Dec 23 06:50 node-exporter-service.yaml
-rw-r--r--. 1 root root 4973 Dec 23 06:50 prometheus-configmap.yaml
-rw-r--r--. 1 root root 1080 Dec 23 06:50 prometheus-rbac.yaml
-rw-r--r--. 1 root root 4884 Dec 23 06:50 prometheus-rules.yaml
-rw-r--r--. 1 root root  392 Dec 23 06:50 prometheus-service.yaml
-rw-r--r--. 1 root root 3259 Dec 23 06:50 prometheus-statefulset.yaml
[root@k8s-master1 prometheus]# kubectl apply -f .

In Grafana, add the data source and enter the in-cluster Prometheus address:

http://prometheus.kube-system:9090


Then import the dashboard templates; this was covered in a previous article, so it is skipped here.

Deploy the Custom Metrics Adapter

However, the metrics Prometheus collects cannot be consumed by k8s directly, because the two data formats are incompatible. Another component, k8s-prometheus-adapter, is needed to convert Prometheus metrics into a format the k8s API can recognize. Because this is a custom API, it must also be registered with the main APIServer via the Kubernetes aggregator so that it can be reached directly under /apis/.
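For reference, that registration takes the form of an APIService object, which the Helm chart installed below creates for you. A sketch of what it looks like (field values reflect common chart defaults, not output captured from this cluster):

```
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  # Route /apis/custom.metrics.k8s.io/v1beta1 to the adapter's Service:
  service:
    name: prometheus-adapter
    namespace: kube-system
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```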

 https://github.com/DirectXMan12/k8s-prometheus-adapter 

PrometheusAdapter has a stable Helm chart, which we use directly.
 

wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar zxvf helm-v3.0.0-linux-amd64.tar.gz 
mv linux-amd64/helm /usr/bin/
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo update
helm repo list
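Optionally confirm the chart is visible before installing (Helm v3 syntax):

```
helm search repo stable/prometheus-adapter
```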

Deploy prometheus-adapter, pointing it at the Prometheus address:

[root@k8s-master1 ~]# helm install prometheus-adapter stable/prometheus-adapter --namespace kube-system --set prometheus.url=http://prometheus.kube-system,prometheus.port=9090
NAME: prometheus-adapter
LAST DEPLOYED: Fri May 15 15:38:32 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):

  kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

[root@k8s-master1 ~]# helm list -n kube-system
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
prometheus-adapter-77b7b4dd8b-ktsvx   1/1     Running   0          9m

Make sure the adapter is registered with the APIServer:

[root@k8s-master1 ~]# kubectl get apiservices |grep custom
v1beta1.custom.metrics.k8s.io          kube-system/prometheus-adapter   True        10m
[root@k8s-master1 ~]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
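The raw endpoint returns a JSON APIResourceList; piping it through jq (assumed installed) makes the exposed metric names readable:

```
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq -r '.resources[].name'
```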

###########################################

Practice: elastic scaling based on a QPS metric

Deploy an application:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metrics-app
  name: metrics-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: metrics-app
  template:
    metadata:
      labels:
        app: metrics-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - image: lizhenliang/metrics-app
        name: metrics-app
        ports:
        - name: web
          containerPort: 80
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-app
  labels:
    app: metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 80
  selector:
    app: metrics-app
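Apply the manifest and look up the Service's ClusterIP, which is what the curl below targets (the filename is an example):

```
kubectl apply -f metrics-app.yaml   # filename assumed
kubectl get svc metrics-app         # CLUSTER-IP is the 10.0.0.8 used below
```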
[root@k8s-master1 xiangmu]# curl 10.0.0.8/metrics
Hello! My name is metrics-app-7674cfb699-s7vs4. The last 10 seconds, the average QPS has been 0.6. Total requests served: 38

Create the HPA policy:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: metrics-app-hpa 
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 800m   # 800m means 0.8 requests/second
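Once the adapter is configured in the next step, the exact metric this HPA consumes can be queried by hand; the URL follows the custom metrics API path convention (jq assumed installed):

```
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests_per_second" | jq .
```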

Configure the adapter to collect this specific metric:

```
# kubectl edit cm prometheus-adapter -n kube-system
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: prometheus-adapter
    chart: prometheus-adapter-v0.1.2
    heritage: Tiller
    release: prometheus-adapter
  name: prometheus-adapter
data:
  config.yaml: |
    rules:
    - seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: {resource: "namespace"}
          kubernetes_pod_name: {resource: "pod"}
      name:
        matches: "^(.*)_total"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
...
```
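For intuition: with the rule above, the adapter's metricsQuery template expands, for the metrics-app Pods in the default namespace, to roughly the following PromQL (label values illustrative):

```
sum(rate(http_requests_total{kubernetes_namespace="default",kubernetes_pod_name=~"metrics-app-.+"}[2m])) by (kubernetes_pod_name)
```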

To make the configuration take effect, delete the adapter Pod and let the Deployment recreate it:

[root@k8s-master1 hpa]# kubectl delete  pod  prometheus-adapter-96bc4fb7-x6jwg -n kube-system
[root@k8s-master1 hpa]# kubectl get hpa
NAME              REFERENCE                    TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%      1         5         1          12h
metrics-app-hpa   Deployment/metrics-app       416m/800m   1         10        3          6m11s

Run a load test:

ab -n 100000 -c 100  http://10.0.0.8/metrics

After a few minutes the Pod count reaches the maximum:

[root@k8s-master2 ~]# kubectl get hpa
NAME              REFERENCE                    TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
educationcrmvue   Deployment/educationcrmvue   0%/60%      1         5         1          12h
metrics-app-hpa   Deployment/metrics-app       1252m/800m   1         10        10         12m

1. Prometheus collects the http_requests_total metric from each Pod via /metrics;
2. Prometheus aggregates the collected data;
3. When the APIServer is queried for http_requests_per_second, the aggregator routes the request to the adapter, which queries Prometheus and converts the result;
4. The HPA periodically queries the APIServer to check whether the configured autoscaler rule is met;
5. If the rule is met, the HPA changes the Deployment's replica count (via its ReplicaSet) to scale up or down.
