Struggling Out of the K8s Pit: Pod Resource Control

Preface: for Pod resource limits, you can refer to the templates on the official site: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Table of Contents

  • 1. Resource Limits
    • 1.1 Example
  • 2. Restart Policy
    • 2.1 Example
  • 3. Health Checks
    • 3.1 exec method
    • 3.2 httpGet method
    • 3.3 tcpSocket method

1. Resource Limits

  • Resource requests and limits for Pods and containers:
    • spec.containers[].resources.limits.cpu       # CPU upper bound
    • spec.containers[].resources.limits.memory    # memory upper bound
    • spec.containers[].resources.requests.cpu     # base CPU allocated at creation
    • spec.containers[].resources.requests.memory  # base memory allocated at creation

1.1 Example

  • Create a Pod and apply resource limits (note that YAML comments use `#`, not `//`)
[root@master01 demo]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db                # the db container
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:             # resources allocated at creation
        memory: "64Mi"
        cpu: "250m"         # 250m means 25% of one CPU core
      limits:               # upper bounds
        memory: "128Mi"     # memory may not exceed 128Mi
        cpu: "500m"         # CPU may not exceed 50% of one core
  - name: wp                # the wp container
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  • Apply the manifest and check that the limits are in place
[root@master01 demo]# kubectl apply -f pod2.yaml 
pod/frontend created
[root@master01 demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
frontend                          2/2     Running   2          107s
# check the detailed events
[root@master01 demo]# kubectl describe pod frontend
  Type     Reason     Age                  From                      Message
  ----     ------     ----                 ----                      -------
  Normal   Scheduled  2m32s                default-scheduler         Successfully assigned default/frontend to 192.168.170.136
  Normal   Pulling    118s                 kubelet, 192.168.170.136  pulling image "wordpress"
  Normal   Created    81s                  kubelet, 192.168.170.136  Created container
  Normal   Started    81s                  kubelet, 192.168.170.136  Started container
  Normal   Pulled     81s                  kubelet, 192.168.170.136  Successfully pulled image "wordpress"
  Normal   Created    17s (x4 over 118s)   kubelet, 192.168.170.136  Created container
  Normal   Started    17s (x4 over 118s)   kubelet, 192.168.170.136  Started container
  Normal   Pulling    17s (x4 over 2m30s)  kubelet, 192.168.170.136  pulling image "mysql"
  Normal   Pulled     17s (x4 over 118s)   kubelet, 192.168.170.136  Successfully pulled image "mysql"
  Warning  BackOff    9s (x4 over 73s)     kubelet, 192.168.170.136  Back-off restarting failed container

[root@master01 demo]# kubectl describe nodes 192.168.170.136  # check the node's resource status
Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                     ------------  ----------  ---------------  -------------
  default                    frontend                                 500m (50%)    1 (100%)    128Mi (3%)       256Mi (6%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       550m (55%)  1100m (110%)
  memory    228Mi (5%)  556Mi (14%)
  # the resource limits are in effect

2. Restart Policy

  • Restart policy: the action taken to restart a Pod's container after a failure

    • Always: always restart the container after it terminates; this is the default policy
    • OnFailure: restart the container only when it exits abnormally (non-zero exit code)
    • Never: never restart the container after it terminates
  • Note: k8s does not support restarting a Pod in place; a Pod can only be deleted and recreated

2.1 Example

[root@master01 demo]# kubectl edit deploy
 restartPolicy: Always   # the default restart policy is Always
  • Create a Pod and set its restart policy
[root@master01 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh    # run the command through a shell
    - -c
    - sleep 30; exit 3    # sleep for 30 seconds, then exit abnormally (exit code 3)
[root@master01 demo]# kubectl apply -f pod3.yaml  # create the Pod
pod/foo created
# the restart count has increased to 1
[root@master01 demo]# kubectl get pods
NAME                              READY   STATUS             RESTARTS   AGE
foo                               1/1     Running            1          39s
# edit the yaml file
[root@master demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10
  restartPolicy: Never    # change the restart policy
[root@master01 demo]# kubectl delete -f pod3.yaml   # delete the previous Pod
pod "foo" deleted
[root@master01 demo]# kubectl apply -f pod3.yaml    # recreate the Pod
pod/foo created
[root@master01 demo]# kubectl get pods   # a Pod in Completed state is not restarted under Never
NAME                              READY   STATUS      RESTARTS   AGE
foo                               0/1     Completed   0          17s
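
The walkthrough above demonstrates Always (the default, restart count climbing) and Never (the Pod stays Completed). The remaining policy, OnFailure, restarts the container only on a non-zero exit code. A minimal sketch, not part of the original walkthrough, reusing the same Pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 3    # non-zero exit code, so OnFailure restarts the container
  restartPolicy: OnFailure
```

With this manifest, `kubectl get pods` should show the RESTARTS count climbing, whereas changing the command to plain `sleep 10` (exit code 0) would leave the Pod in Completed state, just like the Never example above.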

3. Health Checks

  • A health check is also called a probe
    (Note: both probe types can be defined on the same container)

    • livenessProbe: if the check fails, the container is killed and handled according to the Pod's restartPolicy
    • readinessProbe: if the check fails, Kubernetes removes the Pod from the Service endpoints
  • Probes support three check methods:

    • httpGet: send an HTTP request; a status code in the 200-400 range counts as success
    • exec: run a shell command; an exit code of 0 counts as success
    • tcpSocket: open a TCP socket to the container; success means the connection was established

For details, see the official documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

3.1 exec method

  • Example (adapted from the official docs): create a Pod with an exec liveness probe
[root@master01 demo]# vim liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh       # run the command in a shell
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy
    livenessProbe:      # on failure, acts according to the restart policy
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5    # wait 5 seconds after the container starts before the first check
      periodSeconds: 5          # then run the check every 5 seconds
[root@master01 demo]# kubectl apply -f liveness.yaml   # create the Pod
pod/liveness-exec created
[root@master01 demo]# kubectl get pods -w
NAME            READY   STATUS              RESTARTS   AGE
liveness-exec   0/1     ContainerCreating   0          4s
liveness-exec   1/1   Running   0     17s
liveness-exec   0/1   Completed   0     47s
liveness-exec   1/1   Running   1     63s
[root@master01 demo]# kubectl get pods     # the container has been restarted four times
liveness-exec                     1/1     Running     4          4m11s

3.2 httpGet method

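The original post only showed a screenshot here, which did not survive. Based on the official liveness-probe documentation linked above, an httpGet probe looks roughly like the following; the image name and path follow the upstream example, so treat the details as a sketch rather than the exact content of the lost screenshot:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz        # the kubelet sends GET /healthz to the container
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3  # wait 3 seconds before the first check
      periodSeconds: 3        # then check every 3 seconds
```

A response code in the 200-400 range counts as success; anything else kills the container, after which the restart policy takes over.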

3.3 tcpSocket method

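This section also only contained a screenshot in the original. As a sketch following the official docs (the goproxy image comes from the upstream example), a tcpSocket probe simply tries to open a TCP connection to the given port:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080            # readiness: the Pod joins the Service endpoints once the port accepts connections
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080            # liveness: the container is killed and restarted if the connection fails
      initialDelaySeconds: 15
      periodSeconds: 20
```

Defining both probes on the same container is allowed, as noted above: readiness controls Service traffic while liveness controls restarts.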
