Advanced Pod Resource Management


                                     Author: 尹正杰 (Yin Zhengjie)

Copyright notice: This is an original work. Reproduction without permission is prohibited; violations will be pursued legally.

 

 

 

一.livenessProbe (liveness probing: the kubelet periodically checks whether the container is still healthy; when the check fails the container is restarted immediately, and if it keeps failing after being restarted, subsequent restarts are delayed with a progressively longer back-off interval)

1>.Define the YAML files

[[email protected] ~]# cat /yinzhengjie/data/k8s/manifests/pod/liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-exec
  name: liveness-exec
spec:
  containers:
  - name: liveness-demo
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - test
        - -e
        - /tmp/healthy
[[email protected] ~]# 
[[email protected] ~]# cat /yinzhengjie/data/k8s/manifests/pod/liveness-http.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness-demo
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - 'echo Healthy > /usr/share/nginx/html/healthz'
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
        scheme: HTTP
      periodSeconds: 2
      failureThreshold: 2
      initialDelaySeconds: 3
[[email protected] ~]# 
[[email protected] ~]# 
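For completeness, besides the exec and httpGet handlers shown above, a liveness probe can also use a tcpSocket check, which simply tests whether a port accepts connections. A minimal sketch (the pod name liveness-tcp is made up for illustration and is not part of the demos below):

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-tcp
  name: liveness-tcp
spec:
  containers:
  - name: liveness-demo
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: http        # probe succeeds if a TCP connection to this port can be opened
      initialDelaySeconds: 3
      periodSeconds: 5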

2>.Apply the YAML files and check the pod information

[[email protected] ~]# kubectl apply -f /yinzhengjie/data/k8s/manifests/pod/liveness-exec.yaml 
pod/liveness-exec created
[[email protected] ~]# 
[[email protected] ~]# kubectl apply -f /yinzhengjie/data/k8s/manifests/pod/liveness-http.yaml 
pod/liveness-http created
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
liveness-exec              0/1     CrashLoopBackOff   7          19m
liveness-http              1/1     Running            0          10m
mynginx-677d85dbd5-t9xfz   1/1     Running            0          4h46m
[[email protected] ~]# 
[[email protected] ~]# 

3>.Verify whether the pods get restarted

[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
liveness-exec              0/1     CrashLoopBackOff   7          20m
liveness-http              1/1     Running            0          10m
mynginx-677d85dbd5-t9xfz   1/1     Running            0          4h46m
[[email protected] ~]# 
[[email protected] ~]# kubectl describe pods liveness-http
Name:         liveness-http
Namespace:    default
Priority:     0
Node:         node201.yinzhengjie.org.cn/172.200.1.201
Start Time:   Thu, 06 Feb 2020 13:15:43 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness-http","na
Status:       Running
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
Containers:
  liveness-demo:
    Container ID:   docker://f9457bb20479d8e0c121c8c1fbe04146f767ee895522c5cc47a759e939993b07
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 06 Feb 2020 13:15:44 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/healthz delay=3s timeout=1s period=2s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jpjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-4jpjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jpjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                 Message
  ----    ------     ----  ----                                 -------
  Normal  Scheduled  10m   default-scheduler                    Successfully assigned default/liveness-http to node201.yinzhengjie.o
  Normal  Pulled     10m   kubelet, node201.yinzhengjie.org.cn  Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    10m   kubelet, node201.yinzhengjie.org.cn  Created container liveness-demo
  Normal  Started    10m   kubelet, node201.yinzhengjie.org.cn  Started container liveness-demo
[[email protected] ~]# 
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
liveness-exec              1/1     Running   8          21m
liveness-http              1/1     Running   0          11m
mynginx-677d85dbd5-t9xfz   1/1     Running   0          4h47m
[[email protected] ~]# 
[[email protected] ~]# kubectl describe pods liveness-exec
Name:         liveness-exec
Namespace:    default
Priority:     0
Node:         node202.yinzhengjie.org.cn/172.200.1.202
Start Time:   Thu, 06 Feb 2020 13:05:53 +0800
Labels:       test=liveness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness-exec"},"name":"liveness-exec
","namespace":"default...Status:       Running
IP:           10.244.2.2
IPs:
  IP:  10.244.2.2
Containers:
  liveness-demo:
    Container ID:  docker://0e718e53f333af21d266b3ff0c1f69c76712675ec772da0f5b57eeb2cc9a0512
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    State:          Running
      Started:      Thu, 06 Feb 2020 13:26:44 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 06 Feb 2020 13:20:10 +0800
      Finished:     Thu, 06 Feb 2020 13:21:30 +0800
    Ready:          True
    Restart Count:  8
    Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jpjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-4jpjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jpjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                                 Message
  ----     ------     ----                 ----                                 -------
  Normal   Scheduled  21m                  default-scheduler                    Successfully assigned default/liveness-exec to node202.yinzhengjie.org.cn
  Normal   Created    18m (x3 over 21m)    kubelet, node202.yinzhengjie.org.cn  Created container liveness-demo
  Normal   Started    18m (x3 over 21m)    kubelet, node202.yinzhengjie.org.cn  Started container liveness-demo
  Normal   Pulling    16m (x4 over 21m)    kubelet, node202.yinzhengjie.org.cn  Pulling image "busybox"
  Normal   Pulled     16m (x4 over 21m)    kubelet, node202.yinzhengjie.org.cn  Successfully pulled image "busybox"
  Warning  Unhealthy  11m (x19 over 20m)   kubelet, node202.yinzhengjie.org.cn  Liveness probe failed:
  Normal   Killing    6m28s (x8 over 20m)  kubelet, node202.yinzhengjie.org.cn  Container liveness-demo failed liveness probe, will be restarted
  Warning  BackOff    84s (x38 over 10m)   kubelet, node202.yinzhengjie.org.cn  Back-off restarting failed container
[[email protected] ~]# 


 

二.readinessProbe (readiness probing: it verifies whether the service in the container is actually able to serve requests; a container that is ready is added as a backend of its Service, while a container that stays not ready is removed from the Service's backends. Note that a readiness probe is never allowed to restart the container, which is the key difference from the liveness probe)

1>.Define the YAML file

[[email protected] ~]# cat /yinzhengjie/data/k8s/manifests/pod/readiness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness-exec
  name: readiness-exec
spec:
  containers:
  - name: readiness-demo
    image: busybox
    args: ["/bin/sh", "-c", "while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done"] 
    readinessProbe:
      exec:
        command: ["test", "-e", "/tmp/ready"]
      initialDelaySeconds: 5
      periodSeconds: 5
[[email protected] ~]# 

2>.Apply the YAML file and check the pod information

[[email protected] ~]# kubectl apply -f /yinzhengjie/data/k8s/manifests/pod/readiness-exec.yaml 
pod/readiness-exec created
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods        #The STATUS is "Running", but the container is not READY yet
NAME                       READY   STATUS             RESTARTS   AGE
liveness-exec              0/1     CrashLoopBackOff   11         37m
liveness-http              1/1     Running            0          27m
mynginx-677d85dbd5-t9xfz   1/1     Running            0          5h3m
readiness-exec             0/1     Running            0          7s
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods        #Because the probe has an initial delay, it takes a little while before the pod's container is reported as ready
NAME                       READY   STATUS    RESTARTS   AGE
liveness-exec              1/1     Running   12         37m
liveness-http              1/1     Running   0          27m
mynginx-677d85dbd5-t9xfz   1/1     Running   0          5h4m
readiness-exec             1/1     Running   0          45s
[[email protected] ~]# 

3>.Verify that the readiness state changes (the container itself is not restarted)

[[email protected] ~]# 
[[email protected] ~]# kubectl exec readiness-exec -- touch /tmp/ready
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
liveness-exec              0/1     CrashLoopBackOff   13         45m
liveness-http              1/1     Running            0          35m
mynginx-677d85dbd5-t9xfz   1/1     Running            0          5h11m
readiness-exec             1/1     Running            0          8m28s
[[email protected] ~]# 
[[email protected] ~]# 
[[email protected] ~]# kubectl describe pods readiness-exec 
Name:         readiness-exec
Namespace:    default
Priority:     0
Node:         node202.yinzhengjie.org.cn/172.200.1.202
Start Time:   Thu, 06 Feb 2020 13:42:51 +0800
Labels:       test=readiness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"readiness-exec"},"name":"readiness-exec","namespace":"defau...
Status:       Running
IP:           10.244.2.3
IPs:
  IP:  10.244.2.3
Containers:
  readiness-demo:
    Container ID:  docker://a603c504e38d91d420c9aaa8d062d2b595323b0f91ab789bedafc26035e95eb6
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Args:
      /bin/sh
      -c
      while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done
    State:          Running
      Started:      Thu, 06 Feb 2020 13:42:57 +0800
    Ready:          True
    Restart Count:  0
    Readiness:      exec [test -e /tmp/ready] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jpjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-4jpjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jpjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                                 Message
  ----     ------     ----                  ----                                 -------
  Normal   Scheduled  10m                   default-scheduler                    Successfully assigned default/readiness-exec to node202.yinzhengjie.org.cn
  Normal   Pulling    10m                   kubelet, node202.yinzhengjie.org.cn  Pulling image "busybox"
  Normal   Pulled     10m                   kubelet, node202.yinzhengjie.org.cn  Successfully pulled image "busybox"
  Normal   Created    10m                   kubelet, node202.yinzhengjie.org.cn  Created container readiness-demo
  Normal   Started    10m                   kubelet, node202.yinzhengjie.org.cn  Started container readiness-demo
  Warning  Unhealthy  5m22s (x39 over 10m)  kubelet, node202.yinzhengjie.org.cn  Readiness probe failed:
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
liveness-exec              1/1     Running   14         47m
liveness-http              1/1     Running   0          37m
mynginx-677d85dbd5-t9xfz   1/1     Running   0          5h13m
readiness-exec             1/1     Running   0          10m
[[email protected] ~]# 
[[email protected] ~]# kubectl exec readiness-exec -- rm -f /tmp/ready
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
liveness-exec              1/1     Running   15         48m
liveness-http              1/1     Running   0          38m
mynginx-677d85dbd5-t9xfz   1/1     Running   0          5h14m
readiness-exec             0/1     Running   0          11m
[[email protected] ~]# 
[[email protected] ~]# 
[[email protected] ~]# kubectl get pods
NAME                       READY   STATUS             RESTARTS   AGE
liveness-exec              0/1     CrashLoopBackOff   15         49m
liveness-http              1/1     Running            0          39m
mynginx-677d85dbd5-t9xfz   1/1     Running            0          5h15m
readiness-exec             0/1     Running            0          12m
[[email protected] ~]# kubectl describe pods readiness-exec 
Name:         readiness-exec
Namespace:    default
Priority:     0
Node:         node202.yinzhengjie.org.cn/172.200.1.202
Start Time:   Thu, 06 Feb 2020 13:42:51 +0800
Labels:       test=readiness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"readiness-exec"},"name":"readiness-exec","namespace":"defau...
Status:       Running
IP:           10.244.2.3
IPs:
  IP:  10.244.2.3
Containers:
  readiness-demo:
    Container ID:  docker://a603c504e38d91d420c9aaa8d062d2b595323b0f91ab789bedafc26035e95eb6
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:           <none>
    Host Port:      <none>
    Args:
      /bin/sh
      -c
      while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done
    State:          Running
      Started:      Thu, 06 Feb 2020 13:42:57 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      exec [test -e /tmp/ready] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jpjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-4jpjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jpjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                 From                                 Message
  ----     ------     ----                ----                                 -------
  Normal   Scheduled  12m                 default-scheduler                    Successfully assigned default/readiness-exec to node202.yinzhengjie.org.cn
  Normal   Pulling    12m                 kubelet, node202.yinzhengjie.org.cn  Pulling image "busybox"
  Normal   Pulled     12m                 kubelet, node202.yinzhengjie.org.cn  Successfully pulled image "busybox"
  Normal   Created    12m                 kubelet, node202.yinzhengjie.org.cn  Created container readiness-demo
  Normal   Started    12m                 kubelet, node202.yinzhengjie.org.cn  Started container readiness-demo
  Warning  Unhealthy  93s (x66 over 11m)  kubelet, node202.yinzhengjie.org.cn  Readiness probe failed:
[[email protected] ~]# 


 

三.Pod object phases

  A Pod object is always in one of the following phases of its lifecycle (the commands after this list show how to read a Pod's current phase):

  Pending:
    The API Server has created the Pod object and stored it in etcd, but the Pod has not yet been scheduled (usually because the requested resources, such as memory, are in short supply) or its images are still being pulled from the registry.

  Running:
    The Pod has been scheduled to a node and all of its containers have been created by the kubelet.

  Succeeded:
    All containers in the Pod have terminated successfully and will not be restarted.

  Failed:
    All containers have terminated and at least one of them failed, i.e. it returned a non-zero exit code or was killed by the system.

  Unknown:
    The API Server cannot obtain the Pod's state, typically because it cannot communicate with the kubelet on the node where the Pod is running.
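The phase is recorded in the Pod's status and can be read with standard kubectl queries; for example, either of the following (using the liveness-http pod created earlier) prints it:

kubectl get pods liveness-http -o jsonpath='{.status.phase}'
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase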

 

四.How a Pod object is created

  (1)The user submits a Pod creation request to the Kubernetes API Server;

  (2)The API Server fills in any parameters the user did not specify with their default values (admission control / defaulting) and writes the object into etcd;

  (3)The Scheduler notices, through its watch on the API Server, that a new Pod needs to be placed; it selects the most suitable node according to its scheduling algorithm and reports the choice back to the API Server;

  (4)The API Server updates the object in etcd (i.e. sets the "nodeName" field) and, at the same time, the kubelet on the chosen node learns about the new Pod through its watch;

  (5)The kubelet fetches the Pod's specification from the API Server and asks the local Docker engine to create the corresponding containers; once the engine has created them, the kubelet reports their status back to the API Server;

  (6)The API Server receives the kubelet's report and updates the Pod information in etcd once more (for example, the Pod's phase changes from Pending to Running).
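To watch this sequence on a running cluster, one simple illustrative approach is to watch the Pod list while creating a Pod, and then read its event log (here reusing the liveness-http manifest from above):

# in one terminal, watch the Pod appear and change phase
kubectl get pods --watch

# in another terminal, create the Pod and read its event log
kubectl apply -f /yinzhengjie/data/k8s/manifests/pod/liveness-http.yaml
kubectl describe pods liveness-http     # the Events section lists Scheduled, Pulled, Created and Started in order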

 

五.Container restart policy

  A Pod's containers may be terminated because a program crashed, because the container requested more resources than its limits allow, and so on; whether to recreate them is decided by the Pod's restart policy (restartPolicy) attribute. A minimal example follows the list below.

  Always:
    Restart the container whenever it terminates; this is the default setting.

  OnFailure:
    Restart the container only when it terminates with an error (a non-zero exit code).

  Never:
    Never restart the container.
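restartPolicy is set at the Pod level and applies to every container in the Pod. A minimal sketch (the pod name onfailure-demo and its command are made up purely for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: onfailure-demo
spec:
  restartPolicy: OnFailure        # Always (default) | OnFailure | Never
  containers:
  - name: demo
    image: busybox
    command: ["/bin/sh", "-c", "sleep 10; exit 1"]   # exits with a non-zero code, so it is restarted under OnFailure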
    

 

六.How a Pod is terminated

  (1)The user submits a Pod deletion request to the Kubernetes API Server (once the request is submitted, the Pod is marked as "Terminating");

  (2)The API Server fills in any parameters the user did not specify with their defaults (admission control) and writes the change into etcd, setting a grace period (for example 30 seconds, which is why deleting a Pod usually takes a little while) rather than removing the Pod immediately; the sketch after this list shows where the grace period and the preStop hook are configured;

  (3)The kubelet on the node running the Pod learns, through its watch on the API Server, that the Pod is to be terminated; it signals the local Docker engine to shut the containers down, and the "preStop" hook commands are executed as part of that shutdown;

  (4)In parallel, the API Server marks the Pod as terminating and informs the Endpoint controller, which removes the Pod from the endpoints of every Service that selects it;

  (5)If the Pod's processes are still running once the grace period has expired (say the grace period is 30s and 31s have passed), the kubelet sends SIGKILL (the equivalent of "kill -9") and the containers are terminated immediately;

  (6)Once the containers are gone, the API Server removes the Pod's data from etcd.
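Both the grace period and the preStop hook mentioned above are part of the Pod spec. A minimal sketch (the pod name graceful-demo is invented for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo
spec:
  terminationGracePeriodSeconds: 30      # how long to wait before SIGKILL (default is 30s)
  containers:
  - name: demo
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]   # let nginx drain connections before shutdown

The grace period can also be overridden at deletion time, e.g. kubectl delete pods graceful-demo --grace-period=5.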

 

七.Pod security configuration interface (readers who need it can explore further in the official documentation)

[[email protected] ~]# kubectl explain pods.spec.securityContext
KIND:     Pod
VERSION:  v1

RESOURCE: securityContext <Object>

DESCRIPTION:
     SecurityContext holds pod-level security attributes and common container
     settings. Optional: Defaults to empty. See type description for default
     values of each field.

     PodSecurityContext holds pod-level security attributes and common container
     settings. Some fields are also present in container.securityContext. Field
     values of container.securityContext take precedence over field values of
     PodSecurityContext.

FIELDS:
   fsGroup    <integer>
     A special supplemental group that applies to all containers in a pod. Some
     volume types allow the Kubelet to change the ownership of that volume to be
     owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit
     is set (new files created in the volume will be owned by FSGroup) 3. The
     permission bits are OR'd with rw-rw---- If unset, the Kubelet will not
     modify the ownership and permissions of any volume.

   runAsGroup    <integer>
     The GID to run the entrypoint of the container process. Uses runtime
     default if unset. May also be set in SecurityContext. If set in both
     SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence for that container.

   runAsNonRoot    <boolean>
     Indicates that the container must run as a non-root user. If true, the
     Kubelet will validate the image at runtime to ensure that it does not run
     as UID 0 (root) and fail to start the container if it does. If unset or
     false, no such validation will be performed. May also be set in
     SecurityContext. If set in both SecurityContext and PodSecurityContext, the
     value specified in SecurityContext takes precedence.

   runAsUser    <integer>
     The UID to run the entrypoint of the container process. Defaults to user
     specified in image metadata if unspecified. May also be set in
     SecurityContext. If set in both SecurityContext and PodSecurityContext, the
     value specified in SecurityContext takes precedence for that container.

   seLinuxOptions    <Object>
     The SELinux context to be applied to all containers. If unspecified, the
     container runtime will allocate a random SELinux context for each
     container. May also be set in SecurityContext. If set in both
     SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence for that container.

   supplementalGroups    <[]integer>
     A list of groups applied to the first process run in each container, in
     addition to the container's primary GID. If unspecified, no groups will be
     added to any container.

   sysctls    <[]Object>
     Sysctls hold a list of namespaced sysctls used for the pod. Pods with
     unsupported sysctls (by the container runtime) might fail to launch.

   windowsOptions    <Object>
     The Windows specific settings applied to all containers. If unspecified,
     the options within a container's SecurityContext will be used. If set in
     both SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence.

[[email protected] ~]#  
   
  
 
[[email protected] ~]# kubectl explain pods.spec.containers.securityContext
KIND:     Pod
VERSION:  v1

RESOURCE: securityContext <Object>

DESCRIPTION:
     Security options the pod should run with. More info:
     https://kubernetes.io/docs/concepts/policy/security-context/ More info:
     https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

     SecurityContext holds security configuration that will be applied to a
     container. Some fields are present in both SecurityContext and
     PodSecurityContext. When both are set, the values in SecurityContext take
     precedence.

FIELDS:
   allowPrivilegeEscalation    <boolean>
     AllowPrivilegeEscalation controls whether a process can gain more
     privileges than its parent process. This bool directly controls if the
     no_new_privs flag will be set on the container process.
     AllowPrivilegeEscalation is true always when the container is: 1) run as
     Privileged 2) has CAP_SYS_ADMIN

   capabilities    <Object>
     The capabilities to add/drop when running containers. Defaults to the
     default set of capabilities granted by the container runtime.

   privileged    <boolean>
     Run container in privileged mode. Processes in privileged containers are
     essentially equivalent to root on the host. Defaults to false.

   procMount    <string>
     procMount denotes the type of proc mount to use for the containers. The
     default is DefaultProcMount which uses the container runtime defaults for
     readonly paths and masked paths. This requires the ProcMountType feature
     flag to be enabled.

   readOnlyRootFilesystem    <boolean>
     Whether this container has a read-only root filesystem. Default is false.

   runAsGroup    <integer>
     The GID to run the entrypoint of the container process. Uses runtime
     default if unset. May also be set in PodSecurityContext. If set in both
     SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence.

   runAsNonRoot    <boolean>
     Indicates that the container must run as a non-root user. If true, the
     Kubelet will validate the image at runtime to ensure that it does not run
     as UID 0 (root) and fail to start the container if it does. If unset or
     false, no such validation will be performed. May also be set in
     PodSecurityContext. If set in both SecurityContext and PodSecurityContext,
     the value specified in SecurityContext takes precedence.

   runAsUser    <integer>
     The UID to run the entrypoint of the container process. Defaults to user
     specified in image metadata if unspecified. May also be set in
     PodSecurityContext. If set in both SecurityContext and PodSecurityContext,
     the value specified in SecurityContext takes precedence.

   seLinuxOptions    <Object>
     The SELinux context to be applied to the container. If unspecified, the
     container runtime will allocate a random SELinux context for each
     container. May also be set in PodSecurityContext. If set in both
     SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence.

   windowsOptions    <Object>
     The Windows specific settings applied to all containers. If unspecified,
     the options from the PodSecurityContext will be used. If set in both
     SecurityContext and PodSecurityContext, the value specified in
     SecurityContext takes precedence.

[[email protected] ~]# 
[[email protected] ~]#  
   
  
 
[[email protected] ~]# kubectl explain pods.spec.containers.securityContext.capabilities
KIND:     Pod
VERSION:  v1

RESOURCE: capabilities <Object>

DESCRIPTION:
     The capabilities to add/drop when running containers. Defaults to the
     default set of capabilities granted by the container runtime.

     Adds and removes POSIX capabilities from running containers.

FIELDS:
   add    <[]string>
     Added capabilities

   drop    <[]string>
     Removed capabilities

[[email protected] ~]# 
[[email protected] ~]#  
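As a quick illustration of the fields listed above, a Pod that runs as a non-root user and grants a single extra capability could look roughly like this (a sketch based on the field documentation above; it is not part of the demos in this article):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:               # pod-level settings, inherited by all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: demo
    image: busybox
    command: ["/bin/sh", "-c", "id; sleep 3600"]
    securityContext:             # container-level settings override the pod-level ones
      allowPrivilegeEscalation: false
      capabilities:
        add: ["NET_ADMIN"]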
   
  
 

 

八.Resource requests and limits (compute resource quotas for containers)

  CPU is a compressible resource: the amount a container receives can be throttled down on demand. Memory, by contrast, is (currently) an incompressible resource, and trying to shrink a running container's memory can cause problems to some degree.

  How CPU resources are counted:
    One CPU core equals 1000 millicores, i.e. 1 = 1000m and 0.5 = 500m.
  
  How memory resources are counted:
    The default unit is bytes; the suffixes E, P, T, G, M and K, or their binary forms Ei, Pi, Ti, Gi, Mi and Ki, may also be used. The fragment below shows the two CPU notations side by side.
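For example, the following resources fragment expresses half a CPU core in both notations (shown only to illustrate the units; it is not one of the manifests used below):

resources:
  requests:
    cpu: "0.5"        # half a core
    memory: "256Mi"   # 256 mebibytes
  limits:
    cpu: "500m"       # the same half core, written in millicores
    memory: "512Mi"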

  Tip:
    Below are two Pod YAML files that I found on the Internet; interested readers can test them themselves, for example:
      [[email protected] ~]# kubectl apply -f /yinzhengjie/data/k8s/manifests/pod/memleak-pod.yaml
      [[email protected] ~]# kubectl describe pods memleak-pod
[[email protected] ~]# kubectl explain pods.spec.containers.resources
KIND:     Pod
VERSION:  v1

RESOURCE: resources <Object>

DESCRIPTION:
     Compute Resources required by this container. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

     ResourceRequirements describes the compute resource requirements.

FIELDS:
   limits    <map[string]string>
     Limits describes the maximum amount of compute resources allowed. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

   requests    <map[string]string>
     Requests describes the minimum amount of compute resources required. If
     Requests is omitted for a container, it defaults to Limits if that is
     explicitly specified, otherwise to an implementation-defined value. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

[[email protected] ~]#  
   
  
 
[[email protected] ~]# cat /yinzhengjie/data/k8s/manifests/pod/memleak-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memleak-pod
spec:
  containers:
  - name: simmemleak
    image: saadali/simmemleak
    resources:
      requests:
        memory: "64Mi"
        cpu: "1"
      limits:
        memory: "64Mi"
        cpu: "1"
[[email protected] ~]# 
[[email protected] ~]# cat /yinzhengjie/data/k8s/manifests/pod/stress-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: stress-pod
spec:
  containers:
  - name: stress
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "-m 1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "400m"
[[email protected] ~]# 
[[email protected] ~]# 

 

九.Pod quality of service (QoS) classes

  Based on a Pod's requests and limits attributes, Kubernetes assigns the Pod to one of three Quality of Service (QoS) classes: BestEffort, Burstable and Guaranteed. The commands after the list below show how to read the class that was assigned.

  Guaranteed:
    A Pod in which every container sets CPU requests and limits to the same value, and every container sets memory requests and limits to the same value, is automatically placed in this class; such Pods have the highest priority.

  Burstable:
    A Pod in which at least one container sets a CPU or memory requests value, but which does not meet the Guaranteed criteria, is automatically placed in this class; such Pods have medium priority.

  BestEffort:
    A Pod in which no container sets any requests or limits is automatically placed in this class; such Pods have the lowest priority.
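The class Kubernetes assigned is recorded in the Pod's status; either of the following reads it for the memleak-pod used in the previous section:

kubectl get pods memleak-pod -o jsonpath='{.status.qosClass}'
kubectl describe pods memleak-pod | grep "QoS Class"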
  
[[email protected] ~]# kubectl explain pods.spec.priorityClassName
KIND:     Pod
VERSION:  v1

FIELD:    priorityClassName <string>

DESCRIPTION:
     If specified, indicates the pod's priority. "system-node-critical" and
     "system-cluster-critical" are two special keywords which indicate the
     highest priorities with the former being the highest priority. Any other
     name must be defined by creating a PriorityClass object with that name. If
     not specified, the pod priority will be default or zero if there is no
     default.
[[email protected] ~]# 
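Pod priority (which priorityClassName refers to) is separate from the QoS class; it is defined by a PriorityClass object that the Pod then references. A minimal sketch (the class name high-priority and the pod priority-demo are invented for illustration):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "A priority class for important workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: priority-demo
spec:
  priorityClassName: high-priority
  containers:
  - name: demo
    image: nginx:1.14-alpine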
[[email protected] ~]# kubectl describe pods memleak-pod
Name:         memleak-pod
Namespace:    default
Priority:     0
Node:         node201.yinzhengjie.org.cn/172.200.1.201
Start Time:   Thu, 06 Feb 2020 15:30:42 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"memleak-pod","namespace":"default"},"spec":{"containers":[{"image":"s...
Status:       Running
IP:           10.244.1.12
IPs:
  IP:  10.244.1.12
Containers:
  simmemleak:
    Container ID:   docker://58d3a4bb976bf247510d05ae66fdaa4096a3d96cd67a19eb8041cf41f20285ad
    Image:          saadali/simmemleak
    Image ID:       docker-pullable://saadali/simmemleak@sha256:5cf58299a7698b0c9779acfed15c8e488314fcb80944550eab5992cdf3193054
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 06 Feb 2020 15:42:13 +0800
      Finished:     Thu, 06 Feb 2020 15:42:13 +0800
    Ready:          False
    Restart Count:  7
    Limits:
      cpu:     1
      memory:  64Mi
    Requests:
      cpu:        1
      memory:     64Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4jpjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-4jpjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4jpjf
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                  From                                 Message
  ----     ------          ----                 ----                                 -------
  Normal   Scheduled       11m                  default-scheduler                    Successfully assigned default/memleak-pod to node201.yinzhengjie.org.cn
  Normal   Started         11m (x3 over 11m)    kubelet, node201.yinzhengjie.org.cn  Started container simmemleak
  Normal   SandboxChanged  11m (x3 over 11m)    kubelet, node201.yinzhengjie.org.cn  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         10m (x4 over 11m)    kubelet, node201.yinzhengjie.org.cn  Pulling image "saadali/simmemleak"
  Normal   Pulled          10m (x4 over 11m)    kubelet, node201.yinzhengjie.org.cn  Successfully pulled image "saadali/simmemleak"
  Normal   Created         10m (x4 over 11m)    kubelet, node201.yinzhengjie.org.cn  Created container simmemleak
  Warning  BackOff         107s (x56 over 11m)  kubelet, node201.yinzhengjie.org.cn  Back-off restarting failed container
[[email protected] ~]# 
[[email protected] ~]# 
[[email protected] ~]# kubectl describe pods memleak-pod | grep QoS
QoS Class:       Guaranteed
[[email protected] ~]# 

 
