CKA 2023 Practice Exercises

Table of Contents

  • 1. Preparation
    • 1.1 Certificate details
    • 1.2 Exam notes
  • 2. Exam instructions
    • 2.1 Exam environment
    • 2.2 Resources allowed during the exam
    • 2.3 Acceptable testing locations
    • 2.4 Additional resources
    • 2.A Exam tips
  • 3. Practice tasks
    • Task 1. RBAC - role based access control
    • Task 2. drain - highly-available
    • Task 3. upgrade - Kubeadm
    • Task 4. snapshot - Implement etcd
    • Task 5. network policy - network interface
    • Task 6. expose - service types
    • Task 7. ingress nginx
    • Task 8. scale - scale applications
    • Task 9. schedule - Pod scheduling
    • Task 10. describe - Pod scheduling
    • Task 11. multi Containers
    • Task 12. pv - persistent volumes
    • Task 13. pvc - persistent volume claims
    • Task 14. logs - node logging
    • Task 15. sidecar - Manage container stdout
    • Task 16. top - monitor applications
    • Task 17. Daemon - cluster component
  • A. Appendix
    • A1. Grading script

1. Preparation

1.1 Certificate details

https://training.linuxfoundation.cn/certificate/details/1

1.2 Exam notes

2. Exam instructions

2.1 Exam environment

CKA Clusters

Cluster   Members              Practice-environment nodes
-         console              physical machine (base node)
k8s       1 master, 2 workers  k8s-master; k8s-worker1, k8s-worker2
ek8s      1 master, 2 workers  ek8s-master; ek8s-worker1, ek8s-worker2
  • At the start of each task, you will be given the command to run to make sure you are working on the correct cluster
  • The nodes that make up each cluster can be reached via ssh: ssh <nodename>
  • You can obtain elevated privileges on any node with: sudo -i
  • You can also use sudo at any time to execute individual commands with elevated privileges
  • After completing each task, you must return to the base node (hostname node-1)
  • Nested ssh is not supported
  • You can work on any cluster from the base node by using kubectl with the appropriate context (see the sketch below); when connected to a cluster member via ssh, you can only work on that particular cluster via kubectl
  • For convenience, all environments (the base system and the cluster nodes) have the following additional command-line tools pre-installed and pre-configured:
    • kubectl with the alias k and Bash auto-completion
    • jq for YAML/JSON processing
    • tmux for terminal multiplexing
    • curl and wget for testing web services
    • man and man pages for further documentation
  • Further instructions for connecting to cluster nodes will be provided in the relevant tasks
  • If no explicit namespace is specified, the default namespace should be used
  • If you need to destroy/recreate a resource to complete a task, it is your responsibility to back up the resource definition appropriately before destroying it
  • The CKA, CKS, and CKAD exam environments are aligned with the most recent K8s minor version within approximately 4 to 8 weeks of the K8s release date
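
A minimal sketch of the context checks referenced above (the context name ck8s matches the one used in the tasks below):

    # List all contexts; the current one is marked with '*'
    $ kubectl config get-contexts

    # Print only the current context name
    $ kubectl config current-context

    # Switch to the cluster a task asks for
    $ kubectl config use-context ck8s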

2.2 Resources allowed during the exam

During the exam, candidates may:

  • review the exam content instructions presented in the command-line terminal
  • review files installed by the distribution (i.e. /usr/share and its subdirectories)
  • use https://kubernetes.io/docs, https://github.com/kubernetes, and https://kubernetes.io/blog and their subdomains, including all available language translations, e.g. https://kubernetes.io/zh/docs

2.3 Acceptable testing locations

The following is expected of an acceptable testing location:

  • Clean work area
    • No objects such as paper, writing implements, electronic devices, or other items on the work surface
    • No objects such as paper, trash cans, or other items under the testing surface
  • Clean walls
    • No paper/printouts on the walls
    • Paintings and other wall decorations are acceptable
    • Candidates will be asked to remove non-decorative items before the exam begins
  • Lighting
    • The space must be well lit so that the proctor can see the candidate's face, hands, and surrounding work area
    • No bright lights or windows behind the candidate
  • Other
    • The candidate must stay within camera view for the duration of the exam
    • The space must be private and free of excessive noise. Public spaces such as coffee shops, stores, and open-plan office environments are not allowed.

For more information on the policies, procedures, and rules in force during the exam, see the [Candidate Handbook].

2.4 Additional resources

If you need additional help, log in to https://trainingsupport.linuxfoundation.org with your LF account and use the search bar to find answers to your question, or select a request type from the categories provided.

2.A Exam tips

  • Pay attention to which cluster you are operating on

  • Pay attention to which node you are operating on

  • Pay attention to which namespace you are operating in (quick checks are sketched below)
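
A few one-liners for confirming where you are before changing anything (a minimal sketch):

    # Which cluster/context am I on?
    $ kubectl config current-context

    # Which namespace does the current context default to? (empty output means default)
    $ kubectl config view --minify -o jsonpath='{..namespace}'

    # Which host am I on?
    $ hostname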

3. Practice tasks

Task 1. RBAC - role based access control

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Context:

Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Tasks:

  • Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:
    • Deployment
    • StatefulSet
    • DaemonSet
  • Create a new ServiceAccount named cicd-token in the existing namespace app-team1
  • Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Create the ClusterRole (the trailing s on resource names is optional)

    $ kubectl create clusterrole --help
    
    *$ kubectl create clusterrole deployment-clusterrole \
      --verb=create \
      --resource=Deployment,StatefulSet,DaemonSet
    
  3. Create the ServiceAccount

    *$ kubectl \
      --namespace app-team1 \
      create serviceaccount cicd-token
    
  4. Bind with a RoleBinding

    $ kubectl create rolebinding --help
    
    *$ kubectl create rolebinding cicd-token-deployment-clusterrole \
      --clusterrole=deployment-clusterrole \
      --serviceaccount=app-team1:cicd-token \
      --namespace=app-team1
    
  5. Verify

    $ kubectl describe clusterrole deployment-clusterrole
    Name:         deployment-clusterrole
    Labels:       <none>
    Annotations:  <none>
    PolicyRule:
      Resources          Non-Resource URLs  Resource Names  Verbs
      ---------          -----------------  --------------  -----
      `daemonsets.apps`   []                 []              [`create`]
      `deployments.apps`  []                 []              [`create`]
      `statefulsets.apps` []                 []              [`create`]
    
    $ kubectl -n app-team1 get serviceaccounts
    NAME         SECRETS   AGE
    `cicd-token` 1         16m
    default      1         18m
    
    $ kubectl -n app-team1 get rolebindings
    NAME                                ROLE                                 AGE
    `cicd-token-deployment-clusterrole`   ClusterRole/deployment-clusterrole   11m
    
    $ kubectl -n app-team1 describe rolebindings cicd-token-deployment-clusterrole
    Name:         cicd-token-deployment-clusterrole
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind: `ClusterRole`
      Name: `deployment-clusterrole`
    Subjects:
      Kind            Name          Namespace
      ----            ----          ---------
      ServiceAccount  `cicd-token`  `app-team1`
      
    $ kubectl -n app-team1 \
      auth can-i create deployment \
      --as system:serviceaccount:app-team1:cicd-token
    `yes`
    
    $ kubectl \
      auth can-i create deployment \
      --as system:serviceaccount:app-team1:cicd-token
    `no`
    

Task 2. drain - highly-available

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

Set the node named k8s-worker1 as unavailable and reschedule all pods running on it

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Check node status

    $ kubectl get nodes
    k8s-master    Ready    control-plane   9d    v1.27.1
    k8s-worker1   Ready    <none>          9d    v1.27.1
    k8s-worker2   Ready    <none>          9d    v1.27.1
    
  3. Evict the pods and mark the node unschedulable

    $ kubectl drain k8s-worker1
    node/k8s-worker1 cordoned
    error: unable to drain node "k8s-worker1" due to error:[cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56, cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt], continuing command...
    There are pending nodes to be drained:
     k8s-worker1
    cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56
    cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt
    
    *$ kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data
    
  4. Verify

    $ kubectl get nodes
    NAME          STATUS                     ROLES                  AGE   VERSION
    k8s-master    Ready                      control-plane,master   84m   v1.27.1
    k8s-worker1   Ready,`SchedulingDisabled` <none>                 79m   v1.27.1
    k8s-worker2   Ready                      <none>                 76m   v1.27.1
    
    $ kubectl get pod -A -owide | grep worker1
    kube-system	`calico-node-j6r9s`	1/1	Running	1 (9d ago)	9d	192.168.147.129	k8s-worker1	<none>	<none>
    kube-system	`kube-proxy-psz2g`	1/1	Running	1 (9d ago)	9d	192.168.147.129	k8s-worker1	<none>	<none>
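
    Note: drain leaves the node cordoned (STATUS shows SchedulingDisabled); Task 17 builds on this state. If you later need the node to accept pods again, uncordon it:

    $ kubectl uncordon k8s-worker1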
    

Task 3. upgrade - Kubeadm

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Tasks:

  • The existing Kubernetes cluster is running version 1.28.1. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.28.2

  • Also upgrade kubelet and kubectl on the master node

Be sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other addons

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Check node status

    *$ kubectl get nodes
    NAME          STATUS                     ROLES           AGE    VERSION
    `k8s-master`  Ready                      control-plane   6d2h  `v1.28.1`
    ...
    
  3. Log in to k8s-master

    *$ ssh root@k8s-master
    
  4. Run "kubeadm upgrade"; for the first control-plane node, upgrade kubeadm first:

    apt update
    apt-cache madison kubeadm
    
    
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.28.2-00 && \
    apt-mark hold kubeadm
    
    
  5. Verify the upgrade plan:

    kubeadm version
    
    kubeadm upgrade plan
    
    
  6. Choose the target version to upgrade to and run the appropriate command, e.g.:

    kubeadm upgrade apply v1.28.2 \
      --etcd-upgrade=false
    :<<EOF
    ...
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: `y`
    ...
    EOF
    
  7. Drain the node

    kubectl drain k8s-master --ignore-daemonsets
    
    
  8. Upgrade kubectl and kubelet

    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.28.2-00 kubectl=1.28.2-00 && \
    apt-mark hold kubelet kubectl
    
    
  9. Restart the kubelet

    systemctl daemon-reload 
    systemctl restart kubelet
    
    
  10. Uncordon the node

    kubectl uncordon k8s-master
    

    Ctrl-D

  11. Verify the result

    $ kubectl get nodes
    NAME          STATUS                     ROLES                  AGE    VERSION
    k8s-master    Ready                      control-plane,master   157m  `v1.28.2`
    k8s-worker1   Ready,SchedulingDisabled   <none>                 152m   v1.28.1
    k8s-worker2   Ready                      <none>                 149m   v1.28.1
    
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2",....
    
    $ kubelet --version
    Kubernetes v1.28.2
    

Task 4. snapshot - Implement etcd

Task weight: 7%

Tasks:

  • First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /srv/backup/etcd-snapshot.db
Creating a snapshot of the given instance is expected to complete within seconds. If the operation appears to hang, something is likely wrong with the command. Use CTRL+C to cancel the operation, then try again.
  • Then, restore the existing, previous snapshot located at /srv/data/etcd-snapshot-previous.db
The following TLS certificates and key are provided for connecting to the server with etcdctl:
  • CA certificate: /opt/KUIN00601/ca.crt
  • Client certificate: /opt/KUIN00601/etcd-client.crt
  • Client key: /opt/KUIN00601/etcd-client.key

Tips

  • Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot
  • In the real exam, this task runs on a separate cluster
  • In the practice environment, do this etcd snapshot task on its own

Reference answer

  1. Backup command

    $ ETCDCTL_API=3 etcdctl snapshot save --help
    
    *$ ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/opt/KUIN00601/ca.crt \
      --cert=/opt/KUIN00601/etcd-client.crt \
      --key=/opt/KUIN00601/etcd-client.key \
      snapshot save /srv/backup/etcd-snapshot.db
    
    
  2. Restore commands

    *$ sudo mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bk
    
    *$ kubectl get pod -A
    The connection to the server 192.168.147.128:6443 was refused - did you specify the right host or port?
    
    $ sudo grep data /etc/kubernetes/manifests.bk/etcd.yaml
    *$ sudo mv /var/lib/etcd /var/lib/etcd.bk
    
    *$ sudo chown $USER /srv/data/etcd-snapshot-previous.db
    
    *$ sudo ETCDCTL_API=3 etcdctl \
      --data-dir /var/lib/etcd \
      snapshot restore /srv/data/etcd-snapshot-previous.db
    
    *$ sudo mv /etc/kubernetes/manifests.bk /etc/kubernetes/manifests
    
  3. Verify

    $ ETCDCTL_API=3 etcdctl snapshot status /srv/backup/etcd-snapshot.db
    89703627, 14521, 1929, 4.3 MB
    
    $ kubectl get componentstatuses
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS    MESSAGE                         ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true","reason":""}
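
    If the cluster seems slow to come back after the restore, the same certificates used for the backup can drive a direct health probe (a sketch):

    $ ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/opt/KUIN00601/ca.crt \
      --cert=/opt/KUIN00601/etcd-client.crt \
      --key=/opt/KUIN00601/etcd-client.key \
      endpoint health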
    

Task 5. network policy - network interface

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Tasks:

  • Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 8080 of other Pods in namespace echo
  • Ensure that the new NetworkPolicy:
    • does not allow access to Pods that are not listening on port 8080
    • does not allow access from Pods that are not in namespace internal

Tips

  • Be sure to determine whether the rule is ingress or egress
  • In the real exam, the namespace in question may not exist yet

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. In the exam, check whether namespace internal exists

    $ kubectl create ns internal
    
    $ kubectl get ns internal --show-labels
    internal    Active   11s   
    
    $ kubectl label ns internal kubernetes.io/metadata.name=internal
    
    $ kubectl get ns internal --show-labels
    internal    Active   11s   kubernetes.io/metadata.name=internal
    
  3. Check the labels

    *$ kubectl get ns internal --show-labels
    NAME       STATUS   AGE     LABELS
    internal   Active   3h59m   kubernetes.io/metadata.name=internal
    
  4. Edit the YAML

    *$ sudo tee -a /etc/vim/vimrc <<EOF
    set number ts=2 et cuc
    EOF
    
    $ vim 5.yml
    
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
    # name: test-network-policy
      name: allow-port-from-namespace 
    # namespace: default
      namespace: echo
    spec:
    # podSelector:
      podSelector: {}
    #   matchLabels:
    #     role: db
      policyTypes:
      - Ingress
    # - Egress
      ingress:
      - from:
    #   - ipBlock:
    #       cidr: 172.17.0.0/16
    #       except:
    #       - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: internal
    #   - podSelector:
    #       matchLabels:
    #         role: frontend
        ports:
        - protocol: TCP
    #     port: 6379
          port: 8080
    # egress:
    # - to:
    #   - ipBlock:
    #       cidr: 10.0.0.0/24
    #   ports:
    #   - protocol: TCP
    #     port: 5978
    
  5. Apply

    $ kubectl create ns echo
    
    *$ kubectl apply -f 5.yml
    
  6. Verify

    $ kubectl -n echo describe networkpolicies allow-port-from-namespace
    Name:         allow-port-from-namespace
    Namespace:    echo
    Created on:   YYYY-mm-dd HH:MM:ss +0800 CST
    Labels:       <none>
    Annotations:  <none>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: 8080/TCP
        From:
          NamespaceSelector: kubernetes.io/metadata.name=internal
      Not affecting egress traffic
      Policy Types: Ingress
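
    A functional check is also possible (a sketch; <echo-pod-ip> stands for the IP of a pod in namespace echo that listens on 8080, and busybox wget is used as the probe):

    # From namespace internal: should succeed
    $ kubectl -n internal run test --rm -it --image=busybox --restart=Never \
      -- wget -qO- -T 2 <echo-pod-ip>:8080

    # From the default namespace: should time out
    $ kubectl run test --rm -it --image=busybox --restart=Never \
      -- wget -qO- -T 2 <echo-pod-ip>:8080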
    

Task 6. expose - service types

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task

  • Reconfigure the existing deployment front-end: add a port specification named http that exposes port 80/tcp of the existing container nginx
  • Create a new service named front-end-svc that exposes the container port http
  • Configure this service to expose the individual pods via a NodePort on the nodes on which they are scheduled

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Check the deployment

    $ kubectl get deployments front-end
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE
    `front-end` 1/1     1            1           10m
    
  3. Look up how to write the ports spec (optional); searching the online docs is recommended

    $ kubectl explain --help
    
    $ kubectl explain pod.spec.containers
    
    $ kubectl explain pod.spec.containers.ports
    
    $ kubectl explain deploy.spec.template.spec.containers.ports
    
  4. Edit deployment front-end

    *$ kubectl edit deployments front-end
    
    ...
      template:
    ...
        spec:
          containers:
          - image: nginx
            # add the following 3 lines
            ports:
            - name: http
              containerPort: 80
    ...
    
    $ kubectl get deployments front-end
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE
    front-end  `1/1`    1            1           12m
    
  5. Create the Service (kubectl expose)

    $ kubectl expose -h
    
    *$ kubectl expose deployment front-end \
       --port=80 --target-port=http \
       --name=front-end-svc \
       --type=NodePort
    
    

    Create the Service via YAML (recommended)

    *$ kubectl get deployments front-end --show-labels
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
    front-end   0/1     1            0           37s  `app=front-end`
    
    *$ vim 6.yml
    
    apiVersion: v1
    kind: Service
    metadata:
    # name: my-service
      name: front-end-svc
    spec:
      # required by the task (confirm)
      type: NodePort
      selector:
    #   app: MyApp
        app: front-end
      ports:
        - port: 80
          targetPort: http
    
    *$ kubectl apply -f 6.yml
    
  6. Verify

    $ kubectl get services front-end-svc
    NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    front-end-svc  `NodePort`  10.106.46.251   <none>        80:`32067`/TCP   39s
    
    $ curl k8s-worker1:32067
    ...
    <title>Welcome to nginx!</title>
    ...
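
    To read the assigned NodePort programmatically instead of from the table (a sketch):

    $ kubectl get services front-end-svc \
      -o jsonpath='{.spec.ports[0].nodePort}'
    32067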
    

Task 7. ingress nginx

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a new nginx ingress resource as follows:
  • name: ping
  • namespace: ing-internal
  • Expose service hi on path /hi, using service port 5678
The availability of service hi can be checked with the following command, which returns hi:
$ curl -kL <INTERNAL_IP>/hi

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Install the ingress-nginx controller (ingressclass)

    In the exam

    Tip

    • https://kubernetes.github.io/ingress-nginx/deploy/
    *$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
    

    If kubernetes.github.io is not accessible, the workaround:

    • https://github.com/kubernetes/ingress-nginx

    In the practice environment

    *$ kubectl apply -f http://k8s.ruitong.cn:8080/K8s/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
    
  3. Verify the installation

    *$ kubectl get ingressclasses
    NAME    CONTROLLER             PARAMETERS   AGE
    `nginx` k8s.io/ingress-nginx   <none>       9s
    
    *$ kubectl get pod -A | grep ingress
    ingress-nginx  ingress-nginx-admission-create-w2h4k        0/1  Completed  0  92s
    ingress-nginx  ingress-nginx-admission-patch-k6pgk         0/1  Completed  1  92s
    ingress-nginx  `ingress-nginx-controller-58b94f55c8-gl7gk` 1/1 `Running`   0  92s
    
  4. Edit the YAML file

    *$ vim 7.yml
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
    # name: minimal-ingress
      name: ping
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    # add this line
      namespace: ing-internal
    spec:
    # ingressClassName: nginx-example
      ingressClassName: nginx
      rules:
      - http:
          paths:
    #     - path: /testpath
          - path: /hi
            pathType: Prefix
            backend:
              service:
    #           name: test
                name: hi
                port:
    #             number: 80
                  number: 5678
    
  5. Apply

    *$ kubectl apply -f 7.yml
    
    $ kubectl -n ing-internal get ingress
    NAME   CLASS   HOSTS   ADDRESS   PORTS   AGE
    `ping` nginx   *                 80      11m
    
  6. Verify

    *$ kubectl get pods -A -o wide \
      | grep ingress
    ...
    ingress-nginx  `ingress-nginx-controller`-769f969657-4zfjv  1/1  Running  0  12m   `172.16.126.15`  k8s-worker2  <none>  <none>
    
    *$ curl 172.16.126.15/hi
    hi
    

Task 8. scale - scale applications

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Scale deployment webserver to 6 pods

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Check the deployment

    $ kubectl get deployments webserver 
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE
    webserver  `1/1`    1            1           30s
    
  3. Scale the replicas
    Option A: edit

    *$ kubectl edit deployments webserver
    
    ...
    spec:
      progressDeadlineSeconds: 600
    # replicas: 1
      replicas: 6
    ...
    

    Option B: scale

    $ kubectl scale deployment webserver --replicas 6
    
  4. Verify

    $ kubectl get deployments webserver -w
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE
    webserver  `6/6`    6            6           120s
    <Ctrl-C>
    

Task 9. schedule - Pod scheduling

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task

  • Schedule a pod as follows:
  • name: nginx-kusc00401
  • image: nginx
  • Node selector: disk=spinning

Tip

  • Search the official docs for nodeselector: "Assign Pods to Nodes | Kubernetes"

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Create the pod

    *$ vim 9.yml
    
    apiVersion: v1
    kind: Pod
    metadata:
    # name: nginx
      name: nginx-kusc00401
    spec:
      containers:
      - name: nginx
        # as required by the task
        image: nginx
      nodeSelector:
    #   disktype: ssd
        disk: spinning
    
  3. Apply

    *$ kubectl apply -f 9.yml
    
  4. Verify

    $ kubectl get pod nginx-kusc00401 -o wide -w
    NAME           READY  STATUS   RESTARTS  AGE  IP             NODE           NOMINATED NODE   READINESS GATES
    nginx-kusc00401  1/1  Running  0         11s  172.16.126.30  `k8s-worker2`  <none>           <none>
    <Ctrl-C>
    
    $ kubectl get nodes -l disk=spinning
    NAME          STATUS   ROLES    AGE   VERSION
    `k8s-worker2`   Ready    <none>   9d    v1.27.1
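
    In the practice environment, if no node carries the label yet, it can be added by hand (in the exam the label already exists):

    $ kubectl label node k8s-worker2 disk=spinning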
    

Task 10. describe - Pod scheduling

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Check how many nodes are ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt

Tip

  • Look carefully at whether each taint is NoSchedule

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Check node status

    $ kubectl get nodes
    k8s-master   Ready                     control-plane  9d  v1.27.2
    k8s-worker1  Ready,SchedulingDisabled  <none>         9d  v1.27.1
    k8s-worker2  Ready                     <none>         9d  v1.27.1
    
  3. Check which nodes have taints

    *$ kubectl describe nodes | grep -i taints
    Taints:  node-role.kubernetes.io/control-plane:`NoSchedule`
    Taints:  node.kubernetes.io/unschedulable:`NoSchedule`
    Taints:  <none>
    
  4. Write the result

    *$ echo 1 > /opt/KUSC00402/kusc00402.txt
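
    The value 1 is worked out from the output above: 3 Ready nodes minus 2 carrying a NoSchedule taint. A rough cross-check of the two counts (a sketch; grep -w matches Ready but not NotReady):

    $ kubectl get nodes --no-headers | grep -cw Ready
    3
    $ kubectl describe nodes | grep -i taints | grep -c NoSchedule
    2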
    

Task 11. multi Containers

Task weight: 4%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Create a pod named kucc1 that runs one app container for each of the following images (there may be 1 to 4 images); the container names and images are:

nginx + redis + memcached + consul

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Create the pod

    *$ vim 11.yml 
    
    apiVersion: v1
    kind: Pod
    metadata:
    # name: myapp-pod
      name: kucc1
    spec:
      containers:
    # - name: myapp-container
      - name: nginx
    #   image: busybox:1.28
        image: nginx
    # add the remaining containers
      - name: redis
        image: redis
      - name: memcached
        image: memcached
      - name: consul
        image: consul
    
  3. Apply

    *$ kubectl apply -f 11.yml
    
  4. Verify

    $ kubectl get pod kucc1 -w
    NAME    READY   STATUS    RESTARTS   AGE
    kucc1  `4/4`    Running   0          77s
    

    Ctrl-C
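
    To double-check the container names (a sketch):

    $ kubectl get pod kucc1 \
      -o jsonpath='{.spec.containers[*].name}'
    nginx redis memcached consul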

Task 12. pv - persistent volumes

Task weight: 4%
Task:

  • Create a persistent volume named app-data with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-data

Reference answer

  1. Edit the YAML

    *$ vim 12.yml
    
    apiVersion: v1
    kind: PersistentVolume
    metadata:
    # name: task-pv-volume
      name: app-data
    spec:
    # storageClassName: manual
      capacity:
    #   storage: 10Gi
        storage: 1Gi
      accessModes:
    #   - ReadWriteOnce
        - ReadWriteMany
      hostPath:
    #   path: "/mnt/data"
        path: "/srv/app-data"
    # add this line
        type: DirectoryOrCreate
    
  2. Apply

    *$ kubectl apply -f 12.yml
    
  3. Verify

    $ kubectl get pv
    NAME        CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM  STORAGECLASS  REASON   AGE
    `app-data` `1Gi`      `RWX`         Retain          Available                                   4s
    

Task 13. pvc - persistent volume claims

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Tasks:

  • Create a new PersistentVolumeClaim:

    • name: pv-volume
    • class: csi-hostpath-sc
    • capacity: 10Mi
  • Create a new pod that mounts the PersistentVolumeClaim as a volume:

    • name: web-server
    • image: nginx
    • mount path: /usr/share/nginx/html
  • Configure the new pod to have ReadWriteOnce access to the volume.

  • Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record this change.

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Create the PVC

    *$ vim 13pvc.yml
    
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
    # name: claim1
      name: pv-volume
    spec:
      accessModes:
        - ReadWriteOnce
    # storageClassName: fast
      storageClassName: csi-hostpath-sc
      resources:
        requests:
    #     storage: 30Gi
          storage: 10Mi
    
    *$ kubectl apply -f 13pvc.yml
      
    $ kubectl get pvc
    NAME       STATUS   VOLUME                                     CAPACITY  ACCESS MODES   STORAGECLASS      AGE
    pv-volume  `Bound`   pvc-89935613-3af9-4193-9a68-116067cf1a34  10Mi      RWO            csi-hostpath-sc   6s
      
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS      REASON   AGE
    app-data                                   1Gi        RWX            Retain           Available                                                  72m
    pvc-89935613-3af9-4193-9a68-116067cf1a34   10Mi       RWO            Delete          `Bound`      default/pv-volume   csi-hostpath-sc            39s
    
  3. Create the pod

    *$ vim 13pod.yml
    
    apiVersion: v1
    kind: Pod
    metadata:
    # name: task-pv-pod
      name: web-server
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
    #       claimName: task-pv-claim
            claimName: pv-volume
      containers:
    #   - name: task-pv-container
        - name: web-server
          image: nginx
    #     ports:
    #       - containerPort: 80
    #         name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
    
    *$ kubectl apply -f 13pod.yml
    pod/web-server created
    
    $ kubectl get pod web-server
    NAME         READY   STATUS    RESTARTS   AGE
    web-server   1/1    `Running`  0          9s
    
  4. Allow volume expansion

    *$ kubectl edit storageclasses csi-hostpath-sc
    ...
    # add this line
    allowVolumeExpansion: true
      
    $ kubectl get storageclasses -A 
    NAME             PROVISIONER     RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
    csi-hostpath-sc  k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           `true`                   5m51s
    
  5. Expand the capacity and record the change

    *$ kubectl edit pvc pv-volume --record
    
    ...
    spec:
    ...
    #     storage: 10Mi
          storage: 70Mi
    ...
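
    The same change can be made with kubectl patch (a sketch; --record is deprecated in newer releases but is what "record this change" asks for):

    $ kubectl patch pvc pv-volume --record \
      -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'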
    
  6. Verify (the practice environment may still display 10Mi; the exam environment displays the new value)

    $ kubectl get pvc
    NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
    pv-volume  Bound    pvc-9a5fb9b6-b127-4868-b936-cb4f17ef910e  `70Mi`      RWO            csi-hostpath-sc   31m
    
    $ kubectl describe pvc pv-volume | grep record
    

Task 14. logs - node logging

Task weight: 5%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Monitor the logs of pod bar:
  • extract the log lines corresponding to the error unable-to-access-website
  • write those log lines to /opt/KUTR00101/bar

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Extract the logs

    *$ kubectl logs bar \
        | grep unable-to-access-website \
        > /opt/KUTR00101/bar
    
  3. Verify

    $ cat /opt/KUTR00101/bar
    YYYY-mm-dd 07:13:03,618: ERROR `unable-to-access-website`
    

Task 15. sidecar - Manage container stdout

Task weight: 7%
Set configuration context:

$ kubectl config use-context ck8s

Context:

Without changing its existing containers, an existing pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to implement this requirement

Tasks:

  • Add a busybox sidecar container named sidecar to the existing pod big-corp-app. The new sidecar container must run the following command:

      /bin/sh -c tail -f /var/log/legacy-app.log
    
  • Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container

Do not change the existing containers.
Do not modify the path of the log file; both containers must access it at /var/log/legacy-app.log.

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s 
    
  2. Edit the YAML

    *$ kubectl get pod big-corp-app -o yaml > 15.yml
    
    *$ vim 15.yml
    
    ...
    spec:
      containers:
    ...
        volumeMounts:
        # existing container: add these 2 lines
        - name: logs
          mountPath: /var/log
      # new container: add these 6 lines
      - name: sidecar
        image: busybox
        args: [/bin/sh, -c, 'tail -f /var/log/legacy-app.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log
    ...
      volumes:
      # add these 2 lines
      - name: logs
        emptyDir: {}
    ...
    

    Either delete and re-create the pod:

    *$ kubectl delete -f 15.yml --grace-period=0 --force
    *$ kubectl apply -f 15.yml
    

    or replace it in one step:

    *$ kubectl replace -f 15.yml --grace-period=0 --force
    
  3. Verify

    $ kubectl get pod big-corp-app -w
    NAME           READY   STATUS    RESTARTS   AGE
    big-corp-app  `2/2`    Running   1          37s
    
    $ kubectl logs -c sidecar big-corp-app
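
    And to confirm the sidecar can read the shared file (a sketch):

    $ kubectl exec big-corp-app -c sidecar \
      -- tail -n 2 /var/log/legacy-app.log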
    

Task 16. top - monitor applications

Task weight: 5%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • Using the pod label name=cpu-loader, find the running pod with the highest CPU consumption and write its name to the file /opt/KUTR00401/KUTR00401.txt (the file already exists)

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s
    
  2. Find the pod

    $ kubectl top pod -h
    
    *$ kubectl top pod -l name=cpu-loader -A
    NAMESPACE   NAME                          CPU(cores)   MEMORY(bytes)
    default    `bar`                         `1m`          5Mi
    default     cpu-loader-5b898f96cd-56jf5   0m           3Mi
    default     cpu-loader-5b898f96cd-9zlt5   0m           4Mi
    default     cpu-loader-5b898f96cd-bsvsb   0m           4Mi           
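
    kubectl top can also sort the output for you (a sketch):

    $ kubectl top pod -l name=cpu-loader -A --sort-by=cpu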
    
  3. Write the result

    *$ echo bar > /opt/KUTR00401/KUTR00401.txt
    

Task 17. Daemon - cluster component

Task weight: 13%
Set configuration context:

$ kubectl config use-context ck8s

Task:

  • A Kubernetes worker node named k8s-worker1 is in NotReady state. Investigate why this is the case and take appropriate measures to bring the node back to Ready state, ensuring that any changes are made permanent.
You can connect to the failed node via ssh with:
$ ssh k8s-worker1
You can gain elevated privileges on that node with:
$ sudo -i

Tips

  • This task is related to Task 2
  • kiosk@k8s-master:~$ cka-setup 17
  • kiosk@k8s-master:~$ sshpass -p vagrant ssh k8s-worker1 sudo systemctl disable --now kubelet

Reference answer

  1. Switch to the correct Kubernetes cluster

    *$ kubectl config use-context ck8s

    *$ kubectl config use-context ck8s
    
  2. Check node status

    $ kubectl get nodes
    NAME          STATUS                     ROLES                  AGE   VERSION
    k8s-master    Ready                      control-plane,master   43d   v1.27.1
    k8s-worker1  `NotReady`                  <none>                 43d   v1.27.1
    
    *$ kubectl describe nodes k8s-worker1
    ...
    Conditions:
      Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
      ----                 ------    -----------------                 ------------------                ------              -------
      NetworkUnavailable   False     Tue, 31 May YYYY 11:25:06 +0000   Tue, 31 May YYYY 11:25:06 +0000   CalicoIsUp          Calico is running on this node
      MemoryPressure       Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
      DiskPressure         Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
      PIDPressure          Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
      Ready                Unknown   Tue, 31 May YYYY 13:51:08 +0000   Tue, 31 May YYYY 13:53:42 +0000  `NodeStatusUnknown   Kubelet stopped posting node status.`
    ...
    
  3. Start the service

    *$ ssh k8s-worker1
    
    *$ sudo -i
    
    *# systemctl enable --now kubelet.service
    # systemctl status kubelet
    

    q to quit status

    Ctrl-D to exit sudo

    Ctrl-D to exit ssh
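
    If kubelet is already enabled and running, check the container runtime as well; the grading script (Task17. - Daemon) hints that containerd or docker can be the failed component in variants of this task:

    # systemctl status containerd
    # systemctl enable --now containerd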

  4. Verify

    *$ kubectl get nodes
    NAME          STATUS                     ROLES                  AGE   VERSION
    k8s-master    Ready                      control-plane,master   43d   v1.27.1
    k8s-worker1  `Ready`,SchedulingDisabled  <none>                 43d   v1.27.1
    

A. Appendix

A1. Grading script

  • Grade all tasks
$ cka-grade

 Spend Time: up 1 hours, 1 minutes  Wed 01 Jun YYYY 04:58:06 PM UTC
==================================================================
 PASS	Task1.  - RBAC
 PASS	Task2.  - drain
 PASS	Task3.  - upgrade
 PASS	Task4.  - snapshot
 PASS	Task5.  - network-policy
 PASS	Task6.  - service
 PASS	Task7.  - ingress-nginx
 PASS	Task8.  - replicas
 PASS	Task9.  - schedule
 PASS	Task10. - NoSchedule
 PASS	Task11. - multi_pods
 PASS	Task12. - pv
 PASS	Task13. - Dynamic-Volume
 PASS	Task14. - logs
 PASS	Task15. - Sidecar
 PASS	Task16. - Metric
 PASS	Task17. - Daemon (kubelet, containerd, docker)
==================================================================
 The results of your CKA v1.27:  `PASS`	 Your score: `100`
  • Grade a single task
$ cka-grade 1

 Spend Time: up 1 hours, 2 minutes  Wed 01 Jun YYYY 04:58:14 PM UTC
===================================================================
 `PASS`	Task1.  - RBAC
===================================================================
 The results of your CKA v1.27:  FAIL	 Your score: 4
