My CKA Certification Journey (November 2021)

CKA Certification

CKA Exam Preparation

  • You need a reliable internet connection (through a proxy/VPN if necessary); the exam's system check requires at least 500 Kbps download and 256 Kbps upload.
  • A laptop with a built-in camera is sufficient.
  • Install the Innovative Exams Screensharing extension for Chrome.
  • Do as many CKA practice questions as possible; the real exam is essentially the same questions with slightly different parameters.
  • Most important: you may consult the official documentation during the exam. Prepare bookmarks in advance so each page is one click away; only one additional browser tab is allowed.

CKA Exam Questions (English)

  • Question 1
**Set configuration context $kubectl config use-context k8s. Monitor the logs of Pod foobar and Extract log lines corresponding to error unable-to-access-website . Write them to /opt/KULM00612/foobar.**


Analysis: read a Pod's logs and save the matching lines to a file.

Solution:
First switch to the k8s context:
kubectl config use-context k8s
[root@k8s-node1 ~]# kubectl logs foobar | grep "unable-to-access-website" > /opt/KULM00612/foobar
  • Question 2
**Set configuration context $kubectl config use-context k8s. List all PVs sorted by capacity, saving the full kubectl output to /opt/KUCC0006/my_volumes. Use kubectl own functionally for sorting the output, and do not manipulate it any further**

Analysis: sort the PVs by capacity using kubectl's built-in sorting and save the full output to the given file.

Solution:
[root@k8s-node1 pv]# kubectl get pv --sort-by=.spec.capacity.storage > /opt/cka-tak.txt

Then verify that the file contains data:
[root@k8s-node1 pv]# cat  /opt/cka-tak.txt
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Recycle          Available           mynfs                   4m12s
[root@k8s-node1 pv]# 
  • Question 3
**Set configuration context $kubectl config use-context k8s. Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster where nginx also represents the image name which has to be used. Do no override any taints currently in place. Use Daemonset to complete this task and use ds.kusc00612 as Daemonset name**


Solution: adapt the DaemonSet example from the official docs:
https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/

[root@k8s-node1 ~]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds.kusc00612
  labels:
    k8s-app: ds.kusc00612
spec:
  selector:
    matchLabels:
      name: ds.kusc00612
  template:
    metadata:
      labels:
        name: ds.kusc00612
    spec:
      containers:
      - name: nginx
        image: nginx
        
Apply the YAML and verify that the DaemonSet is running:
[root@k8s-node1 ~]# kubectl  apply -f daemonset.yaml
daemonset.apps/ds.kusc00612 created
[root@k8s-node1 ~]# kubectl  get  daemonset
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds.kusc00612   1         1         1       1            1                     2m14s
  • Question 4
Set configuration context $kubectl config use-context k8s Perform the following tasks: Add an init container to lumpy-koala(which has been defined in spec file /opt/kucc00100/pod-specKUCC00612.yaml). The init container should create an empty file named /workdir/calm.txt. If /workdir/calm.txt is not detected, the Pod should exit. Once the spec file has been updated with the init container definition, the Pod should be created
Analysis: the Pod spec already exists at /opt/kucc00100/pod-specKUCC00612.yaml, but the object has not been created in the cluster yet (check first with kubectl get po | grep <name>). Edit the YAML: add an initContainer that creates /workdir/calm.txt, mount /workdir as an emptyDir volume shared by both containers (the task does not prescribe a volume type), and add a liveness check on the main container so the Pod exits if the file is not detected. Then apply the file to create the Pod.

Solution (see the official docs):
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#using-init-containers
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
[root@k8s-node1 ~]# vim  init-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    volumeMounts:
    - mountPath: /workdir
      name: cache-volume
    livenessProbe:
      exec:
        command: ['cat', '/workdir/calm.txt']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'touch /workdir/calm.txt']
    volumeMounts:
    - mountPath: /workdir
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

[root@k8s-node1 ~]# kubectl  apply -f  init-pod.yaml 
pod/myapp-pod created
  • Question 5
Set configuration context $kubectl config use-context k8s. Create a pod named kucc6 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul.

Solution (docs reference):
https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/

[root@k8s-node1 ~]# vim images.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: kucc6
  labels:
    env: kucc6
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

Verify:
[root@k8s-node1 ~]# kubectl  apply -f images.yaml 
pod/kucc6 created
  • Question 6
Set configuration context $kubectl config use-context k8s Schedule a Pod as follows: Name: nginxkusc00612 Image: nginx Node selector: disk=ssd
Analysis: create a Pod named nginxkusc00612 with image nginx and schedule it onto a node labelled disk=ssd using nodeSelector.

Solution: search the docs for nodeSelector and adapt the example YAML.
Reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/
# First check which labels the nodes carry and confirm the required label exists:
[root@k8s-node1 ~]# kubectl  get nodes   --show-labels 
[root@k8s-node1 ~]# vim nodeSelector.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginxkusc00612
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd

[root@k8s-node1 ~]# kubectl  apply -f nodeSelector.yaml 
pod/nginxkusc00612 created

[root@k8s-node1 ~]# kubectl  get pods nginxkusc00612 
NAME             READY   STATUS    RESTARTS   AGE
nginxkusc00612   1/1     Running   0          3m
  • Question 7
Set configuration context $kubectl config use-context k8s. Create a deployment as follows: Name: nginxapp Using container nginx with version 1.11.9-alpine. The deployment should contain 3 replicas. Next, deploy the app with new version 1.12.0-alpine by performing a rolling update and record that update.Finally,rollback that update to the previous version 1.11.9-alpine.
Analysis: create the Deployment, perform a rolling update to the new image (recording it), then roll back to the previous version.
Search the docs for Deployment.

Solution: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
[root@k8s-node1 ~]# kubectl  create deployment  nginxapp --image=nginx:1.11.9-alpine
deployment.apps/nginxapp created
[root@k8s-node1 ~]# kubectl  scale     deployment   nginxapp  --replicas=3
deployment.apps/nginxapp scaled

[root@k8s-node1 ~]# kubectl set image  deployment/nginxapp nginx=nginx:1.12.0-alpine --record=true
deployment.apps/nginxapp image updated
[root@k8s-node1 ~]# kubectl  rollout  undo   deployment.apps/nginxapp 
  • Question 8
Set configuration context $kubectl config use-context k8s Create and configure the service front-endservice so it’s accessible through NodePort/ClusterIp and routes to the existing pod named nginxkusc00612
Analysis: create a Service that routes to the existing Pod nginxkusc00612.

Solution: https://kubernetes.io/zh/docs/tasks/access-application-cluster/connecting-frontend-backend/

# This attempt was not verified successfully: the Service got no Endpoints, because the selector below (app: front-end) does not match the labels on Pod nginxkusc00612.
[root@k8s-node1 ~]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: front-endservice
spec:
  selector:
    app: front-end
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
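Since Endpoints only appear when the selector matches the Pod's labels, a simpler route (a sketch, assuming Pod nginxkusc00612 still carries the env: test label from Question 6) is to expose the Pod directly and let kubectl generate a matching selector:

[root@k8s-node1 ~]# kubectl expose pod nginxkusc00612 --name=front-endservice --port=80 --target-port=80 --type=NodePort
[root@k8s-node1 ~]# kubectl get endpoints front-endservice   # should now list the Pod IP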

  • Question 9
Set configuration context $kubectl config use-context k8s Create a Pod as follows: Name: jenkins Using image: jenkins In a new Kubernetes namespace named pro-test
Analysis: create a jenkins Pod in a new namespace.

Solution:
First check whether the pro-test namespace exists (create it with kubectl create ns pro-test if it does not):
[root@k8s-node1 ~]# kubectl  get namespaces  pro-test 

Then create the Pod:
[root@k8s-node1 ~]# kubectl    run   jenkins   --image=jenkins  --namespace=pro-test 

Then verify:
[root@k8s-node1 ~]# kubectl  get pods -n pro-test 
  • Question 10
Set configuration context $kubectl config use-context k8s Create a deployment spec file that will: Launch 7 replicas of the redis image with the label : app_enb_stage=dev Deployment name: kual00612 Save a copy of this spec file to /opt/KUAL00612/deploy_spec.yaml (or .json) When you are done,clean up(delete) any new k8s API objects that you produced during this task

Analysis: create a Deployment named kual00612 with 7 replicas of the redis image and the label app_enb_stage=dev, save the spec file to the given path, and afterwards delete every object created for this task.

Solution: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[root@k8s-node1 ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kual00612
  labels:
    app_enb_stage: dev
spec:
  replicas: 7
  selector:
    matchLabels:
      app: kual00612
  template:
    metadata:
      labels:
        app: kual00612
        app_enb_stage: dev
    spec:
      containers:
      - name: redis
        image: redis

Verify that the Pods are running:
[root@k8s-node1 ~]# kubectl  get pods |grep kual00612|grep  Running|wc -l

Then copy the YAML to /opt/KUAL00612/deploy_spec.yaml and clean up the objects when done (kubectl delete -f deployment.yaml):
cat deployment.yaml >  /opt/KUAL00612/deploy_spec.yaml
  • Question 11
Set configuration context $kubectl config use-context k8s Create a file /opt/KUCC00612/kucc00612.txt that lists all pods that implement Service foo in Namespace production. The format of the file should be one pod name per line.
Analysis: find the Pods selected by Service foo in namespace production and write their names, one per line, to the file.

kubectl -n production describe svc foo   # check the Service's label selector (assumed below to be name=foo)
kubectl -n production get pods -l name=foo | grep -v NAME | awk '{print $1}' > /opt/KUCC00612/kucc00612.txt
  • Question 12
Set configuration context $kubectl config use-context k8s Create a Kubernetes Secret as follows: Name: super-secret credential: blob, Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets. Create a second Pod named pod-secretsvia-env using the redis image, which exports credential as 
Analysis: create the Secret, then consume it in one Pod via a volume mounted at /secrets and in a second Pod via an environment variable.

Solution: https://kubernetes.io/zh/docs/concepts/configuration/secret/
[root@k8s-node1 ~]#  echo -n blob | base64
YmxvYg==
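The notes stop here; below is a minimal sketch of the three objects (untested against the exam environment; the environment variable name is an assumption, since the task text is truncated):

apiVersion: v1
kind: Secret
metadata:
  name: super-secret
data:
  credential: YmxvYg==
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: super-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: redis
    image: redis
    env:
    - name: CREDENTIALS        # variable name assumed; the exam text is cut off here
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: credential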



  • Question 13
Set configuration context $kubectl config use-context k8s Create a pod as follows: Name: nonpersistent-redis Container image: redis Named-volume with name: cache-control Mount path : /data/redis It should launch in the pre-prod namespace and the volume MUST NOT be persistent.
Analysis: create the Pod with a non-persistent (emptyDir) volume.

Solution: https://kubernetes.io/zh/docs/concepts/storage/volumes/
[root@k8s-node1 ~]# cat volumes.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nonpersistent-redis
  namespace: pre-prod
spec:
  containers:
  - image: redis
    name: nonpersistent-redis
    volumeMounts:
    - mountPath: /data/redis
      name: cache-control
  volumes:
  - name: cache-control
    emptyDir: {}
  • Question 14
Set configuration context $kubectl config use-context k8s Scale the deployment webserver to 6 pods
Analysis: scale the Deployment.

Solution: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#scale
[root@k8s-node1 ~]# kubectl scale deployment webserver --replicas=6
  • Question 15
Set configuration context $kubectl config use-context k8s Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum.
Analysis: count the nodes that are Ready and are not tainted NoSchedule.
kubectl describe nodes `kubectl get nodes | grep -w Ready | awk '{print $1}'` | grep Taints | grep -vc NoSchedule > /opt/nodenum
  • Question 16
Set configuration context $kubectl config use-context k8s Create a deployment as follows: Name: nginxdns Exposed via a service : nginx-dns Ensure that the service & pod are accessible via their respective DNS records The container(s) within any Pod(s) running as a part of this deployment should use the nginx image. Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively. Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup
Analysis: create the Deployment and Service, then resolve the Service name and the Pod IP with nslookup from a busybox:1.28 Pod and save the output to the given files.

Solution (references): https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
      https://kubernetes.io/zh/docs/concepts/workloads/pods/init-containers/

[root@k8s-node1 dns]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdns
spec:
  selector:
    matchLabels:
      app: nginxdns
  replicas: 1
  template:
    metadata:
      labels:
        app: nginxdns
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-dns
spec:
  selector:
    app: nginxdns
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    app: busybox-test
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']

[root@k8s-node1 dns]# kubectl exec -ti busybox-test -- nslookup nginx-dns > /opt/service.dns   # resolve the Service name

kubectl exec -ti busybox-test -- nslookup 10.244.1.52 > /opt/pod.dns   # resolve the Pod IP (take the IP from kubectl get pod -o wide)
  • Question 17
No configuration context change required for this item Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db The etcd instance is running etcd version 3.2.18 The following TLS certificates/key are supplied for connecting to the server with etcdctl CA certificate: /opt/KUCM0612/ca.crt Client certificate: /opt/KUCM00612/etcdclient.crt Client key: /opt/KUCM00612/etcd-client.key

Solution: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/
Back up etcd:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key \
  snapshot save /etc/kubernetes/pki/etcd/etcd-snapshot-test.db

Verify the snapshot:
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /etc/kubernetes/pki/etcd/etcd-snapshot-test.db

Restore etcd:
  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379  snapshot restore \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key \
  /etc/kubernetes/pki/etcd/etcd-snapshot-test.db

Then restart etcd and check the nodes and pods again.
  • Question 18
Set configuration context $kubectl config use-context ek8s Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.
Analysis: mark the node labelled name=ek8s-node-1 as unschedulable and move its Pods to other nodes, i.e. use kubectl drain.

kubectl cordon ek8s-node-1   # cordon it first
kubectl drain ek8s-node-1 --ignore-daemonsets --delete-local-data --force
Afterwards be sure to confirm the state with kubectl get nodes.
  • Question 19
Set configuration context $kubectl config use-context wk8s A Kubernetes worker node,labelled with name=wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, Ensuring that any changes are made permanent. Hints: You can ssh to the failed node using $ssh wk8s-node-0. You can assume elevated privileges on the node with the following command $sudo -i
Analysis: wk8s-node-0 is NotReady; investigate, bring it back to Ready, and make the change permanent (see the sketch below).
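No solution was written down for this one; a minimal sketch of the usual approach, assuming kubelet has simply been stopped or disabled on the node:

ssh wk8s-node-0
sudo -i
systemctl status kubelet           # find out why the node is NotReady
systemctl enable --now kubelet     # enable makes the fix survive reboots (permanent)
exit
kubectl get nodes                  # back on the main terminal, confirm the node is Ready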

  • Question 20
Set configuration context $kubectl config use-context wk8s Configure the kubelet system managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node. Hints: You can ssh to the failed node using $ssh wk8snode-1. You can assume elevated privileges on the node with the following command $sudo -i
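No notes were kept here either; a sketch of a static Pod manifest dropped into the kubelet's manifest directory on that node (the directory path is the one given in the task):

ssh wk8s-node-1
sudo -i
cat <<EOF > /etc/kubernetes/manifests/myservice.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: nginx
EOF
# the kubelet watches the manifests directory, so a restart is usually not needed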


  • Question 21
This one gives you two nodes, master1 and node1, plus an admin.conf file, and asks you to bootstrap a cluster on them.
Analysis: initialise the control plane with kubeadm on master1, then join node1 (see the sketch below).
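Nothing was recorded for this one; a rough sketch of the usual kubeadm flow (the pod-network CIDR and the CNI choice are assumptions, not taken from the exam):

# On master1 (values are illustrative):
kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config
# install a CNI plugin, then run on node1 the "kubeadm join ..." command printed by kubeadm init
kubectl get nodes   # finally verify from the master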

  • Question 22
Set configuration context $kubectl config use-context bk8s Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node, the failing service and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently. The worker node in this cluster is labelled with name=bk8s-node-0 Hints: You can ssh to the relevant nodes using $ssh $(NODE) where $(NODE) is one of bk8s-master-0 or bk8s-node-0. You can assume elevated privileges on any node in the cluster with the following command: $ sudo -i.
Analysis: part of the cluster is broken; find the failing node and service, fix it, and make the fix permanent (see the sketch below).
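No solution was recorded; a sketch of where to start looking on a kubeadm cluster:

ssh bk8s-master-0
sudo -i
kubectl get nodes                                         # which node or component is failing?
systemctl status kubelet                                  # is kubelet itself running?
ls /etc/kubernetes/manifests                              # control-plane static Pod manifests
crictl ps -a | grep -E 'apiserver|scheduler|controller'   # any control-plane container down?
journalctl -u kubelet --no-pager | tail                   # kubelet logs usually point at the broken piece
systemctl enable --now kubelet                            # make the fix permanent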


  • Question 23
Set configuration context $kubectl config use-context hk8s Create a persistent volume with name appconfig of capacity 1Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-config
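A minimal manifest for this one (a sketch based on the hostPath example in the docs):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: appconfig
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-config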


CKA Simulator Practice Questions

  • Question 1
Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
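A sketch of one possible answer (the file paths come from the task):

kubectl config get-contexts -o name > /opt/course/1/contexts
echo 'kubectl config current-context' > /opt/course/1/context_default_kubectl.sh
echo "grep current-context ~/.kube/config | awk '{print \$2}'" > /opt/course/1/context_default_no_kubectl.sh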
  • Question 2
Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels any nodes.

Shortly write the reason on why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason .
  • Question 3
Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.
  • Question 4
Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.
  • Question 5
Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.
  • Question 6
Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
  • Question 7
Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc . It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
  • Question 8
Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.

Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:

# /opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
  • Question 9
Task weight: 5%

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm its started but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.

Start the kube-scheduler again and confirm its running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.
  • Question 10
Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
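A sketch using imperative commands (the resource names are exactly those given in the task):

kubectl -n project-hamster create serviceaccount processor
kubectl -n project-hamster create role processor --verb=create --resource=secrets,configmaps
kubectl -n project-hamster create rolebinding processor --role=processor --serviceaccount=project-hamster:processor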
  • Question 11
Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 megabytes memory. The Pods of that DaemonSet should run on all nodes.
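A sketch of the manifest; the tolerations for the control-plane taints are an assumption, added so the Pods can also run on master nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:                              # assumed master/control-plane taint keys
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi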
  • Question 12
Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
  • Question 13
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running on value available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly write the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.
  • Question 14
Use context: kubectl config use-context k8s-c1-H

You're asked to find out the following information about the cluster k8s-c1-H:

How many master nodes are available?
How many worker nodes are available?
What is the Service CIDR?
Which Networking (or CNI Plugin) is configured and where is its config file?
Which suffix will static pods have that run on cluster1-worker1?
Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
  • Question 15
Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?
  • Question 16
Use context: kubectl config use-context k8s-c1-H

Create a new Namespace called cka-master.

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
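A sketch of the commands involved (the actual project-* namespace names have to be checked in the live cluster):

kubectl create namespace cka-master
kubectl api-resources --namespaced=true -o name > /opt/course/16/resources.txt
kubectl -n project-c13 get roles --no-headers | wc -l    # repeat for every project-* namespace and compare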
  • Question 17
Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.

Using command crictl:

Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt

Write the logs of the container into /opt/course/17/pod-container.log
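A sketch of the workflow; <node> and <container-id> are placeholders to fill in from the previous command's output:

kubectl -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels="pod=container,container=pod"
kubectl -n project-tiger get pod tigers-reunite -o wide    # note the node
ssh <node>
crictl ps | grep tigers-reunite                            # container ID
crictl inspect <container-id> | grep runtimeType
crictl logs <container-id>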
  • Question 18
Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that cluster3 has node cluster3-worker1 available in Ready state afterwards. Schedule a Pod on cluster3-worker1.

Write the reason for the issue into /opt/course/18/reason.txt.
  • Question 19
this task can only be solved if questions 18 or 20 have been successfully implemented and the k8s-c3-CCC cluster has a functioning worker node

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time, it should be able to run on master nodes as well.

There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the secret Namespace and mount it readonly into the Pod at /tmp/secret1.

Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.

Confirm everything is working.
  • Question 20
Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update kubectl and kubeadm to the version that's running on cluster3-master1. Then add this node to the cluster, you can use kubeadm for this.
  • Question 21
Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if its reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.
  • Question 22
Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
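A sketch, assuming the kubeadm default certificate path:

openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
kubeadm certs check-expiration | grep apiserver
echo 'kubeadm certs renew apiserver' > /opt/course/22/kubeadm-renew-certs.sh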
  • Question 23
Use context: kubectl config use-context k8s-c2-AC

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-worker1:

kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt.

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.
  • Question 24
Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

connect to db1-* Pods on port 1111
connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
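A sketch of the policy; the app label values are assumptions and must be taken from the real Pod labels in project-snake:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend          # assumed label value
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1          # assumed label value
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2          # assumed label value
    ports:
    - protocol: TCP
      port: 2222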
  • Question 25
Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

  • Question 26
Use context: kubectl config use-context k8s-c1-H

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the Nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.
  • Question 27
Use context: kubectl config use-context k8s-c1-H

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.
  • Question 28
Preview Question 1
Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-master1:

Server private key location
Server certificate expiration date
Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt

Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-master1 and display its status.
  • Question 29
Preview Question 2
Use context: kubectl config use-context k8s-c1-H

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-master1, cluster1-worker1 and cluster1-worker2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.
  • Question 30
Preview Question 3
Use context: kubectl config use-context k8s-c2-AC

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

Real CKA Exam Questions

  • Notes
    • If the exam includes the etcd restore and the Kubernetes upgrade, it is best to leave those questions for last.
    • Do the Kubernetes upgrade last in particular: if it goes wrong the whole cluster can become unusable and block the remaining questions. That is what happened on my first, failed attempt.
    • Reference: https://blog.csdn.net/u011127242/category_10823035.html?spm=1001.2014.3001.5482
  • Question 1 (worth memorising)
Create a ClusterRole named deployment-clusterrole
The role must be able to create Deployments, StatefulSets and DaemonSets
Create a ServiceAccount named cicd-token in the namespace app-team1
Bind the ClusterRole to the ServiceAccount, limited to the namespace app-team1


kubectl create ns app-team1

kubectl create serviceaccount cicd-token -n app-team1

kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset

 kubectl -n app-team1 create rolebinding cicd-clusterrole   --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token

  • Question 2 (worth memorising)
Mark node ek8s-node-1 as unschedulable
and reschedule all Pods running on it

kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --ignore-daemonsets --delete-local-data --force

  • Question 3
Upgrade the master node to 1.20.1
Drain the master node before upgrading
Do not upgrade the worker nodes, the container manager, etcd, the CNI plugin or DNS

https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
kubectl get nodes
ssh mk8s-master-0
kubectl cordon mk8s-master-0
kubectl drain mk8s-master-0 --ignore-daemonsets
apt-mark unhold kubeadm kubectl kubelet 
apt-get update && apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
apt-mark hold kubeadm kubectl kubelet
kubeadm upgrade plan
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
// kubectl rollout undo deployment coredns -n kube-system   (some people recommend rolling coredns back; I did not do this during my exam)
kubectl uncordon mk8s-master-0


  • Question 4
Back up the etcd data served at https://127.0.0.1:2379 to /var/lib/backup/etcd-snapshot.db
Restore etcd from the existing file /data/backup/etcd-snapshot-previous.db
Use the provided ca.crt, etcd-client.crt and etcd-client.key


Back up etcd (the certificates must be specified):
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379    --cacert=/etc/kubernetes/pki/etcd/ca.crt  --cert=/etc/kubernetes/pki/etcd/peer.crt   --key=/etc/kubernetes/pki/etcd/peer.key  snapshot save   /var/lib/backup/etcd-snapshot.db

Verify that the etcd backup is valid:
ETCDCTL_API=3 etcdctl --write-out=table snapshot   status   etcd-snapshot.db   --cacert=/etc/kubernetes/pki/etcd/ca.crt  --cert=/etc/kubernetes/pki/etcd/peer.crt   --key=/etc/kubernetes/pki/etcd/peer.key 


Restore the etcd cluster (the task supplies the earlier snapshot):
ETCDCTL_API=3 etcdctl --write-out=table snapshot restore /data/backup/etcd-snapshot-previous.db --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key
  • Question 5 (worth memorising)
Copy the example from services-networking/network-policies in the docs and delete the parts you do not need
The NetworkPolicy lives in namespace fubar and allows traffic on port 80
Set the namespaceSelector to the labels of the source namespace my-app

https://kubernetes.io/docs/concepts/services-networking/network-policies/
[root@k8s-node1 ~]# vim NetworkPolicy.yaml 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: fubar
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          my-app-key: my-app-value
    - podSelector:
        matchLabels: {}
    ports:
    - protocol: TCP
      port: 80


  • Question 6 (pay close attention)
Reconfigure the existing Deployment front-end and add a port named http exposing 80/TCP
Create a Service named front-end-svc that exposes the container's http port
The Service type must be NodePort

1) Edit the front-end Deployment and add the following under its container:
kubectl edit deployment front-end
ports:
- name: http
  protocol: TCP
  containerPort: 80
  
Reference https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#expose for kubectl expose deployment, and use -h to view the help.
2) [root@k8s-node1 ~]# kubectl expose deployment front-end --port=80 --target-port=80 --type=NodePort --name=front-end-svc

  • Question 7
Create a new Ingress resource named ping in the namespace ing-internal
Expose service hello on port 5678 under the /hello path

https://kubernetes.io/docs/concepts/services-networking/ingress/

[root@k8s-node1 ~]# vim ingress.yaml 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
  • Question 8
Scale the deployment guestbook to 6 Pods

kubectl scale deployment guestbook --replicas=6
or:
kubectl edit deployment guestbook   # set replicas to 6 and save
  • Question 9
Create a Pod named nginx-kusc0041 with image nginx
Schedule the Pod to a node carrying the label disk=ssd
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc0041
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd

  • Question 10
Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/kusco0402/kusco0402.txt
[root@k8s-node1 ~]# kubectl get nodes | grep -w Ready | wc -l
2
[root@k8s-node1 ~]# kubectl describe nodes | grep Taints | grep NoSchedule | wc -l
0
Subtract the NoSchedule count from the Ready count and write the result to the file:
echo 2 > /opt/kusco0402/kusco0402.txt
  • Question 11
Create a Pod named kucc1
with two containers running the nginx and redis images

[root@k8s-node1 ~]# more create-pods.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: redis
    image: redis
  - name: nginx
    image: nginx


  • Question 12
Create a PV named app-config with a capacity of 2Gi, access mode ReadWriteMany, and volume type hostPath

The PV's hostPath is the /srv/app-config directory

Copy a suitable example from the official docs, adjust the parameters, and set hostPath to /srv/app-config:
https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
[root@k8s-node1 ~]# more create-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
  • Question 13
Create a PVC named pv-volume using the storageclass csi-hostpath-sc, with a capacity of 10Mi
Create a Pod named web-server that mounts this PVC at the nginx container's /usr/share/nginx/html
Then expand the PVC from 10Mi to 70Mi and record the change
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
[root@k8s-node1 ~]# cat  create-pvc.yaml   
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc 
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
[root@k8s-node1 ~]# cat  pvc-mount-pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: my-pvc
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: web-server
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pvc
kubectl edit pvc pv-volume --record   # --record stores the change in the object's annotations

  • Question 15
Add a sidecar container that streams a log file
Add a sidecar container (busybox image) to the existing Pod 11-factor-app
Make sure the sidecar outputs the contents of /var/log/11-factor-app.log
Mount /var/log with a volume so the sidecar can read 11-factor-app.log

Back up the original Pod with kubectl get pod -o yaml, then delete the old 11-factor-app Pod
Copy the YAML and add a container named sidecar
Add an emptyDir volume and mount /var/log into both containers
Recreate the Pod with the sidecar and verify with kubectl logs (a full manifest appears in the highlighted questions below)
https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
  • Question 16
Node wk8s-node-0 is in NotReady state; find the cause and bring it back to Ready

Make sure the fix is permanent
Analysis: find the broken node with kubectl get nodes, log in, and check the status of kubelet and related components to determine the cause
Starting kubelet and enabling it with systemctl is usually all that is needed

kubectl get nodes
ssh wk8s-node-0
sudo -i 
systemctl status kubelet
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Run kubectl get nodes again to confirm the node is back in the Ready state.
  • Question 17
Find the Pods with label name=cpu-loader, pick the one with the highest CPU usage, and append its name to /opt/KUTR00401/KUTR00401.txt

Analysis: use kubectl top with -l label_key=label_value and --sort-by=cpu
kubectl top pod -l name=cpu-loader -A --sort-by=cpu
echo <podName> >> /opt/KUTR00401/KUTR00401.txt

Questions That Need Special Attention

  • Add a sidecar container that outputs logs
Add a sidecar container (busybox image) to the existing Pod 11-factor-app
Make sure the sidecar container can output the contents of /var/log/11-factor-app.log
Mount /var/log with a volume so the sidecar can access 11-factor-app.log

Solution: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
First save the existing Pod with kubectl get <podname> -o yaml > podname.yaml, then delete the old Pod;
then copy the YAML, add the sidecar container, mount an emptyDir volume at /var/log in both containers, and recreate the Pod with kubectl apply;
once the Pod is running, confirm the log output with kubectl logs 11-factor-app sidecar

# Back up first
[root@k8s-node1 ~]# kubectl get pods 11-factor-app -o yaml > 11-factor-app.yaml
# Then delete the old Pod and adapt the official example:

[root@k8s-node1 ~]# more sidecar.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: 11-factor-app
spec:
  containers:
  - name: 11-factor-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/11-factor-app.log;
        i=$((i+1));
        sleep 1;
      done      
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/11-factor-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
  • Create and use a PVC
Create a PVC named pv-volume using the storageclass csi-hostpath-sc, with a capacity of 10Mi
Create a Pod named web-server that mounts the PVC at the nginx container's /usr/share/nginx/html
Expand the PVC from 10Mi to 70Mi and record the change

First make sure you are working against the right cluster, then check that the required StorageClass exists.
[root@k8s-node1 newpvc]# more createpvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume-to2
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: cis-hostpath-cs-to2
  
[root@k8s-node1 newpvc]# kubectl get pvc   # if the PVC is not Bound, investigate
NAME            STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pv-volume-to2   Bound     task-pv-volume-to2   1Gi        RWO            cis-hostpath-cs-to2   31m

[root@k8s-node1 newpvc]# more newpvcmount.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: web-server-to2
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pv-volume-to2

 kubectl edit pvc pv-volume-to2 --record   # NFS and local volumes do not support online expansion, so this step was skipped in my lab
  • Create an Ingress resource as required
Create a new Ingress resource named ping in the namespace ing-internal
Expose service hello on port 5678 under the /hello path   # note: after creating the Ingress I could not obtain an IP address

Official Kubernetes Documentation Bookmarks for the CKA Exam

  • How to use
    • Paste the bookmark list into a .txt file, change the extension to .html, and import it into the browser's bookmarks
    • See https://www.jianshu.com/p/a743860b13fe

CKA Certificate

  • CKA certificate
(certificate image)
