https://training.linuxfoundation.cn/certificate/details/1
CKA Clusters
Cluster | Members | Practice-environment nodes
---|---|---
- | console | physical machine
k8s | 1 master, 2 workers | k8s-master, k8s-worker1, k8s-worker2
ek8s | 1 master, 2 workers | ek8s-master, ek8s-worker1, ek8s-worker2
The exam environment provides:
- kubectl, with alias k and Bash auto-completion
- jq, for YAML/JSON processing
- tmux, for terminal multiplexing
- curl and wget, for testing web services
- man and man pages, for further documentation
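If the alias and Bash completion are not already configured in your shell, the standard setup commands (as documented for kubectl) are:
$ source <(kubectl completion bash)
$ alias k=kubectl
$ complete -o default -F __start_kubectl k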
During the exam, candidates may consult the official Kubernetes documentation. For the expectations for acceptable testing locations, and for more information on exam-time policies, procedures and rules, see the [Candidate Handbook].
If you need further assistance, sign in at https://trainingsupport.linuxfoundation.org with your LF account and use the search bar to find answers, or choose your request type from the categories provided.
General reminders:
- Note which cluster you are operating on
- Note which node you are operating on
- Note which namespace (ns) you are operating on
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Context:
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task:
- Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:
  - Deployment
  - StatefulSet
  - DaemonSet
- In the existing namespace app-team1, create a new ServiceAccount named cicd-token.
- Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Create the ClusterRole (the plural s on the resource names is optional)
$ kubectl create clusterrole --help
*$ kubectl create clusterrole deployment-clusterrole \
--verb=create \
--resource=Deployment,StatefulSet,DaemonSet
Create the ServiceAccount
*$ kubectl \
--namespace app-team1 \
create serviceaccount cicd-token
Create the RoleBinding
$ kubectl create rolebinding --help
*$ kubectl create rolebinding cicd-token-deployment-clusterrole \
--clusterrole=deployment-clusterrole \
--serviceaccount=app-team1:cicd-token \
--namespace=app-team1
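For reference, a declarative equivalent of the three imperative commands above (a sketch; save as e.g. 1-rbac.yml, the file name is arbitrary, and kubectl apply -f it):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-token
  namespace: app-team1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-token-deployment-clusterrole
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1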
Verify
$ kubectl describe clusterrole deployment-clusterrole
Name: deployment-clusterrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources            Non-Resource URLs  Resource Names  Verbs
---------            -----------------  --------------  -----
`daemonsets.apps`    []                 []              [`create`]
`deployments.apps`   []                 []              [`create`]
`statefulsets.apps`  []                 []              [`create`]
$ kubectl -n app-team1 get serviceaccounts
NAME SECRETS AGE
`cicd-token` 1 16m
default 1 18m
$ kubectl -n app-team1 get rolebindings
NAME ROLE AGE
`cicd-token-deployment-clusterrole` ClusterRole/deployment-clusterrole 11m
$ kubectl -n app-team1 describe rolebindings cicd-token-deployment-clusterrole
Name: cicd-token-deployment-clusterrole
Labels: <none>
Annotations: <none>
Role:
Kind: `ClusterRole`
Name: `deployment-clusterrole`
Subjects:
Kind            Name          Namespace
----            ----          ---------
ServiceAccount  `cicd-token`  `app-team1`
$ kubectl -n app-team1 \
auth can-i create deployment \
--as system:serviceaccount:app-team1:cicd-token
`yes`
$ kubectl \
auth can-i create deployment \
--as system:serviceaccount:app-team1:cicd-token
`no`
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
Set the node named k8s-worker1 as unavailable and reschedule all pods running on it.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Check node status
$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   9d    v1.27.1
k8s-worker1   Ready    <none>          9d    v1.27.1
k8s-worker2   Ready    <none>          9d    v1.27.1
Evict the workloads and mark the node unschedulable
$ kubectl drain k8s-worker1
node/k8s-worker1 cordoned
error: unable to drain node "k8s-worker1" due to error:[cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56, cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt], continuing command...
There are pending nodes to be drained:
k8s-worker1
cannot delete DaemonSet-managed Pods (use `--ignore-daemonsets` to ignore): kube-system/calico-node-g5wj7, kube-system/kube-proxy-8pv56
cannot delete Pods with local storage (use `--delete-emptydir-data` to override): kube-system/metrics-server-5fdbb498cc-k4mgt
*$ kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data
Verify
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 84m v1.27.1
k8s-worker1 Ready,`SchedulingDisabled` <none> 79m v1.27.1
k8s-worker2 Ready <none> 76m v1.27.1
$ kubectl get pod -A -owide | grep worker1
kube-system `calico-node-j6r9s` 1/1 Running 1 (9d ago) 9d 192.168.147.129 k8s-worker1 <none> <none>
kube-system `kube-proxy-psz2g` 1/1 Running 1 (9d ago) 9d 192.168.147.129 k8s-worker1 <none> <none>
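A field selector gives the same check without grep; after the drain, only DaemonSet-managed pods should remain on the node (a sketch):
$ kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-worker1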
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
The existing Kubernetes cluster is running version 1.28.1. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.28.2. Also upgrade kubelet and kubectl on the master node.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Check node status
*$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
`k8s-master` Ready control-plane 6d2h `v1.28.1`
...
Log in to k8s-master
*$ ssh root@k8s-master
Run "kubeadm upgrade" on the first control-plane node. Upgrade kubeadm:
apt update
apt-cache madison kubeadm
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.28.2-00 && \
apt-mark hold kubeadm
Verify the upgrade plan:
kubeadm version
kubeadm upgrade plan
Choose the target version to upgrade to and run the appropriate command, for example:
kubeadm upgrade apply v1.28.2 \
--etcd-upgrade=false
:<<EOF
...
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: `y`
...
EOF
Drain the node
kubectl drain k8s-master --ignore-daemonsets
Upgrade kubelet and kubectl
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.28.2-00 kubectl=1.28.2-00 && \
apt-mark hold kubelet kubectl
Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
Uncordon the node
kubectl uncordon k8s-master
Ctrl-D to log out of k8s-master
Verify the result
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 157m `v1.28.2`
k8s-worker1 Ready,SchedulingDisabled <none> 152m v1.28.1
k8s-worker2 Ready <none> 149m v1.28.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2",....
$ kubelet --version
Kubernetes v1.28.2
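To confirm that the control-plane static pods were upgraded as well, check the kube-apiserver image tag, which should now report v1.28.2 (a sketch; static pod names follow the usual <component>-<node> pattern):
$ kubectl -n kube-system get pod kube-apiserver-k8s-master \
  -o jsonpath='{.spec.containers[0].image}'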
Task weight: 7%
Task:
- First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/backup/etcd-snapshot.db.
- Then, restore the existing previous snapshot located at /srv/data/etcd-snapshot-previous.db.
Tips
- Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot
- In the real exam, this task runs against a separate cluster
- In the practice environment, do the etcd snapshot task on its own
Reference answer
Backup command
$ ETCDCTL_API=3 etcdctl snapshot save --help
*$ ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/opt/KUIN00601/ca.crt \
--cert=/opt/KUIN00601/etcd-client.crt \
--key=/opt/KUIN00601/etcd-client.key \
snapshot save /srv/backup/etcd-snapshot.db
Restore commands
*$ sudo mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bk
*$ kubectl get pod -A
The connection to the server 192.168.147.128:6443 was refused - did you specify the right host or port?
$ sudo grep data /etc/kubernetes/manifests.bk/etcd.yaml
*$ sudo mv /var/lib/etcd /var/lib/etcd.bk
*$ sudo chown $USER /srv/data/etcd-snapshot-previous.db
*$ sudo ETCDCTL_API=3 etcdctl \
--data-dir /var/lib/etcd \
snapshot restore /srv/data/etcd-snapshot-previous.db
*$ sudo mv /etc/kubernetes/manifests.bk /etc/kubernetes/manifests
Verify
$ ETCDCTL_API=3 etcdctl snapshot status /srv/backup/etcd-snapshot.db
89703627, 14521, 1929, 4.3 MB
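The four comma-separated fields are hash, revision, total keys, and total size. Passing -w table prints the same data with headers (a sketch):
$ ETCDCTL_API=3 etcdctl snapshot status /srv/backup/etcd-snapshot.db -w table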
$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 8080 of other Pods in the namespace echo.
- Ensure the new NetworkPolicy:
  - does not allow access to Pods that are not listening on port 8080
  - does not allow access from Pods that are not in the namespace internal
Tips
- Work out carefully whether the rule is ingress or egress
- In the real exam, the namespace in question may not exist yet
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
In the exam, check whether the namespace internal exists; if not, create and label it:
$ kubectl create ns internal
$ kubectl get ns internal --show-labels
internal Active 11s
$ kubectl label ns internal kubernetes.io/metadata.name=internal
$ kubectl get ns internal --show-labels
internal Active 11s kubernetes.io/metadata.name=internal
Check the namespace labels
*$ kubectl get ns internal --show-labels
NAME STATUS AGE LABELS
internal Active 3h59m kubernetes.io/metadata.name=internal
Edit the YAML (the vimrc tweak below is optional: line numbers, 2-space tabs)
*$ sudo tee -a /etc/vim/vimrc <<EOF
set number ts=2 et cuc
EOF
$ vim 5.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
# name: test-network-policy
name: allow-port-from-namespace
# namespace: default
namespace: echo
spec:
# podSelector:
podSelector: {}
# matchLabels:
# role: db
policyTypes:
- Ingress
# - Egress
ingress:
- from:
# - ipBlock:
# cidr: 172.17.0.0/16
# except:
# - 172.17.1.0/24
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: internal
# - podSelector:
# matchLabels:
# role: frontend
ports:
- protocol: TCP
# port: 6379
port: 8080
# egress:
# - to:
# - ipBlock:
# cidr: 10.0.0.0/24
# ports:
# - protocol: TCP
# port: 5978
Apply
$ kubectl create ns echo
*$ kubectl apply -f 5.yml
Verify
$ kubectl -n echo describe networkpolicies allow-port-from-namespace
Name: allow-port-from-namespace
Namespace: echo
Created on: YYYY-mm-dd HH:MM:ss +0800 CST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
To Port: 8080/TCP
From:
NamespaceSelector: kubernetes.io/metadata.name=internal
Not affecting egress traffic
Policy Types: Ingress
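A quick functional test, assuming a pod in namespace echo is listening on port 8080 (replace <ECHO_POD_IP> with that pod's actual IP; the test pod name is arbitrary). The first request should succeed, the second should time out:
$ kubectl -n internal run test --rm -it --image=busybox --restart=Never \
  -- wget -qO- -T 2 <ECHO_POD_IP>:8080
$ kubectl -n default run test --rm -it --image=busybox --restart=Never \
  -- wget -qO- -T 2 <ECHO_POD_IP>:8080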
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Reconfigure the existing deployment front-end: add a port specification named http to expose port 80/tcp of the existing container nginx.
- Create a new service named front-end-svc to expose the container port http.
- Configure this service to expose the individual pods via a NodePort on the nodes on which they are scheduled.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Check the deployment
$ kubectl get deployments front-end
NAME READY UP-TO-DATE AVAILABLE AGE
`front-end` 1/1 1 1 10m
Look up how the ports spec is written (optional); searching the online docs is recommended
$ kubectl explain --help
$ kubectl explain pod.spec.containers
$ kubectl explain pod.spec.containers.ports
$ kubectl explain deploy.spec.template.spec.containers.ports
Edit deployment front-end
*$ kubectl edit deployments front-end
...
template:
...
spec:
containers:
- image: nginx
# add these 3 lines
ports:
- name: http
containerPort: 80
...
$ kubectl get deployments front-end
NAME READY UP-TO-DATE AVAILABLE AGE
front-end `1/1` 1 1 12m
Option A: create the Service with kubectl expose
$ kubectl expose -h
*$ kubectl expose deployment front-end \
--port=80 --target-port=http \
--name=front-end-svc \
--type=NodePort
Option B: create the Service from YAML (recommended)
*$ kubectl get deployments front-end --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
front-end 0/1 1 0 37s `app=front-end`
*$ vim 6.yml
apiVersion: v1
kind: Service
metadata:
# name: my-service
name: front-end-svc
spec:
# required by the task (double-check)
type: NodePort
selector:
# app: MyApp
app: front-end
ports:
- port: 80
targetPort: http
*$ kubectl apply -f 6.yml
Verify
$ kubectl get services front-end-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end-svc `NodePort` 10.106.46.251 <none> 80:`32067`/TCP 39s
$ curl k8s-worker1:32067
...
<title>Welcome to nginx!</title>
...
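To read the assigned NodePort programmatically instead of from the table (a sketch):
$ kubectl get service front-end-svc -o jsonpath='{.spec.ports[0].nodePort}'
32067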
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Create a new nginx Ingress resource as follows:
  - Name: ping
  - Namespace: ing-internal
  - Expose the service hi on path /hi, using service port 5678
Availability of the service can be checked with:
$ curl -kL <INTERNAL_IP>/hi
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Install the ingress-nginx controller (provides the IngressClass)
In the real exam:
Tip
- https://kubernetes.github.io/ingress-nginx/deploy/
*$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
If kubernetes.github.io cannot be reached, the workaround:
- https://github.com/kubernetes/ingress-nginx
In the practice environment:
*$ kubectl apply -f http://k8s.ruitong.cn:8080/K8s/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
Verify the installation
*$ kubectl get ingressclasses
NAME CONTROLLER PARAMETERS AGE
`nginx` k8s.io/ingress-nginx <none> 9s
*$ kubectl get pod -A | grep ingress
ingress-nginx ingress-nginx-admission-create-w2h4k 0/1 Completed 0 92s
ingress-nginx ingress-nginx-admission-patch-k6pgk 0/1 Completed 1 92s
ingress-nginx `ingress-nginx-controller-58b94f55c8-gl7gk` 1/1 `Running` 0 92s
Edit the YAML file
*$ vim 7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
# name: minimal-ingress
name: ping
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
# add this line
namespace: ing-internal
spec:
# ingressClassName: nginx-example
ingressClassName: nginx
rules:
- http:
paths:
# - path: /testpath
- path: /hi
pathType: Prefix
backend:
service:
# name: test
name: hi
port:
# number: 80
number: 5678
Apply
*$ kubectl apply -f 7.yml
$ kubectl -n ing-internal get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
`ping` nginx * 80 11m
Verify
*$ kubectl get pods -A -o wide \
| grep ingress
...
ingress-nginx `ingress-nginx-controller`-769f969657-4zfjv 1/1 Running 0 12m `172.16.126.15` k8s-worker2 <none> <none>
*$ curl 172.16.126.15/hi
hi
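kubectl describe confirms the path-to-backend mapping and the resolved endpoints (a sketch):
$ kubectl -n ing-internal describe ingress ping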
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Scale the deployment webserver to 6 pods
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Check
$ kubectl get deployments webserver
NAME READY UP-TO-DATE AVAILABLE AGE
webserver `1/1` 1 1 30s
Scale the replicas
Option A: kubectl edit
*$ kubectl edit deployments webserver
...
spec:
progressDeadlineSeconds: 600
# replicas: 1
replicas: 6
...
Option B: kubectl scale
$ kubectl scale deployment webserver --replicas 6
Verify
$ kubectl get deployments webserver -w
NAME READY UP-TO-DATE AVAILABLE AGE
webserver `6/6` 6 6 120s
<Ctrl-C>
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Schedule a pod as follows:
  - Name: nginx-kusc00401
  - Image: nginx
  - Node selector: disk=spinning
Tip
- In the official docs, search for "nodeselector": Assign Pods to Nodes | Kubernetes
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Create the pod
*$ vim 9.yml
apiVersion: v1
kind: Pod
metadata:
# name: nginx
name: nginx-kusc00401
spec:
containers:
- name: nginx
# as required by the task
image: nginx
nodeSelector:
# disktype: ssd
disk: spinning
Apply
*$ kubectl apply -f 9.yml
Confirm
$ kubectl get pod nginx-kusc00401 -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-kusc00401 1/1 Running 0 11s 172.16.126.30 `k8s-worker2` <none> <none>
<Ctrl-C>
$ kubectl get nodes -l disk=spinning
NAME STATUS ROLES AGE VERSION
`k8s-worker2` Ready <none> 9d v1.27.1
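Instead of writing the YAML by hand, a client-side dry run can generate the skeleton, to which the nodeSelector block shown above is then added (a sketch):
$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 9.yml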
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write the count to /opt/KUSC00402/kusc00402.txt
Tip
- Look carefully at whether each taint really is NoSchedule
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Check node status
$ kubectl get nodes
NAME          STATUS                     ROLES           AGE   VERSION
k8s-master    Ready                      control-plane   9d    v1.27.2
k8s-worker1   Ready,SchedulingDisabled   <none>          9d    v1.27.1
k8s-worker2   Ready                      <none>          9d    v1.27.1
Check which nodes have taints
*$ kubectl describe nodes | grep -i taints
Taints: node-role.kubernetes.io/control-plane:`NoSchedule`
Taints: node.kubernetes.io/unschedulable:`NoSchedule`
Taints: <none>
Write the result
*$ echo 1 > /opt/KUSC00402/kusc00402.txt
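The count can also be computed rather than read off by eye. This sketch lists the Ready nodes, then counts those whose Taints line does not contain NoSchedule (it assumes at most one taint per node):
$ kubectl describe nodes $(kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}') \
  | grep -i taints | grep -vc NoSchedule
1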
Task weight: 4%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Create a pod named kucc1 that runs one app container for each of the following images (there may be 1-4 images); the container names and images are:
nginx + redis + memcached + consul
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Create the pod
*$ vim 11.yml
apiVersion: v1
kind: Pod
metadata:
# name: myapp-pod
name: kucc1
spec:
containers:
# - name: myapp-container
- name: nginx
# image: busybox:1.28
image: nginx
# add the remaining containers
- name: redis
image: redis
- name: memcached
image: memcached
- name: consul
image: consul
Apply
*$ kubectl apply -f 11.yml
Verify
$ kubectl get pod kucc1 -w
NAME READY STATUS RESTARTS AGE
kucc1 `4/4` Running 0 77s
Ctrl-C
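To check that all four containers exist with the required names (a sketch):
$ kubectl get pod kucc1 -o jsonpath='{.spec.containers[*].name}'
nginx redis memcached consul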
Task weight: 4%
Set configuration context:
Task:
- Create a persistent volume named app-data with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-data.
Reference answer
Edit the YAML
*$ vim 12.yml
apiVersion: v1
kind: PersistentVolume
metadata:
# name: task-pv-volume
name: app-data
spec:
# storageClassName: manual
capacity:
# storage: 10Gi
storage: 1Gi
accessModes:
# - ReadWriteOnce
- ReadWriteMany
hostPath:
# path: "/mnt/data"
path: "/srv/app-data"
# add this line
type: DirectoryOrCreate
Apply
*$ kubectl apply -f 12.yml
Verify
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
`app-data` `1Gi` `RWX` Retain Available 4s
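To double-check the hostPath settings on the created PV (a sketch):
$ kubectl get pv app-data -o jsonpath='{.spec.hostPath}'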
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Task:
Create a new PersistentVolumeClaim:
- Name: pv-volume
- Class: csi-hostpath-sc
- Capacity: 10Mi
Create a new pod that mounts the PersistentVolumeClaim as a volume:
- Name: web-server
- Image: nginx
- Mount path: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access to the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record the change.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Create the PVC
*$ vim 13pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# name: claim1
name: pv-volume
spec:
accessModes:
- ReadWriteOnce
# storageClassName: fast
storageClassName: csi-hostpath-sc
resources:
requests:
# storage: 30Gi
storage: 10Mi
*$ kubectl apply -f 13pvc.yml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-volume `Bound` pvc-89935613-3af9-4193-9a68-116067cf1a34 10Mi RWO csi-hostpath-sc 6s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
app-data 1Gi RWX Retain Available 72m
pvc-89935613-3af9-4193-9a68-116067cf1a34 10Mi RWO Delete `Bound` default/pv-volume csi-hostpath-sc 39s
Create the pod
*$ vim 13pod.yml
apiVersion: v1
kind: Pod
metadata:
# name: task-pv-pod
name: web-server
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
# claimName: task-pv-claim
claimName: pv-volume
containers:
# - name: task-pv-container
- name: web-server
image: nginx
# ports:
# - containerPort: 80
# name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
*$ kubectl apply -f 13pod.yml
pod/web-server created
$ kubectl get pod web-server
NAME READY STATUS RESTARTS AGE
web-server 1/1 `Running` 0 9s
Allow volume expansion
*$ kubectl edit storageclasses csi-hostpath-sc
...
# add this line
allowVolumeExpansion: true
$ kubectl get storageclasses -A
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-hostpath-sc k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate `true` 5m51s
Record the change
*$ kubectl edit pvc pv-volume --record
...
spec:
...
# storage: 10Mi
storage: 70Mi
...
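Equivalently with kubectl patch (a sketch; --record behaves as with edit above, though the flag is deprecated):
$ kubectl patch pvc pv-volume --type merge --record \
  -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'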
Confirm (the practice environment may still show 10Mi; the exam environment displays the new value correctly)
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-volume Bound pvc-9a5fb9b6-b127-4868-b936-cb4f17ef910e `70Mi` RWO csi-hostpath-sc 31m
$ kubectl describe pvc pv-volume | grep record
Task weight: 5%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Monitor the logs of pod bar:
  - Extract the log lines matching the error unable-to-access-website
  - Write those log lines to /opt/KUTR00101/bar
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
View the logs
*$ kubectl logs bar \
| grep unable-to-access-website \
> /opt/KUTR00101/bar
Verify
$ cat /opt/KUTR00101/bar
YYYY-mm-dd 07:13:03,618: ERROR `unable-to-access-website`
Task weight: 7%
Set configuration context:
$ kubectl config use-context ck8s
Context:
Without changing its existing containers, an existing pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to achieve this.
Task:
Add a busybox sidecar container named sidecar to the existing pod big-corp-app. The new sidecar container must run the following command:
/bin/sh -c tail -f /var/log/legacy-app.log
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Edit the YAML
*$ kubectl get pod big-corp-app -o yaml > 15.yml
*$ vim 15.yml
...
spec:
containers:
...
volumeMounts:
# existing container: add these 2 lines
- name: logs
mountPath: /var/log
# new container: add these 6 lines
- name: sidecar
image: busybox
args: [/bin/sh, -c, 'tail -f /var/log/legacy-app.log']
volumeMounts:
- name: logs
mountPath: /var/log
...
volumes:
# add these 2 lines
- name: logs
emptyDir: {}
...
Option A: delete and re-create (a running pod's containers cannot be changed in place)
*$ kubectl delete -f 15.yml --grace-period=0 --force
*$ kubectl apply -f 15.yml
Option B: replace in one step
*$ kubectl replace -f 15.yml --grace-period=0 --force
Confirm
$ kubectl get pod big-corp-app -w
NAME READY STATUS RESTARTS AGE
big-corp-app `2/2` Running 1 37s
$ kubectl logs -c sidecar big-corp-app
Task weight: 5%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- Using the pod label name=cpu-loader, find the pods with heavy CPU usage at runtime, and write the name of the pod with the highest CPU consumption to the file /opt/KUTR00401/KUTR00401.txt (which already exists)
Reference answer
Switch Kubernetes cluster
*$ kubectl config use-context ck8s
Find the pod
$ kubectl top pod -h
*$ kubectl top pod -l name=cpu-loader -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default `bar` `1m` 5Mi
default cpu-loader-5b898f96cd-56jf5 0m 3Mi
default cpu-loader-5b898f96cd-9zlt5 0m 4Mi
default cpu-loader-5b898f96cd-bsvsb 0m 4Mi
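kubectl top can also sort the output for you, which helps when there are many matching pods (a sketch):
$ kubectl top pod -l name=cpu-loader -A --sort-by=cpu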
Write the result
*$ echo bar > /opt/KUTR00401/KUTR00401.txt
Task weight: 13%
Set configuration context:
$ kubectl config use-context ck8s
Task:
- The Kubernetes worker node named k8s-worker1 is in NotReady state. Investigate why this is the case and take appropriate measures to bring the node back to Ready state, ensuring that any changes are made permanent.
Tips
- This task is related to task 2
- Scenario setup in the practice environment:
- kiosk@k8s-master:~$ cka-setup 17
- kiosk@k8s-master:~$ sshpass -p vagrant ssh k8s-worker1 sudo systemctl disable --now kubelet
Reference answer
Switch the cluster environment
*$ kubectl config use-context ck8s
Check node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 43d v1.27.1
k8s-worker1 `NotReady` <none> 43d v1.27.1
*$ kubectl describe nodes k8s-worker1
...
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 31 May YYYY 11:25:06 +0000 Tue, 31 May YYYY 11:25:06 +0000 CalicoIsUp Calico is running on this node
MemoryPressure Unknown Tue, 31 May YYYY 13:51:08 +0000 Tue, 31 May YYYY 13:53:42 +0000 `NodeStatusUnknown Kubelet stopped posting node status.`
DiskPressure Unknown Tue, 31 May YYYY 13:51:08 +0000 Tue, 31 May YYYY 13:53:42 +0000 `NodeStatusUnknown Kubelet stopped posting node status.`
PIDPressure Unknown Tue, 31 May YYYY 13:51:08 +0000 Tue, 31 May YYYY 13:53:42 +0000 `NodeStatusUnknown Kubelet stopped posting node status.`
Ready Unknown Tue, 31 May YYYY 13:51:08 +0000 Tue, 31 May YYYY 13:53:42 +0000 `NodeStatusUnknown Kubelet stopped posting node status.`
...
Start the kubelet service
*$ ssh k8s-worker1
*$ sudo -i
*# systemctl enable --now kubelet.service
# systemctl status kubelet
q to quit status
Ctrl-D to exit sudo
Ctrl-D to exit ssh
Confirm
*$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 43d v1.27.1
k8s-worker1 `Ready`,SchedulingDisabled <none> 43d v1.27.1
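Because the task requires the fix to be permanent, also confirm the unit is enabled, not merely running (a sketch):
$ ssh k8s-worker1 systemctl is-enabled kubelet
enabled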
$ cka-grade
Spend Time: up 1 hours, 1 minutes Wed 01 Jun YYYY 04:58:06 PM UTC
==================================================================
PASS Task1. - RBAC
PASS Task2. - drain
PASS Task3. - upgrade
PASS Task4. - snapshot
PASS Task5. - network-policy
PASS Task6. - service
PASS Task7. - ingress-nginx
PASS Task8. - replicas
PASS Task9. - schedule
PASS Task10. - NoSchedule
PASS Task11. - multi_pods
PASS Task12. - pv
PASS Task13. - Dynamic-Volume
PASS Task14. - logs
PASS Task15. - Sidecar
PASS Task16. - Metric
PASS Task17. - Daemon (kubelet, containerd, docker)
==================================================================
The results of your CKA v1.27: `PASS` Your score: `100`
$ cka-grade 1
Spend Time: up 1 hours, 2 minutes Wed 01 Jun YYYY 04:58:14 PM UTC
===================================================================
`PASS` Task1. - RBAC
===================================================================
The results of your CKA v1.27: FAIL Your score: 4