CKA Scenario Questions


Access Control (RBAC)

Requirement:

Create a ClusterRole named deployment-clusterrole that grants permission to create Deployments, StatefulSets, and DaemonSets. In the namespace app-team1, create a ServiceAccount named cicd-token, then bind the ClusterRole to the ServiceAccount, limited to the namespace app-team1.

# Create the namespace app-team1
root@k8s-master:~# kubectl create ns app-team1
# List namespaces
root@k8s-master:~# kubectl get ns
NAME              STATUS   AGE
app-team1         Active   659d
default           Active   675d
fubar             Active   659d
ing-internal      Active   659d
ingress-nginx     Active   659d
kube-node-lease   Active   675d
kube-public       Active   675d
kube-system       Active   675d
my-app            Active   659d
# Create a ServiceAccount named cicd-token in namespace app-team1
root@k8s-master:~# kubectl create serviceaccount cicd-token -n app-team1
serviceaccount/cicd-token created
# Create a ClusterRole named deployment-clusterrole with permission to create Deployments, StatefulSets, and DaemonSets
root@k8s-master:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
# Bind the ClusterRole to the ServiceAccount, limited to namespace app-team1
root@k8s-master:~# kubectl -n app-team1 create rolebinding cicd-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
rolebinding.rbac.authorization.k8s.io/cicd-clusterrole created
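The same objects can also be created declaratively; a sketch of equivalent manifests (the resource names follow the task, the file layout is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-clusterrole
  namespace: app-team1    # a RoleBinding (not a ClusterRoleBinding) limits the grant to this namespace
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
roleRef:
  kind: ClusterRole
  name: deployment-clusterrole
  apiGroup: rbac.authorization.k8s.io
```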

Official documentation

Set a Node as Unschedulable

Requirement:

Mark node k8s-node1 as unschedulable and reschedule all pods running on it.

# Cordon the node (mark it unschedulable)
root@k8s-master:~# kubectl cordon k8s-node1
# Drain the node
root@k8s-master:~# kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data --force
root@k8s-master:~# kubectl get nodes
NAME         STATUS                     ROLES                  AGE    VERSION
k8s-master   Ready                      control-plane,master   675d   v1.22.0
k8s-node1    Ready,SchedulingDisabled   <none>                 675d   v1.22.0
k8s-node2    Ready                      <none>                 675d   v1.22.0

Safely Drain a Node | Kubernetes

Upgrade kubeadm

Requirement:

Upgrade the master node to 1.22.2. Make sure to drain the master node before upgrading. Do not upgrade the worker nodes, the container manager, etcd, CNI plugins, or DNS. (First cordon and drain the master node; then upgrade kubeadm and apply version 1.22.2; finally upgrade kubelet and kubectl.)

root@k8s-master:~# kubectl cordon k8s-master
root@k8s-master:~# kubectl drain k8s-master --ignore-daemonsets --force
root@k8s-master:~# apt-mark unhold kubelet kubectl kubeadm && \
> apt-get update && apt-get install -y kubelet='1.22.2-00' kubectl='1.22.2-00' kubeadm='1.22.2-00'&& \
> apt-mark hold kubelet kubectl kubeadm

kubeadm upgrade plan
kubeadm upgrade apply v1.22.2 --etcd-upgrade=false
sudo systemctl daemon-reload
sudo systemctl restart kubelet
root@k8s-master:~# kubectl uncordon k8s-master
node/k8s-master uncordoned
root@k8s-master:~# kubectl get node
NAME         STATUS                     ROLES                  AGE    VERSION
k8s-master   Ready                      control-plane,master   675d   v1.22.2
k8s-node1    Ready,SchedulingDisabled   <none>                 675d   v1.22.0
k8s-node2    Ready                      <none>                 675d   v1.22.0

Official documentation

Back Up and Restore etcd

Requirement:

Back up the etcd data served at https://127.0.0.1:2379 to /var/lib/backup/etcd-snapshot.db, then restore etcd from the existing file /data/backup/etcd-snapshot-previous.db, using the specified ca.crt, etcd-client.crt, and etcd-client.key.

# Back up
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save /var/lib/backup/etcd-snapshot.db
  
  
# Restore
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot restore /data/backup/etcd-snapshot-previous.db
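A restore writes into a fresh data directory, and on a kubeadm cluster the etcd static pod must then be pointed at it. A sketch of the relevant fragment of /etc/kubernetes/manifests/etcd.yaml, assuming the restore was run with --data-dir=/var/lib/etcd-restore (both paths here are assumptions for illustration):

```yaml
# In /etc/kubernetes/manifests/etcd.yaml, point the etcd-data volume at the restored directory:
volumes:
- hostPath:
    path: /var/lib/etcd-restore   # previously /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data
```

The kubelet notices the manifest change and recreates the etcd pod on top of the restored data.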

Official documentation

Configure a NetworkPolicy

Requirement:

In namespace fubar, create a NetworkPolicy named allow-port-from-namespace that allows only pods in namespace my-app to connect to port 80 of pods in fubar. Note that two namespaces are involved: fubar (namespace of the target pods) and my-app (namespace of the source pods).

# List namespaces with their labels
root@k8s-master:~# kubectl get ns --show-labels
NAME              STATUS   AGE    LABELS
app-team1         Active   660d   kubernetes.io/metadata.name=app-team1
default           Active   675d   kubernetes.io/metadata.name=default
fubar             Active   660d   kubernetes.io/metadata.name=fubar
ing-internal      Active   660d   kubernetes.io/metadata.name=ing-internal
ingress-nginx     Active   660d   app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
kube-node-lease   Active   675d   kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   675d   kubernetes.io/metadata.name=kube-public
kube-system       Active   675d   kubernetes.io/metadata.name=kube-system
my-app            Active   660d   kubernetes.io/metadata.name=my-app,name=my-app
# Create a new YAML file

networkPolicy.yml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: fubar
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app
      podSelector:
        matchLabels: {}
    ports:
    - protocol: TCP
      port: 80


Official documentation

Create a Service

Requirement:

Reconfigure the existing deployment front-end: add a port named http exposing 80/TCP. Then create a service named front-end-svc that exposes the container's http port, with the service type set to NodePort.

root@k8s-master:~# kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
service/front-end-svc exposed
root@k8s-master:~# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
front-end-svc   NodePort    10.109.251.245   <none>        80:31315/TCP   8s
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        677d

root@k8s-master:~# curl 10.109.251.245
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
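Note that kubectl expose by itself does not add the named port http to the deployment, which the task also asks for. A sketch of the fragment to add under the container via kubectl edit deployment front-end (that the nginx container is the first entry is an assumption):

```yaml
# Under spec.template.spec.containers[0] of deployment front-end:
ports:
- name: http
  containerPort: 80
  protocol: TCP
```

With the named port in place, the service can reference it as targetPort: http instead of a numeric port.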

Official documentation

Create an Ingress Resource

Requirement:

Create a new Ingress resource named ping in namespace ing-internal that exposes the service hello on port 5678 at the path /hello.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
root@k8s-master:~# kubectl get ingress,svc -n ing-internal
NAME                             CLASS    HOSTS   ADDRESS                           PORTS   AGE
ingress.networking.k8s.io/ping   <none>   *       192.168.123.151,192.168.123.152   80      56s

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/hello   ClusterIP   10.106.182.229   <none>        5678/TCP   660d
root@k8s-master:~# curl 192.168.123.151/hello
hello

Official documentation

Scale a Deployment

Requirement:

Scale deployment guestbook to 6 pods.

root@k8s-master:~# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
front-end                1/1     1            1           660d
guestbook                2/2     2            2           660d
nfs-client-provisioner   1/1     1            1           660d
root@k8s-master:~# kubectl scale deployment --replicas=6 guestbook
deployment.apps/guestbook scaled

Schedule a Pod to a Specific Node

Requirement:

Create a pod named nginx-kusc0041 with image nginx, and schedule it onto a node labeled disk=ssd.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc0041
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
    

Official documentation

Count Nodes in Ready State

Requirement:

Count the nodes in Ready state, excluding nodes with a NoSchedule taint (use describe node to filter out NoSchedule nodes, then write the count to the specified file).

root@k8s-master:~# kubectl get nodes | grep -w Ready
root@k8s-master:~# kubectl describe node k8s-node2 | grep Taints
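The filtering step is plain text processing, so it can be rehearsed without a cluster. In the sketch below the here-doc is mock data standing in for the per-node `kubectl describe node <name> | grep Taints` output; on the exam, the count goes into whatever file the task specifies:

```shell
# Mock Taints lines for the three Ready nodes shown above (stand-in data, not live output)
taints=$(cat <<'EOF'
k8s-master  node-role.kubernetes.io/master:NoSchedule
k8s-node1   <none>
k8s-node2   <none>
EOF
)
# Count the Ready nodes that do NOT carry a NoSchedule taint
count=$(printf '%s\n' "$taints" | grep -vc 'NoSchedule')
echo "$count"   # prints 2
```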

Official documentation

Create a Multi-container Pod

Requirement:

Create a pod named kucc1 that runs two containers: nginx and redis.

apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis   
    

Official documentation

Create a PV

Requirement:

Create a PV named app-config with a capacity of 2Gi and access mode ReadWriteMany. The volume type is hostPath, mapped to the host directory /srv/app-config.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
root@k8s-master:~# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
app-config   2Gi        RWX            Retain           Available                                   61s

Configure a Pod to Use a PersistentVolume for Storage | Kubernetes

Create and Use a PVC

Requirement:

Using the specified StorageClass csi-hostpath-sc, create a PVC named pv-volume with a capacity of 10Mi. Create a pod named web-server that mounts this PVC at the nginx container's /usr/share/nginx/html directory. Then update the PVC size from 10Mi to 70Mi and record the change.

  • Create the PVC: pvc.yml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pv-volume
    spec:
      storageClassName: csi-hostpath-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Mi
    
  • Create the Pod: pvc-pod.yml

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-server
    spec:
      volumes:
        - name: mypv
          persistentVolumeClaim:
            claimName: pv-volume
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: mypv
    
root@k8s-master:~# kubectl apply -f pvc.yml
persistentvolumeclaim/pv-volume created
root@k8s-master:~# kubectl apply -f pvc-pod.yml
pod/web-server created
# Update the PVC size from 10Mi to 70Mi and record the change
root@k8s-master:~# kubectl edit pvc pv-volume --record
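Resizing a PVC with kubectl edit only succeeds when its StorageClass permits expansion; a sketch of what csi-hostpath-sc would need to contain (the provisioner value is an assumption, the exam environment supplies the actual class):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io   # assumed provisioner; use the class's actual one
allowVolumeExpansion: true         # without this, the 10Mi -> 70Mi edit is rejected
```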

Configure a Pod to Use a PersistentVolume for Storage | Kubernetes

Monitor Pod Logs

Requirement:

Monitor the logs of the foobar pod: extract the log lines containing unable-to-access-website and write them to /opt/KUTR00101/foobar.

kubectl logs foobar | grep unable-to-access-website > /opt/KUTR00101/foobar
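The extraction is an ordinary grep, which can be rehearsed on mock data; the here-doc below stands in for real `kubectl logs foobar` output (the log lines are invented for illustration):

```shell
# Mock log lines standing in for `kubectl logs foobar`
logs=$(cat <<'EOF'
GET /index.html 200
unable-to-access-website: backend timeout
GET /health 200
EOF
)
# Keep only the matching lines; the real command redirects them to /opt/KUTR00101/foobar
matched=$(printf '%s\n' "$logs" | grep 'unable-to-access-website')
echo "$matched"   # prints the single matching line
```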

Find the Pod with the Highest CPU Usage

Requirement:

Find the pods with label name=cpu-loader, identify the one with the highest CPU load, and append its name to /opt/KUTR00401/KUTR00401.txt.

root@k8s-master:~# kubectl top pod -l name=cpu-loader -A --sort-by='cpu'
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   kube-apiserver-k8s-master                  42m          328Mi
kube-system   calico-node-4l4ll                          23m          129Mi
kube-system   calico-node-q9h5n                          21m          129Mi
kube-system   calico-node-d8ct6                          18m          129Mi
kube-system   etcd-k8s-master                            11m          58Mi
kube-system   kube-controller-manager-k8s-master         11m          60Mi
kube-system   metrics-server-576fc6cd56-vwfkb            3m           28Mi
kube-system   coredns-7f6cbbb7b8-kzfsd                   2m           13Mi
kube-system   coredns-7f6cbbb7b8-h92ch                   2m           14Mi
kube-system   kube-scheduler-k8s-master                  2m           24Mi
kube-system   calico-kube-controllers-6b9fbfff44-xvqxg   2m           25Mi
kube-system   kube-proxy-9pq8z                           1m           30Mi
kube-system   kube-proxy-nqhpr                           1m           20Mi
kube-system   kube-proxy-pkz69                           1m           17Mi
root@k8s-master:~# echo kube-apiserver-k8s-master  >> /opt/KUTR00401/KUTR00401.txt
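Instead of reading the name off the table by hand, the selection step can be scripted; the here-doc below is mock data standing in for the kubectl top output above:

```shell
# Mock output standing in for `kubectl top pod -l name=cpu-loader -A --sort-by=cpu`
top_out=$(cat <<'EOF'
NAMESPACE     NAME                        CPU(cores)   MEMORY(bytes)
kube-system   kube-apiserver-k8s-master   42m          328Mi
kube-system   calico-node-4l4ll           23m          129Mi
EOF
)
# With --sort-by=cpu the top consumer is the first data row; column 2 holds the pod name
top_pod=$(printf '%s\n' "$top_out" | awk 'NR==2 {print $2}')
echo "$top_pod"   # prints kube-apiserver-k8s-master
```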
