Q1: Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Requirements: create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types: Deployment, StatefulSet, DaemonSet; create a new ServiceAccount named cicd-token in the existing namespace app-team1; limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
1. Create the app-team1 namespace (lab setup; the exam assumes it already exists)
[root@k8s-master1 ~]# kubectl create namespace app-team1
namespace/app-team1 created
2. Create a new ClusterRole named deployment-clusterrole that only allows creating Deployment, StatefulSet, and DaemonSet resources
[root@k8s-master1 CKA]# kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployments,StatefulSets,DaemonSets
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
3. Create a new ServiceAccount named cicd-token in the existing namespace app-team1; then, limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Create the ServiceAccount
[root@k8s-master1 CKA]# kubectl create serviceaccount cicd-token -n app-team1
serviceaccount/cicd-token created
Create the RoleBinding (a RoleBinding may reference a ClusterRole, which scopes its permissions to the namespace)
[root@k8s-master1 CKA]# kubectl create rolebinding cicd-token-deployment-clusterrole --role=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
rolebinding.rbac.authorization.k8s.io/cicd-token-deployment-clusterrole created
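The binding can be verified with kubectl auth can-i, impersonating the ServiceAccount:

```shell
# Should print "yes": the ServiceAccount may create Deployments in app-team1
kubectl auth can-i create deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1

# Should print "no": the verb list does not include delete
kubectl auth can-i delete deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1
```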
Q2: Set the node named k8s-master3 to unavailable, and reschedule all pods running on that node.
Check the nodes
[root@k8s-master1 CKA]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master3 Ready <none> 76d v1.22.1
k8s-node1 Ready <none> 94d v1.22.1
Drain the node (marks it unschedulable and evicts its pods). A bare drain only cordons the node and then fails on DaemonSet-managed pods, so it is re-run with --ignore-daemonsets and --delete-emptydir-data:
[root@k8s-master1 CKA]# kubectl drain k8s-master3
node/k8s-master3 cordoned
[root@k8s-master1 CKA]# kubectl drain k8s-master3 --ignore-daemonsets --delete-emptydir-data
node/k8s-master3 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-28x78, prometheus/node-exporter-l9fq9
evicting pod app/ingress-nginx-controller-74974c55bd-6ljtj
pod/ingress-nginx-controller-74974c55bd-6ljtj evicted
Verify: only DaemonSet-managed pods remain on the node
[root@k8s-master1 CKA]# kubectl get pods -A -o wide |grep k8s-master3
kube-system calico-node-28x78 1/1 Running 11 (23m ago) 8d 192.168.21.122 k8s-master3 <none> <none>
prometheus node-exporter-l9fq9 1/1 Running 6 (23m ago) 3d22h 192.168.21.122 k8s-master3 <none> <none>
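Not required by the question, but once maintenance is finished the node can be made schedulable again:

```shell
# Clears the cordon set by kubectl drain
kubectl uncordon k8s-master3
```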
Q3: Take a snapshot backup of etcd, and restore it.
1. Back up a snapshot
2. Rename the data directory to simulate data loss
3. Restore the data
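These notes list the steps without commands; a sketch with etcdctl follows. The certificate paths, snapshot path, and data directory below are assumptions for a kubeadm-installed cluster and must be adapted to the actual environment:

```shell
# 1. Back up a snapshot (endpoint and cert paths assumed for a kubeadm cluster)
ETCDCTL_API=3 etcdctl snapshot save /var/lib/backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# 2. Simulate data loss by moving the data directory aside
mv /var/lib/etcd /var/lib/etcd.bak

# 3. Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd
```

On a kubeadm cluster, if the restore targets a different --data-dir, the hostPath volume in the static-pod manifest /etc/kubernetes/manifests/etcd.yaml must be updated to match.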
Q4: Schedule a pod as follows:
Name: nginx-kusc00401
image: nginx
Node selector: disk=spinning
Set the label disk=spinning on k8s-node1
[root@master1 CKA]# kubectl label nodes k8s-node1 disk=spinning
node/k8s-node1 labeled
[root@master1 CKA]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-node1 Ready <none> 102d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=spinning,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
Create a pod named nginx-kusc00401 with image=nginx and node selector disk=spinning
[root@master1 CKA]# cat nginx-kusc00401.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  namespace: app
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
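The apply step is not shown in the transcript; the manifest above would be applied with:

```shell
kubectl apply -f nginx-kusc00401.yaml
```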
[root@master1 CKA]# kubectl get pods -n app -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-kusc00401 1/1 Running 0 54s 10.10.36.96 k8s-node1 <none> <none>
Q5: Create a pod named kucc1 that runs one app container per image below (the exam may list 1-4 images); container names and images:
nginx
+ redis
+ memcached
+ consul
[root@master1 CKA]# cat kucc1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
  namespace: app
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
[root@master1 CKA]# kubectl apply -f kucc1.yaml
pod/kucc1 created
[root@master1 CKA]# kubectl get pods -n app
NAME READY STATUS RESTARTS AGE
kucc1 4/4 Running 0 87s
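The container names can also be checked directly with a jsonpath query:

```shell
# Lists the container names of the kucc1 pod, space-separated
kubectl get pod kucc1 -n app -o jsonpath='{.spec.containers[*].name}'
```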
Q6: Create a persistent volume named app-data with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-data.
[root@master1 CKA]# cat app-data-PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-data"
    type: DirectoryOrCreate
[root@master1 CKA]# kubectl apply -f app-data-PV.yaml
persistentvolume/app-data created
[root@master1 CKA]# kubectl get pv app-data
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
app-data   1Gi        RWX            Retain           Available                                   9s
Note: the question asks for 1Gi, not 10G, and a PersistentVolume is cluster-scoped, so it takes no namespace.
Q7:
Create a new PersistentVolumeClaim:
Name: pv-volume
StorageClass: managed-nfs-storage
Capacity: 10Mi
Create a new pod that mounts the PersistentVolumeClaim as a volume:
Name: web-server
image: nginx
Mount path: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access to the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim's capacity to 70Mi and record the change.
Check the PVC binds (output captured after creating the PVC below)
[root@master1 CKA]# kubectl get pvc -n app
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-zookeeper-0 Bound pvc-e7d3d926-4fa4-4044-bf2f-30df06eef4aa 8Gi RWO nfs-client 60d
pv-volume Bound pvc-9fe1e7d3-f964-4dea-afe6-7a904fceb31d 10Mi RWO managed-nfs-storage 6s
Create the PVC
[root@master1 CKA]# cat pv-volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
  namespace: app
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 10Mi
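The apply step is not shown in the transcript; the PVC manifest would be applied with:

```shell
kubectl apply -f pv-volume.yaml
```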
Create the pod (the verification output appears first in this transcript)
[root@master1 CKA]# kubectl get pods -n app -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kucc1 4/4 Running 0 60m 10.10.36.97 k8s-node1 <none> <none>
nginx-kusc00401 1/1 Running 0 67m 10.10.36.96 k8s-node1 <none> <none>
web-server 1/1 Running 0 11s 10.10.36.95 k8s-node1 <none> <none>
zookeeper-0 1/1 Running 4 (7d23h ago) 60d 10.10.36.82 k8s-node1 <none> <none>
[root@master1 CKA]# cat pod_pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: app
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
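The apply step is not shown in the transcript; the pod manifest would be applied with:

```shell
kubectl apply -f pod_pvc.yaml
```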
Allow volume expansion on the StorageClass (required before the PVC can be resized)
[root@master1 CKA]# kubectl edit sc managed-nfs-storage
# add this one line
allowVolumeExpansion: true
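The final requirement, expanding the PVC to 70Mi and recording the change, is not shown above; a sketch with kubectl patch follows (--record is deprecated but still accepted, and satisfies the "record this change" requirement):

```shell
# Expand the claim to 70Mi and record the command in the annotation
kubectl patch pvc pv-volume -n app --record \
  -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
```

Alternatively, kubectl edit pvc pv-volume -n app --record and change spec.resources.requests.storage to 70Mi.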