k8s CKA Exam Study Notes

By default k3s does not use Docker (it uses containerd), so images built with Docker have to be imported into k3s before they can be used.

[root@worker1 php]# k3s crictl images
IMAGE                       TAG       IMAGE ID        SIZE
docker.io/library/busybox   latest    9211bbaa0dbd6   2.23MB
docker.io/library/nginx     1.7.9     35d28df486f61   39.9MB
docker.io/library/php       0.1       274213c9683f4   1.19GB

[root@worker1 php]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED             SIZE
php          0.1       274213c9683f   About an hour ago   1.17GB
busybox      latest    beae173ccac6   2 years ago         1.24MB
centos       latest    5d0da3dc9764   2 years ago         231MB
nginx        1.7.9     84581e99d807   9 years ago         91.7MB

[root@worker1 php]# docker save php:0.1 | k3s ctr images import -
unpacking docker.io/library/php:0.1 (sha256:def9ebfbff8750b2cb858e95c7e13a604d038a19f37d851b2e2ce282cb77e778)...done

Setting up tab completion for kubectl
[root@worker1 system]# yum -y install bash-completion
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Package 1:bash-completion-2.1-8.el7.noarch already installed and latest version
Nothing to do
[root@worker1 system]# source /usr/share/bash-completion/bash_completion
[root@worker1 system]# source <(kubectl completion bash)
[root@worker1 system]# echo "source <(kubectl completion bash)" >> ~/.bashrc

I. Namespaces

Pods may be spread across different worker nodes, but as administrators we only need to operate on pods within a namespace; we do not need to care which worker a given pod runs on.

1. List the namespaces

[root@worker1 ~]# kubectl get ns
NAME              STATUS   AGE
kube-system       Active   3d16h
kube-public       Active   3d16h
kube-node-lease   Active   3d16h
default           Active   3d16h

2. Create a new namespace ns1

[root@worker1 ~]# kubectl create ns ns1
namespace/ns1 created

3. Switch the current context to namespace ns1

[root@worker1 ~]# kubectl config set-context --current --namespace=ns1
Context "default" modified.

4. Delete namespace ns1

[root@worker1 ~]# kubectl delete namespaces ns1
namespace "ns1" deleted

II. Pods

A pod is the smallest schedulable unit in Kubernetes.

  1. View pods
[root@worker1 system]# kubectl get pods -n kube-system 
NAME                                      READY   STATUS      RESTARTS   AGE
coredns-6799fbcd5-qlnln                   1/1     Running     0          69m
local-path-provisioner-84db5d44d9-wfspp   1/1     Running     0          69m
helm-install-traefik-crd-7pwlz            0/1     Completed   0          69m
helm-install-traefik-vbg4z                0/1     Completed   1          69m
metrics-server-67c658944b-rvqhs           1/1     Running     0          69m
svclb-traefik-e11f2c27-t9tjq              2/2     Running     0          64m
traefik-f4564c4f4-hg4bb                   1/1     Running     0          64m
svclb-traefik-e11f2c27-f7xkr              2/2     Running     0          60m

[root@worker1 system]# kubectl get pods --all-namespaces     #or use -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-6799fbcd5-qlnln                   1/1     Running     0          70m
kube-system   local-path-provisioner-84db5d44d9-wfspp   1/1     Running     0          70m

[root@worker1 ~]# kubectl get pods -A -o wide
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE     IP          NODE      NOMINATED NODE   READINESS GATES
kube-system   local-path-provisioner-84db5d44d9-7d2gz   1/1     Running     0          7h43m   10.42.0.4   worker1   <none>           <none>
kube-system   coredns-6799fbcd5-4wmgt                   1/1     Running     0          7h43m   10.42.0.5   worker1   <none>           <none>
kube-system   helm-install-traefik-crd-l98f6            0/1     Completed   0          7h43m   10.42.0.6   worker1   <none>           <none>
kube-system   helm-install-traefik-n6xwm                0/1     Completed   1          7h43m   10.42.0.3   worker1   <none>           <none>



  2. Create a pod (the image must already be pulled on the node)
[root@worker1 ~]# kubectl run nginx1 --image=nginx:1.7.9 --port=80 --env=bianliangming=zhi 
pod/nginx1 created

  3. Delete a pod
[root@worker1 ~]# kubectl delete pod nginx1 --force
#--force is optional and speeds up the deletion
  4. Generate a YAML file and create the pod from it, as shown below
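A minimal sketch of that workflow (the file name pod1.yaml is only an illustrative choice):

[root@worker1 ~]# kubectl run nginx1 --image=nginx:1.7.9 --dry-run=client -o yaml > pod1.yaml
[root@worker1 ~]# vim pod1.yaml        #adjust the generated spec as needed
[root@worker1 ~]# kubectl apply -f pod1.yaml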

Basic pod operations

For example: running a command inside the pod, viewing pod attributes, and viewing the pod's logs.

  1. Run a command inside the container
[root@worker1 ~]# kubectl exec nginx1 -- ls /usr/share/nginx/html    #the pod name must be followed by --
50x.html
index.html
  2. Copy a file into the pod
[root@worker1 ~]# kubectl cp abc.txt nginx1:/usr/share/nginx/html
  3. Enter the pod and get a bash shell
[root@worker1 ~]# kubectl exec -it nginx1 -- bash
root@nginx1:/# 

If the pod has multiple containers, exec enters the first container by default; to enter another container, specify its name with -c.

[root@worker1 ~]# kubectl exec -it nginx1 -c c2 -- bash
  4. Use describe to view a pod's detailed attributes
  5. Use logs to view a pod's output, as shown below
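A quick sketch of both commands, reusing the nginx1 pod and the container name c2 from the examples above:

[root@worker1 ~]# kubectl describe pod nginx1        #the Events section at the bottom is the most useful part for troubleshooting
[root@worker1 ~]# kubectl logs nginx1                #stdout/stderr of the (first) container
[root@worker1 ~]# kubectl logs nginx1 -c c2          #pick a specific container in a multi-container pod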

Pod lifecycle

Kubernetes deletes pods with a grace period, 30 seconds by default, to give the pod time to finish the work it is handling.
Once the delete command is issued, the pod's status is marked Terminating. The grace period can be changed with the terminationGracePeriodSeconds field.

However, a pod running nginx behaves differently, because nginx handles signals differently from the way Kubernetes expects. When the delete signal is sent to an nginx pod, the nginx process performs a fast shutdown and the pod is removed almost immediately.

  1. Pod hooks
    Two hooks are available during the pod's lifecycle:
    (1) postStart: runs when the pod is created, alongside the pod's main process, with no ordering guarantee between the two
    (2) preStop: runs when the pod is deleted; the preStop program runs first, and only then is the pod shut down
    preStop must also complete within the pod's grace period, otherwise the pod is forcibly removed; see the sketch below
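
A minimal sketch of a pod using both hooks, with the grace period set explicitly (the pod name, hook commands and the 10-second value are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: hook-demo
spec:
  terminationGracePeriodSeconds: 10      # pod is force-killed 10s after deletion starts
  containers:
  - name: web
    image: nginx
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:                         # runs alongside the main process, no ordering guarantee
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]
      preStop:                           # runs before shutdown; must finish within the grace period
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]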

Init containers

If containers A and B are set as init containers of container C, then C will not run unless both A and B complete successfully.
Init containers are declared under initContainers, as in the sketch below.
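
A minimal sketch, assuming two init containers (initc1 and initc2 are illustrative names) that must both succeed before the main container starts:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:                        # run sequentially; each must exit successfully
  - name: initc1
    image: busybox
    command: ["sh", "-c", "echo step A && sleep 2"]
  - name: initc2
    image: busybox
    command: ["sh", "-c", "echo step B && sleep 2"]
  containers:                            # starts only after all init containers have completed
  - name: main
    image: nginx
    imagePullPolicy: IfNotPresent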

Static Pods

A static pod is not created or started by the master; as soon as the kubelet starts on a node, it creates the pod automatically from a local manifest.
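
A rough sketch of the idea: place a pod manifest in the kubelet's static pod directory and the kubelet creates the pod on its own. The path /etc/kubernetes/manifests below is the common kubeadm default and is only an assumption here; on k3s the directory has to be configured through the kubelet's staticPodPath (or --pod-manifest-path) setting.

[root@worker1 ~]# cat /etc/kubernetes/manifests/static-web.yaml        #kubelet watches this directory
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    imagePullPolicy: IfNotPresent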

Manually choosing where a pod runs

Setting labels on nodes

Set labels on the nodes, then constrain pods to run only on nodes that carry a particular label.

View node labels

[root@worker1 system]# kubectl get nodes --show-labels 
NAME      STATUS   ROLES                  AGE   VERSION        LABELS
worker1   Ready    control-plane,master   23h   v1.28.5+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
worker2   Ready    <none>                 20h   v1.28.5+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker2,kubernetes.io/os=linux,node.kubernetes.io/instance-type=k3s

Set a label (key=value) on node worker2

[root@worker1 system]# kubectl label node worker2 biaoq=work2
node/worker2 labeled
[root@worker1 system]# kubectl get node worker2 --show-labels 
NAME      STATUS   ROLES    AGE   VERSION        LABELS
worker2   Ready    <none>   21h   v1.28.5+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,biaoq=work2,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker2,kubernetes.io/os=linux,node.kubernetes.io/instance-type=k3s

Remove the node label (append - to the key)

[root@worker1 system]# kubectl label node worker2 biaoq-
node/worker2 unlabeled

Set a label on all nodes

[root@worker1 system]# kubectl label node --all key=value
node/worker1 labeled
node/worker2 labeled

There is a special label of the form node-role.kubernetes.io/<name>. It controls what is shown in the ROLES column: the master node shows control-plane,master, while nodes without such a label show <none>.
Whether this key has a value does not matter; if you do not want to set one, simply use "" as the value part.

[root@worker1 system]# kubectl label node worker1 node-role.kubernetes.io/worker1=""
node/worker1 labeled
[root@worker1 system]# kubectl label node worker2 node-role.kubernetes.io/worker2=""
node/worker2 labeled
[root@worker1 system]# kubectl get nodes
NAME      STATUS   ROLES                          AGE   VERSION
worker1   Ready    control-plane,master,worker1   23h   v1.28.5+k3s1
worker2   Ready    worker2                        21h   v1.28.5+k3s1

Creating a pod that runs on a specific node

Using nodeSelector in the pod spec, you can make the pod run only on nodes that carry a particular label.

[root@worker1 ~]# cat podlable.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: web1
  labels:
    role: myrole
spec:
  nodeSelector: 
    biaoq: work2
  containers:
    - name: web
      image: nginx
      imagePullPolicy: IfNotPresent
[root@worker1 ~]# kubectl apply -f podlable.yaml 
pod/web1 created

Setting Annotations

Nodes, pods, and deployments all have an additional attribute, Annotations, which can be thought of as free-form notes attached to the object.

View the Annotations attribute

[root@worker1 ~]# kubectl describe node worker2 | grep Annotations
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"26:3a:46:88:c3:c4"}

Set an Annotations entry

[root@worker1 ~]# kubectl annotate node worker2 abc=123
node/worker2 annotated
[root@worker1 ~]# kubectl describe node worker2 | grep Annotations
Annotations:        abc: 123

Remove an Annotations entry

[root@worker1 ~]# kubectl annotate node worker2 abc-
node/worker2 annotated
[root@worker1 ~]# kubectl describe node worker2 | grep Annotations
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"26:3a:46:88:c3:c4"}

Node cordon and drain

If a node needs maintenance and you do not want any more pods assigned to it, you can cordon or drain the node. The node is then marked SchedulingDisabled, and newly created pods will no longer be scheduled onto it.

Create a deployment to test with

[root@worker1 ~]# kubectl create deployment nginx --image=nginx:1.7.9 --dry-run=client -o yaml > d1.yaml
[root@worker1 ~]# cat d1.yaml        #replicas changed to 3
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx
        resources: {}
status: {}
#create the deployment
[root@worker1 ~]# kubectl apply -f d1.yaml 
deployment.apps/nginx created
[root@worker1 ~]# kubectl get pods -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx-54bbf55b54-tkc4k   1/1     Running   0          86s   10.42.1.4    worker2   <none>           <none>
nginx-54bbf55b54-2g68m   1/1     Running   0          86s   10.42.0.10   worker1   <none>           <none>
nginx-54bbf55b54-mjlvc   1/1     Running   0          86s   10.42.1.5    worker2   <none>           <none>

Now mark worker2 as unschedulable with cordon

[root@worker1 ~]# kubectl cordon worker2
node/worker2 cordoned
[root@worker1 ~]# kubectl get nodes
NAME      STATUS                     ROLES                          AGE    VERSION
worker1   Ready                      control-plane,master,worker1   2d9h   v1.28.5+k3s1
worker2   Ready,SchedulingDisabled   worker2                        2d7h   v1.28.5+k3s1
#worker2 now shows the status SchedulingDisabled

#scale the deployment up to 6 replicas
[root@worker1 ~]# kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled
[root@worker1 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx-54bbf55b54-tkc4k   1/1     Running   0          25m   10.42.1.4    worker2   <none>           <none>
nginx-54bbf55b54-2g68m   1/1     Running   0          25m   10.42.0.10   worker1   <none>           <none>
nginx-54bbf55b54-mjlvc   1/1     Running   0          25m   10.42.1.5    worker2   <none>           <none>
nginx-54bbf55b54-flqs8   1/1     Running   0          10s   10.42.0.13   worker1   <none>           <none>
nginx-54bbf55b54-h8694   1/1     Running   0          10s   10.42.0.12   worker1   <none>           <none>
nginx-54bbf55b54-mlfh5   1/1     Running   0          10s   10.42.0.11   worker1   <none>           <none>
#the new pods are scheduled only on worker1, but the pods that were already on worker2 stay there

#to make worker2 schedulable again, uncordon it
[root@worker1 ~]# kubectl uncordon worker2 
node/worker2 uncordoned
#scale the deployment back down to 0 replicas
[root@worker1 ~]# kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled

Node drain:
drain does the same thing as cordon, but additionally evicts the pods already running on the node.
Create 4 replicas

[root@worker1 ~]# kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
[root@worker1 ~]# kubectl get pods -o wide --no-headers 
nginx-54bbf55b54-jvz72   1/1   Running   0     24s   10.42.1.7    worker2   <none>   <none>
nginx-54bbf55b54-4pbh7   1/1   Running   0     24s   10.42.1.6    worker2   <none>   <none>
nginx-54bbf55b54-28qsf   1/1   Running   0     24s   10.42.0.14   worker1   <none>   <none>
nginx-54bbf55b54-d4jbh   1/1   Running   0     24s   10.42.0.15   worker1   <none>   <none>

Drain worker2. Because worker2 runs some pods managed by a DaemonSet, the drain command fails with an error, but the node is still marked unschedulable at this point.

[root@worker1 ~]# kubectl drain worker2
node/worker2 cordoned
error: unable to drain node "worker2" due to error:cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/svclb-traefik-ed552cc8-rmtzc, continuing command...
There are pending nodes to be drained:
 worker2
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/svclb-traefik-ed552cc8-rmtzc

To undo the drain, uncordon is still used. Then drain worker2 again, this time with --ignore-daemonsets.

[root@worker1 ~]# kubectl uncordon worker2
node/worker2 uncordoned

[root@worker1 ~]# kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data 
node/worker2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/svclb-traefik-ed552cc8-rmtzc
evicting pod ns1/nginx-54bbf55b54-4pbh7
evicting pod ns1/nginx-54bbf55b54-jvz72
pod/nginx-54bbf55b54-4pbh7 evicted
pod/nginx-54bbf55b54-jvz72 evicted
node/worker2 drained

[root@worker1 ~]# kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
nginx-54bbf55b54-28qsf   1/1     Running   0          8m40s   10.42.0.14   worker1   <none>           <none>
nginx-54bbf55b54-d4jbh   1/1     Running   0          8m40s   10.42.0.15   worker1   <none>           <none>
nginx-54bbf55b54-h2c8z   1/1     Running   0          41s     10.42.0.17   worker1   <none>           <none>
nginx-54bbf55b54-k7g5r   1/1     Running   0          41s     10.42.0.16   worker1   <none>           <none>

Now all the pods that were on worker2 have moved to worker1 (as newly created pods).

Node taints and pod tolerations
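
A minimal sketch of how the two fit together (the key/value pair key1=value1 and the NoSchedule effect are illustrative assumptions): a taint on a node repels every pod that does not declare a matching toleration.

[root@worker1 ~]# kubectl taint node worker2 key1=value1:NoSchedule        #new pods without a matching toleration avoid worker2
[root@worker1 ~]# kubectl taint node worker2 key1-                         #remove the taint, same trailing-dash syntax as labels

A pod tolerates the taint by adding this to its spec:

spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"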

III. Storage Management

emptyDir, hostPath, NFS-backed storage, persistent storage, dynamic volume provisioning

Files inside a container are stored on disk only temporarily, which causes problems for important applications running in containers.
• Problem 1: when a container is upgraded or crashes, the kubelet recreates the container and the files inside it are lost.
• Problem 2: multiple containers running in one Pod may need to share files.
The Kubernetes Volume abstraction solves both of these problems.

Creating and mounting an emptyDir volume

An emptyDir volume is temporary storage on the node where the Pod runs. Its lifetime is bound to the Pod: if the Pod is deleted, the volume is deleted as well. Because a container crash does not delete the Pod, data in an emptyDir volume is safe across container crashes.
Use case: sharing data between containers in the same Pod.
It is commonly used to pass data between the containers of a single pod.

[root@worker1 ~]# mkdir volume
[root@worker1 ~]# cd volume/
[root@worker1 volume]# vim emp.yaml
[root@worker1 volume]# cat emp.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aa: aa
spec:
  volumes:
  - name: volum
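
For reference, a minimal sketch of a complete pod that shares one emptyDir volume between two containers (the container names, image, and mount path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  volumes:
  - name: volum
    emptyDir: {}                         # temporary directory on the node, removed when the pod is deleted
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: volum
      mountPath: /data                   # both containers mount the same volume and see the same files
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: volum
      mountPath: /data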
