Official documentation: Upgrading kubeadm clusters
The basic upgrade workflow is as follows:
Upgrade the primary control-plane node
Upgrade the other control-plane nodes
Upgrade the worker nodes
First, pick one control-plane node to upgrade ahead of the others. That node must have the /etc/kubernetes/admin.conf file.
Determine which version to upgrade to
Use the operating system's package manager to find the latest patch release in the Kubernetes 1.22 series
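A minimal sketch of pinning the target package build (assumes a RHEL-family host; the commented-out yum commands need root and a configured Kubernetes repo, so only the version variable is executed here):

```shell
# Target patch release used in this walkthrough (an example value, not universal).
KUBE_VERSION="1.22.2-0"

# To list every kubeadm build the repo offers, and then pin the chosen one, run:
#   yum list --showduplicates kubeadm
#   yum install -y "kubeadm-${KUBE_VERSION}"

echo "target package: kubeadm-${KUBE_VERSION}"
```

Pinning the exact build (kubeadm-1.22.2-0 rather than just kubeadm) prevents yum from pulling in a newer minor version than the cluster supports.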
[root@k8s1 ~]# yum install -y kubeadm-1.22.2-0
[root@k8s1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.22.1
[upgrade/versions] kubeadm version: v1.22.2
I0420 18:17:29.418051 14782 version.go:255] remote version is much newer: v1.23.5; falling back to: stable-1.22
[upgrade/versions] Target version: v1.22.8
[upgrade/versions] Latest version in the v1.22 series: v1.22.8
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.22.1   v1.22.8
Upgrade to the latest version in the v1.22 series:
COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.22.1   v1.22.8
kube-controller-manager   v1.22.1   v1.22.8
kube-scheduler            v1.22.1   v1.22.8
kube-proxy                v1.22.1   v1.22.8
CoreDNS                   v1.8.4    v1.8.4
etcd                      3.5.0-0   3.5.0-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.22.8
Note: Before you can perform this upgrade, you have to update kubeadm to v1.22.8.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
The plan recommends v1.22.8, the latest patch in the series, but here we apply v1.22.2 to match the kubeadm version just installed:
[root@k8s1 ~]# kubeadm upgrade apply v1.22.2
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.2"
[upgrade/versions] Cluster version: v1.22.1
[upgrade/versions] kubeadm version: v1.22.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.2"...
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-scheduler-k8s1 hash: 059f507ac11831e865a5bbde108257cd
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s1 hash: 0ec5717bc681762beb1d11ba2da3aa34
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests011680339"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: ede0fd45afb180ebfdf550e00c4205ff
Static pod: kube-apiserver-k8s1 hash: bffe53f30eb1fe1d65832eb152b1d1aa
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-controller-manager-k8s1 hash: 38ce0f3c67dad9543cd2b77fe2535391
Static pod: kube-controller-manager-k8s1 hash: ccc9a7c02cb35434c2475dc0600f58f4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-04-20-18-19-39/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s1 hash: 059f507ac11831e865a5bbde108257cd
Static pod: kube-scheduler-k8s1 hash: 0f92a29a90afeff2dcccce64190f29e6
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.2". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
On the other control-plane nodes, run kubeadm upgrade node instead of kubeadm upgrade apply. There is also no need to run kubeadm upgrade plan on those nodes.
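Condensed, the steps for each additional control-plane node look like the following sketch (the heredoc only prints a checklist, so running it touches nothing; versions and flags follow this walkthrough, and <node-name> is a placeholder):

```shell
# Checklist for every remaining control-plane node: kubeadm upgrade node
# replaces kubeadm upgrade apply, and kubeadm upgrade plan is not needed.
CHECKLIST=$(cat <<'EOF'
yum install -y kubeadm-1.22.2-0                   # upgrade the kubeadm package
kubeadm upgrade node                              # instead of "kubeadm upgrade apply"
kubectl drain <node-name> --ignore-daemonsets     # from a node with admin.conf
yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
systemctl daemon-reload
systemctl restart kubelet.service
kubectl uncordon <node-name>                      # from a node with admin.conf
EOF
)
echo "$CHECKLIST"
```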
The kubelet on this control-plane node still needs upgrading. At this point all nodes are in the Ready state:
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE     VERSION
k8s1   Ready    control-plane,master   23m     v1.22.1
k8s2   Ready    <none>                 10m     v1.22.1
k8s3   Ready    <none>                 9m38s   v1.22.1
[root@k8s1 ~]# kubectl drain k8s1 --ignore-daemonsets
node/k8s1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-jjnbp, kube-system/kube-proxy-r2z2t
evicting pod kube-system/coredns-7f6cbbb7b8-z42dt
evicting pod kube-system/coredns-7f6cbbb7b8-44762
pod/coredns-7f6cbbb7b8-44762 evicted
pod/coredns-7f6cbbb7b8-z42dt evicted
node/k8s1 evicted
[root@k8s1 ~]# kubectl get node
NAME   STATUS                     ROLES                  AGE   VERSION
k8s1   Ready,SchedulingDisabled   control-plane,master   25m   v1.22.1
k8s2   Ready                      <none>                 12m   v1.22.1
k8s3   Ready                      <none>                 11m   v1.22.1
[root@k8s1 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
[root@k8s1 ~]# systemctl daemon-reload
[root@k8s1 ~]# systemctl restart kubelet.service
[root@k8s1 ~]# kubectl get node
NAME   STATUS                     ROLES                  AGE   VERSION
k8s1   Ready,SchedulingDisabled   control-plane,master   27m   v1.22.1
k8s2   Ready                      <none>                 14m   v1.22.1
k8s3   Ready                      <none>                 13m   v1.22.1
[root@k8s1 ~]# kubectl uncordon k8s1
node/k8s1 uncordoned
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   27m   v1.22.2
k8s2   Ready    <none>                 14m   v1.22.1
k8s3   Ready    <none>                 13m   v1.22.1
Worker nodes should be upgraded one at a time, or a few at a time, so that the minimum capacity needed to run your workloads is always preserved.
Since this cluster has only two worker nodes, k8s2 and k8s3 must not be drained at the same time; otherwise the evicted Pods would have nowhere to run.
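The per-worker procedure that follows can be condensed into a loop that handles one node at a time (node names and the pinned version come from this walkthrough; the loop only builds and prints the command list, so it does not touch a live cluster):

```shell
# Build the ordered upgrade plan, one worker at a time. The drain/uncordon
# steps run on the control-plane node (k8s1); the rest run on the worker.
PLAN=""
for NODE in k8s2 k8s3; do
  PLAN="${PLAN}
yum install -y kubeadm-1.22.2-0                    # on ${NODE}
kubeadm upgrade node                               # on ${NODE}
kubectl drain ${NODE} --ignore-daemonsets          # on k8s1
yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0   # on ${NODE}
systemctl daemon-reload                            # on ${NODE}
systemctl restart kubelet.service                  # on ${NODE}
kubectl uncordon ${NODE}                           # on k8s1
"
done
printf '%s\n' "$PLAN"
```

Uncordoning each node before draining the next is what keeps at least one worker schedulable at all times.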
[root@k8s2 ~]# yum install -y kubeadm-1.22.2-0
[root@k8s2 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s1 ~]# kubectl drain k8s2 --ignore-daemonsets
node/k8s2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-q2lkh, kube-system/kube-proxy-9fwlc
evicting pod kube-system/coredns-7f6cbbb7b8-8g8v5
pod/coredns-7f6cbbb7b8-8g8v5 evicted
node/k8s2 evicted
[root@k8s2 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
[root@k8s2 ~]# systemctl daemon-reload
[root@k8s2 ~]# systemctl restart kubelet.service
[root@k8s1 ~]# kubectl uncordon k8s2
node/k8s2 uncordoned
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   34m   v1.22.2
k8s2   Ready    <none>                 21m   v1.22.2
k8s3   Ready    <none>                 20m   v1.22.1
[root@k8s3 ~]# yum install -y kubeadm-1.22.2-0
[root@k8s3 ~]# kubeadm upgrade node
[root@k8s1 ~]# kubectl drain k8s3 --ignore-daemonsets
node/k8s3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-d9r55, kube-system/kube-proxy-vwp45
evicting pod kube-system/coredns-7f6cbbb7b8-n7m5m
evicting pod kube-system/coredns-7f6cbbb7b8-mx7qr
pod/coredns-7f6cbbb7b8-mx7qr evicted
pod/coredns-7f6cbbb7b8-n7m5m evicted
node/k8s3 evicted
[root@k8s3 ~]# yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0
[root@k8s3 ~]# systemctl daemon-reload
[root@k8s3 ~]# systemctl restart kubelet.service
[root@k8s1 ~]# kubectl uncordon k8s3
node/k8s3 uncordoned
[root@k8s1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
k8s1   Ready    control-plane,master   36m   v1.22.2
k8s2   Ready    <none>                 23m   v1.22.2
k8s3   Ready    <none>                 22m   v1.22.2
On the exam, the task is mainly to upgrade the control-plane node; the worker nodes do not need to be touched.
Never change anything the task does not ask for!