Kubernetes version compatibility
Before upgrading, you need to understand how the versions relate to one another:
- Kubernetes versions are numbered X.Y.Z, where X is the major version, Y the minor version, and Z the patch version; for example, 1.16.0.
- The versions of the other core components (kube-controller-manager, kube-scheduler, kubelet) must not be higher than that of kube-apiserver.
- Those components may lag kube-apiserver by at most one minor version: if kube-apiserver is 1.16.0, the other components may be 1.16.x or 1.15.x.
- In an HA cluster, the kube-apiserver instances may differ by at most one minor version, e.g. 1.16 and 1.15.
- Ideally, all components run exactly the same version as kube-apiserver.
- Therefore, when upgrading a cluster, kube-apiserver is the first core component to upgrade, and it may only be moved up by one minor version at a time.
- kubectl may be at most one minor version higher or lower than kube-apiserver.
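The skew rules above can be sketched as a small shell check. This is a minimal sketch that assumes all versions share the same major version; the helper names are hypothetical and not part of any Kubernetes tooling.

```shell
# Extract the minor version from an X.Y.Z version string.
minor() { echo "$1" | cut -d. -f2; }

# skew_ok <kube-apiserver version> <component version>
# A component may match the apiserver's minor version or be one minor below it.
skew_ok() {
  local api comp
  api=$(minor "$1")
  comp=$(minor "$2")
  [ "$comp" -le "$api" ] && [ $((api - comp)) -le 1 ]
}

skew_ok 1.16.0 1.15.3 && echo "within skew"   # one minor below: allowed
skew_ok 1.16.0 1.14.0 || echo "outside skew"  # two minors below: rejected
```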
High-level upgrade flow
- Upgrade the primary control-plane node.
- Upgrade the other control-plane nodes.
- Upgrade the worker nodes.
Detailed upgrade steps
- Upgrade kubeadm first.
- Upgrade the master components on the first control-plane node.
- Upgrade kubelet and kubectl on the first control-plane node.
- Upgrade the other control-plane nodes.
- Upgrade the worker nodes.
- Verify the cluster.
Upgrade precautions
- Determine the version of the kubeadm cluster before upgrading.
- kubeadm upgrade does not touch workloads, only Kubernetes-internal components, but backing up the etcd database first is best practice.
- After the upgrade, all containers are restarted, because their hashes have changed.
- Because of version compatibility, you can only upgrade from one minor version to the next; skipping minor versions is not supported.
- The cluster control plane should use static Pods and etcd Pods, or an external etcd.
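As noted above, backing up etcd before upgrading is best practice. Below is a minimal sketch using etcdctl v3, assuming a stacked-etcd kubeadm cluster with the default certificate paths and etcd listening locally on 2379; the guard makes it a no-op on machines without etcdctl, and the backup path is illustrative.

```shell
# Snapshot etcd before upgrading (cert paths assume kubeadm defaults).
BACKUP="/var/backups/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"

if command -v etcdctl >/dev/null 2>&1; then
  ETCDCTL_API=3 etcdctl snapshot save "$BACKUP" \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key
else
  echo "etcdctl not found; would have saved a snapshot to $BACKUP"
fi
```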
The kubeadm upgrade command in detail
Query the command-line help:
$ kubeadm upgrade -h
Upgrade your cluster smoothly to a newer version with this command.
Usage:
kubeadm upgrade [flags]
kubeadm upgrade [command]
Available Commands:
apply Upgrade your Kubernetes cluster to the specified version.
diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
node Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
plan Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.
Subcommand overview:
- apply: upgrade the Kubernetes cluster to the specified version.
- diff: show the differences between the static Pod manifests that would be applied and those currently running.
- node: upgrade a node in the cluster; as of v1.16 this only upgrades the kubelet configuration file (/var/lib/kubelet/config.yaml), not kubelet itself.
- plan: check whether the current cluster can be upgraded, and to which versions.
The node subcommand in turn supports the following subcommands and flags:
$ kubeadm upgrade node -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
Usage:
kubeadm upgrade node [flags]
kubeadm upgrade node [command]
Available Commands:
config Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
experimental-control-plane Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance
Flags:
-h, --help help for node
Global Flags:
--log-file string If non-empty, use this log file
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
-v, --v Level number for the log level verbosity
Subcommand overview:
- config: download the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the kubelet's minor version.
- experimental-control-plane: upgrade the control-plane components deployed on this node; run it after executing "kubeadm upgrade apply" on the first control-plane instance.
Environment:
- OS: Ubuntu 16.04
- k8s: one master, one worker node
Upgrading Kubernetes from 1.13.x to 1.14.x
Since the cluster in this environment was created by kubeadm at version 1.13.1, this walkthrough upgrades it to 1.14.0.
Performing the upgrade
Upgrading the first control-plane node
First, on the first control-plane node, i.e. the primary control plane:
1. Determine the cluster version before the upgrade:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
2. Find the versions available for upgrade:
apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
1.14.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
3. Upgrade kubeadm to 1.14.0 first:
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm
When upgrading kubeadm to 1.14 like this, Ubuntu may automatically upgrade kubelet to the latest available version (1.16.0 at the time of writing), so upgrade kubelet explicitly at the same time:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
If that does happen, leaving kubeadm and kubelet at mismatched versions and causing the later cluster-upgrade step to fail, remove kubeadm and kubelet:
apt-get remove kubelet kubeadm
Then reinstall the intended versions:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
Confirm that kubeadm is at the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
4. Run the upgrade plan command to check whether the cluster can be upgraded and which versions are available:
kubeadm upgrade plan
The output:
root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
Awesome, you're up-to-date! Enjoy!
This tells you the cluster can be upgraded.
5. Upgrade the control-plane components, including etcd:
root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.0"
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
## the upgrade starts after confirming with y
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
...
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
...
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~#
The last two lines show that the cluster upgrade succeeded.
kubeadm upgrade apply performs the following:
- Checks whether the cluster can be upgraded: the API service is reachable, all nodes are Ready, and the control plane is healthy.
- Enforces the version skew policies.
- Makes sure the control-plane images are available and pulled onto the machine.
- Upgrades the control-plane components by updating the manifest files under /etc/kubernetes/manifests, restoring the old manifests if the upgrade fails.
- Applies the new kube-dns and kube-proxy manifests and creates the related RBAC rules.
- Creates new certificates and keys for the API server, backing up the old ones if they would expire within 180 days.
As of v1.16, kubeadm upgrade apply must be run on the primary control-plane node.
6. Afterwards, verify the cluster version:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Although kubectl is still at 1.13.1, the server-side control plane has been upgraded to 1.14.0.
The master components are running normally:
root@k8s-master:~# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
At this point the master components on the first control-plane node are upgraded. A control-plane node usually also runs kubelet and kubectl, so those need upgrading too.
7. Upgrade the CNI plugin.
This step is optional; check whether your CNI plugin has an upgrade available.
8. Upgrade kubelet and kubectl on this control-plane node.
kubelet can now be upgraded; running workload Pods are not affected during the process.
8.1. Upgrade kubelet and kubectl:
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl
8.2. Restart kubelet:
sudo systemctl restart kubelet
9. Check that the kubectl version matches expectations:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
The first control-plane node is now fully upgraded.
Upgrading the other control-plane nodes
10. Upgrade the other control-plane nodes.
On each of the other control-plane nodes, follow the same procedure as on the first, but run:
sudo kubeadm upgrade node experimental-control-plane
instead of:
sudo kubeadm upgrade apply
Running sudo kubeadm upgrade plan again is unnecessary.
kubeadm upgrade node experimental-control-plane performs the following:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests of the three core control-plane components.
Upgrading the worker nodes
Now upgrade the components on each worker node: kubeadm, kubelet, and kube-proxy.
Proceed one node at a time so that cluster access is not disrupted.
1. Put the node into maintenance mode.
The node is still at the original 1.13:
root@k8s-master:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 292d v1.14.0
k8s-node01 Ready node 292d v1.13.1
Before upgrading, cordon the node and evict all of its Pods:
kubectl drain $NODE --ignore-daemonsets
2. Upgrade kubeadm and kubelet
Install the same kubeadm and kubelet versions on each node, because kubeadm is used to upgrade the kubelet configuration:
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet
3. Upgrade the kubelet configuration file:
$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~#
4. Restart kubelet:
$ sudo systemctl restart kubelet
5. Finally, mark the node schedulable again so it rejoins the cluster:
kubectl uncordon $NODE
The node is now upgraded; kubelet and kube-proxy report the expected version v1.14.0.
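The per-node sequence above can be sketched as a loop. This is a dry-run outline: run only prints each command so the ordering is visible without touching a real cluster, and the node name and package versions are illustrative.

```shell
# Dry-run outline of upgrading worker nodes one at a time.
run() { echo "+ $*"; }

for NODE in k8s-node01; do
  run kubectl drain "$NODE" --ignore-daemonsets               # evict Pods
  run apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00  # upgrade packages
  run kubeadm upgrade node config --kubelet-version v1.14.0   # refresh kubelet config
  run systemctl restart kubelet
  run kubectl uncordon "$NODE"                                # rejoin scheduling
done
```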
Verifying the cluster version
root@k8s-master:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 292d v1.14.0
k8s-node01 Ready node 292d v1.14.0
The STATUS column should show Ready for every node, with the version numbers updated.
With that, the entire upgrade flow is complete.
Recovering from a failed state
If kubeadm upgrade fails and cannot roll back (for example because of an unexpected shutdown during execution), you can simply run kubeadm upgrade again. The command is idempotent, ensuring the actual state eventually matches the state you declare.
To recover from a bad state without changing the version the cluster runs, execute:
kubeadm upgrade apply --force
See the official upgrade documentation for more details.
Upgrading Kubernetes from 1.14.x to 1.15.x
Upgrading from 1.14.0 to 1.15.0 follows largely the same flow; only some commands differ slightly.
Upgrading the primary control-plane node
The procedure is the same as for the 1.13 to 1.14.0 upgrade.
1. Find the available versions and install kubeadm at the target version v1.15.0:
apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00
kubeadm is now at the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Run the upgrade plan
In v1.15, expiring certificates are renewed automatically: kubeadm upgrade renews all certificates it manages on the node during a control-plane upgrade. If you do not want automatic renewal, pass --certificate-renewal=false.
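To see how close the certificates kubeadm would renew are to expiring, you can inspect their expiry dates. A small sketch assuming the kubeadm default PKI location (it just reports if the file is absent, so it runs anywhere):

```shell
# Print the expiry date of the apiserver serving certificate
# (kubeadm default location; adjust CERT if your PKI lives elsewhere).
CERT=/etc/kubernetes/pki/apiserver.crt

if [ -f "$CERT" ]; then
  openssl x509 -in "$CERT" -noout -enddate   # prints a notAfter= line
else
  echo "no certificate at $CERT"
fi
```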
The upgrade plan:
kubeadm upgrade plan
The output:
root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363 38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.14.0 v1.14.7
1 x v1.15.0 v1.14.7
Upgrade to the latest version in the v1.14 series:
COMPONENT CURRENT AVAILABLE
API Server v1.14.0 v1.14.7
Controller Manager v1.14.0 v1.14.7
Scheduler v1.14.0 v1.14.7
Kube Proxy v1.14.0 v1.14.7
CoreDNS 1.3.1 1.3.1
Etcd 3.3.10 3.3.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.14.7
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.14.0 v1.15.4
1 x v1.15.0 v1.15.4
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
API Server v1.14.0 v1.15.4
Controller Manager v1.14.0 v1.15.4
Scheduler v1.14.0 v1.15.4
Kube Proxy v1.14.0 v1.15.4
CoreDNS 1.3.1 1.3.1
Etcd 3.3.10 3.3.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.15.4
Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.
_____________________________________________________________________
3. Upgrade the control plane
Following the plan's guidance, upgrade the control plane:
kubeadm upgrade apply v1.15.0
Since the installed kubeadm is v1.15.0, the target cluster version can only be v1.15.0.
The output:
root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## pulling images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## images for all components have been pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
...
## all managed certificates are renewed automatically, as shown below
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
4. Verify the upgrade.
The upgrade succeeded; query the core component versions again:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 295d v1.14.0
k8s-node01 Ready node 295d v1.14.0
5. Upgrade kubelet and kubectl on this control-plane node
The core control-plane components are now at v1.15.0; next, upgrade kubelet and kubectl on this node. Running workload Pods are not affected during the process.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl
6. Restart kubelet:
sudo systemctl restart kubelet
7. Verify that the kubelet and kubectl versions match expectations:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
root@k8s-master:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 295d v1.15.0
k8s-node01 Ready node 295d v1.14.0
Upgrading the other control-plane nodes
The command for upgrading the three control-plane components on the other control-plane nodes is different:
1. Upgrade the control-plane components with:
$ sudo kubeadm upgrade node
2. Then upgrade kubelet and kubectl:
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
3. Restart kubelet
$ sudo systemctl restart kubelet
Upgrading the worker nodes
Upgrading the worker nodes works the same as before, so it is abbreviated here.
Run the following on every worker node.
1. Upgrade kubeadm:
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm
Check the kubeadm version:
root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Put the node into maintenance mode:
kubectl cordon $NODE
3. Update the kubelet configuration file:
$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
4. Upgrade kubelet and kubectl:
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
5. Restart kubelet
sudo systemctl restart kubelet
kube-proxy is also upgraded and restarted automatically as part of this.
6. Take the node out of maintenance:
kubectl uncordon $NODE
The node upgrade is complete.
Verifying the cluster version
root@k8s-master:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 295d v1.15.0
k8s-node01 NotReady node 295d v1.15.0
kubeadm upgrade node in detail
In this upgrade flow, both the other control-plane nodes and the worker nodes were upgraded with kubeadm upgrade node.
When run on another control-plane node, kubeadm upgrade node:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests of the three core control-plane components.
- Upgrades the kubelet configuration on that control-plane node.
When run on a worker node, kubeadm upgrade node:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the node's kubelet configuration.
Upgrading Kubernetes from 1.15.x to 1.16.x
Upgrading from 1.15.x to 1.16.x uses exactly the same commands as the 1.14.x-to-1.15.x upgrade, so it is not repeated here.