Upgrading and scaling out Kubernetes cluster worker nodes

Upgrading a Kubernetes cluster worker node

  1. First, check the status of the cluster nodes
    Last login: Thu Mar 14 09:39:26 2019 from 10.83.2.89
    [root@kubemaster ~]# 
    [root@kubemaster ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    kubemaster   Ready    master   17d   v1.13.3
    kubenode1    Ready    <none>   17d   v1.13.3
    kubenode2    Ready    <none>   17d   v1.13.3
    [root@kubemaster ~]#
  2. Check which Pods are running on the kubenode1 node
    [root@kubemaster ~]# kubectl get pods -o wide|grep kubenode1
    account-summary-689d96d949-49bjr                                  1/1     Running            0          7d15h   10.244.1.17    kubenode1               
    compute-interest-api-5f54cc8dd9-44g9p                             1/1     Running            0          7d15h   10.244.1.15    kubenode1               
    send-notification-fc7c8ffc4-rk5wl                                 1/1     Running            0          7d15h   10.244.1.16    kubenode1               
    transaction-generator-7cfccbbd57-8ts5s                            1/1     Running            0          7d15h   10.244.1.18    kubenode1               
    [root@kubemaster ~]#  
    # If Pods from other namespaces are also running on the node, specify the namespace as well, e.g. kubectl get pods -n kube-system -o wide|grep kubenode1
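    # Alternatively, a field selector can list Pods from all namespaces scheduled on this node (assuming your kubectl version supports --field-selector):
    # kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=kubenode1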
  3. Use the kubectl cordon command to mark the kubenode1 node as unschedulable;
    [root@kubemaster ~]# kubectl cordon kubenode1
    node/kubenode1 cordoned
    [root@kubemaster ~]#
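    # To confirm the node is now unschedulable, you can inspect its spec directly, for example:
    # kubectl get node kubenode1 -o jsonpath='{.spec.unschedulable}'
    # This should print "true" after the cordon.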
  4. Check the running Pods again; they are still running on kubenode1. kubectl cordon only prevents new Pods from being scheduled onto kubenode1; Pods already running on the node are not evicted.
    [root@kubemaster ~]# kubectl get node
    NAME         STATUS                     ROLES    AGE   VERSION
    kubemaster   Ready                      master   17d   v1.13.3
    kubenode1    Ready,SchedulingDisabled   <none>   17d   v1.13.3
    kubenode2    Ready                      <none>   17d   v1.13.3
    [root@kubemaster ~]# kubectl get pods -n kube-system -o wide|grep kubenode1
    kube-flannel-ds-amd64-7ghpg            1/1     Running   1          17d     10.83.32.138   kubenode1               
    kube-proxy-2lfnm                       1/1     Running   1          17d     10.83.32.138   kubenode1               
    [root@kubemaster ~]# 
  5. Now the Pods need to be evicted, which is done with kubectl drain. If DaemonSet-managed Pods are still running on the node, add the --ignore-daemonsets flag.
    [root@kubemaster ~]# kubectl drain kubenode1 --ignore-daemonsets
    node/kubenode1 already cordoned
    WARNING: Ignoring DaemonSet-managed pods: node-exporter-s5vfc, kube-flannel-ds-amd64-7ghpg, kube-proxy-2lfnm
    pod/traefik-ingress-controller-7899bfbd87-wsl64 evicted
    pod/grafana-57f7d594d9-vw5mp evicted
    pod/tomcat-deploy-5fd9ffbdc7-cdnj8 evicted
    pod/myapp-deploy-6b56d98b6b-rrb5b evicted
    pod/transaction-generator-7cfccbbd57-8ts5s evicted
    pod/prometheus-848d44c7bc-rtq7t evicted
    pod/send-notification-fc7c8ffc4-rk5wl evicted
    pod/compute-interest-api-5f54cc8dd9-44g9p evicted
    pod/account-summary-689d96d949-49bjr evicted
    node/kubenode1 evicted
    [root@kubemaster ~]# 
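    # kubectl drain accepts a few other commonly useful flags (flag names may differ between kubectl versions), for example:
    # kubectl drain kubenode1 --ignore-daemonsets --delete-local-data --grace-period=60 --timeout=300s
    # --delete-local-data also evicts Pods that use emptyDir volumes (their local data is lost).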
  6. Check again whether any Pods are still running on kubenode1. If not, shut the node down, upgrade its configuration, and start the worker node again after the upgrade.
    [root@kubemaster ~]# kubectl get nodes
    NAME         STATUS                     ROLES    AGE   VERSION
    kubemaster   Ready                      master   17d   v1.13.3
    kubenode1    Ready,SchedulingDisabled   <none>   17d   v1.13.3
    kubenode2    Ready                      <none>   17d   v1.13.3
    [root@kubemaster ~]#
    # The node is still in the unschedulable state
    [root@kubemaster ~]# kubectl uncordon kubenode1
    node/kubenode1 uncordoned
    # Mark the worker node as schedulable again
    [root@kubemaster ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    kubemaster   Ready    master   17d   v1.13.3
    kubenode1    Ready    <none>   17d   v1.13.3
    kubenode2    Ready    <none>   17d   v1.13.3
    [root@kubemaster ~]#
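    # The evicted Pods should have been recreated on the remaining nodes by now; this can be verified with:
    # kubectl get pods -o wide | grep kubenode1
    # which should return nothing until new Pods get scheduled onto kubenode1 again.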
  7. This completes the upgrade of one worker node in the k8s cluster. Next, let's add a new worker node to the cluster;

Scaling out the Kubernetes cluster with a new worker node

  1. First, refer to my earlier blog post on installing a k8s cluster with kubeadm:
      https://blog.51cto.com/zgui2000/2354852
      Set up the yum repositories and install docker-ce, kubelet, and the other required packages;
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@kubenode3 yum.repos.d]#   
# Prepare the docker-ce yum repository file
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@kubenode3 yum.repos.d]#
# Prepare the kubernetes.repo yum repository file
[root@kubenode3 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.83.32.146 kubemaster
10.83.32.138 kubenode1
10.83.32.133 kubenode2
10.83.32.144 kubenode3
# Prepare the hosts file
[root@kubenode3 yum.repos.d]# getenforce 
Disabled
# Disable SELinux; this can be made permanent by editing /etc/selinux/config
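sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# One possible way to disable SELinux persistently (takes effect after a reboot); on a running Enforcing system, setenforce 0 switches it to permissive mode immediately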
systemctl stop firewalld
systemctl disable firewalld
# Disable the firewall
yum install docker-ce kubelet kubeadm kubectl
# Install docker, kubelet, kubeadm and kubectl
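# To match the existing v1.13.3 cluster exactly, the package versions can be pinned instead, for example:
# yum install docker-ce kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3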
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
# Configure a Docker registry mirror (accelerator); the docker service must be restarted afterwards: systemctl restart docker
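systemctl enable docker
systemctl restart docker
# Enable docker to start on boot and restart it so the registry mirror configuration takes effect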
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull carlziess/coredns-1.2.6
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag carlziess/coredns-1.2.6 k8s.gcr.io/coredns:1.2.6
# Pre-pull the required images to the local node. In a kubeadm-installed k8s cluster, components such as api-server, controller-manager, kube-scheduler, etcd and flannel run as containers, so the images are downloaded in advance;
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p
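# Allow bridged traffic to be processed by iptables (required by kube-proxy and flannel); the br_netfilter kernel module must be loaded (modprobe br_netfilter) for these sysctls to apply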
[root@kubenode3 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@kubenode3 yum.repos.d]#
  2. Now add the new worker node to the cluster

  Each token is valid for only 24 hours; if no valid token is available, create one with the following command

[root@kubemaster ~]# kubeadm token create
fv93ud.33j7oxtdmodwfn7f
[root@kubemaster ~]#
# Create a token
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e
# Compute the SHA256 hash of the cluster CA certificate (used as the discovery-token-ca-cert-hash)
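# Tip: kubeadm token create --print-join-command prints a complete kubeadm join command (token plus CA cert hash) in a single step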
swapoff -a
# Turn off the swap partition
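sed -i '/ swap / s/^/#/' /etc/fstab
# Example of also commenting out the swap entry in /etc/fstab so swap stays off after a reboot; verify the file afterwards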
kubeadm join 10.83.32.146:6443 --token fv93ud.33j7oxtdmodwfn7f --discovery-token-ca-cert-hash sha256:c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e --ignore-preflight-errors=Swap
# Join the Kubernetes cluster
[root@kubemaster ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
kubemaster   Ready    master   18d     v1.13.3
kubenode1    Ready    <none>   17d     v1.13.3
kubenode2    Ready    <none>   17d     v1.13.3
kubenode3    Ready    <none>   2m22s   v1.13.4
# Check the node status; kubenode3 has successfully joined the cluster
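kubectl label node kubenode3 node-role.kubernetes.io/worker=
# (Optional) add a node-role label so the ROLES column shows "worker" instead of <none>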

I recommend following my personal WeChat official account "云时代IT运维", which regularly publishes the latest application-operations technical articles, covering virtualization and container technology, CI/CD, automated operations, and other cutting-edge operations technologies and trends;
