Deploying a Highly Available Kubernetes Cluster with kube-vip

Introduction to kube-vip

kube-vip is an open-source project that aims to simplify providing load-balancing services for Kubernetes clusters.

kube-vip provides a virtual IP and load balancer for Kubernetes clusters, both for the control plane (to build a highly available cluster) and for Kubernetes Services of type LoadBalancer, without relying on any external hardware or software.

The original purpose of kube-vip was to simplify building highly available (HA) Kubernetes clusters, which at the time involved a number of components and configurations that had to be managed. As the project evolved, it can now use the same technology to provide load balancing for Kubernetes Service resources of type LoadBalancer.

Official website: https://kube-vip.io/

Project repository: https://github.com/kube-vip/kube-vip

Deploying a Kubernetes cluster with kube-vip

The Kubernetes cluster in this example is deployed on the following environment:

  • OS: Ubuntu Server 22.04 LTS
  • Kubernetes:v1.28.1
  • Container Runtime: Containerd

Prerequisites:

  • At least 2 CPUs and 2 GB of memory
  • Swap disabled
  • iptables allowed to see bridged traffic

This guide uses kubeadm to deploy a 4-node Kubernetes cluster with three master nodes and one worker node. kube-vip provides high availability, and all nodes communicate with the control plane through the VIP. The three kube-vip instances run directly on the control-plane hosts, which keeps the number of cluster hosts down.

Note: for kube-vip to manage the VIP, the VIP must be in the same subnet as the node IPs.
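
Before continuing, a quick sanity check (a minimal sketch; it assumes the node interface is ens33, as used later in this guide) that the node subnet matches and the VIP is not already taken:

# Node IPv4 address and subnet on the interface kube-vip will bind to
ip -o -4 addr show ens33
# The VIP should not answer yet; no replies are expected if it is free
ping -c 2 -W 1 192.168.72.200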

Node inventory:

Node name   Node IP          Role     CPU   Memory   Disk    OS
master1     192.168.72.30    master   2C    4G       100G    Ubuntu 22.04 LTS
master2     192.168.72.31    master   2C    4G       100G    Ubuntu 22.04 LTS
master3     192.168.72.32    master   2C    4G       100G    Ubuntu 22.04 LTS
node1       192.168.72.33    worker   2C    4G       100G    Ubuntu 22.04 LTS
kube-vip    192.168.72.200   VIP

Node initialization

Note: run the following steps on all nodes.

1. Set the hostname (run the matching command on each node)

hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname master3
hostnamectl set-hostname node1

2. Configure hosts resolution

cat >> /etc/hosts << EOF
192.168.72.200 lb.k8s.local
192.168.72.30 master1
192.168.72.31 master2
192.168.72.32 master3
192.168.72.33 node1
EOF

3. Disable swap

sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
mount -a
swapoff -a
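
Optionally confirm swap is fully disabled:

swapon --show    # should print nothing
free -h          # the Swap line should show 0B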

4. Verify time synchronization

apt install -y chrony
systemctl enable --now chrony
chronyc sources

5. Load the IPVS kernel modules
Reference: https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs

Note that on Linux kernel 4.19 and later, nf_conntrack is used instead of nf_conntrack_ipv4.

cat <<EOF | tee /etc/modules-load.d/ipvs.conf
# Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Confirm the kernel modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# Install ipset and ipvsadm
apt install -y ipset ipvsadm
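
Nothing uses IPVS yet at this point; once the cluster is running with kube-proxy in IPVS mode (configured later in kubeadm.yaml), the virtual servers can be inspected with:

ipvsadm -Ln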

Installing containerd

Note: run the following steps on all nodes.

1. Prerequisites for the containerd container runtime

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
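
Optionally verify that the values took effect (each should report 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward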

2. Install the containerd container runtime. If the network connection is poor, it is recommended to download the archive with a browser first and then upload it to the servers.

Download page: https://github.com/containerd/nerdctl/releases (the nerdctl-full archive bundles containerd, runc, the CNI plugins, and nerdctl)

wget https://github.com/containerd/nerdctl/releases/download/v1.5.0/nerdctl-full-1.5.0-linux-amd64.tar.gz
tar Cxzvvf /usr/local nerdctl-full-1.5.0-linux-amd64.tar.gz

3. Create the containerd configuration file

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

4. Configure containerd to use the systemd cgroup driver

sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml

5. Change the sandbox (pause) image. After installing kubeadm, you can confirm the default pause image tag with kubeadm config images list, as shown below.

old_image='sandbox_image = "registry.k8s.io/pause:.*"'
new_image='sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"'
sed -i "s|$old_image|$new_image|" /etc/containerd/config.toml

6. Start the containerd service

systemctl enable --now containerd

7. Check the containerd service status

systemctl status containerd
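
As an optional sanity check (assuming the nerdctl-full bundle installed above is on PATH), confirm the daemon responds and that the systemd cgroup driver is active:

ctr version
nerdctl info | grep -i "cgroup driver"    # should report: systemd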

Installing kubeadm

Note: run the following steps on all nodes.

1. Add the Kubernetes apt repository, using the Alibaba Cloud mirror:

apt update -y
apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

2. List the installable versions

apt-get update
apt-cache madison kubectl | more

3. Install the specified versions of kubeadm, kubelet, and kubectl

export KUBERNETES_VERSION=1.28.1-00
apt update -y
apt-get install -y kubelet=${KUBERNETES_VERSION} kubeadm=${KUBERNETES_VERSION} kubectl=${KUBERNETES_VERSION}
apt-mark hold kubelet kubeadm kubectl

4. Enable the kubelet service

systemctl enable --now kubelet
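
At this point kubelet has no cluster configuration yet, so it will restart in a crash loop until kubeadm init or kubeadm join provides one; this is expected and can be confirmed with:

systemctl status kubelet --no-pager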

Generating the kube-vip static pod manifest

Note: run the following steps on the first master node.

1. Generate the kube-vip static pod YAML manifest

export VIP=192.168.72.200
export INTERFACE=ens33
export KVVERSION=v0.6.2
nerdctl run -it --rm --net=host ghcr.io/kube-vip/kube-vip:$KVVERSION manifest pod \
  --interface $INTERFACE \
  --address $VIP \
  --controlplane \
  --services \
  --arp \
  --enableLoadBalancer \
  --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml

2. Change the image pull policy

sed -i "s#imagePullPolicy: Always#imagePullPolicy: IfNotPresent#g" \
/etc/kubernetes/manifests/kube-vip.yaml

3. Copy kube-vip.yaml to the other master nodes

scp -r /etc/kubernetes/manifests/kube-vip.yaml 192.168.72.31:/etc/kubernetes/manifests/
scp -r /etc/kubernetes/manifests/kube-vip.yaml 192.168.72.32:/etc/kubernetes/manifests/

4. Pull the kube-vip image on all master nodes in advance

nerdctl -n k8s.io pull ghcr.io/kube-vip/kube-vip:v0.6.2
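
Optionally verify the image landed in the k8s.io namespace that kubelet uses:

nerdctl -n k8s.io images | grep kube-vip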

Pre-pulling the Kubernetes images

Note: run the following steps on all nodes.

1. Confirm the installed client version (this is the Kubernetes version we will pull images for; recent kubectl releases have dropped the --short flag, and the short output is now the default)

kubectl version

2. List the container images for the corresponding Kubernetes version

kubeadm config images list \
  --kubernetes-version=v1.28.1 \
  --image-repository registry.aliyuncs.com/google_containers

3. Run the following command on all nodes to pull the images in advance

kubeadm config images pull \
  --kubernetes-version=v1.28.1 \
  --image-repository registry.aliyuncs.com/google_containers

4. Check the pulled images

root@master1:~# nerdctl -n k8s.io images |grep -v none
REPOSITORY                                                         TAG        IMAGE ID        CREATED               PLATFORM       SIZE         BLOB SIZE
ghcr.io/kube-vip/kube-vip                                          v0.6.2     d9271fa975a4    2 minutes ago         linux/amd64    40.8 MiB     40.8 MiB
registry.aliyuncs.com/google_containers/coredns                    v1.10.1    90d3eeb2e210    About a minute ago    linux/amd64    51.1 MiB     15.4 MiB
registry.aliyuncs.com/google_containers/etcd                       3.5.9-0    b124583790d2    About a minute ago    linux/amd64    283.8 MiB    98.1 MiB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.28.1    1e9a3ea7d1d4    2 minutes ago         linux/amd64    123.1 MiB    33.0 MiB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.28.1    f6838231cb74    2 minutes ago         linux/amd64    119.5 MiB    31.8 MiB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.28.1    feb6017bf009    About a minute ago    linux/amd64    73.6 MiB     23.4 MiB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.28.1    b76ea016d6b9    2 minutes ago         linux/amd64    60.6 MiB     17.9 MiB
registry.aliyuncs.com/google_containers/pause                      3.9        7031c1b28338    About a minute ago    linux/amd64    728.0 KiB    314.0 KiB

Creating the cluster configuration file

Note: run the following steps only on the first master node.

1. Generate the default cluster initialization configuration file.

kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration > kubeadm.yaml

2. Edit the cluster configuration file

$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.30
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - lb.k8s.local
  - master1
  - master2
  - master3
  - node1
  - 192.168.72.30
  - 192.168.72.31
  - 192.168.72.32
  - 192.168.72.33
controlPlaneEndpoint: lb.k8s.local:6443
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  localhostNodePorts: null
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
metricsBindAddress: ""
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""

3. Parameters that need to be set on top of the defaults:

InitConfiguration

kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
localAPIEndpoint:
  advertiseAddress: 192.168.72.30
  bindPort: 6443
nodeRegistration:
  name: master1

ClusterConfiguration

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: lb.k8s.local:6443
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.28.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - lb.k8s.local
  - master1
  - master2
  - master3
  - node1
  - 192.168.72.30
  - 192.168.72.31
  - 192.168.72.32
  - 192.168.72.33

KubeletConfiguration

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""

KubeProxyConfiguration

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
mode: "ipvs"

Initializing the first master node

1. Run the following command on the first master node to start initialization:

kubeadm init --upload-certs --config kubeadm.yaml

If initialization fails, check the kubelet logs with the following command.

journalctl -xeu kubelet

2. Record the join control-plane and join worker commands from the log output.

3. Check that the VIP has been assigned on master1

root@master1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:aa:75:9f brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.72.30/24 brd 192.168.72.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.72.200/32 scope global deprecated dynamic ens33
       valid_lft 58sec preferred_lft 0sec
    inet6 fe80::250:56ff:feaa:759f/64 scope link 
       valid_lft forever preferred_lft forever
3: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether a2:de:63:b2:34:ce brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

4. After the master node initialization completes, follow the closing hints to configure the kubectl client connection

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
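
A quick check that kubectl talks to the API server through the VIP endpoint defined in controlPlaneEndpoint:

kubectl cluster-info
# Expected output includes: Kubernetes control plane is running at https://lb.k8s.local:6443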

5. Check the node status; since no network plugin is installed yet, the node is NotReady

root@master1:~# kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
node   NotReady   control-plane   40s   v1.28.1

6. Check the pod status; since no network plugin is installed yet, the coredns pods are Pending

root@master1:~# kubectl get pods -A
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-44fcc       0/1     Pending   0          32s
kube-system   coredns-66f779496c-9cjmf       0/1     Pending   0          32s
kube-system   etcd-node                      1/1     Running   1          44s
kube-system   kube-apiserver-node            1/1     Running   1          44s
kube-system   kube-controller-manager-node   1/1     Running   1          44s
kube-system   kube-proxy-g4kns               1/1     Running   0          32s
kube-system   kube-scheduler-node            1/1     Running   1          44s
kube-system   kube-vip-node                  1/1     Running   1          46s

Installing the Calico network plugin

Note: run the following steps only on the first master node.

Reference: https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart

1. Install Helm on the first master node

version=v3.12.3
curl -LO https://repo.huaweicloud.com/helm/${version}/helm-${version}-linux-amd64.tar.gz
tar -zxvf helm-${version}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64

2. Add the Calico Helm repository

helm repo add projectcalico https://projectcalico.docs.tigera.io/charts

3. Deploy Calico. If Docker Hub is unreachable, you may need to pre-pull the images. You can watch the rollout as shown below.

helm install calico projectcalico/tigera-operator \
--namespace tigera-operator --create-namespace
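
You can watch the Calico components come up before continuing (the calico-system namespace is created by the operator shortly after install):

kubectl get pods -n tigera-operator
watch kubectl get pods -n calico-system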

4. Verify that the node becomes Ready

root@master1:~# kubectl get nodes
NAME   STATUS   ROLES           AGE   VERSION
node   Ready    control-plane   17m   v1.28.1

5. Verify that the coredns pods change to Running

root@master1:~# kubectl get pods -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-c7b9c94d5-ck6n8          1/1     Running   0          4m6s
calico-apiserver   calico-apiserver-c7b9c94d5-fjbbh          1/1     Running   0          4m6s
calico-system      calico-kube-controllers-9c4d4576f-rtbm5   1/1     Running   0          11m
calico-system      calico-node-vwkhs                         1/1     Running   0          11m
calico-system      calico-typha-76c649f99d-7gdsv             1/1     Running   0          11m
calico-system      csi-node-driver-fzlck                     2/2     Running   0          11m
kube-system        coredns-66f779496c-44fcc                  1/1     Running   0          17m
kube-system        coredns-66f779496c-9cjmf                  1/1     Running   0          17m
kube-system        etcd-node                                 1/1     Running   1          17m
kube-system        kube-apiserver-node                       1/1     Running   1          17m
kube-system        kube-controller-manager-node              1/1     Running   1          17m
kube-system        kube-proxy-g4kns                          1/1     Running   0          17m
kube-system        kube-scheduler-node                       1/1     Running   1          17m
kube-system        kube-vip-node                             1/1     Running   1          17m
tigera-operator    tigera-operator-94d7f7696-lzrrz           1/1     Running   0          11m

Joining the remaining master nodes

1. Run the following command on the second and third master nodes respectively

kubeadm join lb.k8s.local:6443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:7e25d6bc9368f5ab747c8c0b3349c5441005e0e15914cd4be1eac6351fb1e320 \
      --control-plane --certificate-key fbae5ace48cf806fe5df818174859e3a837cf316156cd6e26ec14c7fca28fba3

2. If the join command was not recorded after master initialization, it can be regenerated by running the command below (see the note that follows it):

kubeadm token create --print-join-command --ttl 0
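
Note that this prints a worker join command. For an additional control-plane node, the certificate key also has to be regenerated; a sketch based on standard kubeadm behavior:

# Re-upload the control-plane certificates and note the printed certificate key
kubeadm init phase upload-certs --upload-certs
# Then append the following to the join command printed by --print-join-command:
#   --control-plane --certificate-key <certificate key from the command above>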

Joining the worker node

1. Run the following command on node1 to register it with the cluster:

kubeadm join lb.k8s.local:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7e25d6bc9368f5ab747c8c0b3349c5441005e0e15914cd4be1eac6351fb1e320 

2. Check the node status with kubectl get nodes.

root@master1:~# kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master1   Ready    control-plane   22m     v1.28.1   192.168.72.30   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.3
master2   Ready    control-plane   4m34s   v1.28.1   192.168.72.31   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.3
master3   Ready    control-plane   3m48s   v1.28.1   192.168.72.32   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.3
node1     Ready    <none>          3m41s   v1.28.1   192.168.72.33   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.3

3. The pods that end up running:

root@master1:~# kubectl get pods -A -o wide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE      NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-866fdb498d-2jndc          1/1     Running   0             18m   10.244.137.69    master1   <none>           <none>
calico-apiserver   calico-apiserver-866fdb498d-672gj          1/1     Running   0             18m   10.244.137.70    master1   <none>           <none>
calico-system      calico-kube-controllers-57b4847fc8-zlf8t   1/1     Running   0             18m   10.244.137.66    master1   <none>           <none>
calico-system      calico-node-bxjhp                          1/1     Running   0             18m   192.168.72.30    master1   <none>           <none>
calico-system      calico-node-j2zxw                          1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
calico-system      calico-node-kjfhh                          1/1     Running   0             11m   192.168.72.33    node1     <none>           <none>
calico-system      calico-node-zmngb                          1/1     Running   0             11m   192.168.72.32    master3   <none>           <none>
calico-system      calico-typha-548dd44bd-hzj8b               1/1     Running   0             18m   192.168.72.30    master1   <none>           <none>
calico-system      calico-typha-548dd44bd-pjpn5               1/1     Running   0             11m   192.168.72.32    master3   <none>           <none>
calico-system      csi-node-driver-48xkx                      2/2     Running   0             18m   10.244.137.65    master1   <none>           <none>
calico-system      csi-node-driver-rtxt2                      2/2     Running   0             11m   10.244.136.1     master3   <none>           <none>
calico-system      csi-node-driver-ssfbh                      2/2     Running   0             11m   10.244.166.129   node1     <none>           <none>
calico-system      csi-node-driver-v8h9l                      2/2     Running   0             12m   10.244.180.1     master2   <none>           <none>
kube-system        coredns-66f779496c-ldfpv                   1/1     Running   0             23m   10.244.137.68    master1   <none>           <none>
kube-system        coredns-66f779496c-zx8rs                   1/1     Running   0             23m   10.244.137.67    master1   <none>           <none>
kube-system        etcd-master1                               1/1     Running   0             23m   192.168.72.30    master1   <none>           <none>
kube-system        etcd-master2                               1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        etcd-master3                               1/1     Running   0             12m   192.168.72.32    master3   <none>           <none>
kube-system        kube-apiserver-master1                     1/1     Running   0             23m   192.168.72.30    master1   <none>           <none>
kube-system        kube-apiserver-master2                     1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        kube-apiserver-master3                     1/1     Running   1 (12m ago)   12m   192.168.72.32    master3   <none>           <none>
kube-system        kube-controller-manager-master1            1/1     Running   1 (12m ago)   23m   192.168.72.30    master1   <none>           <none>
kube-system        kube-controller-manager-master2            1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        kube-controller-manager-master3            1/1     Running   0             11m   192.168.72.32    master3   <none>           <none>
kube-system        kube-proxy-4dzjw                           1/1     Running   0             23m   192.168.72.30    master1   <none>           <none>
kube-system        kube-proxy-cqkbr                           1/1     Running   0             11m   192.168.72.32    master3   <none>           <none>
kube-system        kube-proxy-vcr8w                           1/1     Running   0             11m   192.168.72.33    node1     <none>           <none>
kube-system        kube-proxy-x6tkn                           1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        kube-scheduler-master1                     1/1     Running   1 (12m ago)   23m   192.168.72.30    master1   <none>           <none>
kube-system        kube-scheduler-master2                     1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        kube-scheduler-master3                     1/1     Running   0             12m   192.168.72.32    master3   <none>           <none>
kube-system        kube-vip-master1                           1/1     Running   1 (12m ago)   23m   192.168.72.30    master1   <none>           <none>
kube-system        kube-vip-master2                           1/1     Running   0             12m   192.168.72.31    master2   <none>           <none>
kube-system        kube-vip-master3                           1/1     Running   0             11m   192.168.72.32    master3   <none>           <none>
tigera-operator    tigera-operator-94d7f7696-b5fbl            1/1     Running   1 (12m ago)   19m   192.168.72.30    master1   <none>           <none>

Verifying cluster high availability

1. Shut down the master1 node

root@master1:~# shutdown -h now

2. Verify that the VIP automatically fails over to master3

root@master3:~# ip a |grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.72.32/24 brd 192.168.72.255 scope global ens33
    inet 192.168.72.200/32 scope global ens33
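
Optionally confirm that the API server is still reachable through the VIP itself (on a default kubeadm setup the /version endpoint is readable without authentication):

curl -k https://192.168.72.200:6443/version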

3. Configure the kubectl client connection on master3

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4. Verify from master3 that the cluster can still be accessed

root@master3:~# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   39m   v1.28.1
master2   Ready      control-plane   27m   v1.28.1
master3   Ready      control-plane   27m   v1.28.1
node1     Ready      <none>          27m   v1.28.1

At this point, the kube-vip-based highly available Kubernetes cluster deployment is complete.
