Building a Highly Available Kubernetes Cluster with nginx + keepalived

This article builds a highly available Kubernetes cluster using nginx + keepalived.

When nginx acts as a software load balancer in front of application servers, keepalived can float a virtual IP (Virtual IP, VIP) between the primary and backup nodes. The VIP must be allocated when the servers are provisioned.

1) If the nginx service on the primary node fails to start, or the primary server goes down, the VIP drifts to the backup node.

2) When the primary node recovers (the server is up and both keepalived and nginx are running normally), the backup node returns to backup state and releases the VIP, which drifts back to the primary node.

Under normal circumstances, this failover is transparent to front-end users.

1. Environment Preparation

Server plan (this lab uses virtual machines):

IP                           Hostname           Role
192.168.43.200               master             master
192.168.43.201               slave1             slave
192.168.43.202               slave2             slave
192.168.43.203               master2            master
192.168.43.200 (reused)      nginx+keepalived   nginx+keepalived
192.168.43.203 (reused)      nginx+keepalived   nginx+keepalived
192.168.43.205 (virtual IP)  VIP                VIP

2. System Initialization (master && slave)

2.1 Disable the firewall

# Step 1
# Disable temporarily
systemctl stop firewalld
# Disable permanently
systemctl disable firewalld

2.2 Disable SELinux

# Step 2
# Disable temporarily
setenforce 0
# Disable permanently
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
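
To confirm the change, getenforce should report Permissive right after setenforce 0, and Disabled after the next reboot:

getenforce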

2.3 Disable swap

# Step 3
# Disable temporarily
swapoff -a
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
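
To verify, the Swap line in free should read all zeros:

free -h
# Swap:            0B          0B          0B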

2.4 Set the hostname

Use hostnamectl set-hostname <hostname> to name each host; the four hosts are set as follows:

# Step 4
# Set the hostname (run the matching line on each host)
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
hostnamectl set-hostname master2
# Show the current hostname
hostname

2.5 Add hosts entries

Add an entry (node IP + node name) for every node to /etc/hosts on each node.

# Step 5
cat >> /etc/hosts << EOF
192.168.43.200 master
192.168.43.201 slave1
192.168.43.202 slave2
192.168.43.203 master2
EOF

2.6 Pass bridged IPv4 traffic to iptables chains

# Step 6
# Configure
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
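
If sysctl --system cannot find these keys ("No such file or directory"), the br_netfilter kernel module is likely not loaded yet; loading it first is a common prerequisite (a sketch, assuming a stock CentOS 7 kernel):

# Load the bridge netfilter module now
modprobe br_netfilter
# Load it automatically on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Confirm the setting took effect
sysctl net.bridge.bridge-nf-call-iptables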

2.7 Time synchronization

Keep the clock on every node (VM) in sync with the host machine.

# Step 7
yum install ntpdate -y
ntpdate time.windows.com

Note: whether a VM is shut down or suspended, re-sync its clock every time you resume working with it.
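
To avoid re-running this by hand, one option (a sketch, assuming ntpdate remains installed at /usr/sbin/ntpdate) is a cron entry that re-syncs periodically:

# Add via `crontab -e`: re-sync the clock every 30 minutes
*/30 * * * * /usr/sbin/ntpdate time.windows.com > /dev/null 2>&1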

3. Installing Docker (master && slave)

3.1 Remove old versions

# Step 8
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

3.2 Configure the package repository

# Step 9
# The default repo is hosted overseas; use the Aliyun mirror instead
# (yum-config-manager is provided by yum-utils; if the command is missing, run step 10 first)
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.3 Install required packages

# Step 10
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3.4 Refresh the yum package index

# Step 11
# Refresh the yum package index
yum makecache fast

3.5 Install the Docker engine

# Step 12
# Install a specific version
# List the available versions
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
yum install docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
# Or install the latest version
yum install docker-ce docker-ce-cli containerd.io

3.6 Start Docker

# Step 13
systemctl enable docker && systemctl start docker

3.7 Configure a Docker registry mirror

# Step 14
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker

3.8 Check that the mirror is in effect

# Step 15
docker info
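
To confirm the mirror was picked up, you can filter the (fairly long) output:

docker info | grep -A 1 "Registry Mirrors"
#  Registry Mirrors:
#   https://b9pmyelo.mirror.aliyuncs.com/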

3.9 Verify the Docker version

# Step 16
docker -v

3.10 Other Docker commands

# Stop docker
systemctl stop docker

# Check docker's status
systemctl status docker

3.11 Uninstalling Docker

yum remove docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

4. Add the Aliyun Kubernetes yum repository (master && slave)

Run this on every Kubernetes node; dedicated nginx nodes (if any) would not need it — in this lab nginx is co-located on the masters.

# Step 17
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5. Install kubeadm, kubelet, and kubectl (master && slave)

Run this on every Kubernetes node; dedicated nginx nodes would not need it.

# Step 18
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes

6. Start the kubelet service (master && slave)

Run this on every Kubernetes node; dedicated nginx nodes would not need it. Note that kubelet will keep restarting in a crash loop until kubeadm init/join provides its configuration — that is expected at this stage.

# Step 19
systemctl enable kubelet && systemctl start kubelet

7. Install nginx + keepalived (master && master2)

  • Nginx is a mainstream web server and reverse proxy; here it provides layer-4 (TCP) load balancing across the kube-apiservers.

  • Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. Keepalived decides whether to fail over (float the VIP) based on nginx's running state: when the nginx primary node dies, the VIP is automatically bound on the nginx backup node, so the VIP stays reachable and nginx remains highly available.

  • On a public cloud, keepalived is generally not supported; use the provider's load balancer product instead to balance traffic directly across the master kube-apiservers.

The following steps are performed on both master nodes.

7.1 Install the packages (master/master2)

# Step 20
yum install epel-release -y
yum install nginx keepalived -y

7.2 Nginx configuration file (identical on master and master2; the two masters act as primary and backup)

# Step 21
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing across the two masters' apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.43.200:6443;   # master APISERVER IP:PORT
       server 192.168.43.203:6443;   # master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443; # nginx is co-located with the master nodes, so this listener must not use 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
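
Before starting nginx, you can sanity-check this configuration. Note that if the installed binary lacks the stream module, the check fails with an unknown directive "stream" error — exactly the situation section 7.4 fixes:

nginx -t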

7.3 keepalived configuration file (shown for master; on master2 change state to BACKUP, priority to 90, and router_id to NGINX_BACKUP)

# Step 22
cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance, identical on both nodes of the pair
    priority 100    # priority; set this to 90 on the backup server
    advert_int 1    # VRRP advertisement (heartbeat) interval; default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.43.205/24 # virtual IP
    } 
    track_script {
        check_nginx
    } 
}
EOF
  • vrrp_script: the script that checks nginx's state (its result decides whether to fail over)

  • virtual_ipaddress: the virtual IP (VIP)

Create the nginx health-check script referenced by the configuration above:

# Step 23
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
# Count running nginx processes; report failure when none exist
count=$(ps -C nginx --no-heading | wc -l)

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived treats the script's exit status as the health signal (0 = healthy, non-zero = unhealthy) when deciding whether to fail over.
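
You can exercise the script by hand before relying on it:

/etc/keepalived/check_nginx.sh; echo $?
# prints 0 while nginx is running, 1 after `systemctl stop nginx`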

7.4 Adding the stream module to Nginx (on master2)

7.4.1 Check the compiled-in Nginx modules

If --with-stream already appears in the configure arguments, the following steps can be skipped.

# Step 24
[root@k8s-master2 nginx-1.20.1]# nginx -V
nginx version: nginx/1.20.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --with-stream
# --with-stream means the stream module is compiled in

7.4.2 Download the same nginx version

Download page: http://nginx.org/download/

This walkthrough uses: http://nginx.org/download/nginx-1.20.1.tar.gz

7.4.3 Back up the original Nginx files

# Step 25
mv /usr/sbin/nginx /usr/sbin/nginx.bak
cp -r /etc/nginx{,.bak}

7.4.4 Recompile Nginx

# Take the modules found in step 7.4.1 and append the one to add: --with-stream
# Check how configure reports each module, e.g. the limit modules and the stream module
# "--without-http_limit_conn_module disable" means the module is built in by default; no flag needed
./configure --help | grep limit
# "--with-stream enable" means the module is NOT built by default; add the flag explicitly
./configure --help | grep stream

Prepare the build environment:

# Step 26
yum -y install libxml2 libxml2-devel libxslt-devel
yum -y install gd-devel 
yum -y install perl-devel perl-ExtUtils-Embed 
yum -y install GeoIP GeoIP-devel GeoIP-data
yum -y install pcre-devel
yum -y install openssl openssl-devel
yum -y install gcc make

Compile:

# Step 27
tar -xf nginx-1.20.1.tar.gz
cd nginx-1.20.1/
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf  --with-stream
make

Note: after make finishes, do NOT run make install, or you may break the currently installed nginx. The build leaves a new nginx binary under the objs directory; verify it first:

# Step 28
[root@k8s-master2 nginx-1.20.1]# ./objs/nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

7.4.5 Replace the nginx binary on master/master2

# Step 29
# On master2 (the build host): replace the local binary
cp ./objs/nginx /usr/sbin/
# Copy the new binary to master as well
scp objs/nginx [email protected]:/usr/sbin/

7.4.6 Update the nginx systemd unit (master and master2)

# Step 30
vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/bin/rm -rf /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecStop=/usr/sbin/nginx -s stop
ExecReload=/usr/sbin/nginx -s reload
PrivateTmp=true
[Install]
WantedBy=multi-user.target

7.5 Start the services and enable them at boot (master/master2)

# Step 31
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
systemctl status nginx keepalived
[root@master ~]# systemctl status nginx keepalived
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2023-06-21 09:01:49 CST; 2s ago
  Process: 69549 ExecStop=/usr/sbin/nginx -s stop (code=exited, status=0/SUCCESS)
  Process: 69865 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 69857 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 69854 ExecStartPre=/usr/bin/rm -rf /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 69868 (nginx)
    Tasks: 5
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─69868 nginx: master process /usr/sbin/nginx
           ├─69870 nginx: worker process
           ├─69871 nginx: worker process
           ├─69873 nginx: worker process
           └─69875 nginx: worker process

6月 21 09:01:49 master systemd[1]: Starting The nginx HTTP and reverse proxy server...
6月 21 09:01:49 master nginx[69857]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
6月 21 09:01:49 master nginx[69857]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
6月 21 09:01:49 master nginx[69857]: nginx: configuration file /etc/nginx/nginx.conf test is successful
6月 21 09:01:49 master nginx[69865]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
6月 21 09:01:49 master systemd[1]: Started The nginx HTTP and reverse proxy server.

● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2023-06-21 09:01:49 CST; 2s ago
  Process: 69855 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 69858 (keepalived)
    Tasks: 3
   Memory: 1.5M
   CGroup: /system.slice/keepalived.service
           ├─69858 /usr/sbin/keepalived -D
           ├─69859 /usr/sbin/keepalived -D
           └─69861 /usr/sbin/keepalived -D

6月 21 09:01:49 master systemd[1]: Starting LVS and VRRP High Availability Monitor...
6月 21 09:01:49 master systemd[1]: Started LVS and VRRP High Availability Monitor.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master2 ~]# systemctl status nginx keepalived
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2023-06-21 09:01:46 CST; 8s ago
  Process: 7614 ExecStop=/usr/sbin/nginx -s stop (code=exited, status=0/SUCCESS)
  Process: 7853 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 7843 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 7838 ExecStartPre=/usr/bin/rm -rf /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 7855 (nginx)
    Tasks: 5
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─7855 nginx: master process /usr/sbin/nginx
           ├─7856 nginx: worker process
           ├─7857 nginx: worker process
           ├─7858 nginx: worker process
           └─7859 nginx: worker process

6月 21 09:01:46 master2 systemd[1]: Starting The nginx HTTP and reverse proxy server...
6月 21 09:01:46 master2 nginx[7843]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
6月 21 09:01:46 master2 nginx[7843]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
6月 21 09:01:46 master2 nginx[7843]: nginx: configuration file /etc/nginx/nginx.conf test is successful
6月 21 09:01:46 master2 nginx[7853]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
6月 21 09:01:46 master2 systemd[1]: Started The nginx HTTP and reverse proxy server.

● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2023-06-21 09:01:46 CST; 8s ago
  Process: 7839 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 7840 (keepalived)
    Tasks: 4
   Memory: 1.8M
   CGroup: /system.slice/keepalived.service
           ├─  7840 /usr/sbin/keepalived -D
           ├─  7841 /usr/sbin/keepalived -D
           ├─  7842 /usr/sbin/keepalived -D
           └─120419 ps -ef

6月 21 09:01:46 master2 Keepalived_vrrp[7842]: SECURITY VIOLATION - scripts are being executed but script_security ...bled.
6月 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) removing protocol VIPs.
6月 21 09:01:46 master2 Keepalived_vrrp[7842]: Using LinkWatch kernel netlink reflector...
6月 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Entering BACKUP STATE
6月 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
6月 21 09:01:46 master2 Keepalived_vrrp[7842]: /etc/keepalived/check_nginx.sh exited with status 1
6月 21 09:01:47 master2 Keepalived_vrrp[7842]: VRRP_Script(check_nginx) succeeded
6月 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Transition to MASTER STATE
6月 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 90
6月 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Entering BACKUP STATE
Hint: Some lines were ellipsized, use -l to show in full.

7.6 Check keepalived's working state

# Step 32
[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

As shown, the VIP 192.168.43.205 is bound to the ens33 NIC on master, which means keepalived is working correctly.

8. Deploying the k8s masters

8.1 kubeadm init (master node)

Version 1.21.0 fails during initialization because the Aliyun registry does not carry the coredns/coredns image, i.e. registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 does not exist.

Workaround:

# Step 33
# Run on the master node
# Do this in advance, otherwise init fails when the image cannot be found
[root@master ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0
[root@master ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master ~]# docker rmi coredns/coredns:1.8.0
Untagged: coredns/coredns:1.8.0
Untagged: coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e

Run the following on the master node. Adjust the master node IP, the kubeadm version, and --control-plane-endpoint to match your own hosts.

# Step 34
# Run on the master node
[root@master ~]# kubeadm init \
 --apiserver-advertise-address=192.168.43.200 \
 --image-repository registry.aliyuncs.com/google_containers \
 --control-plane-endpoint=192.168.43.205:16443 \
 --kubernetes-version v1.21.0 \
 --service-cidr=10.96.0.0/12 \
 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.43.200 192.168.43.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.524903 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ukkdz8.qbzye91bxbv7kb8e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
        --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
        --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97

The line Your Kubernetes control-plane has initialized successfully! in the output indicates that the k8s control plane on the master node has been set up successfully.

8.2 Enable the kubectl tool (master node)

# Step 35
# Run on the master node
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster's nodes:

# Step 36
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   NotReady    control-plane,master   3m50s   v1.21.0

8.3 Join the slave nodes to the cluster (slave nodes)

# Step 37
# Run on slave1
[root@slave1 ~]# systemctl status nginx keepalived
Unit nginx.service could not be found.
Unit keepalived.service could not be found.
[root@slave1 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Step 38
# Run on slave2
[root@slave2 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster's nodes:

# Step 39
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   5m35s   v1.21.0
slave1   NotReady   <none>                 50s     v1.21.0
slave2   NotReady   <none>                 45s     v1.21.0
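
The bootstrap token printed by kubeadm init is only valid for 24 hours. If you join a node later and the token has expired, generate a fresh join command on the master:

kubeadm token create --print-join-command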

8.4 Join master2 to the cluster (master2 node)

# Step 40
# Run on master2
# Pre-pull the images
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
# In k8s 1.21.0 the Aliyun registry has no registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 image, so pull it from elsewhere and retag it
[root@master2 ~]# docker pull coredns/coredns:1.8.0
[root@master2 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master2 ~]# docker rmi coredns/coredns:1.8.0

Copy the certificates:

# Step 41
# Run on master2
# Create the directory
[root@master2 ~]# mkdir -p /etc/kubernetes/pki/etcd
# Step 42
# Run on the master node
# Copy the certificates from master to master2
[root@master ~]# scp -rp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd
[root@master ~]# scp -rp /etc/kubernetes/admin.conf master2:/etc/kubernetes
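
As an alternative to copying certificates by hand, kubeadm can distribute them itself (a sketch of the built-in mechanism, not used in this walkthrough). Upload the control-plane certificates on master, which prints a certificate key, then pass that key when joining master2; <token>, <hash>, and <certificate-key> stand in for your actual values:

# On master: upload the control-plane certificates and print the certificate key
kubeadm init phase upload-certs --upload-certs
# On master2: join using the printed key instead of manual scp
kubeadm join 192.168.43.205:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>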

Join the cluster:

# Step 43
# Run on master2
[root@master2 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97 \
>         --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.43.203 192.168.43.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
# Step 44
# Run on master2
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the nodes:

# Step 45
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   9m18s   v1.21.0
master2   NotReady    control-plane,master   104s    v1.21.0
slave1    NotReady    <none>                 4m33s   v1.21.0
slave2    NotReady    <none>                 4m28s   v1.21.0
# Step 46
# Run on master2
[root@master2 ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   9m18s   v1.21.0
master2   NotReady    control-plane,master   104s    v1.21.0
slave1    NotReady    <none>                 4m33s   v1.21.0
slave2    NotReady    <none>                 4m28s   v1.21.0

Note: the network plugin has not been deployed yet, so no node is ready (STATUS is NotReady). Next, install the network plugin.

9. Installing the flannel network plugin (master node)

Check the cluster state:

# Step 47
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    NotReady    control-plane,master   13m     v1.21.0
master2   NotReady    control-plane,master   2m50s   v1.21.0
slave1    NotReady    <none>                 6m43s   v1.21.0
slave2    NotReady    <none>                 6m35s   v1.21.0

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-545d6fc579-2wmjs          1/1     Running   0          12m
kube-system   coredns-545d6fc579-lp4dg          1/1     Running   0          12m
kube-system   etcd-master                       1/1     Running   0          12m
kube-system   etcd-master2                      1/1     Running   0          2m53s
kube-system   kube-apiserver-master             1/1     Running   1          13m
kube-system   kube-apiserver-master2            1/1     Running   0          2m56s
kube-system   kube-controller-manager-master    1/1     Running   1          12m
kube-system   kube-controller-manager-master2   1/1     Running   0          2m56s
kube-system   kube-proxy-6dtsk                  1/1     Running   0          2m57s
kube-system   kube-proxy-hc5tl                  1/1     Running   0          6m50s
kube-system   kube-proxy-kc824                  1/1     Running   0          6m42s
kube-system   kube-proxy-mltbt                  1/1     Running   0          12m
kube-system   kube-scheduler-master             1/1     Running   1          12m
kube-system   kube-scheduler-master2            1/1     Running   0          2m57s
# Step 48
# Run on the master node
# Fetch the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If that URL is unreachable, use the raw URL from the flannel-io repository (the project moved from coreos to flannel-io)
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Step 49
# Run on the master node
# Edit the manifest so the Pod network matches kubeadm init
net-conf.json: |
    {
      "Network": "10.244.0.0/16", # this subnet must match the --pod-network-cidr passed to kubeadm init
      "Backend": {
        "Type": "vxlan"
      }
    }
# Step 50
# Run on the master node
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
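
Optionally, watch the flannel DaemonSet pods come up before re-checking node status (Ctrl-C to stop watching):

kubectl get pods -n kube-flannel -w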

Check the nodes:

# Step 51
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0
# Step 52
# Run on master2
[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0

Check the pods:

# Step 53
# Run on the master node
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8z6gt             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-j7dt6             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xrb5p             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xs6rr             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2wmjs          1/1     Running   0          15m
kube-system    coredns-545d6fc579-lp4dg          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s
# Step 54
# Run on master2
[root@master2 ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8z6gt             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-j7dt6             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xrb5p             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xs6rr             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2wmjs          1/1     Running   0          15m
kube-system    coredns-545d6fc579-lp4dg          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s

10. Testing

# Step 55
[root@master ~]# curl -k https://192.168.43.205:16443/version
[root@slave1 ~]# curl -k https://192.168.43.205:16443/version
[root@slave2 ~]# curl -k https://192.168.43.205:16443/version
[root@master2 ~]# curl -k https://192.168.43.205:16443/version
{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The apiserver can be reached through the virtual IP from every node.

11. Testing nginx + keepalived failover

Stop nginx on the primary node and check whether the VIP drifts to the backup server: run systemctl stop nginx on the nginx primary, then run ip addr on the nginx backup to confirm the VIP is now bound there.

# Step 56
[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

[root@master ~]# systemctl stop nginx

[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

keepalived fails over correctly.

12. Testing the load balancer

From any node in the k8s cluster, curl the K8s version endpoint through the VIP:

# Step 57
[root@master ~]# curl -k https://192.168.43.205:16443/version
[root@slave1 ~]# curl -k https://192.168.43.205:16443/version
[root@slave2 ~]# curl -k https://192.168.43.205:16443/version
[root@master2 ~]# curl -k https://192.168.43.205:16443/version
{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information comes back correctly, so the load balancer is working. The request path is: curl -> VIP (nginx) -> apiserver. The nginx access log also shows the requests being forwarded to the apiserver IPs:

# Step 58
[root@master2 ~]# tailf /var/log/nginx/k8s-access.log
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:57 +0800] 200 1092
192.168.43.203 192.168.43.200:6443 - [21/Jun/2023:10:07:57 +0800] 200 1092
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:07:57 +0800] 200 1091
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.202 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.200 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1471
192.168.43.202 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.200 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1471
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 375
192.168.43.202 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:49 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:51 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:51 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:52 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:52 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:53 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:53 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:54 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:54 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:55 +0800] 200 424

With that, kubeadm has given us a quick way to stand up a highly available Kubernetes cluster.
