I. Server environment and deployment diagram
- OS: CentOS 7.6
- Server roles, running components, and specs:
10.2.2.10    2C/2G    MySQL 5.7 (runs as a container), Rancher UI (cluster datastore and web UI)
10.2.2.11    2C/512M  haproxy-1, keepalived-1 (cluster load balancing and high availability)
10.2.2.12    2C/512M  haproxy-2, keepalived-2 (cluster load balancing and high availability)
10.2.2.13    2C/2G    k3s-server-1
10.2.2.14    2C/2G    k3s-server-2
10.2.2.15    2C/2G    k3s-agent-1
10.2.2.16    2C/2G    k3s-agent-2
10.2.2.100            keepalived VIP (a virtual IP, not a physical server)
The deployment diagram follows. (diagram image omitted)
II. Server initialization script
Scope: all hosts
Reboot each server after the script finishes.
#!/bin/sh
# Server initialization script (run on every host)
# Environment letter used in hostnames (d = development, p = pre-release, t = testing, a = production)
IP_ENV=t
# Disable the firewall and SELinux (the iptables service only exists if iptables-services is installed)
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo systemctl stop iptables && sudo systemctl disable iptables
sudo setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config
# Disable swap (the fstab edit takes effect on reboot; swapoff turns it off immediately)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Stop NetworkManager
sudo systemctl stop NetworkManager && sudo systemctl disable NetworkManager
# Set the timezone and locale
sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sudo sh -c 'echo "LANG=en_US.UTF-8" >> /etc/profile'
. /etc/profile
# Raise the maximum number of open files
sudo sh -c 'echo "* soft nofile 65535" >> /etc/security/limits.conf'
sudo sh -c 'echo "* hard nofile 65535" >> /etc/security/limits.conf'
# Load the ipvs kernel modules
sudo /sbin/modprobe ip_vs_dh
sudo /sbin/modprobe ip_vs_fo
sudo /sbin/modprobe ip_vs_ftp
sudo /sbin/modprobe ip_vs
sudo /sbin/modprobe ip_vs_lblc
sudo /sbin/modprobe ip_vs_lblcr
sudo /sbin/modprobe ip_vs_lc
sudo /sbin/modprobe ip_vs_nq
sudo /sbin/modprobe ip_vs_ovf
sudo /sbin/modprobe ip_vs_pe_sip
sudo /sbin/modprobe ip_vs_rr
sudo /sbin/modprobe ip_vs_sed
sudo /sbin/modprobe ip_vs_sh
sudo /sbin/modprobe ip_vs_wlc
sudo /sbin/modprobe ip_vs_wrr
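# Note: modprobe does not survive a reboot. A minimal sketch (assuming the
# systemd-modules-load mechanism on CentOS 7; the file name ipvs.conf is an
# arbitrary choice) to reload the key modules at boot:
sudo sh -c 'cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
EOF'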
# Pass bridged IPv4 traffic to the iptables chains.
# The net.bridge.* keys below require the br_netfilter module, otherwise
# sysctl -p fails with "No such file or directory".
sudo /sbin/modprobe br_netfilter
# If the keys are already present in /etc/sysctl.conf, update them in place
sudo sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sudo sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# The keys may be missing entirely, so append them as well (duplicates are harmless: the last occurrence wins)
sudo sh -c 'echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf'
# Required by Elasticsearch
sudo sh -c 'echo "vm.max_map_count = 655300" >> /etc/sysctl.conf'
# Apply the settings above
sudo sysctl -p
# Set the hostname
# Take the last two octets of the IP and join them with the environment letter (d = development, p = pre-release, t = testing, a = production)
ipNumlast2=`ip addr|egrep '[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+'|grep -v '127'|awk -F'[ ]+' '{print $3}'|cut -d / -f 1|cut -d . -f 3-4|tr "\." "${IP_ENV}"`
# Set the hostname
sudo hostnamectl set-hostname $ipNumlast2.cluster
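# Example: on 10.2.2.13 with IP_ENV=t the pipeline turns "2.13" into "2t13",
# so the hostname set above becomes 2t13.cluster:
#   echo "2.13" | tr "." "t"    # -> 2t13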
# Configure the yum repository (the 163/NetEase mirror is used here)
sudo rm -rf /etc/yum.repos.d/*
# NOTE: the repo heredoc was truncated in the original; below is a minimal
# reconstruction of the standard mirrors.163.com CentOS 7 base repo -- add
# [updates] and [extras] sections the same way if you need them
sudo sh -c 'cat > /etc/yum.repos.d/163.repo <<EOF
[base]
name=CentOS-\$releasever - Base - 163.com
baseurl=http://mirrors.163.com/centos/\$releasever/os/\$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
EOF'
# Configure time synchronization with chrony
# NOTE: also truncated in the original; the NTP server below is an assumed
# public source -- point it at your own time server
sudo sh -c 'cat >> /etc/chrony.conf <<EOF
server ntp.aliyun.com iburst
EOF'
sudo systemctl enable chronyd && sudo systemctl restart chronyd
III. Write the IPs and matching hostnames into the hosts file
Scope: all hosts
(The heredoc body was truncated in the original; the entries below follow the hostname scheme set by the init script.)
sudo sh -c 'cat >> /etc/hosts <<EOF
10.2.2.10 2t10.cluster
10.2.2.11 2t11.cluster
10.2.2.12 2t12.cluster
10.2.2.13 2t13.cluster
10.2.2.14 2t14.cluster
10.2.2.15 2t15.cluster
10.2.2.16 2t16.cluster
EOF'
IV. Installing and configuring Docker
Installation method: offline, from the static binary package
Scope: all hosts
1. Download the Docker static binary package
https://download.docker.com/linux/static/stable/
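One possible download command (the x86_64 subdirectory and the 20.10.9 version below follow that site's layout; adjust to the version you want):
curl -LO https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz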
2. Upload the package to the server and extract it
[demo@2t16 docker]$ ls # list the files
docker-20.10.9.tgz
[demo@2t16 docker]$ tar -xvf docker-20.10.9.tgz # extract
docker/
docker/containerd-shim-runc-v2
docker/dockerd
docker/docker-proxy
docker/ctr
docker/docker
docker/runc
docker/containerd-shim
docker/docker-init
docker/containerd
3. Move the Docker binaries into place
[demo@2t16 docker]$ sudo mv docker/* /usr/bin/ # install the binaries
[demo@2t16 docker]$ ls /usr/bin/docker* # verify
/usr/bin/docker /usr/bin/dockerd /usr/bin/docker-init /usr/bin/docker-proxy
4. Create the configuration files
[demo@2t16 docker]$ sudo mkdir /etc/docker # create the config directory first
Note: the rest of this step was truncated in the original; the daemon.json and docker.service below are minimal reconstructions (the registry mirror is an assumption, adjust as needed).
[demo@2t16 docker]$ sudo sh -c 'cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF'
[demo@2t16 docker]$ sudo sh -c 'cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF'
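5. Start Docker and enable it at boot (this step was likewise truncated; a minimal sketch):
[demo@2t16 docker]$ sudo systemctl daemon-reload
[demo@2t16 docker]$ sudo systemctl start docker && sudo systemctl enable docker
[demo@2t16 docker]$ sudo docker info # verify the daemon is running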
V. Start a MySQL 5.7 database container
Purpose: provides the datastore for k3s (k3s supports cluster datastores other than etcd)
Scope: the 10.2.2.10 host
Note: version 5.7 is used because it is the version recommended by Rancher.
In step 5, creating and granting the user for the server-node IPs alone is sufficient.
1. Pull the mysql:5.7 image
[demo@2t10 ~]$ sudo docker pull mysql:5.7
2. Write the start script
Note: the script body and steps 3-4 were truncated in the original; below is a minimal reconstruction (the 13306 host port matches the datastore endpoint used later; the root password and data directory are assumptions).
[demo@2t10 ~]$ cat > /home/demo/start-k3s-mysql.sh <<'EOF'
#!/bin/sh
sudo docker run -d \
  --name mysql-service \
  --restart=unless-stopped \
  -p 13306:3306 \
  -v /home/demo/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=CHANGE_ME \
  mysql:5.7
EOF
3. Run the script
[demo@2t10 ~]$ sh /home/demo/start-k3s-mysql.sh
4. Check the container
[demo@2t10 ~]$ sudo docker ps # the port mapping should show 0.0.0.0:13306->3306/tcp mysql-service
5. Operate inside the container
[demo@2t10 ~]$ sudo docker exec -it mysql-service /bin/sh
# mysql -uroot -p # ---- > log in
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.35 MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database k3s default charset utf8mb4; # ---- > create the k3s database
Query OK, 1 row affected (0.00 sec)
mysql> create user k3s@'10.2.2.10' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.11' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.12' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.13' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.14' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.15' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> create user k3s@'10.2.2.16' identified by 'testdbk3s'; # ---- > create the cluster user and set its password
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.10'; # ---- > grant privileges
Query OK, 0 rows affected (0.01 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.11'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.12'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.13'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.14'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.15'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on k3s.* to k3s@'10.2.2.16'; # ---- > grant privileges
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges; # ---- > reload the grant tables
Query OK, 0 rows affected (0.00 sec)
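The per-IP statements above are mechanical; an equivalent non-interactive sketch loops over the host IPs from the shell on 10.2.2.10 (MYSQL_ROOT_PW is a placeholder for the root password; CREATE USER IF NOT EXISTS requires MySQL 5.7+):
for ip in 10.2.2.10 10.2.2.11 10.2.2.12 10.2.2.13 10.2.2.14 10.2.2.15 10.2.2.16; do
  sudo docker exec mysql-service mysql -uroot -p"$MYSQL_ROOT_PW" -e \
    "CREATE USER IF NOT EXISTS k3s@'$ip' IDENTIFIED BY 'testdbk3s'; GRANT ALL ON k3s.* TO k3s@'$ip';"
done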
VI. Install the k3s server nodes
For server-1, the 10.2.2.13 node
1. Download the k3s offline installation files
[demo@2t13 k3s]$ pwd
/home/demo/k3s
[demo@2t13 k3s]$ ls -l
total 462584
-rw-rw-r-- 1 demo demo 26929 Oct 16 00:57 install.sh
-rw-rw-r-- 1 demo demo 56553472 Oct 16 00:57 k3s
-rw-rw-r-- 1 demo demo 417101824 Oct 16 00:57 k3s-airgap-images-amd64.tar
Notes (these are the versions used throughout this document):
install.sh script source: https://get.k3s.io/
k3s is the main k3s binary. Download: https://github.com/k3s-io/k3s/releases/tag/v1.19.15+k3s2
k3s-airgap-images-amd64.tar contains the images k3s needs. Download: https://github.com/k3s-io/k3s/releases/tag/v1.19.15+k3s2
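One possible way to fetch the three files (in the release asset URLs the + in the version is encoded as %2B):
curl -sfL https://get.k3s.io -o install.sh
curl -LO https://github.com/k3s-io/k3s/releases/download/v1.19.15%2Bk3s2/k3s
curl -LO https://github.com/k3s-io/k3s/releases/download/v1.19.15%2Bk3s2/k3s-airgap-images-amd64.tar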
2. Load the images from k3s-airgap-images-amd64.tar into Docker
[demo@2t13 k3s]$ sudo docker load -i k3s-airgap-images-amd64.tar
3. Make k3s executable and copy it into place
[demo@2t13 k3s]$ chmod +x k3s && sudo cp k3s /usr/local/bin/
4. Run the installer
# Add the following two lines at the top of the k3s install script
[demo@2t13 k3s]$ vim install.sh
... ...
export INSTALL_K3S_SKIP_DOWNLOAD=true
export INSTALL_K3S_EXEC="server --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 10.2.2.100 --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"
... ...
Note: the address after --tls-san is the load-balancer (SLB) address of the cluster, i.e. the keepalived virtual IP configured below.
# Run the script
[demo@2t13 k3s]$ sudo ./install.sh
[INFO] Skipping k3s download and verify
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
5. Copy the kubeconfig from root's home directory into the current user's home
[demo@2t13 k3s]$ sudo cp -ar /root/.kube /home/demo/ && sudo chown -R demo:demo /home/demo/.kube # copy the whole .kube directory so kubectl finds ~/.kube/config
6. Check the node
[demo@2t13 k3s]$ kubectl get node
NAME STATUS ROLES AGE VERSION
2t13.cluster Ready master 2m48s v1.19.15+k3s2
7. Get the node token
[demo@2t13 k3s]$ sudo cat /var/lib/rancher/k3s/server/node-token
K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280
For server-2, the 10.2.2.14 node
This node differs from server-1 in exactly one place:
In step 4 on server-1 we added two environment variables to the install.sh script.
On server-2 they should read as follows:
export INSTALL_K3S_SKIP_DOWNLOAD=true
export INSTALL_K3S_EXEC="server --token K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280 --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 10.2.2.100 --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"
Note: compared with server-1, the only change is the added --token option; its value comes from step 7 on server-1.
After both server nodes are deployed, check the current cluster nodes:
[demo@2t13 k3s]$ kubectl get node
NAME STATUS ROLES AGE VERSION
2t13.cluster Ready master 18m v1.19.15+k3s2
2t14.cluster Ready master 12s v1.19.15+k3s2
VII. Load balancing and high availability for the server nodes with HAProxy + Keepalived
1) Deploying and configuring HAProxy
Deploy haproxy-2.4.7
Installation method: compiled from source
Scope: 10.2.2.11 and 10.2.2.12 (the steps are identical on both servers)
1. Download the source tarball
Download: http://www.haproxy.org/
2. Install gcc
[demo@2t11 haproxy]$ sudo yum install gcc -y
3. Upload the tarball to the server and extract it
[demo@2t11 haproxy]$ tar -xvf haproxy-2.4.7.tar.gz #-------------------> extract
[demo@2t11 haproxy]$ ls -lth #-------------------> list
total 3.5M
-rw-rw-r-- 1 demo demo 3.5M Oct 16 21:37 haproxy-2.4.7.tar.gz
drwxrwxr-x 13 demo demo 4.0K Oct 4 20:56 haproxy-2.4.7
4. Check the kernel version
[demo@2t11 haproxy]$ uname -a
Linux 2t11.cluster 4.4.246-1.el7.elrepo.x86_64 #1 SMP Tue Nov 24 09:26:59 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
5. Build and install
[demo@2t11 haproxy]$ cd haproxy-2.4.7 #-------------------> enter the extracted directory
[demo@2t11 haproxy-2.4.7]$ sudo make TARGET=linux-glibc ARCH=x86_64 PREFIX=/usr/local/haproxy #-------------------> compile
[demo@2t11 haproxy-2.4.7]$ sudo make install PREFIX=/usr/local/haproxy #-------------------> install
6. HAProxy configuration file
[demo@2t11 haproxy]$ sudo mkdir /usr/local/haproxy/cfg #-------------------> create the config directory
[demo@2t11 haproxy]$ cat /usr/local/haproxy/cfg/haproxy.cfg #-------------------> config file contents
global
daemon
maxconn 4000
pidfile /usr/local/haproxy/haproxy.pid
defaults
log global
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
listen admin_stats #------------> stats/monitoring UI section
stats enable
bind *:18090
mode http
option httplog
log global
maxconn 10
stats refresh 5s
stats uri /admin #------------> stats page URI
stats realm haproxy
stats auth admin:HaproxyProd1212!@2021 #------------> stats login and password
stats hide-version
stats admin if TRUE
frontend k3s-apiserver #------------> frontend: the proxy entry point
bind *:6443 #------------> listening port
mode tcp
option tcplog
default_backend k3s-apiserver #------------> default backend
backend k3s-apiserver #------------> backend section
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 #------------> health-check and load-balancing defaults
server k3s-apiserver-13 10.2.2.13:6443 check #------------> backend server
server k3s-apiserver-14 10.2.2.14:6443 check #------------> backend server
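Before wiring HAProxy into systemd it is worth validating the file with HAProxy's built-in check mode:
[demo@2t11 haproxy]$ /usr/local/haproxy/sbin/haproxy -c -f /usr/local/haproxy/cfg/haproxy.cfg
Configuration file is valid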
7. systemd unit file
[demo@2t11 haproxy]$ cat /usr/lib/systemd/system/haproxy.service #-------------------> unit file contents
[Unit]
Description=HAProxy
After=network.target
[Service]
User=root
Type=forking
PIDFile=/usr/local/haproxy/haproxy.pid
ExecStart=/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/cfg/haproxy.cfg
# systemd does not perform backtick command substitution, so stop the
# daemon via $MAINPID, which systemd resolves from PIDFile
ExecStop=/bin/kill $MAINPID
[Install]
WantedBy=multi-user.target
8. Start HAProxy
[demo@2t12 haproxy-2.4.7]$ sudo systemctl daemon-reload
[demo@2t12 haproxy-2.4.7]$ sudo systemctl start haproxy
[demo@2t12 haproxy-2.4.7]$ sudo systemctl enable haproxy
9. Verify
[demo@2t11 haproxy]$ sudo netstat -tnlp|grep haproxy
tcp 0 0 0.0.0.0:18090 0.0.0.0:* LISTEN 9340/haproxy
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 9340/haproxy
Open the HAProxy stats page at http://10.2.2.11:18090/admin
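A quick sanity check that the TCP path to the apiservers works: an anonymous request should reach a backend and come back with an HTTP 401/403 JSON error from the apiserver, which proves the proxy chain is up even though the request itself is rejected:
[demo@2t11 haproxy]$ curl -k https://10.2.2.11:6443/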
2) Deploying and configuring Keepalived
Deploy keepalived-2.1.5
Installation method: compiled from source
Scope: 10.2.2.11 and 10.2.2.12 (the keepalived configuration files differ slightly between the two servers; the differences are marked below)
1. Download the source tarball
https://www.keepalived.org/download.html
2. Upload it to the server and extract
[demo@2t11 keepalived]$ tar -xvf keepalived-2.1.5.tar.gz
3. Install the build dependencies
[demo@2t11 keepalived]$ sudo yum install curl gcc openssl-devel libnl3-devel net-snmp-devel -y
4. Configure
[demo@2t11 keepalived]$ cd keepalived-2.1.5 #-------------------> enter the extracted directory
[demo@2t11 keepalived-2.1.5]$ sudo ./configure --prefix=/usr/local/keepalived/ --sysconfdir=/etc #-------------------> configure
5. Compile and install
[demo@2t11 keepalived-2.1.5]$ sudo make && sudo make install
6. Check the install tree and copy the relevant files into place
[demo@2t11 keepalived-2.1.5]$ ls /usr/local/keepalived/
bin etc sbin share
[demo@2t11 keepalived-2.1.5]$ pwd #-------------------> the current directory is the unpacked source tree (where the build ran)
/home/demo/keepalived/keepalived-2.1.5
[demo@2t11 keepalived-2.1.5]$ sudo cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[demo@2t11 keepalived-2.1.5]$ sudo cp /usr/local/keepalived/bin/genhash /usr/sbin/
[demo@2t11 keepalived-2.1.5]$ sudo cp keepalived/keepalived.service /usr/lib/systemd/system/
[demo@2t11 keepalived-2.1.5]$ sudo cp keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig
7. Write the configuration files
# For 10.2.2.11
[demo@2t11 keepalived]$ cd /etc/keepalived #------------> go to the config directory
[demo@2t11 keepalived]$ sudo mv keepalived.conf keepalived.conf.bak #------------> back up the stock config; use the one below instead
[demo@2t11 keepalived]$ cat keepalived.conf #------------> config file
global_defs {
notification_email {
[email protected] # recipients to notify when a failover happens, one per line
}
notification_email_from [email protected] # sender address
smtp_server smtp.qq.com # SMTP server address
smtp_connect_timeout 30 # SMTP connect timeout
router_id 2t11.cluster # string identifying this node, usually (but not necessarily) the hostname; used in failover notification mail
script_user root
enable_script_security
}
vrrp_script check_haproxy {
script /etc/keepalived/check_haproxy.sh # haproxy health-check script
interval 3
}
vrrp_instance VI_1 {
state BACKUP # node role; non-preempt mode is configured below, so both nodes are set to BACKUP
nopreempt # non-preempt mode
interface ens33 # NIC carrying the node's own IP (not the VIP), used to send VRRP heartbeat packets
virtual_router_id 62 # virtual router ID, 0-255; distinguishes VRRP groups multicasting on the same segment, must not collide within the segment, and must match on master and backup
priority 100 # election priority, valid range 1-255 (values outside it fall back to the default of 100); the preferred master should be set noticeably higher than its peers
advert_int 1 # advertisement interval, 1 second by default (effectively the health-check/election interval)
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.2.2.100 # the virtual IP; more than one may be listed
}
track_script {
check_haproxy
}
}
# For 10.2.2.12
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server smtp.qq.com
smtp_connect_timeout 30
router_id 2t12.cluster # -----------------------> differs from 10.2.2.11 (one router_id per node)
script_user root
enable_script_security
}
vrrp_script check_haproxy {
script /etc/keepalived/check_haproxy.sh
interval 3
}
vrrp_instance VI_1 {
state BACKUP
nopreempt
interface ens33
virtual_router_id 62
priority 99 # -----------------------> differs from 10.2.2.11
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.2.2.100
}
track_script {
check_haproxy
}
}
8. The check_haproxy script (make sure the script is executable)
[demo@2t12 keepalived-2.1.5]$ cat /etc/keepalived/check_haproxy.sh
#!/bin/bash
# If haproxy is no longer running, stop keepalived so that the VIP
# fails over to the peer node.
haproxy_status=`/usr/sbin/pidof haproxy|wc -l`
if [ $haproxy_status -lt 1 ];then
systemctl stop keepalived
fi
9. Start/stop management
[demo@2t12 keepalived-2.1.5]$ sudo systemctl daemon-reload
[demo@2t12 keepalived-2.1.5]$ sudo systemctl start keepalived
[demo@2t12 keepalived-2.1.5]$ sudo systemctl stop keepalived
[demo@2t12 keepalived-2.1.5]$ sudo systemctl enable keepalived
3) Access the HAProxy stats page through the keepalived VIP: http://10.2.2.100:18090/admin
4) Testing keepalived + HAProxy failover
Test procedure (a command-level sketch of step 4 follows the list):
Step 1: stop haproxy and keepalived on both 10.2.2.11 and 10.2.2.12.
Step 2: start haproxy on both 10.2.2.11 and 10.2.2.12.
Step 3: start keepalived on both 10.2.2.11 and 10.2.2.12 and check which node holds the virtual IP.
Step 4: stop haproxy on the node holding the VIP (10.2.2.11) and check that the VIP floats to the other node.
Step 5: start haproxy and keepalived again on the node where they were stopped, then stop haproxy on the other node and check that the VIP floats back.
Step 6: if everything behaves as described, the keepalived + haproxy HA pair is deployed and ready to serve the k3s cluster.
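A minimal sketch of the check in step 4, run on 10.2.2.11 (ens33 is the interface name from the keepalived config; the check script stops keepalived within about one interval, i.e. ~3 seconds):
[demo@2t11 ~]$ ip addr show ens33 | grep 10.2.2.100 # is the VIP currently on this node?
[demo@2t11 ~]$ sudo systemctl stop haproxy
[demo@2t11 ~]$ sleep 5; ip addr show ens33 | grep 10.2.2.100 || echo "VIP has moved to the peer"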
VIII. Install the k3s agent nodes
Scope: 10.2.2.15 and 10.2.2.16
Log in to 10.2.2.13 (server-1 above) and copy the three k3s files to the 10.2.2.15 and 10.2.2.16 hosts
[demo@2t13 k3s]$ cd /home/demo/k3s/ # ---> enter the k3s file directory
[demo@2t13 k3s]$ ls # ---> list
install.sh k3s k3s-airgap-images-amd64.tar # ---> these are the 3 files
[demo@2t13 k3s]$ scp ./* 10.2.2.15:/home/demo/k3s/ # ---> copy to 10.2.2.15
[demo@2t13 k3s]$ scp ./* 10.2.2.16:/home/demo/k3s/ # ---> copy to 10.2.2.16
Edit install.sh as follows (the change is identical on 10.2.2.15 and 10.2.2.16)
[demo@2t15 k3s]$ vim install.sh
... ...
export INSTALL_K3S_SKIP_DOWNLOAD=true
export K3S_TOKEN=K10e1b1fcb4caf1f726580e0fb22d15ff4fcb48e5a26c0841b4c63b8176169a66f2::server:447912a715c422f1cce5893c37572280
export K3S_URL=https://10.2.2.100:6443
export INSTALL_K3S_EXEC="agent --datastore-endpoint=mysql://k3s:testdbk3s@tcp(10.2.2.10:13306)/k3s --docker --kube-apiserver-arg service-node-port-range=10000-65000 --no-deploy traefik --write-kubeconfig ~/.kube/config --write-kubeconfig-mode 666"
... ...
Installation
1. Make k3s executable and copy it into place
[demo@2t15 k3s]$ chmod +x k3s && sudo cp k3s /usr/local/bin/
2. Run the installer
[demo@2t15 k3s]$ sudo ./install.sh
[sudo] password for demo:
[INFO] Skipping k3s download and verify
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s-agent.service to /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
Log in to 10.2.2.13 and check the cluster nodes
[demo@2t13 k3s]$ kubectl get node
NAME STATUS ROLES AGE VERSION
2t16.cluster Ready <none> 15m v1.19.15+k3s2
2t14.cluster Ready master 33m v1.19.15+k3s2
2t13.cluster Ready master 77m v1.19.15+k3s2
2t15.cluster Ready <none> 16m v1.19.15+k3s2
IX. Install the Rancher UI
Scope: 10.2.2.10
[demo@2t10 ~]$ sudo docker run --privileged -d -v /home/demo/rancherUI-data/:/var/lib/rancher --restart=unless-stopped --name rancher -p 80:80 -p 9443:443 rancher/rancher:v2.4.17
c93d4d3f1a273cb693d6caf3f515d88797172a81f36a3acf5ce2f75138e46e9e
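A quick check that the container came up (Rancher listens on 80/443 inside the container, mapped to 80/9443 here; the curl just confirms something is answering on the HTTPS port):
[demo@2t10 ~]$ sudo docker ps --filter name=rancher
[demo@2t10 ~]$ curl -kIs https://10.2.2.10:9443 | head -1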
Open the UI at http://10.2.2.10 (screenshot omitted).
Then import the k3s cluster into Rancher as shown in the screenshots (images omitted).
Copy the import command highlighted in the screenshot and run it on the 10.2.2.13 or 10.2.2.14 node (image omitted).