Building a Highly Available Kubernetes Cluster with kubeadm (v1.19.4)

Table of Contents

    • Preface
    • I. Notes on the Setup
      • 1. A common misconception about HA
      • 2. Master node components
        • 2.1 apiserver
        • 2.2 controller-manager and scheduler
        • 2.3 Failure tolerance of each component
        • 2.4 Load balancing options for an HA cluster
          • 2.4.1 Balancing requests across the Master nodes
          • 2.4.2 The approach used in this article
        • 2.5 What the load balancer needs to achieve
        • 2.6 What this article sets up
    • II. The Setup Process
      • 1. Preparing the environment
        • 1.1 Uploading the packages
        • 1.2 Installing the RPMs
        • 1.3 Configuring the environment
        • 1.4 Installing Docker
        • 1.5 Installing Harbor
        • 1.6 Installing Keepalived
        • 1.7 Installing HAProxy
      • 2. Running kubeadm
        • 2.1 The first Master
        • 2.2 The second Master
        • 2.3 The third Master
        • 2.4 Joining the Node
        • 2.5 A few small problems encountered
      • 3. Installing Rancher
    • III. Building the Offline Installation Packages
      • 1. Base environment
        • 1.1 What the overall setup needs
        • 1.2 Fetching the base tools
      • 2. Kernel
        • 2.1 Upgrading the kernel (required, important)
        • 2.2 Kernel upgrade steps
      • 3. kubeadm packages
      • 4. Docker packages
      • 5. keepalived configuration template
      • 6. haproxy configuration template
      • 7. The Calico network plugin
      • 8. The kubeadm.conf file
      • 9. Images required by kubeadm
      • 10. Images required by the Calico plugin
      • 11. Saving all files locally
      • 12. Harbor private registry & Rancher

Preface

The article has two parts:

  • Part one: the offline installation itself
  • Part two: how to build the offline installation packages
  • The two parts can also be combined and followed as an online installation
  • If anything here is wrong, leave a comment or a private message and I will reply and fix it as soon as possible

Demo environment

CPU: i7-8750H
Memory: 16 GB
Disk: 128 GB SSD + 1 TB HDD (the VMs and related software run on the HDD)
Virtualization: VMware Workstation Pro 15
Shell tools: SecureCRTPortable/SecureFXPortable
Host OS/version: Windows 10 Home (Chinese edition) / 2004

Virtual machine plan (my hardware is limited; give the VMs more resources if you can):

IP              Role    OS          Spec
192.168.91.221  Master  CentOS 7.6  1P/2C/2G/50G
192.168.91.132  Master  CentOS 7.6  1P/2C/2G/50G
192.168.91.133  Master  CentOS 7.6  1P/2C/2G/50G
192.168.91.163  Node    CentOS 7.6  2P/2C/4G/70G
192.168.91.220  VIP

Notes:

Spec format: 1P/2C/2G/50G means 1 processor, 2 cores, 2 GB of RAM and a 50 GB disk.

Keep the CentOS version the same as mine if you can, and in any case no older than CentOS 7.5.

I. Notes on the Setup

For K8S to be highly available, none of its components may be a single point of failure.

kubeadm supports two ways of setting up an HA K8S cluster:

  • A stacked control plane, where the etcd members run on the same nodes as the control-plane components
  • An external etcd cluster, where etcd runs on nodes separate from the control plane

For a comparison of the two topologies, see:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/

Topology diagrams (see the linked documentation for the pictures):

  • etcd stacked inside the K8S cluster
  • External etcd

1. A common misconception about HA

Many people believe that an HA K8S cluster must have at least three Master nodes and that the count must be odd. That is not strictly true: two Masters already give you high availability, and even node counts can also work (the etcd quorum rules in 2.3 below are what drive the usual three-node recommendation).

https://www.kubernetes.org.cn/7569.html

https://cloud.tencent.com/developer/article/1551687

2. Master node components

  • apiserver: the entry point for resource CRUD and all cluster control requests
  • controller-manager: runs the controllers that manage and reconcile every resource
  • scheduler: decides which node each workload is placed on

2.1 apiserver

The apiserver is stateless, so requests to it can simply be load balanced; there is no situation where losing a single instance stops it from working, and it scales horizontally.

2.2 controller-manager and scheduler

Within a Master cluster only one instance of each of these two components is active at any time. When the active instance becomes unavailable, the remaining instances try to take over as the working instance (leader election).

2.3 Failure tolerance of each component

For the Master components themselves the minimum number of healthy nodes is 1: as long as a single node is up, they can still provide service.

etcd is different: it elects a leader with the Raft protocol and needs a majority (quorum), so the minimum number of healthy members is floor(n/2)+1.

Total members   Minimum alive    Failures tolerated
1               1                0
2               2                0
3               2                1
4               3                1
5               3                2
6               4                2
n               floor(n/2)+1     n - (floor(n/2)+1)

With one etcd member you have a single point of failure. With two members, losing either one costs you quorum and the etcd cluster stops working, so an HA etcd cluster needs at least three members. A four-member cluster tolerates exactly as many failures as a three-member one, so from an availability standpoint the fourth member adds nothing; extra tolerance only arrives at each higher odd member count. The recommendation is therefore an odd number of etcd members, with a minimum of three. (A quick sanity check of this arithmetic follows after the next bullet.)

  • When the etcd members are co-located with the control-plane components (the stacked topology used here), the etcd requirement drives the Master count: at least three Masters, preferably an odd number.
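As a quick sanity check of the quorum arithmetic above (plain shell arithmetic, nothing cluster-specific; the member counts are just examples):

#!/bin/bash
# Print quorum size and failure tolerance for a few etcd member counts
for n in 1 2 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))        # floor(n/2)+1, since bash division truncates
  tolerance=$(( n - quorum ))
  echo "members=$n  minimum alive=$quorum  failures tolerated=$tolerance"
done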

2.4 Load balancing options for an HA cluster

2.4.1 Balancing requests across the Master nodes
  • SLB https://www.kubernetes.org.cn/7033.html
  • LVS https://cloud.tencent.com/developer/article/1747833
  • HAProxy
  • Nginx
2.4.2 The approach used in this article
  • HAProxy (with Keepalived providing the VIP)

HAProxy on each Master is bound to the Masters' apiservers and forwards traffic arriving on the VIP to them.

2.5 What the load balancer needs to achieve

  • The only goal is that cluster requests are evenly load balanced across the apiservers; anything that achieves this is sufficient.

2.6 What this article sets up

  • Bootstrap tool: kubeadm
  • Kubernetes UI: Rancher
  • Load balancing: Keepalived + HAProxy
  • Cluster network: Calico

II. The Setup Process

CentOS 7 ISO download: http://isoredirect.centos.org/centos/7/isos/x86_64/

1. Preparing the environment

  • Connect to the four servers with SecureCRT (any other SSH client works, as does operating on the VM consoles directly).
  • Upload the offline installation packages to the four servers.

1.1 Uploading the packages

Upload the offline package to one server, then copy it to the other servers over SSH.

  • Copy the packages to the other servers (not every file is needed on every node):
scp -r haproxy.cfg  ipvs.conf  k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf  192.168.91.132:/root/

scp -r haproxy.cfg  ipvs.conf  k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf  192.168.91.133:/root/

scp -r ipvs.conf kernel/ softrpm/ k8s-master1.19.4/ 192.168.91.163:/root/
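If you prefer not to repeat the scp commands by hand, a small loop does the same thing (a sketch that assumes the file layout above and that root SSH logins are allowed):

#!/bin/bash
# Copy the Master file set to the other two Masters, then the Node file set to node1
for host in 192.168.91.132 192.168.91.133; do
  scp -r haproxy.cfg ipvs.conf k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf "$host":/root/
done
scp -r ipvs.conf kernel/ softrpm/ k8s-master1.19.4/ 192.168.91.163:/root/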

1.2 Installing the RPMs

Run on every server:
rpm -ivh k8s-master1.19.4/* kernel/* softrpm/* --force  

1.3 Configuring the environment

  • If you are also running local virtual machines, configure a static IP on each host's network interface, because the system's NetworkManager service will be disabled below.
The interface configuration of m1 is shown here as an example:
[root@m1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="a6690222-c1e5-45cf-93e5-2cfa7bf54960"
DEVICE="ens32"
ONBOOT="yes"
IPADDR=192.168.91.221
GATEWAY=192.168.91.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
[root@m1 ~]# 
  • Set the hostname on each machine
hostnamectl set-hostname m1
hostnamectl set-hostname m2
hostnamectl set-hostname m3
hostnamectl set-hostname node1
  • Add the following entries to every machine's hosts file (a one-liner for appending them follows after the list)
192.168.91.221 m1
192.168.91.132 m2
192.168.91.133 m3
192.168.91.163 node1
192.168.91.220 k8s.vip
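One way to apply these entries (a hedged helper; adjust the IPs and hostnames to your environment and run it on each machine):

cat >> /etc/hosts <<EOF
192.168.91.221 m1
192.168.91.132 m2
192.168.91.133 m3
192.168.91.163 node1
192.168.91.220 k8s.vip
EOF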
  • Configure the base environment: save the following into a script and run it on every node
#!/bin/bash
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable the swap partition
swapoff -a &&  sed -ri 's/.*swap.*/#&/' /etc/fstab
# Disable DNS lookups for ssh logins
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl restart sshd
# Flush iptables
iptables -P INPUT ACCEPT && iptables -P FORWARD ACCEPT && iptables -F && iptables -L -n && ipvsadm --clear
# Disable selinux
setenforce 0 &&  sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable NetworkManager
systemctl stop NetworkManager && systemctl disable NetworkManager 
# Write /etc/sysctl.conf
cat <<EOF > /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.neigh.default.gc_interval=60
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
# see https://github.com/prometheus/node_exporter#disabled-by-default
kernel.perf_event_paranoid=-1

#sysctls for k8s node config
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1

kernel.softlockup_panic=0

kernel.watchdog_thresh=30
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=8096
net.ipv4.tcp_rmem=4096 12582912 16777216

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

kernel.yama.ptrace_scope=0
vm.swappiness=0

# Append the pid to core dump file names
kernel.core_uses_pid=1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_source_route=0

# Promote secondary addresses when the primary address is removed
net.ipv4.conf.default.promote_secondaries=1
net.ipv4.conf.all.promote_secondaries=1

# Enable hard and soft link protection
fs.protected_hardlinks=1
fs.protected_symlinks=1

# Reverse path / source route validation
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2

# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets=5000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_synack_retries=2
kernel.sysrq=1
vm.overcommit_memory=1
vm.panic_on_oom=0
vm.max_map_count=262144
EOF
# ipvs modules to load at boot
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# resource limits
cat <<EOF > /etc/security/limits.d/kubernetes.conf
*       soft    nproc   131072
*       hard    nproc   131072
*       soft    nofile  131072
*       hard    nofile  131072
root    soft    nproc   131072
root    hard    nproc   131072
root    soft    nofile  131072
root    hard    nofile  131072
EOF
# Load br_netfilter
modprobe br_netfilter && lsmod |grep br_netfilter

systemctl enable --now systemd-modules-load

sysctl -p

systemctl enable --now kubelet
# GRUB: boot the new kernel by default
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
sed -i 's/GRUB_DEFAULT=saved/GRUB_DEFAULT=0/g' /etc/default/grub
sleep 2

reboot

  • When the script finishes, the VM reboots to pick up the new settings. After the reboot, first check that the ipvs modules are loaded:
[root@node1 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          155648  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
[root@node1 ~]# 
The modules are loaded as expected.

1.4 Installing Docker

Upload the Docker and Harbor packages to the servers. Docker has to be installed on every server; Harbor will be installed on node1 only.

Distribute Docker to every node and put the Harbor package on node1:

scp -r docker/ 192.168.91.132:/root/
scp -r docker/ 192.168.91.133:/root/
scp -r docker/ 192.168.91.163:/root/

scp -r harbor1.5/ 192.168.91.163:/root/
  • Install Docker:
# Unpack the archive
tar -zxvf docker-19.03.9.tgz 
# Copy all extracted binaries to /usr/bin
cp docker/* /usr/bin
# Copy docker.service to /etc/systemd/system/
cp docker.service /etc/systemd/system/
------------------------------------------------------------
[root@m1 docker]# ll docker
总用量 195504
-rwxr-xr-x 1 1000 1000 32751272 5月  15 2020 containerd
-rwxr-xr-x 1 1000 1000  6012928 5月  15 2020 containerd-shim
-rwxr-xr-x 1 1000 1000 18194536 5月  15 2020 ctr
-rwxr-xr-x 1 1000 1000 61113382 5月  15 2020 docker
-rwxr-xr-x 1 1000 1000 68874208 5月  15 2020 dockerd
-rwxr-xr-x 1 1000 1000   708616 5月  15 2020 docker-init
-rwxr-xr-x 1 1000 1000  2928514 5月  15 2020 docker-proxy
-rwxr-xr-x 1 1000 1000  9600696 5月  15 2020 runc
[root@m1 docker]# ll
总用量 59312
drwxrwxr-x 2 1000 1000      138 5月  15 2020 docker
-rw-r--r-- 1 root root 60730088 10月 25 21:13 docker-19.03.9.tgz
-rw-r--r-- 1 root root     1146 8月  27 2019 docker.service
[root@m1 docker]# cp docker/** /usr/bin/
[root@m1 docker]# cp docker.service  /etc/systemd/system/
------------------------------------------------------------
Repeat on every node.

Start docker:
------------------------------------------------------------
[root@m1 docker]# systemctl start docker && systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
  • Configure daemon.json; 192.168.91.163:88 is the Harbor address and port
  • Harbor will be installed on node1 (192.168.91.163) with its port set to 88
cat <<EOF > /etc/docker/daemon.json
{
    "insecure-registries": ["192.168.91.163:88"],
    "registry-mirrors": [
        "https://fz5yth0r.mirror.aliyuncs.com"
    ],
    "max-concurrent-downloads": 15,
    "max-concurrent-uploads": 15,
        "oom-score-adjust": -1000,
        "graph": "/var/lib/docker",
        "exec-opts": ["native.cgroupdriver=systemd"],
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
EOF

systemctl daemon-reload && systemctl restart docker
  • Repeat on every machine.
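A quick way to confirm the daemon picked up these settings on each machine (the grep patterns are just a convenience, not an official interface):

docker info 2>/dev/null | grep -iE 'cgroup driver|storage driver'
docker info 2>/dev/null | grep -iA2 'insecure registries'
# Expect "Cgroup Driver: systemd", "Storage Driver: overlay2" and 192.168.91.163:88 listed as an insecure registry.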

1.5 Installing Harbor

  • On node1, go into the harbor1.5 directory:
[root@node1 ~]# ll
总用量 20
-rw-------. 1 root root 1253 11月 29 11:52 anaconda-ks.cfg
drwxr-xr-x  2 root root   54 11月 29 14:03 docker
-rwxr-xr-x. 1 root root 4053 11月 29 13:47 envMake.sh
drwxr-xr-x  4 root root  251 11月 29 14:05 harbor1.5
-rw-r--r--. 1 root root  108 11月 29 12:49 ipvs.conf
drwxr-xr-x. 2 root root 4096 11月 29 12:49 kernel
drwxr-xr-x. 2 root root 4096 11月 29 12:49 softrpm
[root@node1 ~]# 
[root@node1 ~]# 
[root@node1 ~]# cd harbor1.5/
[root@node1 harbor1.5]# ls
common                     docker-compose.yml    install.sh
docker-compose             ha                    LICENSE
docker-compose.clair.yml   harbor.cfg            NOTICE
docker-compose.notary.yml  harbor.v1.5.0.tar.gz  prepare
[root@node1 harbor1.5]# 
  • Edit harbor.cfg and set hostname:
[root@node1 harbor1.5]# vi harbor.cfg
## Configuration file of Harbor

#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version = 1.5.0
#The IP address or hostname to access admin UI and registry service.
#DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname = 192.168.91.163

Set hostname to the IP of the current machine.

  • Edit the standalone registry template and change the {port} placeholder to 88:


[root@node1 harbor1.5]# vi common/templates/registry/config.yml
version: 0.1
log:
  level: info
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  $storage_provider_info
  maintenance:
    uploadpurging:
      enabled: false
  delete:
    enabled: true
http:
  addr: :5000
  secret: placeholder
  debug:
    addr: localhost:5001
auth:
  token:
    issuer: harbor-token-issuer
    realm: $public_url:88/service/token
    rootcertbundle: /etc/registry/root.crt
    service: harbor-registry
notifications:
  endpoints:
  - name: harbor
    disabled: false
    url: $ui_url/service/notifications
    timeout: 3000ms
    threshold: 5
    backoff: 1s
  • Edit docker-compose.yml and change the port mapping (a sketch follows):
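The change amounts to remapping the proxy service's HTTP host port from 80 to 88. A rough sketch of the edited stanza (the exact layout of Harbor 1.5's docker-compose.yml may differ slightly):

vi docker-compose.yml
#  proxy:
#    ports:
#      - 88:80        # was 80:80; 88 is the port clients will use (matches daemon.json and the registry config above)
#      - 443:443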

  • Run the install script:
[root@node1 harbor1.5]# chmod +x * && cp docker-compose /usr/bin/
[root@node1 harbor1.5]# ./install.sh 

[Step 0]: checking installation environment ...

Note: docker version: 19.03.9

Note: docker-compose version: 1.25.0

[Step 1]: loading Harbor images ...
52ef9064d2e4: Loading layer [==================================================>]  135.9MB/135.9MB
c169f7c7a5ff: Loading layer [==================================================>]  
# (image loading output omitted)

[Step 2]: preparing environment ...
Clearing the configuration file: ./common/config/adminserver/env
Clearing the configuration file: ./common/config/db/env
Clearing the configuration file: ./common/config/jobservice/config.yml
Clearing the configuration file: ./common/config/jobservice/env
Clearing the configuration file: ./common/config/log/logrotate.conf
Clearing the configuration file: ./common/config/nginx/nginx.conf
Clearing the configuration file: ./common/config/registry/config.yml
Clearing the configuration file: ./common/config/registry/root.crt
Clearing the configuration file: ./common/config/ui/app.conf
Clearing the configuration file: ./common/config/ui/env
Clearing the configuration file: ./common/config/ui/private_key.pem
Generated and saved secret to file: /data/secretkey
Generated configuration file: ./common/config/nginx/nginx.conf
Generated configuration file: ./common/config/adminserver/env
Generated configuration file: ./common/config/ui/env
Generated configuration file: ./common/config/registry/config.yml
Generated configuration file: ./common/config/db/env
Generated configuration file: ./common/config/jobservice/env
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/log/logrotate.conf
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/ui/app.conf
Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
The configuration files are ready, please use docker-compose to start the service.


[Step 3]: checking existing instance of Harbor ...


[Step 4]: starting Harbor ...
Creating network "harbor15_harbor" with the default driver
Creating harbor-log ... done
Creating registry           ... done
Creating redis              ... done
Creating harbor-db          ... done
Creating harbor-adminserver ... done
Creating harbor-ui          ... done
Creating harbor-jobservice  ... done
Creating nginx              ... done

✔ ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at http://192.168.91.163. 
For more details, please visit https://github.com/vmware/harbor .

  • Open 192.168.91.163:88 in a browser.


  • Username: admin, password: Harbor12345
  • The password is set in harbor.cfg


  • Log in to Harbor and create two projects: k8s.gcr.io and calico.


  • Log docker in to Harbor on every server:
[root@m1 ~]# docker login 192.168.91.163:88
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@m1 ~]# 
  • Push the images required to install K8S; they were already uploaded during the initial file transfer:
[root@m1 ~]# cd kubeadm/
[root@m1 kubeadm]# ls
calico.yaml  haproxy.cfg  k8simages         keepalived.conf  kubeadm.yaml
envMake.sh   ipvs.conf    k8s-master1.19.4  kernel           softrpm
[root@m1 kubeadm]# 
[root@m1 kubeadm]# ll
总用量 220
-rw-r--r--. 1 root root 187871 11月 26 17:41 calico.yaml
-rwxr-xr-x. 1 root root   3881 11月 29 13:37 envMake.sh
-rw-r--r--. 1 root root   1617 11月 26 17:24 haproxy.cfg
-rw-r--r--. 1 root root    108 11月 26 15:01 ipvs.conf
drwxr-xr-x. 2 root root   4096 11月 29 12:45 k8simages
drwxr-xr-x. 2 root root   4096 11月 29 12:45 k8s-master1.19.4
-rw-r--r--. 1 root root    667 11月 26 12:35 keepalived.conf
drwxr-xr-x. 2 root root   4096 11月 29 12:45 kernel
-rw-r--r--. 1 root root   1484 11月 26 17:28 kubeadm.yaml
drwxr-xr-x. 2 root root   4096 11月 29 12:45 softrpm
[root@m1 kubeadm]# 
  • Run the following script to push the images to Harbor; adjust the two variables first:
#!/bin/bash
# Your Harbor address, including the port
harbor_address=192.168.91.163:88
# Directory holding the saved images
imagesDir=/root/kubeadm/k8simages
for image in `ls $imagesDir`;do
   docker load -i $imagesDir/$image
done

for i in `docker images | awk  'NR>1{print $1":"$2}'`;do
 imageTag=$harbor_address/$i
 docker tag $i $imageTag 
 docker push $imageTag
done
  • Check the pushed images in the Harbor UI.

1.6 Installing Keepalived

  • Edit the keepalived.conf that was uploaded at the beginning:

[root@m1 ~]# cd kubeadm/
[root@m1 kubeadm]# vi keepalived.conf 
global_defs {
  notification_email {
    root@localhost
    }
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 3
        weight -2
        fall 10
        rise 2
    }
vrrp_instance VI_1 {
        state MASTER
        interface ens32 #your NIC name
        virtual_router_id 51
        priority 250
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            check_haproxy
        }
        unicast_src_ip 192.168.91.221 #this machine's IP
        unicast_peer { #the other two Master IPs
          192.168.91.132 
          192.168.91.133
        }
        virtual_ipaddress {
          192.168.91.220 #virtual IP (VIP)
        }
}
  • Copy the file to /etc/keepalived/:
#Back up the original file
[root@m1 kubeadm]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak && mv keepalived.conf  /etc/keepalived/
#Start Keepalived
[root@m1 kubeadm]# systemctl restart keepalived.service && systemctl enable --now keepalived.service
#Check the virtual IP
[root@m1 kubeadm]# ip a | grep ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.91.221/24 brd 192.168.91.255 scope global noprefixroute ens32
    inet 192.168.91.220/32 scope global ens32
  • The second Master:
[root@m2 ~]# vi keepalived.conf 
global_defs {
  notification_email {
    root@localhost
    }
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 3
        weight -2
        fall 10
        rise 2
    }
vrrp_instance VI_1 {
        state BACKUP #backup node
        interface ens32 #this machine's NIC name
        virtual_router_id 51
        priority 200 #50 less than the first Master
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            check_haproxy
        }
        unicast_src_ip 192.168.91.132 #this machine's IP
        unicast_peer { #the other two Master IPs
          192.168.91.221
          192.168.91.133
        }
        virtual_ipaddress {
          192.168.91.220 #VIP
        }
}
[root@m2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak && mv keepalived.conf  /etc/keepalived/ && systemctl start keepalived && systemctl enable --now keepalived
  • The third Master:
[root@m3 ~]# vi keepalived.conf 
global_defs {
  notification_email {
    root@localhost
    }
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 3
        weight -2
        fall 10
        rise 2
    }
vrrp_instance VI_1 {
        state BACKUP #backup node
        interface ens32 #this machine's NIC name
        virtual_router_id 51
        priority 150 #50 less than the second Master
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            check_haproxy
        }
        unicast_src_ip 192.168.91.133 #this machine's IP
        unicast_peer { #the other Master IPs
          192.168.91.221
          192.168.91.132
        }
        virtual_ipaddress {
          192.168.91.220 #VIP
        }
}
[root@m3 ~]# mv /etc/keepalived/keepalived.conf  /etc/keepalived/keepalived.conf.bak && mv keepalived.conf  /etc/keepalived/ && systemctl start keepalived.service && systemctl enable --now keepalived.service 

1.7 Installing HAProxy

  • The haproxy.cfg content is the same on every node.
  • Adjust the hostnames and IPs to match your machines; nothing else needs to change.
[root@m1 kubeadm]# vi haproxy.cfg 
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    15s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats #http://ip:8006/stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend kubernetes
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-apiServer

backend k8s-apiServer
    mode tcp
    option tcplog
    option httpchk GET /healthz
    http-check expect string ok
    balance     roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    #adjust the hostnames and IPs to your machines; 6443 is the apiserver port and does not change
    server  m1 192.168.91.221:6443 check check-ssl verify none
    server  m2 192.168.91.132:6443 check check-ssl verify none
    server  m3 192.168.91.133:6443 check check-ssl verify none
  • Install haproxy.cfg and start HAProxy:
[root@m1 kubeadm]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak && mv haproxy.cfg  /etc/haproxy/ && systemctl restart haproxy.service  && systemctl enable --now haproxy.service 

Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service
[root@m1 kubeadm]# 
  • Repeat on every Master.
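A few hedged sanity checks once HAProxy is running on all three Masters (the port numbers come straight from haproxy.cfg above):

ss -lntp | grep -E ':(8443|8006|33305)'    # frontend, stats and monitor ports are listening
curl -s http://127.0.0.1:33305/monitor      # the monitor-uri should answer
# To see the VIP move, stop keepalived on the node that currently holds 192.168.91.220,
# check "ip a | grep 192.168.91.220" on the other Masters, then start it again:
systemctl stop keepalived && sleep 10 && systemctl start keepalived

One thing worth noting about the keepalived configuration above: the check_haproxy script only lowers the priority by 2, while the Masters' priorities differ by 50, so a dead haproxy process alone will not move the VIP; only losing keepalived (or the whole node) will.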

2. Running kubeadm

2.1 The first Master

  • On any one Master node, edit kubeadm.yaml:
[root@m1 kubeadm]# vi kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
controlPlaneEndpoint: "k8s.vip:8443"
#change the IP and port to your Harbor address
imageRepository: 192.168.91.163:88/k8s.gcr.io
kubernetesVersion: v1.19.4
networking: #cluster networking
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.68.0.0/16
apiServer:
  certSANs:
  - "192.168.91.221"
  - "192.168.91.133"
  - "192.168.91.132"
  - "m1"
  - "m2"
  - "m3"
  - "192.168.91.220"
  - "k8s.vip"
etcd:
  local:
    dataDir: "/var/lib/etcd"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "0.25"
  memory: 128Mi
imageGCHighThresholdPercent: 85 #image GC starts above this disk usage
imageGCLowThresholdPercent: 80 #image GC stops below this disk usage
imageMinimumGCAge: 2m0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
failSwapOn: false
clusterDomain: cluster.local
rotateCertificates: true #enable kubelet certificate rotation
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
mode: "ipvs"
#iptables:
#  masqueradeAll: false
#  masqueradeBit: null
#  minSyncPeriod: 0s
#  syncPeriod: 0s
ipvs:
   # if the node also serves LVS itself, exclude these CIDRs from kube-proxy so it does not wipe the LVS rules
  excludeCIDRs: [1.1.1.0/24,2.2.2.0/24]
  minSyncPeriod: 1s
  scheduler: "wrr"
  syncPeriod: 10s
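Before running the init, it can be worth confirming that every required image resolves against the Harbor mirror (both subcommands are standard kubeadm v1.19 commands):

kubeadm config images list --config=kubeadm.yaml
kubeadm config images pull --config=kubeadm.yaml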

  • Run the init and check the output:
[root@m1 kubeadm]# kubeadm init --config=kubeadm.yaml 
W1206 19:00:17.887611    8000 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1 m2 m3] and IPs [10.96.0.1 192.168.91.221 192.168.91.133 192.168.91.132 192.168.91.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [192.168.91.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [192.168.91.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.040448 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eiqtnz.nxpb6y5e4x6ltzfk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
    --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
    --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99 
[root@m1 kubeadm]# 
  • Set up the kubeconfig as the output suggests:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • The node shows NotReady because no network plugin has been installed yet:
[root@m1 kubeadm]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
m1     NotReady   master   81s   v1.19.4
[root@m1 kubeadm]# 
  • Install the Calico network plugin; upload the prepared calico.yaml to the node that was just initialized:
[root@m1 kubeadm]# ls
busytest.yaml   calico.yaml  imagepush.sh  k8simages         kernel        softrpm
calicoimage.sh  envMake.sh   ipvs.conf     k8s-master1.19.4  kubeadm.yaml
[root@m1 kubeadm]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@m1 kubeadm]# kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   4m23s   v1.19.4
  • Because we have not dealt with the ports, the scheduler and controller-manager health ports are not open at this point and both components report Unhealthy; we fix it by editing their static Pod manifests:
[root@m1 kubeadm]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}
  • Run the following two commands (they comment out the --port=0 flag):
[root@m1 kubeadm]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml 
[root@m1 kubeadm]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml 
  • Check again:
[root@m1 kubeadm]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@m1 kubeadm]# systemctl restart kubelet
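At this point the load-balanced endpoint can already be checked end to end (the /healthz probe is the same one HAProxy uses in its backend check):

curl -k https://k8s.vip:8443/healthz ; echo    # expect "ok"
kubectl cluster-info                            # admin.conf already points at k8s.vip:8443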

2.2 The second Master

  • Set up the second Master.

Run the control-plane join command printed by kubeadm on the remaining two Masters.

[root@m2 ~]#  kubeadm join k8s.vip:8443 --token 50hra4.a5du6uhvu3th535o --discovery-token-ca-cert-hash sha256:d5b613e37998fd3e239d9f2667e03913b7f351b05bac0179f20cded741d8b0d4 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher
  • It fails: the CA certificate cannot be found, so copy it over from the first Master.

  • Before copying, create the default certificate directories on the new node:

[root@m2 kubernetes]# mkdir -p /etc/kubernetes/pki/etcd/
  • Copy the certificates:
[root@m1 pki]# scp ca.* front-proxy-ca.* sa.* 192.168.91.132:/etc/kubernetes/pki/
[email protected]'s password: 
ca.crt                                                      100% 1066     1.1MB/s   00:00    
ca.key                                                      100% 1675     1.5MB/s   00:00    
front-proxy-ca.crt                                          100% 1078     1.1MB/s   00:00    
front-proxy-ca.key                                          100% 1675     1.7MB/s   00:00    
ca.crt                                                      100% 1058     1.2MB/s   00:00    
ca.key                                                      100% 1679     1.8MB/s   00:00    
sa.key                                                      100% 1675     2.1MB/s   00:00    
sa.pub                                                      100%  451   571.2KB/s   00:00                   
[root@m1 pki]# scp etcd/ca.* 192.168.91.132:/etc/kubernetes/pki/etcd/
[email protected]'s password: 
ca.crt                                                      100% 1058     1.0MB/s   00:00    
ca.key                                                      100% 1679     2.0MB/s   00:00    
[root@m1 pki]# 
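Copying the CA material by hand works fine; as an alternative, kubeadm can distribute the control-plane certificates itself. A sketch of that route, run on the first Master (the printed certificate key is only valid for about two hours):

kubeadm init phase upload-certs --upload-certs
# Then join the new Master with one extra flag:
#   kubeadm join k8s.vip:8443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key printed above>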
  • Run the join command on the second Master again:
[root@m2 ~]#  kubeadm join k8s.vip:8443 --token 50hra4.a5du6uhvu3th535o --discovery-token-ca-cert-hash sha256:d5b613e37998fd3e239d9f2667e03913b7f351b05bac0179f20cded741d8b0d4 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m2] and IPs [10.96.0.1 192.168.91.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m2] and IPs [192.168.91.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m2] and IPs [192.168.91.132 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@m2 ~]#  mkdir -p $HOME/.kube
[root@m2 ~]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@m2 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   60m   v1.19.4
m2     Ready    master   33s   v1.19.4
[root@m2 ~]# 
  • Fix the scheduler and controller-manager port issue on the second Master as well:
[root@m2 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml 
[root@m2 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml

2.3 The third Master

  • Same as the second node: copy the CA material over and run the join command.
  • Create the certificate directories on the third node:
[root@m3 ~]# mkdir -p /etc/kubernetes/pki/etcd/
  • Copy the CA certificates from the first node:
[root@m1 pki]# scp ca.* front-proxy-ca.* sa.* 192.168.91.133:/etc/kubernetes/pki/
[email protected]'s password: 
ca.crt                                                                    100% 1066     1.2MB/s   00:00    
ca.key                                                                    100% 1675     2.0MB/s   00:00    
front-proxy-ca.crt                                                        100% 1078     1.2MB/s   00:00    
front-proxy-ca.key                                                        100% 1675     1.9MB/s   00:00    
sa.key                                                                    100% 1675     1.3MB/s   00:00    
sa.pub                                                                    100%  451   460.2KB/s   00:00    
[root@m1 pki]# scp etcd/ca.* 192.168.91.133:/etc/kubernetes/pki/etcd/
[email protected]'s password: 
ca.crt                                                                    100% 1058   995.1KB/s   00:00    
ca.key                                                                    100% 1679     2.0MB/s   00:00    
[root@m1 pki]# 
  • Join the cluster:
[root@m3 ~]# kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk     --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99     --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1 m2 m3] and IPs [10.96.0.1 192.168.91.133 192.168.91.221 192.168.91.132 192.168.91.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m3] and IPs [192.168.91.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m3] and IPs [192.168.91.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m3 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@m3 ~]# 
  • Fix the scheduler and controller-manager port issue on the third Master too:
[root@m3 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml 
[root@m3 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml 

2.4 Joining the Node

[root@node1 ~]# kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
>     --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]# 
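A side note: if the bootstrap token from the init output has expired by the time you add more nodes (tokens last 24 hours by default), a fresh join command can be generated on any Master:

kubeadm token create --print-join-command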
  • The join succeeded; check the cluster state:
[root@m1 pki]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
m1      Ready    master   31m     v1.19.4
m2      Ready    master   12m     v1.19.4
m3      Ready    master   9m1s    v1.19.4
node1   Ready    <none>   6m32s   v1.19.4
[root@m1 pki]# kubectl get  pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5dc87d545c-v6jlf   1/1     Running   0          29m
kube-system   calico-node-fnv6p                          1/1     Running   0          6m33s
kube-system   calico-node-jjhwm                          1/1     Running   0          29m
kube-system   calico-node-lczjb                          1/1     Running   0          9m2s
kube-system   calico-node-pzg4v                          1/1     Running   0          12m
kube-system   coredns-5b46ccdfcd-6zzv7                   1/1     Running   0          31m
kube-system   coredns-5b46ccdfcd-xt8vw                   1/1     Running   0          31m
kube-system   etcd-m1                                    1/1     Running   0          31m
kube-system   etcd-m2                                    1/1     Running   0          10m
kube-system   etcd-m3                                    1/1     Running   0          8m58s
kube-system   kube-apiserver-m1                          1/1     Running   0          31m
kube-system   kube-apiserver-m2                          1/1     Running   0          12m
kube-system   kube-apiserver-m3                          1/1     Running   0          9m1s
kube-system   kube-controller-manager-m1                 1/1     Running   1          25m
kube-system   kube-controller-manager-m2                 1/1     Running   0          12m
kube-system   kube-controller-manager-m3                 1/1     Running   0          7m53s
kube-system   kube-proxy-58xnn                           1/1     Running   0          12m
kube-system   kube-proxy-5fwkf                           1/1     Running   0          31m
kube-system   kube-proxy-5m6pw                           1/1     Running   0          6m33s
kube-system   kube-proxy-knwwx                           1/1     Running   0          9m2s
kube-system   kube-scheduler-m1                          1/1     Running   1          25m
kube-system   kube-scheduler-m2                          1/1     Running   0          12m
kube-system   kube-scheduler-m3                          1/1     Running   0          7m58s
[root@m1 pki]#

2.5 A few small problems encountered

  • For how to update the apiserver certificate of a cluster installed with kubeadm, see: https://izsk.me/2020/04/29/Kubernetes-add-new-cert-into-apiserver/
  • If you join a new master and then delete it again with kubectl, leaving only a single control-plane node, the original Master stops working: etcd still lists the deleted node as a member, so it cannot form a quorum, its container keeps restarting while it looks for the removed member, the apiserver goes down with it, and kubectl stops responding.

  • Fix: turn etcd back into a single-node cluster.

  • Add the following two flags to the command section of etcd.yaml:

- --initial-cluster-state=new
- --force-new-cluster
[root@m1 kubeadm]# vi /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.91.221:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.91.221:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.91.221:2380
    - --initial-cluster=m1=https://192.168.91.221:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.91.221:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.91.221:2380
    - --name=m1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --initial-cluster-state=new
    - --force-new-cluster
  • Then reboot the server. Once the cluster is healthy again, edit this manifest once more and remove the two flags you added (a quick health check follows below).
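A hedged way to confirm etcd is healthy again as a single-member cluster (the container name filter is an assumption about how kubelet/dockershim names the etcd container; adjust it if yours differs):

ETCD=$(docker ps --format '{{.Names}}' | grep '^k8s_etcd' | head -1)
docker exec "$ETCD" etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list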

3. Installing Rancher

  • Upload the prepared Rancher image archive to any node.

Here I upload it to the root directory of node1:

[root@node1 ~]# ll
总用量 131012
-rw-------. 1 root root     1241 11月 29 11:30 anaconda-ks.cfg
drwxr-xr-x  3 root root       68 11月 29 14:09 docker
drwxr-xr-x  4 root root      251 11月 29 13:58 harbor1.5
drwxr-xr-x. 6 root root      170 11月 29 19:09 kubeadm
-rw-r--r--  1 root root 99381248 11月 29 20:38 rancherv2.4.8.tgz
  • Load the image:
[root@node1 ~]# docker load -i rancherv2.4.8.tgz 
805802706667: Loading layer [==================================================>]  65.61MB/65.61MB
3fd9df553184: Loading layer [==================================================>]  15.87kB/15.87kB
7a694df0ad6c: Loading layer [==================================================>]  3.072kB/3.072kB
42f2a536483d: Loading layer [==================================================>]  138.6MB/138.6MB
cf2e851ac598: Loading layer [==================================================>]  6.656kB/6.656kB
969fe7f2b01e: Loading layer [==================================================>]  82.71MB/82.71MB
d04e1a085efb: Loading layer [==================================================>]  83.85MB/83.85MB
893359a85ac8: Loading layer [==================================================>]   35.8MB/35.8MB
1a95e9dfb001: Loading layer [==================================================>]  88.59MB/88.59MB
b600fec8ae23: Loading layer [==================================================>]  75.46MB/75.46MB
41e46c093409: Loading layer [==================================================>]  175.7MB/175.7MB
0c18e90730a5: Loading layer [==================================================>]  3.072kB/3.072kB
907bb86f0d6e: Loading layer [==================================================>]  82.05MB/82.05MB
46a7fe6f1101: Loading layer [==================================================>]  117.2MB/117.2MB
1d0e33d7ff7e: Loading layer [==================================================>]  3.072kB/3.072kB
82d416efaa0c: Loading layer [==================================================>]   5.12kB/5.12kB
ebe78adda61d: Loading layer [==================================================>]  44.05MB/44.05MB
7c1635c814db: Loading layer [==================================================>]  3.584kB/3.584kB
629e772b907b: Loading layer [==================================================>]  1.168MB/1.168MB
Loaded image: rancher/rancher:stable
[root@m1 ~]# 
  • Run it:
[root@node1 ~]# docker run -d --restart=unless-stopped \
 -p 81:80 -p 9443:443 \
 --privileged \
  rancher/rancher:stable
dffcf59585bca2d4673edcde1e0dc480a1b931ed02a1c754ef08cbaf43bf003f
[root@m1 ~]# 
  • Check the running container:
[root@node1 k8s-master1.19.4]# docker ps | grep rancher
2d801cbb2651        rancher/rancher:stable                    "entrypoint.sh"          9 seconds ago       Up 5 seconds                 0.0.0.0:81->80/tcp, 0.0.0.0:9443->443/tcp                            loving_swirles
[root@node1 k8s-master1.19.4]# 
  • After a short wait, open https://192.168.91.163:9443 in a browser and log in to Rancher.

  • Set the admin password.

  • The default settings are fine; just save them.

  • Rancher shows several import commands; copy the third one (the curl variant that skips certificate verification) and run it on any node of the cluster:
[root@m3 ~]# curl --insecure -sfL https://192.168.91.163:9443/v3/import/lxj2rrm6pw97v2vzxcxnzm74m8zqpnnrklj2ths66psvhxknbmqnmf.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-f9d649b created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
[root@m3 ~]# 

kubeadm搭建高可用k8s集群(v1.19.4)_第22张图片

  • After a moment, the language can be changed in the lower-right corner

kubeadm搭建高可用k8s集群(v1.19.4)_第23张图片

  • Deploy a service with Rancher to test network connectivity

kubeadm搭建高可用k8s集群(v1.19.4)_第24张图片

kubeadm搭建高可用k8s集群(v1.19.4)_第25张图片

kubeadm搭建高可用k8s集群(v1.19.4)_第26张图片

kubeadm搭建高可用k8s集群(v1.19.4)_第27张图片

kubeadm搭建高可用k8s集群(v1.19.4)_第28张图片

  • Ping the Pod IP from different nodes

  • m1

[root@m1 pki]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.453 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.677 ms
^C
--- 10.68.166.133 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.453/0.565/0.677/0.112 ms
[root@m1 pki]# 
  • m2
[root@m2 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.375 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.268 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=63 time=0.238 ms
^C
--- 10.68.166.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2088ms
rtt min/avg/max/mdev = 0.238/0.293/0.375/0.062 ms
[root@m2 ~]# 
  • m3
[root@m3 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.329 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.267 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=63 time=0.360 ms
^C
--- 10.68.166.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2071ms
rtt min/avg/max/mdev = 0.267/0.318/0.360/0.043 ms
[root@m3 ~]# 
  • node1
[root@node1 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=64 time=0.039 ms
64 bytes from 10.68.166.133: icmp_seq=4 ttl=64 time=0.034 ms
^C
--- 10.68.166.133 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3091ms
rtt min/avg/max/mdev = 0.034/0.041/0.054/0.009 ms
[root@node1 ~]# 
  • Test complete

三、Obtaining the offline installation packages


When we use yum to download the offline packages, note that yum will not download a package again if the target software is already installed locally.

1、 Base environment

1.1 Let's look at which basic tools the whole setup process needs.

  • We use ipvs mode
ipvsadm, ipset, conntrack-tools
  • We use HAProxy for the HA load balancing
so haproxy, keepalived and psmisc are required; psmisc provides the killall command used by the keepalived health-check script
  • Common network tools
telnet, tcpdump, net-tools, conntrack, socat
  • Common process-management tools
lsof, sysstat, htop
  • Time synchronization
crontabs, chrony
  • DNS lookup tools
bind-utils
  • Other tools
jq (JSON processor), bash-completion (command auto-completion), net-snmp-agent-libs, net-snmp-libs, wget, unzip, perl (kernel dependency), libseccomp, tree

1.2 Downloading the base packages

  • Prepare a Linux server with Internet access (a locally built VM also works)
  • Create a directory named softrpms under /mnt (any directory will do)
[root@localhost mnt]# mkdir softrpms
[root@localhost mnt]# cd ..
[root@localhost /]# tree mnt/
mnt/
└── softrpms

1 directory, 0 files
[root@localhost /]# 
  • Configure the Aliyun yum repo; if your network is already fast you can skip this step (wget must be installed first)
[root@localhost /]# yum install -y wget
# Back up the default yum repo
[root@localhost /]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
# Install the Aliyun yum repo
[root@localhost /]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost /]# yum clean all
[root@localhost /]# yum makecache
  • Download the rpm packages (just copy and run the command)
[root@localhost /]# yum install --downloadonly --downloaddir=/mnt/softrpms/ conntrack-tools  psmisc jq socat bash-completion ipset perl ipvsadm conntrack  libseccomp net-tools crontabs sysstat  unzip bind-utils tcpdump telnet lsof  htop wget haproxy keepalived net-snmp-agent-libs net-snmp-libs chrony
  • Check the downloaded files
[root@localhost /]# ls /mnt/softrpms/
bash-completion-2.1-8.el7.noarch.rpm                       perl-File-Temp-0.23.01-3.el7.noarch.rpm
bind-export-libs-9.11.4-26.P2.el7_9.2.x86_64.rpm           perl-Filter-1.49-3.el7.x86_64.rpm
bind-libs-9.11.4-26.P2.el7_9.2.x86_64.rpm                  perl-Getopt-Long-2.40-3.el7.noarch.rpm
bind-libs-lite-9.11.4-26.P2.el7_9.2.x86_64.rpm             perl-HTTP-Tiny-0.033-3.el7.noarch.rpm
bind-license-9.11.4-26.P2.el7_9.2.noarch.rpm               perl-libs-5.16.3-297.el7.x86_64.rpm
bind-utils-9.11.4-26.P2.el7_9.2.x86_64.rpm                 perl-macros-5.16.3-297.el7.x86_64.rpm
chrony-3.4-1.el7.x86_64.rpm                                perl-parent-0.225-244.el7.noarch.rpm
dhclient-4.2.5-82.el7.centos.x86_64.rpm                    perl-PathTools-3.40-5.el7.x86_64.rpm
dhcp-common-4.2.5-82.el7.centos.x86_64.rpm                 perl-Pod-Escapes-1.04-297.el7.noarch.rpm
dhcp-libs-4.2.5-82.el7.centos.x86_64.rpm                   perl-podlators-2.5.1-3.el7.noarch.rpm
haproxy-1.5.18-9.el7.x86_64.rpm                            perl-Pod-Perldoc-3.20-4.el7.noarch.rpm
ipset-7.1-1.el7.x86_64.rpm                                 perl-Pod-Simple-3.28-4.el7.noarch.rpm
ipset-libs-7.1-1.el7.x86_64.rpm                            perl-Pod-Usage-1.63-3.el7.noarch.rpm
ipvsadm-1.27-8.el7.x86_64.rpm                              perl-Scalar-List-Utils-1.27-248.el7.x86_64.rpm
keepalived-1.3.5-19.el7.x86_64.rpm                         perl-Socket-2.010-5.el7.x86_64.rpm
libpcap-1.5.3-12.el7.x86_64.rpm                            perl-Storable-2.45-3.el7.x86_64.rpm
libseccomp-2.3.1-4.el7.x86_64.rpm                          perl-Text-ParseWords-3.29-4.el7.noarch.rpm
lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm  perl-threads-1.87-4.el7.x86_64.rpm
lsof-4.87-6.el7.x86_64.rpm                                 perl-threads-shared-1.43-6.el7.x86_64.rpm
net-snmp-agent-libs-5.7.2-49.el7.x86_64.rpm                perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
net-snmp-libs-5.7.2-49.el7.x86_64.rpm                      perl-Time-Local-1.2300-2.el7.noarch.rpm
net-tools-2.0-0.25.20131004git.el7.x86_64.rpm              psmisc-22.20-17.el7.x86_64.rpm
perl-5.16.3-297.el7.x86_64.rpm                             sysstat-10.1.5-19.el7.x86_64.rpm
perl-Carp-1.26-244.el7.noarch.rpm                          tcpdump-4.9.2-4.el7_7.1.x86_64.rpm
perl-constant-1.27-2.el7.noarch.rpm                        telnet-0.17-66.el7.x86_64.rpm
perl-Encode-2.51-7.el7.x86_64.rpm                          unzip-6.0-21.el7.x86_64.rpm
perl-Exporter-5.68-3.el7.noarch.rpm                        wget-1.14-18.el7_6.1.x86_64.rpm
perl-File-Path-2.09-2.el7.noarch.rpm
[root@localhost /]# 
  • Archive them (that completes the offline base packages; just keep softrpms.tar.gz)
[root@localhost mnt]# cd /mnt/ && tar -zcvf softrpms.tar.gz softrpms/
[root@localhost mnt]# ls
softrpms  softrpms.tar.gz
[root@localhost mnt]# 
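  • A minimal sketch of how this archive can later be installed on an offline node (the archive name and paths follow the steps above; adjust to your layout):
# copy softrpms.tar.gz to the offline node first (scp, USB, ...), then:
tar -zxvf softrpms.tar.gz -C /mnt/
# install everything in one go; add --disablerepo='*' on a fully offline machine so yum only uses the local rpms
yum localinstall -y /mnt/softrpms/*.rpm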

2、 Kernel

2.1 Upgrade the kernel (required, important)

  • Create a kernel directory under /mnt (any directory will do)
[root@localhost mnt]# mkdir kernel
  • Enable the ELRepo repository
[root@localhost mnt]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@localhost mnt]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
获取http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
获取http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
准备中...                          ################################# [100%]
正在升级/安装...
   1:elrepo-release-7.0-4.el7.elrepo  ################################# [100%]
[root@localhost mnt]# 
  • List the available kernel packages
  • yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
[root@localhost mnt]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo-kernel: elrepo.0m3n.net
elrepo-kernel                                                                                          | 3.0 kB  00:00:00     
elrepo-kernel/primary_db                                                                               | 1.8 MB  00:00:01     
可安装的软件包
elrepo-release.noarch                                          7.0-5.el7.elrepo                                  elrepo-kernel
kernel-lt.x86_64                                               4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-devel.x86_64                                         4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-doc.noarch                                           4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-headers.x86_64                                       4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-tools.x86_64                                         4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-tools-libs.x86_64                                    4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                              4.4.247-1.el7.elrepo                              elrepo-kernel
kernel-ml.x86_64                                               5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-devel.x86_64                                         5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-doc.noarch                                           5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-headers.x86_64                                       5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-tools.x86_64                                         5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-tools-libs.x86_64                                    5.9.12-1.el7.elrepo                               elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                              5.9.12-1.el7.elrepo                               elrepo-kernel
perf.x86_64                                                    5.9.12-1.el7.elrepo                               elrepo-kernel
python-perf.x86_64                                             5.9.12-1.el7.elrepo                               elrepo-kernel
[root@localhost mnt]# 
  • Download the latest kernel (append a version number to the package name if you need a specific version)
[root@localhost mnt]# yum install --downloadonly --downloaddir=/mnt/kernel/  --enablerepo="elrepo-kernel" kernel-ml kernel-ml-devel
  • Check the downloaded files and archive them
[root@localhost mnt]# ls kernel/
kernel-ml-5.9.12-1.el7.elrepo.x86_64.rpm
kernel-ml-devel-5.9.12-1.el7.elrepo.x86_64.rpm
perl-5.16.3-297.el7.x86_64.rpm
perl-Carp-1.26-244.el7.noarch.rpm
perl-constant-1.27-2.el7.noarch.rpm
perl-Encode-2.51-7.el7.x86_64.rpm
perl-Exporter-5.68-3.el7.noarch.rpm
perl-File-Path-2.09-2.el7.noarch.rpm
perl-File-Temp-0.23.01-3.el7.noarch.rpm
perl-Filter-1.49-3.el7.x86_64.rpm
perl-Getopt-Long-2.40-3.el7.noarch.rpm
perl-HTTP-Tiny-0.033-3.el7.noarch.rpm
perl-libs-5.16.3-297.el7.x86_64.rpm
perl-macros-5.16.3-297.el7.x86_64.rpm
perl-parent-0.225-244.el7.noarch.rpm
perl-PathTools-3.40-5.el7.x86_64.rpm
perl-Pod-Escapes-1.04-297.el7.noarch.rpm
perl-podlators-2.5.1-3.el7.noarch.rpm
perl-Pod-Perldoc-3.20-4.el7.noarch.rpm
perl-Pod-Simple-3.28-4.el7.noarch.rpm
perl-Pod-Usage-1.63-3.el7.noarch.rpm
perl-Scalar-List-Utils-1.27-248.el7.x86_64.rpm
perl-Socket-2.010-5.el7.x86_64.rpm
perl-Storable-2.45-3.el7.x86_64.rpm
perl-Text-ParseWords-3.29-4.el7.noarch.rpm
perl-threads-1.87-4.el7.x86_64.rpm
perl-threads-shared-1.43-6.el7.x86_64.rpm
perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
perl-Time-Local-1.2300-2.el7.noarch.rpm
[root@localhost mnt]# tar -zcvf kernel.tar.gz kernel/
[root@localhost mnt]# ls
kernel  kernel.tar.gz  softrpms  softrpms.tar.gz
[root@localhost mnt]# 

2.2 Kernel upgrade procedure (for reference)

  • List the kernels available on the system
[root@localhost mnt]#  awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
  • After the new kernel is installed, set the default kernel by its menu index
  • A newly installed kernel is usually index 0
[root@localhost mnt]#  grub2-set-default 0
  • Set the default kernel index in /etc/default/grub
# i.e. GRUB_DEFAULT=0
[root@localhost mnt]# vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
  • Regenerate the grub2 boot configuration (back up the previous one first)
[root@localhost mnt]# grub2-mkconfig -o /boot/grub2/grub.cfg
  • Reboot for the new kernel to take effect
  • When more than three kernels are installed, old kernels can be removed automatically with the command below (package-cleanup is part of yum-utils; install it if the command is missing)
[root@localhost mnt]# package-cleanup --oldkernels
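  • Putting the steps above together, a minimal end-to-end sketch for an offline node (assumes kernel.tar.gz from step 2.1 has already been copied over):
tar -zxvf kernel.tar.gz -C /mnt/
yum localinstall -y /mnt/kernel/*.rpm
# confirm the index of the freshly installed kernel, then make it the default and rebuild the boot config
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot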

3、Kubeadm packages

  • Create a kubeadmSoft directory under /mnt (any directory will do)
[root@localhost ~]# mkdir /mnt/kubeadmSoft
  • Configure the Aliyun Kubernetes yum repo (copy and run it directly, or put it in a script)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Check the default version offered by the repo
[root@localhost ~]# yum list kubelet kubeadm kubectl --disableexcludes=kubernetes |sort -r
已加载插件:fastestmirror
已安装的软件包
 * updates: mirror.lzu.edu.cn
Loading mirror speeds from cached hostfile
kubelet.x86_64                        1.19.4-0                         installed
kubectl.x86_64                        1.19.4-0                         installed
kubeadm.x86_64                        1.19.4-0                         installed
 * extras: mirror.lzu.edu.cn
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * base: mirror.lzu.edu.cn
[root@localhost ~]# 
  • Download the packages locally (append a version number for a specific release, e.g. kubelet-1.19.0)
[root@localhost ~]#  yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --downloaddir=/mnt/kubeadmSoft/ --downloadonly
  • Check the packages and archive them
[root@localhost mnt]# ls kubeadmSoft/
14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm
318243df021e3a348c865a0a0b3b7d4802fe6c6c1f79500c6130d5c6b628766c-kubelet-1.19.4-0.x86_64.rpm
afa24df75879f7793f2b22940743e4d40674f3fcb5241355dd07d4c91e4866df-kubeadm-1.19.4-0.x86_64.rpm
c1fdeadba483d54bedecb0648eb7426cc3d7222499179911961f493a0e07fcd0-kubectl-1.19.4-0.x86_64.rpm
conntrack-tools-1.4.4-7.el7.x86_64.rpm
db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
[root@localhost mnt]# tar -zcvf kubeadmSoft.tar.gz kubeadmSoft/
[root@localhost mnt]# ls
kubeadmSoft  kubeadmSoft.tar.gz 
[root@localhost mnt]# 
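  • On each (offline) cluster node the archive can then be installed along these lines; a rough sketch, paths as in the steps above:
tar -zxvf kubeadmSoft.tar.gz -C /mnt/
yum localinstall -y /mnt/kubeadmSoft/*.rpm
# kubelet stays in a restart loop until kubeadm init/join has run; that is expected
systemctl enable kubelet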

4、Docker packages

  • Docker could also be installed with the yum approach above; here we use the static binary install
  • Docker source on GitHub: https://github.com/moby/moby
  • Official static binary downloads: https://download.docker.com/linux/static/stable/x86_64/

kubeadm搭建高可用k8s集群(v1.19.4)_第29张图片

  • After choosing the version you need, fetch the docker.service file from the moby GitHub repo so that the binary-installed Docker can be managed by systemd

    https://github.com/moby/moby/tree/master/contrib/init/systemd

  • Note: grab the docker.service.rpm variant of the file

  • Or simply copy the content below and save it as docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
  
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
#Restart=on-failure
StartLimitBurst=3
#StartLimitInterval=60s
  
[Install]
WantedBy=multi-user.target
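  • A minimal sketch of installing the static binaries together with this unit file (the tarball name is only an example; use whichever version you downloaded):
tar -zxvf docker-19.03.14.tgz                     # extracts a docker/ directory with dockerd, docker, containerd, runc, ...
cp docker/* /usr/bin/
cp docker.service /etc/systemd/system/docker.service
systemctl daemon-reload
systemctl enable --now docker
docker version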

5、Keepalived configuration template

global_defs {
  notification_email {
    root@localhost
    }
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 3
        weight -2
        fall 10
	rise 2
    }
vrrp_instance VI_1 {
        state MASTER
        interface ens32
        virtual_router_id 51
        priority 250
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            check_haproxy
        }
        unicast_src_ip 192.168.91.131
        unicast_peer {
          192.168.91.133
          192.168.91.134
        }
        virtual_ipaddress {
          192.168.91.130
        }
}
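  • A quick sanity check once keepalived is running on all masters (interface and VIP follow the template above; the VIP should be bound on exactly one node at a time):
systemctl status keepalived
ip addr show ens32 | grep 192.168.91.130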

6、HAProxy configuration template

global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    15s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 15s
    timeout check           10s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats #http://ip:8006/stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend kubernetes
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-apiServer

backend k8s-apiServer
    mode tcp
    option tcplog
    option httpchk GET /healthz
    http-check expect string ok
    balance     roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server  m1 192.168.91.131:6443 check check-ssl verify none
    server  m2 192.168.91.133:6443 check check-ssl verify none
    server  m3 192.168.91.134:6443 check check-ssl verify none
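  • A few quick checks once HAProxy is up, based on the frontends defined above (the /version call only succeeds after the apiservers are running, and relies on the default anonymous access to /version):
systemctl enable --now haproxy
curl http://127.0.0.1:33305/monitor         # monitor-in frontend, should return 200 OK
curl -k https://127.0.0.1:8443/version      # goes through the kubernetes frontend to an apiserver backend
# the stats page configured above is at http://<node-ip>:8006/stats (admin:admin)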

7、Getting the Calico network plugin

  • URL: https://docs.projectcalico.org/manifests/calico.yaml
[root@localhost ~]# wget https://docs.projectcalico.org/manifests/calico.yaml
  • If docs.projectcalico.org cannot be resolved, try configuring a DNS server on the network interface

8、Getting the kubeadm.conf file

  • Get kubeadm.conf on its own
[root@localhost mnt]# kubeadm config print init-defaults > kubeadm.conf
  • Get KubeProxyConfiguration together with kubeadm.conf
kubeadm config print init-defaults --component-configs KubeProxyConfiguration > KubeProxyConfiguration.conf
  • Get KubeletConfiguration together with kubeadm.conf
[root@localhost mnt]# kubeadm config print init-defaults --component-configs KubeletConfiguration >  KubeletConfiguration.conf
  • Save the files locally, pick out the parts you need, and merge them into kubeadm.conf

kubeadm搭建高可用k8s集群(v1.19.4)_第30张图片

  • kubeadm.conf template
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
controlPlaneEndpoint: "k8s.vip:8443"
imageRepository: 192.168.91.135:88/k8s.gcr.io
kubernetesVersion: v1.19.4
networking: # cluster network settings
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.68.0.0/16
apiServer:
  certSANs:
  - "k8s.vip"
etcd:
  local:
    dataDir: "/var/lib/etcd"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved: 
  cpu: "0.25"
  memory: 128Mi
imageGCHighThresholdPercent: 85 # image GC starts once disk usage exceeds this
imageGCLowThresholdPercent: 80 # image GC stops once disk usage drops below this
imageMinimumGCAge: 2m0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
failSwapOn: false
clusterDomain: cluster.local
rotateCertificates: true # enable kubelet certificate rotation
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
mode: "ipvs"
#iptables:
#  masqueradeAll: false
#  masqueradeBit: null
#  minSyncPeriod: 0s
#  syncPeriod: 0s
ipvs:
  # if the nodes also provide LVS services, exclude these CIDRs from kube-proxy so the LVS rules are not wiped
  excludeCIDRs: [1.1.1.0/24,2.2.2.0/24]
  minSyncPeriod: 1s
  scheduler: "wrr"
  syncPeriod: 10s
  • KubeProxyConfiguration official reference
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ 
or https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
  • KubeletConfiguration official reference
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
  • Besides the kubeadm init configuration there is also a kubeadm join configuration
kubeadm config print join-defaults
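  • A convenient sanity check for the merged file (assuming it is saved as kubeadm.conf) is to list the images it resolves to; they should carry the imageRepository prefix set above:
kubeadm config images list --config kubeadm.conf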

9、Getting the images kubeadm needs

  • Install the kubeadm and Docker packages from above on the server (the offline packages we downloaded can be installed directly)
  • Copy the following into a script, save it, and run it
  • The images are saved under /mnt/k8simages; change the directory if needed
#!/bin/bash
aliyunRegistry="registry.aliyuncs.com/google_containers"
kubernetesRegistry="k8s.gcr.io"
kubernetesImages=`kubeadm  config images list --kubernetes-version=1.19.4`
imageDir="/mnt/k8simages"
mkdir -p $imageDir

function imagePull() {
  local imageTag=$1
  docker pull $imageTag
}

function imageT(){
  local imageTag=$1
  local imageName=$2
  docker tag $imageTag $kubernetesRegistry/$imageName
  imageSave $kubernetesRegistry/$imageName $imageName
}
function imageSave(){
  local imageTag=$1
  local imageName=$2
  docker save $imageTag | gzip > $imageDir/$imageName.tgz
}
IFS=$'\n'
for image in $kubernetesImages ; do
  imageName=${image#*/}
  tag=$aliyunRegistry/$imageName
  imagePull $tag
  imageT $tag $imageName
done
  • Check the saved images
[root@localhost mnt]# ls k8simages/
coredns:1.7.0.tgz  kube-apiserver:v1.19.4.tgz           kube-proxy:v1.19.4.tgz      pause:3.2.tgz
etcd:3.4.13-0.tgz  kube-controller-manager:v1.19.4.tgz  kube-scheduler:v1.19.4.tgz
[root@localhost mnt]#
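  • On an offline node (or on the node that retags and pushes them to the Harbor registry), the saved images can be loaded back roughly like this:
for f in /mnt/k8simages/*.tgz; do docker load -i "$f"; done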

10、Getting the images Calico needs

  • Save the content below as a script in the same directory as calico.yaml; the images will be stored under /mnt/calico
#!/bin/bash
dir=/mnt/calico
mkdir -p $dir
for i in `cat calico.yaml  | grep image | awk '{print $2}'` ; do
  imageName=${i##*/}
  docker pull $i
  docker save $i | gzip >$dir/$imageName.tgz
done
  • Check the saved files
[root@localhost mnt]# ls calico
cni:v3.17.0.tgz  kube-controllers:v3.17.0.tgz  node:v3.17.0.tgz  pod2daemon-flexvol:v3.17.0.tgz
[root@localhost mnt]# 

11、Keep all the files locally

  • I put the Calico images together with the k8s images in k8simages
  • The ipvs.conf and sysctl.conf files are generated automatically by the script in the environment-setup step.

kubeadm搭建高可用k8s集群(v1.19.4)_第31张图片

12、Getting the Harbor private registry & Rancher

  • URL: https://github.com/goharbor/harbor/releases
  • Downloading and installing Harbor is straightforward, so it is not covered here
  • For Rancher, just run docker pull rancher/rancher:stable
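  • To produce the offline rancherv2.4.8.tgz archive loaded in the installation section above, the pulled image can be saved like this:
docker pull rancher/rancher:stable
docker save rancher/rancher:stable | gzip > rancherv2.4.8.tgz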
