Kubernetes 1.27.2 High-Availability Installation with kubeadm

1. Installation Notes

Deployment overview

This deployment uses CentOS 7.9
Kernel version: 6.3.5-1.el7
Kubernetes version: v1.27.2
containerd version: 1.7.1
CNI plugins version: 1.3.0
crictl version: 1.27.0
etcd version: 3.5.7-0

Link: https://pan.baidu.com/s/1avNYjzovETuPZJxdIC1ByQ
Extraction code: hpqf

Before you begin

A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, as well as distributions without a package manager.
2 GB or more of RAM per machine (any less leaves little room for your applications).
2 CPUs or more.
Full network connectivity between all machines in the cluster (a public or private network is fine).
Unique hostname, MAC address, and product_uuid for every node. See the next subsection for details.
Certain ports must be open on your machines (a quick port-check sketch follows this list).
Swap disabled. You must disable swap for the kubelet to work properly.
For example, sudo swapoff -a disables swap temporarily. To make the change persistent across reboots, also disable swap in configuration files such as /etc/fstab or systemd.swap, depending on how your system is configured.
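
The required ports are the usual control-plane defaults (6443 apiserver, 2379-2380 etcd, 10250 kubelet, 10257 controller-manager, 10259 scheduler). A quick sketch to confirm none of them is already taken:

# Report any required control-plane port that is already in use
for port in 6443 2379 2380 10250 10257 10259; do
  ss -lnt | grep -q ":${port} " && echo "port ${port} is already in use" || echo "port ${port} is free"
done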

Verify that the MAC address and product_uuid are unique on every node

  • You can get the MAC address of the network interfaces with ip link or ifconfig -a
  • The product_uuid can be checked with sudo cat /sys/class/dmi/id/product_uuid
    Hardware devices generally have unique addresses, but some virtual machines may have duplicates. Kubernetes uses these values to uniquely identify the nodes in the cluster; if they are not unique on every node, the installation may fail. A quick way to compare them across the three masters is sketched below (it assumes you can already SSH to each node; passwordless login is configured in section 2.9).
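
# Print the product_uuid and MAC addresses of every node side by side for comparison
for host in master01 master02 master03; do
  echo "== ${host} =="
  ssh "${host}" "cat /sys/class/dmi/id/product_uuid; ip link | awk '/link\/ether/{print \$2}'"
done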

2. Cluster Installation

Server information. Do not assign the server IP addresses via DHCP; configure static IPs. The VIP must not conflict with any address already in use on your internal network.

192.168.1.11  master01        # 4C4G 40G
192.168.1.12  master02        # 4C4G 40G
192.168.1.13  master03        # 4C4G 40G
192.168.1.100 master-lb      # VIP 

K8s Service CIDR: 10.96.0.0/12
K8s Pod CIDR: 172.16.0.0/12

2.1 System environment

cat /etc/redhat-release 
# CentOS Linux release 7.9.2009 (Core)

2.2 Configure the hosts file on all nodes

echo '192.168.1.11 master01
192.168.1.12 master02
192.168.1.13 master03
192.168.1.100 master-lb  ' >> /etc/hosts

2.3 Configure the CentOS 7 yum repository

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

2.4 Disable firewalld, dnsmasq, NetworkManager, and SELinux on all nodes

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

2.5 Disable swap on all nodes and comment it out in fstab

swapoff -a
# Disable permanently
# For the change to survive a reboot, also comment out the swap entry in /etc/fstab after disabling swap
sed -i.bak '/swap/s/^/#/' /etc/fstab

2.6 Synchronize time on all nodes

Install ntpdate

# Set the time zone
 timedatectl set-timezone "Asia/Shanghai"

# Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

# Sync time on all nodes:
ntpdate time2.aliyun.com

# Add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
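
To add the entry without opening an editor, something like the following appends it while keeping any existing jobs:

(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -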

2.7 Configure resource limits on all nodes

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

2.8 Generate an SSH key on master01 (for passwordless login to the other nodes)

ssh-keygen -t rsa  # press Enter through all the prompts

2.9 Copy the key from master01 to the other nodes

for i in master01 master02 master03;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

2.10 Kernel upgrade

Configure the kernel yum repository on all nodes. CentOS 7 needs its kernel upgraded to version 4.19 or newer.

# Import the ELRepo public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo yum repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

2.11 Install the latest kernel on all nodes

# Install the kernel on all nodes
yum --enablerepo=elrepo-kernel -y install kernel-ml kernel-ml-devel

# List the installed kernels
cat /boot/grub2/grub.cfg |grep menuentry

# Set the default kernel (either of the two commands below works; pick one)
grub2-set-default 'CentOS Linux (6.3.6-1.el7.elrepo.x86_64) 7 (Core)'
grub2-set-default 0

# Check which kernel will boot by default
grub2-editenv list
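
# Optional sketch: pick the newest installed kernel automatically instead of typing the
# menu entry by hand (this assumes the new kernel is the first menuentry, which is the
# usual ordering after installing kernel-ml); verify with grub2-editenv list afterwards
grub2-set-default "$(awk -F\' '/^menuentry/{print $2; exit}' /boot/grub2/grub.cfg)"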

# Finally, reboot and confirm the new kernel is running
reboot 
uname -r

2.12 Install ipvsadm on all nodes

yum install ipvsadm ipset sysstat conntrack libseccomp -y
# Load the IPVS modules on all nodes. On kernels 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below, use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

2.13 Load the IPVS modules at boot on all nodes

echo 'ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip ' > /etc/modules-load.d/ipvs.conf
systemctl enable --now systemd-modules-load.service

Verify that the modules are loaded

lsmod | grep -e ip_vs -e nf_conntrack

# Sample output (captured on a pre-4.19 kernel; on the upgraded 6.x kernel you will see nf_conntrack rather than nf_conntrack_ipv4):
# nf_conntrack_ipv4      16384  23 
# nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
# nf_conntrack          135168  10 xt_conntrack,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs

2.14 Configure kernel parameters on all nodes

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1

vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

2.15 After configuring the kernel settings, reboot all nodes and confirm the modules are still loaded

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

2.16 Install the container runtime (CRI, the role previously filled by Docker)

Configure kernel parameters

Forward IPv4 and let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the br_netfilter and overlay modules are loaded by running:

lsmod | grep br_netfilter
lsmod | grep overlay

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward sysctl variables are set to 1 in your sysctl configuration by running:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure the cgroup driver
On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to enforce resource management for Pods and containers and to set requests and limits for resources such as CPU and memory. To do so, the kubelet and the container runtime each need a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.

There are two cgroup drivers available:

  • cgroupfs
  • systemd

cgroupfs driver
The cgroupfs driver is the default cgroup driver in the kubelet. With the cgroupfs driver, the kubelet and the container runtime interface directly with the cgroup filesystem to configure cgroups.

The cgroupfs driver is not recommended when systemd is the init system, because systemd expects a single cgroup manager on the system. Additionally, if you use cgroup v2, use the systemd cgroup driver instead of cgroupfs.

systemd cgroup driver
When systemd is the init system of a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as the cgroup manager.

systemd is tightly integrated with cgroups and allocates a cgroup per systemd unit. Consequently, if you use systemd as the init system together with the cgroupfs driver, the system ends up with two different cgroup managers.

Two cgroup managers produce two views of the available and in-use resources on the system. In some cases, nodes configured to use cgroupfs for the kubelet and container runtime but systemd for everything else become unstable under resource pressure.

When systemd is the chosen init system, the way to mitigate this instability is to use systemd as the cgroup driver for both the kubelet and the container runtime.

To set systemd as the cgroup driver, edit the cgroupDriver option of the KubeletConfiguration and set it to systemd. For example:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd
  • Note: starting with v1.22, when creating a cluster with kubeadm, if the user does not set the cgroupDriver field under KubeletConfiguration, kubeadm defaults it to systemd.

Install the runtime

Install containerd
Website: https://containerd.io/

## Download
wget https://github.com/containerd/containerd/releases/download/v1.7.1/containerd-1.7.1-linux-amd64.tar.gz

## Install
tar -xf containerd-1.7.1-linux-amd64.tar.gz
mv bin/* /usr/local/bin/

## Generate the containerd configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

Run containerd via systemd

cat > /usr/lib/systemd/system/containerd.service << EOF
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now containerd

Install runc
Download: https://github.com/opencontainers/runc/releases
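For example, to fetch the binary directly (the v1.1.7 release below is only an assumption; use whichever release matches your environment or offline package):

wget https://github.com/opencontainers/runc/releases/download/v1.1.7/runc.amd64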

install -m 755 runc.amd64 /usr/local/sbin/runc

Install the CNI plugins
Download: https://github.com/containernetworking/plugins/releases

mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

Install crictl
Download: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md

  • crictl is used in much the same way as the docker CLI
tar -xf crictl-v1.27.0-linux-amd64.tar.gz -C /usr/local/bin

## Write the configuration file
cat >  /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 30
debug: false
pull-image-on-create: false
EOF

Configure the systemd cgroup driver
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:

  • Note: there is a pitfall here. The pause image you import must match the version referenced in config.toml, otherwise kubeadm init will fail.
    The required image is registry.k8s.io/pause:3.9, so also set sandbox_image = "registry.k8s.io/pause:3.9" accordingly.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
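
If you prefer not to edit config.toml by hand, a sed sketch like the following works against the stock file generated above (it assumes the defaults SystemdCgroup = false and a registry.k8s.io pause image are still in place):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.9"#' /etc/containerd/config.toml
# Verify both values before restarting containerd
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml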

## Restart the containerd service
systemctl restart containerd

2.17 Install the high-availability components

  • Install HAProxy and Keepalived via yum on all master nodes
  • Configure HAProxy on all master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on every master)
yum install keepalived haproxy -y
cat > /etc/haproxy/haproxy.cfg << EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  option httpchk GET /healthz
  http-check expect status 200
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01       192.168.1.11:6443  check
  server master02       192.168.1.12:6443  check
  server master03       192.168.1.13:6443  check
EOF
  • Configure Keepalived on all master nodes. Unlike HAProxy, the configuration differs per node (vim /etc/keepalived/keepalived.conf); pay attention to each node's IP address and network interface (the interface parameter).
    Configuration for master01:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.1.11
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
      chk_apiserver 
} }
EOF

Configuration for master02:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.1.12
    virtual_router_id 51
    priority 99
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
      chk_apiserver 
} }
EOF

Configuration for master03:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.1.13
    virtual_router_id 51
    priority 98
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
      chk_apiserver 
} }
EOF

Health-check script

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived on all master nodes

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

Test the VIP

[root@master01 ~]# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=1.39 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=2.46 ms
64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=1.68 ms
64 bytes from 192.168.1.100: icmp_seq=4 ttl=64 time=1.08 ms
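
To see which master currently holds the VIP (it should sit on master01 while that node is healthy), check the interface used in the keepalived configuration above:

ip addr show ens33 | grep 192.168.1.100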

3. Begin the Installation

3.1 Install kubeadm, kubelet, and kubectl

  • You need to install kubeadm, kubelet, and kubectl on every machine
## Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

## Install (pin the version as needed)
yum -y repolist
yum list kubeadm --showduplicates | sort -r
yum install --disableexcludes=kubernetes -y kubelet-1.27.2-0 kubectl-1.27.2-0 kubeadm-1.27.2-0
systemctl enable --now kubelet

3.2 List the required images

## List the images required by the current version, then download them
kubeadm config images list

Once the images are downloaded, import them on all nodes

    registry.k8s.io/kube-apiserver:v1.27.2
    registry.k8s.io/kube-controller-manager:v1.27.2
    registry.k8s.io/kube-scheduler:v1.27.2
    registry.k8s.io/kube-proxy:v1.27.2
    registry.k8s.io/pause:3.9
    registry.k8s.io/etcd:3.5.7-0
    registry.k8s.io/coredns/coredns:v1.10.1

The image import command differs slightly from docker's

ctr -n k8s.io image import <image-tar-file>   # or push the images to your own registry and pull them from there
# After importing, list the images with crictl; ctr can list them too, but its output is less readable
crictl images
# ctr works with namespaces (hence -n k8s.io); if that feels cumbersome you can install a docker-style client tool to manage containerd
ctr -n k8s.io images ls
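
A small loop handles the whole directory at once, assuming the images were saved as .tar archives (adjust the pattern to match however you exported them):

for img in *.tar; do ctr -n k8s.io images import "${img}"; done
crictl images   # confirm everything was imported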

3.3 Generate the init configuration file and adjust it as needed

kubeadm config print init-defaults > kubeadm-init.yaml

Below is the modified version

  • The IP address, port, node name, and Kubernetes version have been changed
  • Create the kubeadm-init.yaml file on master01 as follows:
    Master01: (# Note: if this is not a highly available cluster, change 192.168.1.100:8443 to master01's address and 8443 to the apiserver port, which defaults to 6443. Also change v1.27.2 to match your own kubeadm version, shown by: kubeadm version)
cat  > ./kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.27.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
controlPlaneEndpoint: "192.168.1.100:8443"
EOF

3.4 Initialize the master01 node

  • After initialization, the certificates and configuration files are generated under /etc/kubernetes; the other master nodes can then join master01.
  • To see verbose logs during initialization, append --v=5
kubeadm init --config kubeadm-init.yaml  --upload-certs
  • If initialization fails, reset and initialize again with:
kubeadm reset -f ; ipvsadm --clear  ; rm -rf ~/.kube
  • On success the output looks like the following

    Explanation:
    To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Next, you should deploy a Pod network to the cluster. Choose one of the options at the link below and run kubectl apply -f [podnetwork].yaml:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by running the following command on each of them as root:

kubeadm join 192.168.1.100:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:3fcd0d0ac88c9a4f1321f6d15cb484b8f67b1492c10282f5faa3070b5741635f \
  --control-plane --certificate-key bf521ccd59a5d33a2d8370e0ae9f10b7f00db3412f1c066aafd0e516c80664ae

Please note that the certificate-key gives access to cluster sensitive data; keep it secret! As a safeguard, the uploaded certificates will be deleted after two hours. If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload the certificates afterwards.

Then you can join any number of worker nodes by running the following on each of them as root:

kubeadm join 192.168.1.100:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:3fcd0d0ac88c9a4f1321f6d15cb484b8f67b1492c10282f5faa3070b5741635f

3.5 Install Calico

Download: https://github.com/projectcalico/calico/blob/v3.26.0/manifests/calico-etcd.yaml
After downloading, modify the configuration

# Add the etcd endpoints
sed -i 's#etcd_endpoints: "http://:"#etcd_endpoints: "https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379"#g' calico-etcd.yaml

# Add the certificates
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

# Add the certificate paths
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# Set the Pod CIDR
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Once everything is modified, double-check the file and deploy it

kubectl apply -f calico-etcd.yaml

After a successful deployment, the cluster status looks healthy again

[root@master01 ~]# kubectl get node 
NAME       STATUS     ROLES           AGE     VERSION
master01   Ready      control-plane   5h45m   v1.27.2
master02   Ready      control-plane   5h24m   v1.27.2
master03   Ready      control-plane   5h23m   v1.27.2

3.6 Deploy Metrics Server

The control-plane taint has to be removed before installing

kubectl taint node --all   node-role.kubernetes.io/control-plane:NoSchedule-

The official manifest is available at https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml, but used as-is it will complain about missing certificates.

The version below adds the certificate path and the corresponding volume mount. The certificate is /etc/kubernetes/pki/front-proxy-ca.crt (generated automatically when the cluster was deployed).
Then install Metrics Server:

cat > ./components.yaml << E
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt 
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - hostPath:
          path: /etc/kubernetes/pki
        name: k8s-certs
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
E
kubectl create -f components.yaml

kube-system metrics-server-595f65d8d5-tcxkz 1/1 Running 4 277d
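
Once the metrics-server Pod is Running, the metrics API can be exercised directly:

kubectl top nodes
kubectl top pods -n kube-system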

3.7 Switch kube-proxy to IPVS mode

Change kube-proxy to IPVS mode. Because the IPVS setting was left out when the cluster was initialized, you need to change it yourself (the default is iptables mode).
Run on the master01 node:

kubectl edit cm kube-proxy -n kube-system
  • mode: "ipvs"
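
If you prefer a non-interactive change, a sketch like the following works as long as the ConfigMap still contains the default empty value mode: "":

kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -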

Roll the kube-proxy Pods:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

Verify the kube-proxy mode (after a successful change the endpoint returns ipvs)
curl 127.0.0.1:10249/proxyMode
ipvs

3.8 kubectl command completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

# Load bash-completion
source /etc/profile.d/bash_completion.sh   

You can also set up a shorthand alias for kubectl that works with completion

echo 'alias k=kubectl
complete -o default -F __start_kubectl k ' >> ~/.bashrc
source  ~/.bashrc

3.9 Install Helm

See the linked guide in the original post.

3.10 Install an ingress controller

See the linked guide in the original post.

3.11 Install Rook storage

See the linked guide in the original post.

4. Notes and Caveats

  • Note: for a cluster installed with kubeadm, the certificates are valid for one year by default (see the expiration check after this list).
  • On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers; you can see them with kubectl get po -n kube-system.
  • Unlike a binary installation:
  • the kubelet configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml
  • the manifests of the other components live under /etc/kubernetes/manifests, e.g. kube-apiserver.yaml. When such a yaml file is changed, the kubelet automatically reloads the configuration, i.e. it restarts the Pod. Do not create these files yourself.
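
To see exactly when the kubeadm-managed certificates expire, and to renew them in place when the time comes, kubeadm ships the corresponding subcommands:

kubeadm certs check-expiration
# Renew every certificate managed by kubeadm (run on each master, then restart the control-plane static Pods)
kubeadm certs renew all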

After a kubeadm installation, the master nodes do not schedule Pods by default; you can enable this as follows

## View the taints
kubectl describe node  | grep Taint
## Remove the taint
kubectl taint node --all node-role.kubernetes.io/control-plane:NoSchedule-

That completes the installation; everything below is optional.

5. Install Kuboard

  • Kuboard is a Kubernetes management platform developed in China. Website: https://www.kuboard.cn/

This is the online installation; the website also documents an offline installation.

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
