Deploying a Kubernetes 1.24.0 Cluster with kubeadm

01 - Deploying a Kubernetes Cluster with kubeadm

Table of Contents

  • 01 - Deploying a Kubernetes Cluster with kubeadm
    • 0. Deployment Overview
    • 1. Environment
    • 2. Cluster Pre-configuration
    • 3. Installing Containerd
      • 3.1. Notes on the cgroup driver
      • 3.2. Deploying containerd
      • 3.3. Configuring Containerd
      • 3.4. Installing nerdctl
      • 3.5. Installing crictl
      • 3.6. Installing BuildKit (optional, for nerdctl build)
    • 4. Installing the Kubernetes Cluster
      • 4.1. Bridge settings
      • 4.2. Configuring IPVS
      • 4.3. Installing kubelet, kubeadm, and kubectl
      • 4.4. Initializing the control plane -- master node only
      • 4.5. Installing the Calico network plugin (master only)
      • 4.6. Enabling kubectl on worker nodes
      • 4.7. Configuring TAB completion

0. Deployment Overview

[Figure 1: overview of the kubeadm deployment process]

1. Environment

Hostname       IP Address       Role
k8s-master01   192.168.122.20   Kubernetes control plane node
k8s-worker01   192.168.122.21   Kubernetes worker node
k8s-worker02   192.168.122.22   Kubernetes worker node
  • Container runtime: containerd 1.6.6

  • Kubernetes version: 1.24.0

2. Cluster Pre-configuration

  1. Configure static name resolution
# cat >> /etc/hosts << EOF
192.168.122.20      k8s-master01.local.example.com        k8s-master01
192.168.122.21      k8s-worker01.local.example.com        k8s-worker01
192.168.122.22      k8s-worker02.local.example.com        k8s-worker02
EOF
  2. Disable the firewall and SELinux
# sed -i 's/enforcing/disabled/' /etc/selinux/config
# setenforce 0
# systemctl disable --now firewalld
# iptables -F
  3. Disable the swap partition
# sed -ri 's/.*swap.*/#&/' /etc/fstab 
# swapoff -a 
# free -m
total        used        free      shared  buff/cache   available
Mem:           3929         147        3536           8         245        3555
Swap:             0           0           0
  4. Configure time synchronization (an example chrony.conf edit is sketched after the output below)
# yum install -y chrony
# systemctl enable --now chronyd
# vim /etc/chrony.conf
# systemctl restart chronyd
# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2  10   377    96  -1598us[-2163us] +/-   21ms
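
The vim /etc/chrony.conf step above is where the NTP source is chosen; a non-interactive sketch, assuming the Aliyun public NTP service (ntp.aliyun.com is an assumption here -- substitute whatever server your environment actually uses):

# sed -i 's/^server .*/#&/; s/^pool .*/#&/' /etc/chrony.conf   # comment out the default sources
# echo 'server ntp.aliyun.com iburst' >> /etc/chrony.conf      # assumed NTP server
# systemctl restart chronyd && chronyc sources -v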

3. Installing Containerd

3.1. Notes on the cgroup driver

  • Linux uses cgroups to isolate and control resources
  • CentOS boots with systemd as its init system (the one driven by systemctl; before CentOS 7 this was /etc/rc.d/init.d), and systemd acts as a cgroup manager for the cgroupfs
  • If containerd is also allowed to manage the cgroupfs directly, that creates a second cgroup manager
  • Running two cgroup managers on one system is unstable, so containerd must be configured to let systemd manage the cgroupfs (a quick check is sketched below)
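
A quick sanity check of the points above: confirm that systemd really is PID 1 and see which cgroup filesystem the host mounts. These are standard procps/coreutils commands, not part of the original deployment steps:

# ps -p 1 -o comm=              # should print "systemd"
# stat -fc %T /sys/fs/cgroup/   # "tmpfs" indicates cgroup v1, "cgroup2fs" indicates cgroup v2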

3.2. Deploying containerd

  1. Download the containerd binary release and extract it
# wget https://github.com/containerd/containerd/releases/download/v1.4.12/cri-containerd-cni-1.4.12-linux-amd64.tar.gz
# mkdir cri-containerd-cni
# tar -zxvf cri-containerd-cni-1.4.12-linux-amd64.tar.gz -C cri-containerd-cni
  2. Copy the extracted files into the system configuration and binary directories
# cp -a cri-containerd-cni/etc/systemd/system/containerd.service /etc/systemd/system
# cp -a cri-containerd-cni/etc/crictl.yaml /etc
# cp -a cri-containerd-cni/etc/cni /etc
# cp -a cri-containerd-cni/usr/local/sbin/runc /usr/local/sbin
# cp -a cri-containerd-cni/usr/local/bin/* /usr/local/bin
# cp -a cri-containerd-cni/opt/* /opt
  3. Start containerd and enable it at boot
# systemctl daemon-reload 
# systemctl enable containerd --now
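
To verify that containerd came up correctly (assuming the binaries were copied into /usr/local/bin and /usr/local/sbin as above):

# systemctl status containerd
# containerd --version
# runc --version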

3.3. Configuring Containerd

  1. Generate the default containerd configuration file
# mkdir /etc/containerd
# containerd config default | tee /etc/containerd/config.toml
### The default containerd configuration file is as follows ###
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
plugin_dir = ""
disabled_plugins = []
required_plugins = []
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[ttrpc]
  address = ""
  uid = 0
  gid = 0

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[plugins]
  [plugins."io.containerd.gc.v1.scheduler"]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
  [plugins."io.containerd.grpc.v1.cri"]
    disable_tcp_service = true
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    stream_idle_timeout = "4h0m0s"
    enable_selinux = false
    selinux_category_range = 1024
    sandbox_image = "k8s.gcr.io/pause:3.2"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    disable_cgroup = false
    disable_apparmor = false
    restrict_oom_score_adj = false
    max_concurrent_downloads = 3
    disable_proc_mount = false
    unset_seccomp_profile = ""
    tolerate_missing_hugetlb_controller = true
    disable_hugetlb_controller = true
    ignore_image_defined_volumes = false
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      no_pivot = false
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
        base_runtime_spec = ""
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
        privileged_without_host_devices = false
        base_runtime_spec = ""
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
          base_runtime_spec = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = ""
    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"
  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    root_path = ""
    pool_name = ""
    base_image_size = ""
    async_remove = false
  2. Modify the containerd configuration file
# vim /etc/containerd/config.toml
... partial output omitted ...
enable_selinux = false
    selinux_category_range = 1024
    # sandbox_image = "k8s.gcr.io/pause:3.2"   # comment out this line and replace it as follows
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
    
... partial output omitted ...
 privileged_without_host_devices = false
          base_runtime_spec = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            # add the following line
            SystemdCgroup = true

... partial output omitted ...
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          #endpoint = ["https://registry-1.docker.io"]
          # comment out the line above and add the following three lines
          endpoint = ["https://docker.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://registry.cn-hangzhou.aliyuncs.com/google_containers"]
... partial output omitted ...
  3. Restart containerd
# systemctl daemon-reload 
# systemctl enable containerd --now 
# systemctl status containerd
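
To confirm that the edits were picked up, dump the merged effective configuration and spot-check the two keys changed above (containerd config dump is part of the containerd CLI; the grep pattern is only for illustration):

# containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
# expect SystemdCgroup = true and the Aliyun pause image in the output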

3.4. Installing nerdctl

  • nerdctl is containerd's native command-line management tool, and its CLI is compatible with Docker's

  • Containers and images managed by nerdctl are completely isolated from the containers and images used by Kubernetes; for now, Kubernetes containers and images can only be viewed and pulled with the crictl command-line tool

  • For the version mapping between nerdctl and containerd, see: https://github.com/containerd/nerdctl/blob/v0.6.1/go.mod

  • Download and extract it, copy the binary into a directory on the PATH, and it is ready to use

# wget https://github.com/containerd/nerdctl/releases/download/v0.6.1/nerdctl-0.6.1-linux-amd64.tar.gz
# mkdir nerdctl
# tar -zxvf nerdctl-0.6.1-linux-amd64.tar.gz -C nerdctl
# cp -a nerdctl/nerdctl /usr/bin
# nerdctl images

Note:

  • Port mapping with nerdctl run -p hostPort:containerPort does not work in this setup; the only workaround is to run the container with nerdctl run --net host (a sketch follows below)
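
A minimal sketch of the host-network workaround; the nginx:alpine image, container name, and port are only illustrative:

# nerdctl pull nginx:alpine
# nerdctl run -d --name web --net host nginx:alpine
# curl -I http://127.0.0.1:80   # nginx answers directly on the host port
# nerdctl rm -f web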

3.5. Installing crictl

crictl is the command-line tool Kubernetes uses to manage images and containers on containerd; it is mainly used for debugging

  1. Download and extract
# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz
# tar -zxvf crictl-v1.24.0-linux-amd64.tar.gz -C /usr/local/bin
  2. Create the crictl configuration file
# cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
EOF
# systemctl daemon-reload
  3. Use crictl
# crictl images
IMAGE               TAG                 IMAGE ID            SIZE
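
A few other crictl subcommands that are handy once the cluster is running (all standard crictl commands; <container-id> is a placeholder):

# crictl ps -a          # list all CRI containers
# crictl pods           # list pod sandboxes
# crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
# crictl logs <container-id>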

3.6. Installing BuildKit (optional, for nerdctl build)

  • Running nerdctl build to build an image from a Dockerfile fails with an error about buildkit being missing, so BuildKit needs to be installed

  • For the version mapping between BuildKit and containerd, see: https://github.com/moby/buildkit/blob/v0.8.3/go.mod

  1. Download, extract, and install the BuildKit binaries
# mkdir buildkit && cd buildkit/
# wget https://github.com/moby/buildkit/releases/download/v0.8.3/buildkit-v0.8.3.linux-amd64.tar.gz
# tar -zxvf buildkit-v0.8.3.linux-amd64.tar.gz
# cp -a bin /usr/local
  2. Write the systemd unit file for buildkitd
# cat /etc/systemd/system/buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target
  3. Start the buildkitd daemon
# systemctl daemon-reload
# systemctl enable buildkit --now
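
With buildkitd running, nerdctl build can build from a Dockerfile. A minimal sketch; the directory name, image tag, and Dockerfile content below are illustrative, not from the original article:

# mkdir demo && cd demo
# cat > Dockerfile << EOF
FROM nginx:alpine
RUN echo "built with nerdctl + buildkit" > /usr/share/nginx/html/index.html
EOF
# nerdctl build -t demo-nginx:v1 .
# nerdctl images | grep demo-nginx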

4. Installing the Kubernetes Cluster

4.1. Bridge settings

  • To let iptables on each server see bridged traffic, bridge filtering and packet forwarding must be enabled
  1. Create the /etc/modules-load.d/k8s.conf file
# cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
  2. Create the /etc/sysctl.d/k8s.conf file
# cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
  3. Apply the sysctl configuration
# sysctl --system
  4. Load the br_netfilter bridge-filtering module and the overlay module
# modprobe br_netfilter && modprobe overlay
  5. Verify that the modules loaded successfully
# lsmod | grep -e br_netfilter -e overlay
br_netfilter           24576  0
bridge                192512  1 br_netfilter
overlay               135168  0
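
It is also worth confirming that the sysctl keys themselves are now applied (standard sysctl usage; each key should report 1):

# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward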

4.2. Configuring IPVS

Services can be proxied by kube-proxy in either iptables or IPVS mode. IPVS mode performs better, but its kernel modules must be loaded manually before it can be used (a sketch of switching kube-proxy to IPVS mode follows the verification step below).

  1. Install ipset and ipvsadm
# yum install ipset ipvsadm
  2. Create the script /etc/sysconfig/modules/ipvs.modules with the following content
# cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4 || modprobe -- nf_conntrack   # kernels >= 4.19 renamed the module to nf_conntrack
EOF
# chmod +x /etc/sysconfig/modules/ipvs.modules 
# bash /etc/sysconfig/modules/ipvs.modules
  3. Verify that the modules loaded successfully
# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
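
Loading the modules does not by itself switch kube-proxy to IPVS; that is controlled by kube-proxy's mode setting. A sketch of changing it after the cluster has been initialized in section 4.4 -- editing the kubeadm-generated kube-proxy ConfigMap is one common approach, not the only one:

# kubectl -n kube-system edit configmap kube-proxy          # change mode: "" to mode: "ipvs" in config.conf
# kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate kube-proxy pods so they pick up the new mode
# ipvsadm -Ln                                               # IPVS virtual servers should now be listed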

4.3. Installing kubelet, kubeadm, and kubectl

  1. Add the package repository
# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  2. Install kubeadm, kubectl, and kubelet, then enable kubelet
# yum install -y --setopt=obsoletes=0 kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
# systemctl enable kubelet --now
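
A quick check that the expected versions were installed:

# kubeadm version -o short
# kubelet --version
# kubectl version --client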

Notes:

  • obsoletes=1 means that updating an RPM also removes the package it obsoletes; obsoletes=0 means the old package is not removed during the update

  • After kubelet starts, you can follow its detailed logs with journalctl -f -u kubelet

  • kubelet uses systemd as its cgroup driver by default in a kubeadm deployment, matching the SystemdCgroup setting configured for containerd above

  • Once enabled, kubelet will restart every few seconds: it is crash-looping while it waits for instructions from kubeadm

  3. Pre-pull the images

    1. Check which image versions the cluster needs
# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
    2. Create the image-download script images.sh and run it; worker nodes only need kube-proxy and pause
$ master node
# tee ./images.sh <<'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.24.0
kube-controller-manager:v1.24.0
kube-scheduler:v1.24.0
kube-proxy:v1.24.0
pause:3.7
etcd:3.5.3-0
coredns:v1.8.6
)
for imageName in ${images[@]} ; do
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
# chmod +x ./images.sh && ./images.sh

$ worker nodes
# tee ./images.sh <<'EOF'
#!/bin/bash
images=(
kube-proxy:v1.24.0
pause:3.7
)
for imageName in ${images[@]} ; do
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
# chmod +x ./images.sh && ./images.sh

4.4. Initializing the control plane -- master node only

  1. Deploy the master node from the command line
# kubeadm init \
--apiserver-advertise-address=192.168.122.20 \
--control-plane-endpoint=k8s-master01 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.24.0 \
--service-cidr=172.10.1.0/24 \
--pod-network-cidr=10.10.0.0/16
  2. Errors during deployment and how to fix them

    1. Error message

[Figure 2: error output from kubeadm init]

    2. Solution: run the following on all hosts
# crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
# ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
# kubeadm reset -f

Output on a successful master node deployment:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.20:6443 --token 69bygk.27gkamxs4uta1tr3 \
        --discovery-token-ca-cert-hash sha256:6501bf7893d8157d6824b78ed647adb3f1c607e762cbfb080a4223d9b954ee39
  3. Run the join command printed above on each worker node so that it joins the Kubernetes cluster
[root@k8s-worker01 ~]#
kubeadm join 192.168.122.20:6443 --token 69bygk.27gkamxs4uta1tr3 \
--discovery-token-ca-cert-hash sha256:6501bf7893d8157d6824b78ed647adb3f1c607e762cbfb080a4223d9b954ee39

[root@k8s-worker02 ~]#
kubeadm join 192.168.122.20:6443 --token 69bygk.27gkamxs4uta1tr3 \
--discovery-token-ca-cert-hash sha256:6501bf7893d8157d6824b78ed647adb3f1c607e762cbfb080a4223d9b954ee39
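
The bootstrap token in the join command expires after 24 hours by default. If a node is added later, print a fresh join command on the master with:

# kubeadm token create --print-join-command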
  4. Check the cluster from the master node
[root@k8s-master01 ~]# kubectl get node
NAME                             STATUS   ROLES           AGE     VERSION
k8s-master01.local.example.com   Ready    control-plane   2m49s   v1.24.0
k8s-worker01.local.example.com   Ready    <none>          2m10s   v1.24.0
k8s-worker02.local.example.com   Ready    <none>          2m7s    v1.24.0

4.5. Installing the Calico network plugin (master only)

  • See the Calico official documentation
  • For choosing a Calico version, see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
  • The plugin runs as a DaemonSet, so a copy runs on every node
  1. Download the calico.yaml manifest
# curl https://docs.projectcalico.org/archive/v3.19/manifests/calico.yaml -O
  2. Replace the following section

            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

   with the block below, where the value is the pod-network-cidr passed to kubeadm init (a sed sketch for making the same edit follows this snippet):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.10.0.0/16"
  3. Check which images are required
# cat calico.yaml | grep image
          image: docker.io/calico/cni:v3.19.4
          image: docker.io/calico/cni:v3.19.4
          image: docker.io/calico/pod2daemon-flexvol:v3.19.4
          image: docker.io/calico/node:v3.19.4
          image: docker.io/calico/kube-controllers:v3.19.4
  4. Create the image-download script and run it
# tee ./calicoImages.sh <<'EOF'
#!/bin/bash

images=(
docker.io/calico/cni:v3.19.4
docker.io/calico/pod2daemon-flexvol:v3.19.4
docker.io/calico/node:v3.19.4
docker.io/calico/kube-controllers:v3.19.4
)
for imageName in ${images[@]} ; do
crictl pull $imageName
done
EOF
# chmod +x ./calicoImages.sh && ./calicoImages.sh
  5. Deploy Calico
# kubectl apply -f calico.yaml
[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-57d95cb479-4whbz                 1/1     Running   1 (66s ago)   4m19s
kube-system   calico-node-pxj5h                                        0/1     Running   0             40s
kube-system   calico-node-vz9wv                                        1/1     Running   0             4m19s
kube-system   calico-node-zdjqg                                        0/1     Running   0             52s
kube-system   coredns-7f74c56694-8zhtg                                 1/1     Running   0             14m
kube-system   coredns-7f74c56694-klp4h                                 1/1     Running   0             14m
kube-system   etcd-k8s-master01.local.example.com                      1/1     Running   1             14m
kube-system   kube-apiserver-k8s-master01.local.example.com            1/1     Running   1             14m
kube-system   kube-controller-manager-k8s-master01.local.example.com   1/1     Running   1             14m
kube-system   kube-proxy-n4cbg                                         1/1     Running   0             14m
kube-system   kube-proxy-pvzkq                                         1/1     Running   0             14m
kube-system   kube-proxy-vqlff                                         1/1     Running   0             14m
kube-system   kube-scheduler-k8s-master01.local.example.com            1/1     Running   1             14m

[root@k8s-master01 ~]# kubectl get node
NAME                             STATUS   ROLES           AGE   VERSION
k8s-master01.local.example.com   Ready    control-plane   15m   v1.24.0
k8s-worker01.local.example.com   Ready    <none>          14m   v1.24.0
k8s-worker02.local.example.com   Ready    <none>          14m   v1.24.0

4.6. Enabling kubectl on worker nodes

On the master node, copy $HOME/.kube to the $HOME directory of each worker node:

# scp -r $HOME/.kube k8s-worker01:$HOME
# scp -r $HOME/.kube k8s-worker02:$HOME

4.7. Configuring TAB completion

# echo 'source /usr/share/bash-completion/bash_completion' >>/root/.bashrc
# echo 'source  <(kubectl completion bash)' >>/root/.bashrc
# source /root/.bashrc

This completes the deployment of a Kubernetes 1.24.0 cluster with kubeadm.
