[Containerd Edition] Installing a Highly Available K8s 1.23+ Cluster with Kubeadm

Table of Contents
  • Basic Environment Configuration
    • Node Planning
    • Network Segments and Software Versions
    • Basic Configuration
    • Kernel Upgrade
  • Installing the K8s Components and Runtime
    • Installing Containerd
    • Installing the K8s Components
  • High Availability Setup
  • Cluster Initialization
    • Master01 Initialization
    • Adding the Master Nodes
    • Adding the Worker Nodes
  • Installing the Calico CNI Plugin
  • Deploying the Metrics Server
  • Deploying the Dashboard
    • Installation
    • Logging in to the Dashboard


    Basic Environment Configuration


    Node Planning

    Hostname            IP Address          Description
    k8s-master01 ~ 03   10.0.0.201 ~ 203    Master nodes * 3
    k8s-master-lb       10.0.0.236          Keepalived virtual IP
    k8s-node01 ~ 02     10.0.0.204 ~ 205    Worker nodes * 2

    Network Segments and Software Versions

    Configuration       Notes
    OS version          CentOS 7.9
    Docker version      20.10.x
    Pod CIDR            172.16.0.0/12
    Service CIDR        192.168.0.0/16

    Basic Configuration

    Configure hosts on all nodes by editing /etc/hosts as follows:

    10.0.0.201 k8s-master01
    10.0.0.202 k8s-master02
    10.0.0.203 k8s-master03
    10.0.0.236 k8s-master-lb # For a non-HA cluster, this is Master01's IP
    10.0.0.204 k8s-node01
    10.0.0.205 k8s-node02
    

    Configure the yum repositories:

    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
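
    To confirm the new repositories resolve before moving on, the yum cache can be rebuilt as an optional check (not part of the original steps):

    yum clean all && yum makecache
    yum repolist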
    
    

    Install the required tools:

    yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
    

    Disable the firewall, SELinux, dnsmasq, and swap on all nodes:

    systemctl disable --now firewalld 
    systemctl disable --now dnsmasq
    systemctl disable --now NetworkManager
    
    setenforce 0
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
    
    

    Time synchronization:

    rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
    yum install ntpdate -y
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo 'Asia/Shanghai' >/etc/timezone
    ntpdate time2.aliyun.com
    # Add to crontab
    */5 * * * * /usr/sbin/ntpdate time2.aliyun.com
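
    The cron entry above can also be installed non-interactively; a minimal sketch, assuming the root crontab (any existing entries are preserved):

    (crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -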
    

    Configure resource limits on all nodes:

    ulimit -SHn 65535
    
    vim /etc/security/limits.conf
    # Append the following at the end
    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 65535
    * hard nproc 655350
    * soft memlock unlimited
    * hard memlock unlimited
    

    Configure passwordless SSH from Master01 to all nodes:

    ssh-keygen -t rsa
    for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
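
    As an optional check that passwordless login works before the later scp steps, each node can be reached in a loop:

    for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done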
    
    

    Download all of the installation source files:

    cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git
    

    Upgrade the system on all nodes and reboot:

    yum update -y --exclude=kernel* && reboot
    

    Kernel Upgrade

    Download the kernel packages:

    cd /root
    wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
    wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
    
    

    Copy them from the master01 node to the other nodes:

    for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
    

    Install the kernel on all nodes:

    cd /root && yum localinstall -y kernel-ml*
    # Change the default boot kernel on all nodes
    grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
    
    grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
    # Check that the default kernel is 4.19
    [root@k8s-master02 ~]# grubby --default-kernel
    /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
    # Reboot all nodes, then confirm the running kernel is 4.19
    [root@k8s-master02 ~]# uname -a
    Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
    

    Install ipvsadm on all nodes:

    yum install ipvsadm ipset sysstat conntrack libseccomp -y
    

    Configure the IPVS modules on all nodes:

    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    vim /etc/modules-load.d/ipvs.conf 
    # Add the following content
    ip_vs
    ip_vs_lc
    ip_vs_wlc
    ip_vs_rr
    ip_vs_wrr
    ip_vs_lblc
    ip_vs_lblcr
    ip_vs_dh
    ip_vs_sh
    ip_vs_fo
    ip_vs_nq
    ip_vs_sed
    ip_vs_ftp
    nf_conntrack
    ip_tables
    ip_set
    xt_set
    ipt_set
    ipt_rpfilter
    ipt_REJECT
    ipip
    # Load the modules automatically at boot
    systemctl enable --now systemd-modules-load.service
    

    Configure the Kubernetes kernel parameters on all nodes:

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    fs.may_detach_mounts = 1
    net.ipv4.conf.all.route_localnet = 1
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_watches=89100
    fs.file-max=52706963
    fs.nr_open=52706963
    net.netfilter.nf_conntrack_max=2310720
    
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl =15
    net.ipv4.tcp_max_tw_buckets = 36000
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_max_orphans = 327680
    net.ipv4.tcp_orphan_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 16384
    net.ipv4.ip_conntrack_max = 65536
    net.ipv4.tcp_timestamps = 0
    net.core.somaxconn = 16384
    EOF
    sysctl --system
    
    

    After rebooting the servers, verify that the modules are still loaded:

    reboot
    lsmod | grep --color=auto -e ip_vs -e nf_conntrack
    
    


    Installing the K8s Components and Runtime

    Installing Containerd

    Install docker-ce 20.10 on all nodes (the docker-ce package pulls in containerd.io as a dependency):

    yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
    

    Configure the kernel modules required by Containerd (all nodes):

    # cat <<EOF | tee /etc/modules-load.d/containerd.conf
    overlay
    br_netfilter
    EOF

    Load the modules on all nodes:

    # modprobe -- overlay
    # modprobe -- br_netfilter
    

    Configure the kernel parameters required by Containerd on all nodes:

    cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF

    Apply the kernel parameters on all nodes:

    # sysctl --system
    

    Generate the default Containerd configuration file on all nodes:

    # mkdir -p /etc/containerd
    # containerd config default | tee /etc/containerd/config.toml
    

    Change Containerd's cgroup driver to systemd on all nodes:

    # vim /etc/containerd/config.toml
    Find containerd.runtimes.runc.options and add SystemdCgroup = true.
    On all nodes, also change sandbox_image (the pause image) to an address that matches your environment, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
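
    Both edits can also be applied non-interactively; a minimal sed sketch, assuming the default layout produced by containerd config default above:

    sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
    sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#g' /etc/containerd/config.toml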
    

    Start Containerd on all nodes and enable it at boot:

    # systemctl daemon-reload
    # systemctl enable --now containerd
    

    Configure the runtime endpoint used by the crictl client on all nodes:

    # cat > /etc/crictl.yaml <<EOF
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
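
    To confirm crictl can talk to containerd, an optional check (not part of the original steps):

    # crictl info
    # crictl ps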

    Installing the K8s Components

    Install the latest 1.23 versions of kubeadm, kubelet, and kubectl on all nodes:
    # yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y
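
    The version installed here is referenced later by the kubernetesVersion field in the kubeadm config; it can be checked with:

    # kubeadm version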
    

    Change the kubelet configuration to use Containerd as the runtime:

    # cat >/etc/sysconfig/kubelet<<EOF
    KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
    EOF

    Enable kubelet to start at boot:

    # systemctl daemon-reload
    # systemctl enable --now kubelet
    

    High Availability Setup

    Install HAProxy and Keepalived via yum on all master nodes:

    yum install keepalived haproxy -y
    

    Configure HAProxy on all master nodes (the configuration is identical on every master):

    [root@k8s-master01 etc]# mkdir /etc/haproxy
    [root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg 
    global
      maxconn  2000
      ulimit-n  16384
      log  127.0.0.1 local0 err
      stats timeout 30s
    
    defaults
      log global
      mode  http
      option  httplog
      timeout connect 5000
      timeout client  50000
      timeout server  50000
      timeout http-request 15s
      timeout http-keep-alive 15s
    
    frontend monitor-in
      bind *:33305
      mode http
      option httplog
      monitor-uri /monitor
    
    frontend k8s-master
      bind 0.0.0.0:16443
      bind 127.0.0.1:16443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend k8s-master
    
    backend k8s-master
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server k8s-master01	10.0.0.201:6443  check
      server k8s-master02	10.0.0.202:6443  check
      server k8s-master03	10.0.0.203:6443  check
    
    

    Configure Keepalived on all master nodes. Note that the configuration is not identical: state, priority, and mcast_src_ip differ per node, and the interface name must match the server's NIC.

    Master01 configuration:
    [root@k8s-master01 etc]# mkdir /etc/keepalived
    
    [root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
        script_user root
        enable_script_security
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        mcast_src_ip 10.0.0.201
        virtual_router_id 51
        priority 101
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            10.0.0.236
        }
        track_script {
           chk_apiserver
        }
    }
    
    
    Master02 configuration:
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
        script_user root
        enable_script_security
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        mcast_src_ip 10.0.0.202
        virtual_router_id 51
        priority 100
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            10.0.0.236
        }
        track_script {
           chk_apiserver
        }
    }
    
    
    Master03 configuration:
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
        script_user root
        enable_script_security
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 5
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        mcast_src_ip 10.0.0.203
        virtual_router_id 51
        priority 100
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            10.0.0.236
        }
        track_script {
           chk_apiserver
        }
    }
    
    

    Configure the Keepalived health check script on all master nodes:

    [root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh 
    #!/bin/bash
    
    err=0
    for k in $(seq 1 3)
    do
        check_code=$(pgrep haproxy)
        if [[ $check_code == "" ]]; then
            err=$(expr $err + 1)
            sleep 1
            continue
        else
            err=0
            break
        fi
    done
    
    if [[ $err != "0" ]]; then
        echo "systemctl stop keepalived"
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi
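
    The script must be executable for Keepalived to run it; assuming the path above:

    chmod +x /etc/keepalived/check_apiserver.sh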
    

    Start HAProxy and Keepalived:

    [root@k8s-master01 keepalived]# systemctl daemon-reload
    [root@k8s-master01 keepalived]# systemctl enable --now haproxy
    [root@k8s-master01 keepalived]# systemctl enable --now keepalived
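
    Before initializing the cluster it is worth checking that the VIP is up and the HAProxy frontend accepts connections; an optional check (the API server is not running yet, so only connectivity is being tested):

    ping 10.0.0.236 -c 4
    telnet 10.0.0.236 16443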
    


    Cluster Initialization

    Master01 Initialization

    Create the following kubeadm-config.yaml on Master01:

    vim kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: 7t2weq.bjbawausm0jaxury
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.0.0.201
      bindPort: 6443
    nodeRegistration:
      criSocket: /run/containerd/containerd.sock
      name: k8s-master01
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      certSANs:
      - 10.0.0.236
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 10.0.0.236:16443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.1 # Change this version to match the output of kubeadm version
    networking:
      dnsDomain: cluster.local
      podSubnet: 172.16.0.0/12
      serviceSubnet: 192.168.0.0/16
    scheduler: {}
    

    Update the kubeadm config file to the latest format (this produces new.yaml):

    kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
    

    Copy the new.yaml file to the other master nodes:

    for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done
    

    Pre-pull the images on all master nodes:

    kubeadm config images pull --config /root/new.yaml 
    

    Initialize the Master01 node:

    kubeadm init --config /root/new.yaml  --upload-certs
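
    If the initialization fails, the node can be cleaned up before retrying; a minimal sketch, adjust to your environment:

    kubeadm reset -f
    ipvsadm --clear
    rm -rf ~/.kube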
    

    After a successful initialization, the output ends with the join commands:

      kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    	--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
    
    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
    
    

    Configure the environment variable on Master01 so kubectl can access the Kubernetes cluster:

    cat <<EOF >> /root/.bashrc
    export KUBECONFIG=/etc/kubernetes/admin.conf
    EOF
    source /root/.bashrc
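
    At this point kubectl can reach the cluster; the master will show NotReady until the Calico CNI plugin is installed in a later step:

    kubectl get node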
    

    Adding the Master Nodes

    Join them with the control-plane join command generated by the initialization output above:

    kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    	--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
    
    

    Adding the Worker Nodes

    kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
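
    If the join token has expired (the default TTL is 24h), fresh join credentials can be generated on Master01; a sketch:

    # Print a new worker join command
    kubeadm token create --print-join-command
    # Generate a new certificate key for joining additional control-plane nodes
    kubeadm init phase upload-certs --upload-certs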
    
    

    After the nodes are added, the result looks like this: (screenshot omitted)

    Installing the Calico CNI Plugin

    Run only on master01:

    cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/
    

    Modify the Pod CIDR in the Calico manifest:

    POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
    
    sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
    kubectl apply -f calico.yaml
    
    

    After creation, wait a few minutes and check the status: (screenshot omitted)
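
    Roughly the same information can be checked from the command line:

    kubectl get po -n kube-system | grep calico
    kubectl get node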

    Deploying the Metrics Server

    Copy front-proxy-ca.crt from Master01 to all worker nodes:

    scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
    scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt   # repeat for any additional worker nodes
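
    Equivalently, a loop over the worker nodes from the node plan (a sketch; adjust the node list to your environment):

    for i in k8s-node01 k8s-node02; do
      ssh $i mkdir -p /etc/kubernetes/pki
      scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt
    done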
    
    

    Install the Metrics Server:

    cd /root/k8s-ha-install/kubeadm-metrics-server
    
    # kubectl  create -f comp.yaml 
    serviceaccount/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    service/metrics-server created
    deployment.apps/metrics-server created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    

    Check the Metrics Server status: (screenshot omitted)
    Once it is running normally, query the metrics:

    # kubectl top node
    NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
    k8s-master01   153m         3%     1701Mi          44%       
    k8s-master02   125m         3%     1693Mi          44%       
    k8s-master03   129m         3%     1590Mi          41%       
    k8s-node01     73m          1%     989Mi           25%       
    k8s-node02     64m          1%     950Mi           24%       
    # kubectl top po -A
    NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)   
    kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi            
    kube-system   calico-node-6gqpb                          21m          85Mi            
    kube-system   calico-node-bmvjt                          29m          76Mi            
    kube-system   calico-node-hdp9c                          15m          82Mi            
    kube-system   calico-node-wwrfv                          23m          86Mi            
    kube-system   calico-node-zzv88                          22m          84Mi            
    kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi            
    kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi            
    kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi            
    kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi            
    kube-system   etcd-k8s-master01                          24m          96Mi            
    kube-system   etcd-k8s-master02                          20m          91Mi            
    kube-system   etcd-k8s-master03                          21m          92Mi            
    kube-system   kube-apiserver-k8s-master01                41m          502Mi           
    kube-system   kube-apiserver-k8s-master02                35m          476Mi           
    kube-system   kube-apiserver-k8s-master03                71m          480Mi           
    kube-system   kube-controller-manager-k8s-master01       15m          65Mi            
    kube-system   kube-controller-manager-k8s-master02       1m           26Mi            
    kube-system   kube-controller-manager-k8s-master03       2m           27Mi            
    kube-system   kube-proxy-8lt45                           1m           18Mi            
    kube-system   kube-proxy-d6jfh                           1m           18Mi            
    kube-system   kube-proxy-hfnvz                           1m           19Mi            
    kube-system   kube-proxy-nsms8                           1m           18Mi            
    kube-system   kube-proxy-xmlhq                           3m           21Mi            
    kube-system   kube-scheduler-k8s-master01                2m           26Mi            
    kube-system   kube-scheduler-k8s-master02                2m           24Mi            
    kube-system   kube-scheduler-k8s-master03                2m           24Mi            
    kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi
    
    


    Deploying the Dashboard

    Installation

    cd /root/k8s-ha-install/dashboard/
    
    [root@k8s-master01 dashboard]# kubectl  create -f .
    serviceaccount/admin-user created
    clusterrolebinding.rbac.authorization.k8s.io/admin-user created
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
    
    

    Logging in to the Dashboard

    Check the Dashboard service port:

    # kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
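
    If the service type is still ClusterIP, it can be switched to NodePort so the Dashboard is reachable from outside the cluster (a sketch; skip this if it is already NodePort):

    kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'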
    

    Retrieve the admin user's token:

    [root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    Name:         admin-user-token-r4vcp
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: admin-user
                  kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w
    
    

    The Dashboard can now be accessed from any node's IP plus the NodePort shown above: (screenshots omitted)