Installing a Kubernetes Cluster with kubeadm

I. Environment Preparation

1. Node Configuration

Hostname             IP               Role     OS
kubernetes-master    192.168.xx.138   master   ubuntu-20.04.1
kubernetes-node-01   192.168.xx.139   node     ubuntu-20.04.1
kubernetes-node-02   192.168.xx.140   node     ubuntu-20.04.1

2. Permanently Disable Swap

(1) Turn off swap first

swapoff -a

(2) Edit the fstab file

vi /etc/fstab

(3) Comment out the line starting with swapfile


pl@kubernets-master:~# vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda5 during installation
UUID=1793612a-4b22-4654-b6a2-7827f533e071 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=0CC2-FE26  /boot/efi       vfat    umask=0077      0       1
#/swapfile                                 none            swap    sw              0       0
# /swapfile       none    swap    sw      0       0

pl@kubernets-master:~# free
              total        used        free      shared  buff/cache   available
Mem:        3995080     1288808     1053548        9184     1652724     2474356
Swap:             0           0           0
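If you prefer not to edit the file by hand, a one-liner can comment out the swap entry (a sketch using GNU sed; it comments every uncommented fstab line whose type field is swap, so review /etc/fstab afterwards):

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab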

(4) Note

If swap is not disabled, kubeadm will fail its preflight checks when initializing Kubernetes, with an error like:

[ERROR Swap]: running with swap on is not supported. Please disable swap

3. Disable the Firewall

ufw disable
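Confirm the firewall is off:

ufw status
# expected output
Status: inactive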

4. Configure DNS Resolution

vi /etc/systemd/resolved.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See resolved.conf(5) for details
#DNS=114.114.114.114
#nameserver 8.8.8.8
[Resolve]
DNS=114.114.114.114
#FallbackDNS=
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#DNSOverTLS=no
#Cache=yes
#DNSStubListener=yes
#ReadEtcHosts=yes

# restart the service
service systemd-resolved restart
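To check that the new DNS server took effect (resolvectl ships with systemd on Ubuntu 20.04):

resolvectl status | grep 'DNS Servers'
# 114.114.114.114 should appear among the listed servers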

5. Install Docker

# Update the package index
sudo apt-get update
# Install the required dependencies
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the package source
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Update the package index again
sudo apt-get -y update
# Install Docker CE
sudo apt-get -y install docker-ce
# Enable the Docker service at boot
systemctl enable docker.service
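Note that enable only registers Docker to start at boot; start it now and sanity-check the daemon (docker info also reports the cgroup driver, which the daemon.json below sets to systemd):

systemctl start docker.service
docker info | grep -i 'cgroup driver'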

Note:

Public registry mirrors in China can be slow, so you can swap in your own mirror by editing /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": [
    "https://xxx.mirror.aliyuncs.com",
    "https://dockerhub.azk8s.cn",
    "https://registry.docker-cn.com"
  ],
  "storage-driver": "overlay2"
}
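After saving the file, restart Docker so the changes take effect:

systemctl restart docker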

6. Install the Required Kubernetes Tools

(1) Install system tools

pl@kubernets-master:~# curl -sSL https://manageacloud.com/api/cm/configuration/activate_swap/ubuntu/manageacloud-production-script.sh | bash
pl@kubernets-master:~# apt-get update && apt-get install -y apt-transport-https

(2) Install the GPG key

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

  apt-get is the APT package-management tool on Debian Linux and its derivatives; all Debian-based distributions use this packaging system. A deb package bundles an application's files together, much like an installer does on Windows.

  apt-get downloads resources from deb repositories, and those repositories are signed: to install from one, the matching public key must be present locally. The GPG key installed above is that key; once it is in place, apt can verify and download packages from the repository.

(3) Add the package source

cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

(4) Install

apt-get update && apt-get install -y kubelet kubeadm kubectl
  • Why run apt-get update before each install:

    • apt-get update refreshes the package index; apt installs from the locally cached index, so refreshing it first ensures the packages you download are current.
  • apt-get install -y installs multiple packages in one command and answers yes to prompts automatically
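A common extra step (not part of the original walkthrough) is to pin these three packages so a routine apt upgrade cannot move them out of step with the running cluster:

apt-mark hold kubelet kubeadm kubectl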

7. Synchronize Time

(1) Set the time zone

dpkg-reconfigure tzdata

(2) In the dpkg-reconfigure dialogs, select Asia, then Shanghai (the two screenshots here showed those dialog steps).
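The same result can be had non-interactively (assuming the Asia/Shanghai zone selected above):

timedatectl set-timezone Asia/Shanghai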

(3) Synchronize the time

# Install ntpdate
apt-get install ntpdate

# Sync the system clock against a network time source (cn.pool.ntp.org is a public NTP pool located in China)
ntpdate cn.pool.ntp.org

# Write the system time to the hardware clock
hwclock --systohc

(4) Verify the time

date

# Output like the following (check it yourself against the expected system time)
Sun Jun  2 22:02:35 CST 2019

8. Set a Static IP and Configure Hostnames

pl@kubernets-master:~# vi /etc/netplan/01-network-manager-all.yaml
network:
    ethernets:
        ens33:
          addresses: [192.168.xx.138/24]   # the xx subnet must match your host's network, otherwise DNS resolution and pinging external hosts will both fail
          gateway4: 192.168.xx.6
          nameservers:
            addresses: [192.168.xx.2]
    version: 2

# equivalent configs for the other two machines, each with its own address
network:
    ethernets:
        ens33:
          addresses: [192.168.xx.137/24]
          gateway4: 192.168.xx.6
          nameservers:
            addresses: [114.114.114.114,8.8.8.8,192.168.xx.3]
    version: 2

network:
    ethernets:
        ens33:
          addresses: [192.168.xx.136/24]
          gateway4: 192.168.xx.6
          nameservers:
            addresses: [192.168.xx.3,114.114.114.114]
    version: 2

# apply immediately
pl@kubernets-master:~# netplan apply

# configure the hostname (run each pair below on the corresponding machine)
hostnamectl set-hostname kubernets-node1

# configure hosts
cat >> /etc/hosts << EOF
192.168.xx.139 kubernets-node1
EOF


hostnamectl set-hostname kubernets-node2

cat >> /etc/hosts << EOF
192.168.xx.140 kubernets-node2
EOF

hostnamectl set-hostname kubernets-master

cat >> /etc/hosts << EOF
192.168.xx.138 kubernets-master
EOF
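Each machine above only gets its own hosts entry, and the node name used in kubeadm.yml below (kubernetes-master, note the extra "e") resolves nowhere, which is what produces the hostname warnings in the kubeadm init log. A more complete variant, run on every machine (IPs from the node table in section 1), would be:

cat >> /etc/hosts << EOF
192.168.xx.138 kubernets-master kubernetes-master
192.168.xx.139 kubernets-node1
192.168.xx.140 kubernets-node2
EOF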

II. Installing the Cluster

1. There are two ways to install a cluster:

  • kubeadm

    • simple to install, but rarely used in production environments
  • binary installation

    • more involved, but applicable in a wider range of situations.

This article demonstrates installing a cluster with the kubeadm tool

2. About the kubeadm Tool

Kubernetes introduced the kubeadm command-line tool in version 1.4 to simplify cluster installation and to tackle cluster high availability. In Kubernetes 1.13, kubeadm reached GA and was declared ready for production use.

kubeadm is Kubernetes' cluster installation tool and can stand up a Kubernetes cluster quickly. Installing Kubernetes largely means pulling its component images, and kubeadm already knows the set of base images needed to run Kubernetes.

3. Create kubeadm.yml

(1) Export the configuration file

# export the default configuration
kubeadm config print init-defaults --kubeconfig ClusterConfiguration > kubeadm.yml 

# parameter notes
kubeadm config print                 prints configuration for a Kubernetes cluster
init-defaults                        prints the default init configuration
--kubeconfig ClusterConfiguration    the kubeconfig file to use when talking to the cluster; if the flag is not set, a set of standard locations is searched for an existing kubeconfig file
# help
kubeadm config print -h

This command prints configurations for subcommands provided.
For details, see: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2

Usage:
  kubeadm config print [flags]
  kubeadm config print [command]

Available Commands:
  init-defaults Print default init configuration, that can be used for 'kubeadm init'
  join-defaults Print default join configuration, that can be used for 'kubeadm join'

Flags:
  -h, --help   help for print   

Global Flags:
      --add-dir-header           If true, adds the file directory to the header
      --kubeconfig string        The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm config print [command] --help" for more information about a command.

(2) Edit the configuration

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change this to the master node's IP
  advertiseAddress: 192.168.xx.138
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: kubernetes-master  # master node name (ideally identical to the machine's hostname; the mismatch with kubernets-master is what triggers the hostname warnings during init below)
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is not reachable from mainland China; switch to the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.0
networking:
  dnsDomain: cluster.local
  # set the Pod subnet to a range that does not overlap our VMs' network (this is Flannel's default range)
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

4. Pull the Required Images

kubeadm config images pull --config kubeadm.yml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.4
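You can confirm the images are now in the local Docker cache:

docker images | grep registry.aliyuncs.com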

5. Initialize the Master Node

root@kubernets-master:/home/k8s/application/k8s# kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "kubernetes-master" could not be reached
        [WARNING Hostname]: hostname "kubernetes-master": lookup kubernetes-master on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.xx.138]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.xx.138 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.xx.138 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.504288 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
bdef8c3a114b3f4673be3cb00883e0a2405f6494cb74ac05832d731b8e58ff70
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# worker nodes join the k8s cluster by running this command
kubeadm join 192.168.xx.138:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:2a24b2a9f901941b80c0f5c5822b54212893502a03eab046dc98918fd098487d

Command explanation

kubeadm init --config=kubeadm.yml  --upload-certs | tee kubeadm-init.log

--config=kubeadm.yml    use kubeadm.yml as the cluster initialization configuration
--upload-certs          upload the control-plane certificates to the kubeadm-certs Secret
|                       the Linux pipe operator: the output of the command on its left becomes the input of the command on its right; chained pipes pass each command's output on to the next command
tee                     reads standard input and writes it both to standard output and to the named file

Taken together: initialize the cluster according to kubeadm.yml, upload the control-plane certificates, and record the initialization output in kubeadm-init.log
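The join command printed by kubeadm init embeds a bootstrap token with a 24-hour TTL (the ttl field in kubeadm.yml). If it expires before a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command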

6. Configure kubectl

mkdir -p $HOME/.kube  
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# needed when running as a non-root user
chown $(id -u):$(id -g) $HOME/.kube/config

Command explanation

mkdir -p $HOME/.kube
mkdir -p    creates the directory recursively, automatically creating any missing parent directories along the way
$HOME       the HOME environment variable (/root for the root user)
.kube       the leading dot marks a hidden directory
Hidden files are not shown by a plain ls; use ls -la to see them (-l lists file details such as size, owner, and timestamps; -a includes hidden files)
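A quick check that kubectl can now reach the cluster:

kubectl cluster-info
# should report the control plane running at https://192.168.xx.138:6443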

7. Verify the Master Node Initialized Successfully


pl@kubernets-master:~# kubectl get node
NAME                STATUS   ROLES                  AGE   VERSION
kubernetes-master   Ready    control-plane,master   2d    v1.22.2

8. Join the Worker Nodes

root@kubernets-node2:/home/k8s/application/k8s# kubeadm join 192.168.xx.138:6443 --token abcdef.0123456789abcdef         --discovery-token-ca-cert-hash sha256:dc7c61c905ff4d2a3e67e15a3b9da2dadefab455d3a043cfa013925f9d19f0fd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

9. Verify the Nodes Joined the Cluster

Back on the master node, check whether the join succeeded

kubectl get node

# Output:
NAME                STATUS     ROLES                  AGE   VERSION
kubernetes-master   NotReady   control-plane,master   2d    v1.22.2
kubernets-node1     NotReady   <none>                 47h   v1.22.2
kubernets-node2     NotReady   <none>                 47h   v1.22.2

kubectl get nodes reports the nodes as NotReady; that is because the CNI network plugin has not been installed yet

10. Install a CNI Network Plugin

(1) About CNI

  • CNI (Container Network Interface) is a standard, general-purpose interface.

  • Container platforms (Docker, Kubernetes, Mesos) and container network solutions (Flannel, Calico, Weave) need only agree on one standard interface: any network solution that implements the protocol can provide networking to every platform that supports it, and CNI is exactly that standard interface protocol.

  • In Kubernetes, the kubelet invokes whichever CNI plugin it finds at the appropriate time, automatically configuring the network for pods it starts.

(2) CNI plugins available for Kubernetes include:

  • Flannel
  • Calico
  • Canal
  • Weave

This article uses the Calico network plugin

(3) About Calico

  Calico provides a secure networking solution for containers and virtual machines, proven at production scale (in public clouds and across clusters of thousands of nodes), and integrates with Kubernetes, OpenShift, Docker, Mesos, DC/OS, and OpenStack.

  Calico also enforces network security rules dynamically. With Calico's simple policy language, you can implement fine-grained control over communication between containers, virtual-machine workloads, and bare-metal host endpoints.

(4) Download and Modify the Calico Manifest

# download
wget https://docs.projectcalico.org/v3.18/manifests/calico.yaml
# edit
vi calico.yaml
# Find CALICO_IPV4POOL_CIDR and set it to the podSubnet configured in kubeadm.yml
# (the screenshot here showed that section; uncomment the variable and its value if they ship commented out):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

(5) Install the Calico Plugin

kubectl apply -f calico.yaml     

# Output:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

(6) Verify the Installation

# Calico is installed successfully once its pods reach Running status
watch kubectl get pods --all-namespaces    # the watch prefix is optional; kubectl get pods --all-namespaces alone works too

# Output (READY = ready containers, RESTARTS = restart count):
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-658558ddf8-9zzjg    1/1     Running   0          90s
kube-system   calico-node-9cr5f                           1/1     Running   0          91s
kube-system   calico-node-n99mz                           1/1     Running   0          91s
kube-system   calico-node-nl67v                           1/1     Running   0          91s
kube-system   coredns-bccdc95cf-9s4bm                     1/1     Running   0          56m
kube-system   coredns-bccdc95cf-s8ggd                     1/1     Running   0          56m
kube-system   etcd-kubernetes-master                      1/1     Running   0          55m
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          55m
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          55m
kube-system   kube-proxy-8s87d                            1/1     Running   0          36m
kube-system   kube-proxy-cbnlb                            1/1     Running   0          36m
kube-system   kube-proxy-vwhxj                            1/1     Running   0          56m
kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          55m
# The installation succeeded once the nodes report Ready status

pl@kubernets-master:~# kubectl get node
NAME                STATUS   ROLES                  AGE   VERSION
kubernetes-master   Ready    control-plane,master   2d    v1.22.2
kubernets-node1     Ready    <none>                 47h   v1.22.2
kubernets-node2     Ready    <none>                 47h   v1.22.2
