K8s Cluster Environment Deployment

1. Introduction

Kubernetes (k8s for short) is a container cluster management system that Google open-sourced in June 2014. Written in Go, it manages containerized applications running on multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient: it provides a complete set of capabilities covering resource scheduling, deployment management, service discovery, scaling, monitoring, and maintenance, and it aims to be the platform for automatically deploying, scaling, and operating application containers across clusters of hosts. It supports a range of container tools, including Docker.

Related links:
Kubernetes official website
Kubernetes source code repository
CNCF official website
Kubernetes Chinese community
Kubernetes Chinese community documentation

1.1 Kubernetes Background and History

Large-scale container cluster management: from Borg to Kubernetes

While Docker was rapidly taking off as a high-level container engine, Google began contributing its own years of accumulated container and cluster experience. Inside Google, container technology had been in use for many years: the Borg system ran and managed many thousands of container workloads, and with its support, services such as Google Search, Gmail, and Google Maps could easily draw the resources they needed from Google's massive data centers.

Borg is the cluster manager. It runs many clusters, each of which can consist of tens of thousands of machines, and at any moment it is receiving, scheduling, starting, stopping, restarting, and monitoring the hundreds or thousands of Jobs submitted by many applications. As the Borg paper states, Borg provides three major benefits:

  1. It hides resource management and failure handling, so users only need to focus on application development.

  2. It delivers highly available, highly reliable services.

  3. It lets workloads run on clusters built from tens of thousands of machines.

As one of Google's competitive advantages, Borg was naturally kept as a trade secret. But when Twitter engineers carefully built out their own Borg-like system (Mesos), Google judged the moment right and released a new open-source tool grounded in its own design experience.

In June 2014, Google cloud computing expert Eric Brewer unveiled this new open-source tool at a launch event in San Francisco. Its name, Kubernetes, means helmsman or pilot in Greek, which fits its role in container cluster management well: as the commander of a fleet of ships carrying containers, it takes on global scheduling and runtime monitoring.

Although one of Google's motivations for releasing Kubernetes was to promote the surrounding Google Compute Engine and Google App Engine, its arrival lets far more internet companies enjoy the benefits of pooling many machines into a single cluster resource pool.

Kubernetes raises the level of abstraction over compute resources: by composing containers in a fine-grained way, it delivers the final application service to the user. Cross-host container connectivity was part of the model from the very beginning: Kubernetes supports multiple networking solutions and builds a cluster-wide SDN at the Service layer. The aim is to bring service discovery and load balancing within reach of the containers themselves; this transparency simplifies communication between services and provides the platform foundation for practicing microservice architectures. And the Pod, the smallest object Kubernetes operates on, natively supports the microservice model by design.

The Kubernetes project originates from Borg; it can be seen as distilling the essence of Borg's design ideas while absorbing the experience and lessons learned from the Borg system.

As a container cluster management tool, Kubernetes reached v1.0 and was officially announced on July 22, 2015, which meant the open-source container orchestration system was ready for production use. At the same time, Google, together with the Linux Foundation and other partners, founded the Cloud Native Computing Foundation (CNCF) and made Kubernetes the first open-source project brought under CNCF governance, helping the container technology ecosystem move forward. The project crystallizes a decade of Google's production experience and lessons: from Borg's multi-task Alloc resource blocks to Kubernetes's multi-replica Pods, and from Borg's Cell cluster management to the federated clusters in Kubernetes's design, it brought fresh insight and new ideas to container cluster management just as Docker and other high-level engines were making container technology mainstream.

1.2 Kubernetes Features

Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
Extensible: modular, pluggable, mountable, composable
Automated: automatic deployment, automatic restart, automatic replication, automatic scaling

A core characteristic of Kubernetes is automation: it manages containers autonomously so that they keep running in the state the user desires (for example, if the user wants Apache to run continuously, the user does not need to care how; Kubernetes monitors it, restarts it, or recreates it automatically, so Apache keeps serving). An administrator can load a microservice and let the scheduler find a suitable place for it. Kubernetes also invests in tooling and usability so that users can deploy their applications conveniently (for example, canary deployments).

Today Kubernetes focuses on continuously running services (such as web servers and cache servers) and cloud-native applications (such as NoSQL databases); in the near future it will support more kinds of workloads found in production clouds, such as batch jobs, workflows, and traditional databases.

Every resource in Kubernetes, such as a Pod, is identified by a URI and carries a UID. The key parts of the URI are the object's type (for example, pod), its name, and its namespace. For a given object type, all names within the same namespace must be distinct; if only a name is given and no namespace, the default namespace is assumed. UIDs are unique across both space and time.
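
As a minimal illustration of these identity fields, they can be read back directly with kubectl once a cluster is running (a sketch, assuming a Pod named nginx-test1 exists in the default namespace, as created later in this article):

# Print the namespace, name, and UID of a Pod
kubectl get pod nginx-test1 -n default -o jsonpath='{.metadata.namespace}/{.metadata.name} uid={.metadata.uid}{"\n"}'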

1.3 Kubernetes Architecture

A Kubernetes cluster consists of the node agent kubelet and the Master components (the API server, scheduler, etc.), all built on top of a distributed storage system. The diagram below shows the Kubernetes architecture.

(Kubernetes architecture diagram)

Kubernetes is mainly made up of the following core components (a quick way to spot-check them on a running cluster is sketched below):
etcd: stores the state of the entire cluster;
apiserver: the single entry point for operating on resources, providing authentication, authorization, access control, API registration, and discovery;
controller manager: maintains the cluster state, handling fault detection, auto-scaling, rolling updates, and so on;
scheduler: schedules resources, placing Pods onto appropriate machines according to the configured scheduling policy;
kubelet: manages the lifecycle of Pods on its node (creation, modification, monitoring, deletion) and periodically reports the node's status to the API server;
Container runtime: manages images and actually runs Pods and containers (via the CRI);
kube-proxy: provides in-cluster service discovery and load balancing for Services;
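
On a running cluster these components can be spot-checked quickly; a minimal sketch (assuming the binary, systemd-managed deployment used later in this article, run on a master node):

# kubectl get nodes                          # nodes registered by kubelet with the apiserver
# kubectl get componentstatuses              # apiserver's view of scheduler, controller-manager, etcd (deprecated but still reported in v1.21)
# kubectl -n kube-system get pods -o wide    # network plugin, CoreDNS and other add-ons
# systemctl status kubelet kube-proxy --no-pager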

2. Kubernetes Cluster Deployment

2.1 Base Setup

The servers and the components they host are as follows:

Component(s)            IP (interface)        Specs
k8s-ansible-client      192.168.20.250-eth0   1 core, 1 GB
k8s-master-1, etcd1     192.168.20.201-eth0   2 cores, 2 GB
k8s-master-2, etcd2     192.168.20.189-eth0   2 cores, 2 GB
k8s-master-3, etcd3     192.168.20.249-eth0   2 cores, 2 GB
k8s-work-1              192.168.20.236-eth0   4 cores, 4 GB
k8s-work-2              192.168.20.147-eth0   4 cores, 4 GB
k8s-work-3              192.168.20.253-eth0   4 cores, 4 GB
k8s-harbor-1            192.168.20.227-eth0   1 core, 1 GB

2.2 Set Hostnames

192.168.20.250
# hostname k8s-ansible-client   // temporary
# echo "k8s-ansible-client" > /etc/hostname // permanent

192.168.20.201
# hostname k8s-master-1   // temporary
# echo "k8s-master-1" > /etc/hostname // permanent

192.168.20.189
# hostname k8s-master-2   // temporary
# echo "k8s-master-2" > /etc/hostname // permanent

192.168.20.249
# hostname k8s-master-3  // temporary
# echo "k8s-master-3" > /etc/hostname    // permanent

192.168.20.236
# hostname k8s-work-1  // temporary
# echo "k8s-work-1" > /etc/hostname    // permanent

192.168.20.147
# hostname k8s-work-2  // temporary
# echo "k8s-work-2" > /etc/hostname    // permanent

192.168.20.253
# hostname k8s-work-3  // temporary
# echo "k8s-work-3" > /etc/hostname    // permanent

192.168.20.227
# hostname k8s-harbor-1  // temporary
# echo "k8s-harbor-1" > /etc/hostname    // permanent

2.3 Configure /etc/hosts

cat <<"EOF">>/etc/hosts
192.168.20.250    k8s-ansible-client
192.168.20.201    k8s-master-1 etcd1
192.168.20.189    k8s-master-2 etcd2
192.168.20.249    k8s-master-3 etcd3
192.168.20.236    k8s-work-1
192.168.20.147    k8s-work-2
192.168.20.253    k8s-work-3
192.168.20.227    k8s-harbor-1
EOF

2.4 Install Ansible on k8s-ansible-client

root@k8s-ansible-client:~# apt install python3-pip -y
# Use a pip mirror inside mainland China
root@k8s-ansible-client:~# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
# Install Ansible
root@k8s-ansible-client:~# pip3 install ansible

2.5 Set up passwordless SSH from the Ansible client to every server

root@k8s-ansible-client:~# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:Ri1BFTIR5pQKwuArgZEBL0Yvaj+oN7e5ybloZ6oi5N8 root@k8s-ansible-client
The key's randomart image is:
+---[RSA 3072]----+
|**     .X=o.     |
|=o+ .  +.=       |
|++.o . .+ .      |
|ooo   .. .       |
|oo      S        |
|o.o    .         |
|o. o             |
|+.+o*+           |
|=++BOE           |
+----[SHA256]-----+

root@k8s-ansible-client:~# apt install sshpass -y
root@k8s-ansible-client:~# cat ansible-ssh-key.sh 
#!/bin/bash

IP="
192.168.20.250
192.168.20.201
192.168.20.189
192.168.20.249
192.168.20.236
192.168.20.147
192.168.20.253
192.168.20.227
"

for node in ${IP};do
    sshpass -p 123@2021 ssh-copy-id  ${node}  -o StrictHostKeyChecking=no
    if [ $? -eq 0 ];then
        echo "${node} id_rsa.pub copy sucess"
    else
        echo "${node} id_rsa.pub copy failed"
    fi
done

root@k8s-ansible-client:~# sh ansible-ssh-key.sh 
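
A quick check that the keys really were copied (a sketch; BatchMode makes ssh fail instead of prompting for a password):

# Should print each node's hostname without asking for a password
root@k8s-ansible-client:~# for node in 192.168.20.201 192.168.20.236 192.168.20.227; do ssh -o BatchMode=yes ${node} hostname; done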

2.6 Deploy the Harbor image registry

# Install dependency packages
root@k8s-harbor-1:~# apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Add Docker's official GPG key
root@k8s-harbor-1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Set up the stable repository with the following command
root@ubuntu-template:~# add-apt-repository \
>    "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/ \
>   $(lsb_release -cs) \
>   stable"
# To install a specific version of Docker Engine - Community, list the versions available in the repository and pick one
root@k8s-harbor-1:~# apt-cache madison docker-ce
# Install Docker Engine - Community
root@k8s-harbor-1:~# apt-get install docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic containerd.io

# Download the current stable release of Docker Compose
root@k8s-harbor-1:~# curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make the binary executable
root@k8s-harbor-1:~# chmod +x /usr/local/bin/docker-compose
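
A quick sanity check that both Docker and Docker Compose are usable (a sketch):

root@k8s-harbor-1:~# docker version
root@k8s-harbor-1:~# docker-compose --version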

# Download the Harbor package
root@k8s-harbor-1:/usr/local/src# wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz

# Extract the package
root@k8s-harbor-1:/usr/local/src# tar -zxvf harbor-offline-installer-v2.3.2.tgz
harbor/harbor.v2.3.2.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl

root@k8s-harbor-1:/usr/local/src/harbor# cp harbor.yml.tmpl harbor.yml
root@k8s-harbor-1:/usr/local/src/harbor# vim harbor.yml
hostname: harbor.openscp.com  # server IP or domain name
http:
  port: 80 
harbor_admin_password: Harbor12345  # Harbor administrator password
database:
  password: root123  # database root password

data_volume: /data   # Harbor data directory, i.e. where pushed image data will be stored (can be a separately mounted disk)
# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/ssl/harbor.openscp.com.cer
  private_key: /data/ssl/harbor.openscp.com.key
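
The certificate and key referenced above have to be in place before running the installer; a sketch, assuming the signed files for harbor.openscp.com have already been obtained from the CA:

root@k8s-harbor-1:~# mkdir -p /data/ssl
root@k8s-harbor-1:~# cp harbor.openscp.com.cer harbor.openscp.com.key /data/ssl/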

# Run the Harbor installer
root@k8s-harbor-1:/usr/local/src/harbor# ./install.sh

# Verify
root@k8s-harbor-1:~# mkdir   /etc/docker/certs.d/harbor.openscp.com -p
root@k8s-harbor-1:~# cp /data/ssl/harbor.openscp.com.cer /etc/docker/certs.d/harbor.openscp.com
root@k8s-harbor-1:~# docker login harbor.openscp.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Log in to Harbor from a browser and create a project named base.


(Harbor web UI screenshot)

Note: the HTTPS certificate used here is one that browsers already trust (not a self-signed certificate).
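
With the base project created, an image can be pushed into it so that the pull tests later in this article have something to fetch; a minimal sketch using nginx as an example:

root@k8s-harbor-1:~# docker pull nginx:latest
root@k8s-harbor-1:~# docker tag nginx:latest harbor.openscp.com/base/nginx:latest
root@k8s-harbor-1:~# docker push harbor.openscp.com/base/nginx:latest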

2.7 Push the registry credentials and certificate to the other nodes

# Create the working directory
root@k8s-ansible-client:~# mkdir -p /root/ansible/harbor
root@k8s-ansible-client:~/ansible/harbor# tree .
.
├── ansible.cfg
├── files
│   ├── config.json
│   └── harbor.openscp.com.cer
├── hosts
└── playbook.yml

1 directory, 5 files

root@k8s-ansible-client:~/ansible/harbor# cat ansible.cfg 
[defaults]
inventory=./hosts
root@k8s-ansible-client:~/ansible/harbor# cat hosts 
192.168.20.250
192.168.20.201
192.168.20.189
192.168.20.249
192.168.20.236
192.168.20.147
192.168.20.253
192.168.20.227
root@k8s-ansible-client:~/ansible/harbor# cat playbook.yml 
- hosts: all
  remote_user: root
  gather_facts: no

  tasks:

  - name: mkdir /root/.docker
    shell:
      cmd:  mkdir -p /root/.docker/

  - name: copy config.json
    copy:
      src: files/config.json
      dest: /root/.docker/config.json

  - name: mkdir harbor.openscp.com
    shell:
      cmd:  mkdir -p /etc/docker/certs.d/harbor.openscp.com/

  - name: copy harbor.openscp.com.cer
    copy:
      src: files/harbor.openscp.com.cer
      dest: /etc/docker/certs.d/harbor.openscp.com/harbor.openscp.com.cer

# Run the playbook with Ansible
root@k8s-ansible-client:~/ansible/harbor# ansible-playbook -v playbook.yml
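
To confirm the files landed on every node, an ad-hoc command can be run against the same inventory (a sketch, executed from the same directory so ansible.cfg is picked up):

root@k8s-ansible-client:~/ansible/harbor# ansible all -m shell -a 'ls /root/.docker/config.json /etc/docker/certs.d/harbor.openscp.com/'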

2.8 Handling an image pull error

# Pull an image from Harbor and start a Pod
root@k8s-ansible-client:~# kubectl run nginx-test-001 --image=harbor.openscp.com/base/nginx:latest sleep 50000

It fails with the following error:


(screenshot: the Pod stuck in ImagePullBackOff)
root@k8s-ansible-client:~# kubectl describe pods nginx-test-001
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  13s               default-scheduler  Successfully assigned default/nginx-test-001 to 192.168.20.147
  Normal   BackOff    12s               kubelet            Back-off pulling image "harbor.openscp.com/base/nginx:latest"
  Warning  Failed     12s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    1s (x2 over 12s)  kubelet            Pulling image "harbor.openscp.com/base/nginx:latest"
  Warning  Failed     1s (x2 over 12s)  kubelet            Failed to pull image "harbor.openscp.com/base/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: unauthorized to access repository: base/nginx, action: pull: unauthorized to access repository: base/nginx, action: pull
  Warning  Failed     1s (x2 over 12s)  kubelet            Error: ErrImagePull

The fix is as follows:
If the credentials only live in ${HOME}/.docker/config.json, the documentation says an extra environment variable has to be set for kubelet to pick them up; the simpler option is to place the file at /var/lib/kubelet/config.json, which kubelet checks by default.

root@k8s-work-1:/var/lib/kubelet# cp /root/.docker/config.json /var/lib/kubelet/
root@k8s-work-2:/var/lib/kubelet# cp /root/.docker/config.json /var/lib/kubelet/
root@k8s-work-3:/var/lib/kubelet# cp /root/.docker/config.json /var/lib/kubelet/
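
An alternative that avoids touching every node is to store the registry credentials in the cluster as an image pull secret and reference it from the Pod spec; a sketch (the secret name harbor-login is illustrative):

root@k8s-master-1:~# kubectl create secret docker-registry harbor-login --docker-server=harbor.openscp.com --docker-username=admin --docker-password=Harbor12345
root@k8s-master-1:~# kubectl run nginx-test-001 --image=harbor.openscp.com/base/nginx:latest --overrides='{"apiVersion":"v1","spec":{"imagePullSecrets":[{"name":"harbor-login"}]}}'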

3. Install the Kubernetes cluster with kubeasz

The kubeasz project aims to provide a tool for quickly deploying highly available Kubernetes clusters, and to serve as a practical reference for running and using Kubernetes. It installs from binaries and automates the work with ansible-playbook, offering both a one-click install script and step-by-step installation of individual components following its guide.
GitHub repository

3.1 Download the kubeasz tooling

# Set the release version variable
root@k8s-ansible-client:~# export release=3.1.0
root@k8s-ansible-client:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   614  100   614    0     0   1112      0 --:--:-- --:--:-- --:--:--  1112
100 15075  100 15075    0     0   8233      0  0:00:01  0:00:01 --:--:-- 14181
root@k8s-ansible-client:~# chmod +x ./ezdown
# Download components with the helper script; edit ezdown first (vim ezdown) to pin the Docker version to 19.03.15
root@k8s-ansible-client:~# ./ezdown -D

3.2 Create a cluster configuration instance

root@k8s-ansible-client:~# /etc/kubeasz/ezctl new k8s-pop
2021-09-23 23:43:04 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-pop
2021-09-23 23:43:04 DEBUG set version of common plugins
2021-09-23 23:43:04 DEBUG cluster k8s-pop: files successfully created.
2021-09-23 23:43:04 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-pop/hosts'
2021-09-23 23:43:04 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-pop/config.yml'

root@k8s-ansible-client:~# ll /etc/kubeasz/clusters/k8s-pop/
total 20
drwxr-xr-x 2 root root 4096 Sep 23 23:43 ./
drwxr-xr-x 3 root root 4096 Sep 23 23:43 ../
-rw-r--r-- 1 root root 6695 Sep 23 23:43 config.yml
-rw-r--r-- 1 root root 1686 Sep 23 23:43 hosts
root@k8s-ansible-client:~# 

Edit the hosts file:

root@k8s-ansible-client:/etc/kubeasz/clusters/k8s-pop# grep -v "^#" hosts |grep -v "^$"
[etcd]
192.168.20.201
192.168.20.189
192.168.20.249
[kube_master]
192.168.20.201
192.168.20.189
192.168.20.249
[kube_node]
192.168.20.236
192.168.20.147
192.168.20.253
[harbor]
[ex_lb]
[chrony]
[all:vars]
SECURE_PORT="6443"
CONTAINER_RUNTIME="docker"
CLUSTER_NETWORK="calico"
PROXY_MODE="ipvs"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-32767"
CLUSTER_DNS_DOMAIN="pop.local"
bin_dir="/usr/local/bin"
base_dir="/etc/kubeasz"
cluster_dir="{{ base_dir }}/clusters/k8s-pop"
ca_dir="/etc/kubernetes/ssl"

Push some of the public images into the private Harbor registry:

root@k8s-harbor-1:~# docker pull easzlab/pause-amd64:3.4.1
3.4.1: Pulling from easzlab/pause-amd64
fac425775c9d: Pull complete 
Digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d
Status: Downloaded newer image for easzlab/pause-amd64:3.4.1
docker.io/easzlab/pause-amd64:3.4.1
root@k8s-harbor-1:~# docker tag easzlab/pause-amd64:3.4.1 harbor.openscp.com/base/pause-amd64:3.4.1
root@k8s-harbor-1:~# docker push harbor.openscp.com/base/pause-amd64:3.4.1
The push refers to repository [harbor.openscp.com/base/pause-amd64]
915e8870f7d1: Pushed 
3.4.1: digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d size: 526

Edit the configuration file config.yml:

root@k8s-ansible-client:/etc/kubeasz/clusters/k8s-pop# grep -v "^#" config.yml |grep -v "^$"
INSTALL_SOURCE: "online"
OS_HARDEN: false
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"
local_network: "0.0.0.0/0"
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
ENABLE_MIRROR_REGISTRY: true
SANDBOX_IMAGE: "harbor.openscp.com/base/pause-amd64:3.4.1"
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
DOCKER_STORAGE_DIR: "/var/lib/docker"
ENABLE_REMOTE_API: false
INSECURE_REG: '["127.0.0.1/8", "harbor.openscp.com"]'
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"
NODE_CIDR_LEN: 24
KUBELET_ROOT_DIR: "/var/lib/kubelet"
MAX_PODS: 110
KUBE_RESERVED_ENABLED: "yes"
SYS_RESERVED_ENABLED: "no"
BALANCE_ALG: "roundrobin"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
flannel_offline: "flannel_{{ flannelVer }}.tar"
CALICO_IPV4POOL_IPIP: "Always"
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
CALICO_NETWORKING_BACKEND: "brid"
calico_ver: "v3.15.3"
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
calico_offline: "calico_{{ calico_ver }}.tar"
ETCD_CLUSTER_SIZE: 1
cilium_ver: "v1.4.1"
cilium_offline: "cilium_{{ cilium_ver }}.tar"
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
OVERLAY_TYPE: "full"
FIREWALL_ENABLE: "true"
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
dns_install: "no"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: no
dnsNodeCacheVer: "1.17.0"
LOCAL_DNS_CACHE: "169.254.20.10"
metricsserver_install: "no"
metricsVer: "v0.3.6"
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443
HARBOR_SELF_SIGNED_CERT: true
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Install the cluster with ezctl

# View the command usage
root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup -h
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings 
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
      ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master

# Install, running the setup steps in order
root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 01
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/01.prepare.yml
2021-09-24 22:51:12 INFO cluster:k8s-pop setup step:01 begins in 5s, press any key to abort:
...

root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 02
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/02.etcd.yml
2021-09-24 22:55:42 INFO cluster:k8s-pop setup step:02 begins in 5s, press any key to abort:
...

root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 03
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/03.runtime.yml
2021-09-24 22:56:43 INFO cluster:k8s-pop setup step:03 begins in 5s, press any key to abort:
...

root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 04
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/04.kube-master.yml
2021-09-24 22:59:40 INFO cluster:k8s-pop setup step:04 begins in 5s, press any key to abort:
...

root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 05
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/05.kube-node.yml
2021-09-24 23:03:18 INFO cluster:k8s-pop setup step:05 begins in 5s, press any key to abort:
...

root@k8s-ansible-client:~# /etc/kubeasz/ezctl setup k8s-pop 06
ansible-playbook -i clusters/k8s-pop/hosts -e @clusters/k8s-pop/config.yml  playbooks/06.network.yml
2021-09-24 23:05:08 INFO cluster:k8s-pop setup step:06 begins in 5s, press any key to abort:
...

3.3 Verification

# List all nodes
root@k8s-master-1:~# kubectl get nodes
NAME             STATUS                     ROLES    AGE    VERSION
192.168.20.147   Ready                      node     121m   v1.21.0
192.168.20.189   Ready,SchedulingDisabled   master   123m   v1.21.0
192.168.20.201   Ready,SchedulingDisabled   master   123m   v1.21.0
192.168.20.236   Ready                      node     121m   v1.21.0
192.168.20.249   Ready,SchedulingDisabled   master   123m   v1.21.0
192.168.20.253   Ready                      node     121m   v1.21.0

# List all Pods
root@k8s-master-1:~# kubectl get pod -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-647f956d86-mtpdr   1/1     Running   0          12m     192.168.20.236   192.168.20.236              
kube-system   calico-node-7tqpw                          1/1     Running   0          9m21s   192.168.20.249   192.168.20.249              
kube-system   calico-node-hv6xc                          1/1     Running   0          9m13s   192.168.20.253   192.168.20.253              
kube-system   calico-node-jcnlw                          1/1     Running   0          9m6s    192.168.20.147   192.168.20.147              
kube-system   calico-node-lktp8                          1/1     Running   0          12m     192.168.20.201   192.168.20.201              
kube-system   calico-node-mv7lp                          1/1     Running   0          9m32s   192.168.20.189   192.168.20.189              
kube-system   calico-node-wqbnr                          1/1     Running   0          12m     192.168.20.236   192.168.20.236              

# List Services
root@k8s-master-1:~# kubectl get svc -A
NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.68.0.1            443/TCP   133m

# Verify the Calico network
root@k8s-master-1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.20.1    0.0.0.0         UG    100    0        0 eth0
10.10.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.20.108.64   192.168.20.236  255.255.255.192 UG    0      0        0 tunl0
172.20.168.0    192.168.20.249  255.255.255.192 UG    0      0        0 tunl0
172.20.182.64   192.168.20.189  255.255.255.192 UG    0      0        0 tunl0
172.20.191.0    192.168.20.147  255.255.255.192 UG    0      0        0 tunl0
172.20.196.0    0.0.0.0         255.255.255.192 U     0      0        0 *
172.20.213.0    192.168.20.253  255.255.255.192 UG    0      0        0 tunl0
192.168.20.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0

root@k8s-master-1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.20.236 | node-to-node mesh | up    | 16:58:14 | Established |
| 192.168.20.189 | node-to-node mesh | up    | 17:04:39 | Established |
| 192.168.20.147 | node-to-node mesh | up    | 17:04:41 | Established |
| 192.168.20.253 | node-to-node mesh | up    | 17:04:52 | Established |
| 192.168.20.249 | node-to-node mesh | up    | 17:04:58 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

# Start three NGINX Pods
root@k8s-master-1:~# kubectl run nginx-test1 --image=nginx:latest sleep 50000
pod/nginx-test1 created
root@k8s-master-1:~# kubectl run nginx-test2 --image=nginx:latest sleep 50000
pod/nginx-test2 created
root@k8s-master-1:~# kubectl run nginx-test3 --image=nginx:latest sleep 50000
pod/nginx-test3 created

root@k8s-master-1:~# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
nginx-test1   1/1     Running   0          32s   172.20.191.2   192.168.20.147              
nginx-test2   1/1     Running   0          27s   172.20.213.3   192.168.20.253              
nginx-test3   1/1     Running   0          24s   172.20.191.3   192.168.20.147              
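
A quick cross-node connectivity check over the Calico network: the Pod IPs above are routable from the nodes (see the tunl0 routes earlier), so pinging a Pod on another node from a master should succeed; a sketch using the IPs shown above:

root@k8s-master-1:~# ping -c 2 172.20.191.2   # nginx-test1 on 192.168.20.147
root@k8s-master-1:~# ping -c 2 172.20.213.3   # nginx-test2 on 192.168.20.253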

3.4 Install CoreDNS

root@k8s-master-1:~/yaml# vim coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        bind 0.0.0.0
        ready
        kubernetes pop.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns
  clusterIP: 10.68.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
    nodePort: 30009

# apply
root@k8s-master-1:~/yaml# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns unchanged
service/kube-dns created

Verify DNS resolution:

root@k8s-master-1:~/yaml# kubectl run alpine-test --image=alpine:latest sleep 50000
pod/alpine-test created
command terminated with exit code 126
root@k8s-master-1:~/yaml# kubectl exec -it alpine-test /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup baidu.com
Server:     10.68.0.2
Address:    10.68.0.2:53

Non-authoritative answer:
Name:   baidu.com
Address: 220.181.38.148
Name:   baidu.com
Address: 220.181.38.251

Non-authoritative answer:
*** Can't find baidu.com: No answer
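
Resolution of in-cluster Service names through the cluster domain (pop.local, as set via CLUSTER_DNS_DOMAIN) can be checked from the same test Pod; a sketch:

/ # nslookup kubernetes.default.svc.pop.local
/ # nslookup kube-dns.kube-system.svc.pop.local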

View the CoreDNS metrics data


(screenshot: Prometheus-format metrics output)
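
Because the Service above exposes the metrics port as NodePort 30009, the same data can be fetched from any node with curl; a sketch:

root@k8s-master-1:~# curl -s http://192.168.20.147:30009/metrics | head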

3.5 Install the Kubernetes Dashboard

# GitHub repository
https://github.com/kubernetes/dashboard

# Official Dashboard deployment manifest
https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# Install
root@k8s-master-1:~/yaml# vim dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

root@k8s-master-1:~/yaml# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
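
Before logging in, confirm the Dashboard components are running and note the NodePort (30008, set in the Service above); a sketch:

root@k8s-master-1:~/yaml# kubectl -n kubernetes-dashboard get pods,svc -o wide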

Create an admin-user ServiceAccount, then use its secret token to log in to the Dashboard:

root@k8s-master-1:~/yaml# vim admin-user.yml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# apply
root@k8s-master-1:~/yaml# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Find the secret that was created
root@k8s-master-1:~/yaml# kubectl get secrets -A | grep admin
kubernetes-dashboard   admin-user-token-hbjp5                           kubernetes.io/service-account-token   3      12s

# Get the token
root@k8s-master-1:~/yaml# kubectl describe secrets  admin-user-token-hbjp5 -n kubernetes-dashboard
Name:         admin-user-token-hbjp5
Namespace:    kubernetes-dashboard
Labels:       
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 5040071a-762c-4602-a873-f558989a1e60

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlA3YjBkSVE3QWlJdzRNOVlfcGpHWWI3dTU3OUhtczZTVGJldk91TS1pejQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWhianA1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1MDQwMDcxYS03NjJjLTQ2MDItYTg3My1mNTU4OTg5YTFlNjAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.l6vvuJQ_TJKa5YLrpC5460RLl-KCiyGPoprAqJd9iCozLvseixp0gWAIm0zWXazPa1mMbJdBsjNNyNBWO2fKQrvT4nXKMBGwRKPLAtElBSwY2thJaWxkc_RP8xkCV8WWLLl7rduKdmmrpOyX3KiOb2mKPRe2oL0153hqdoTAw5m2hI8Xp8w1Ov6OF7OGrD6z_dKpK1kjbZZh1jvX5BU3ncS6tMvEggebXgAFzwnKcq89NO7BfSf7B-AVKH_FN2ebv8fJlw0IxnLVOg_pvbhQeBbaBilSkRgmgu3ZHdIwf61_BZaShehS6lC1ciCJkqZ4_xM8t9dopQ4eB5B-Tisc3w

Verification:
https://192.168.20.253:30008/#/login

(screenshot: Dashboard login page)

(screenshot: Dashboard after login)
