CentOS7 使用二进制部署 Kubernetes v1.15.3集群

组件版本 && 集群环境

组件版本:

  • Kubernetes v1.15.3
  • Etcd v3.3.10
  • Flanneld v0.11.0
服务器IP 角色
192.168.1.241 master
192.168.1.242 node1
192.168.1.243 node2

一、部署节点:

集群环境变量:

# 建议使用未用的网段来定义服务网段和Pod 网段
# 服务网段(Service CIDR),部署前路由不可达,部署后集群内部使用IP:Port可达
SERVICE_CIDR="192.254.0.0/16"
# Pod 网段(Cluster CIDR),部署前路由不可达,部署后路由可达(flanneld 保证)
CLUSTER_CIDR="172.18.0.0/16"

# kubernetes 服务IP(预先分配,一般为SERVICE_CIDR中的第一个IP)
CLUSTER_KUBERNETES_SVC_IP="192.254.0.1"
# 集群 DNS 服务IP(从SERVICE_CIDR 中预先分配)
CLUSTER_DNS_SVC_IP="192.254.0.2"
# flanneld 网络配置前缀
FLANNEL_ETCD_PREFIX="/kubernetes/network"

二、初始化环境

1. 设置关闭防火墙及SELINUX

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

2. 关闭Swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0

3. 设置系统参数 - 允许路由转发,不对bridge的数据进行处理

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

如果遇到以下错误,解决办法如下:
CentOS7 使用二进制部署 Kubernetes v1.15.3集群_第1张图片

4. 创建安装目录

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

5. 安装 Docker

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker

6. ssh-key认证

ssh-keygen 
ssh-copy-id 192.168.1.241
ssh-copy-id 192.168.1.242
ssh-copy-id 192.168.1.243

三、创建CA证书和密钥对

kubernetes 系统各个组件需要使用TLS证书对通信进行加密,这里我们使用CloudFlare的PKI 工具集cfssl 来生成Certificate Authority(CA) 证书和密钥文件, CA 是自签名的证书,用来签名后续创建的其他TLS 证书。

创建证书是可以创建相对的路径来存放不同的证书,方便日后查找

(本文章没有创建)

1.安装 CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /usr/local/src
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /usr/local/src
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /usr/local/src

chmod +x /usr/local/src/cfssl*

mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.创建CA

cat ca-config.json
{
	"signing": {
		"default": {
			"expiry": "87600h"
		},
		"profiles": {
			"kubernetes": {
				"expiry": "87600h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}	

config.json:可以定义多个profiles,分别指定不同的过期时间、使用场景等参数;后续在签名证书时使用某个profile;
signing: 表示该证书可用于签名其它证书;生成的ca.pem 证书中CA=TRUE;
server auth: 表示client 可以用该CA 对server 提供的证书进行校验;
client auth: 表示server 可以用该CA 对client 提供的证书进行验证。
修改CA 证书签名请求:

cat ca-csr.json
{
	"CN": "kubernetes",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing",
			"O": "k8s",
			"OU": "System"
		}
	]
}	

CN: Common Name,kube-apiserver 从证书中提取该字段作为请求的用户名(User
Name);浏览器使用该字段验证网站是否合法;
O: Organization,kube-apiserver 从证书中提取该字段作为请求用户所属的组(Group);
生成CA 证书和私钥:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem	

3.分发证书:
将生成的CA 证书、密钥文件、配置文件拷贝到所有机器的/k8s/kubernetes/ssl/目录下:

cp ca* /k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.1.242:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.1.243:/k8s/kubernetes/ssl/

四、部署高可用etcd 集群

kubernetes系统使用etcd存储所有的数据,我们这里部署3个节点的etcd集群,这3个节点直接复用kubernetes 的3个节点,分别命名为etcd01、etcd02、etcd03:

  • 192.168.1.241 etcd01
  • 192.168.1.242 etcd02
  • 192.168.1.243 etcd03

1.解压安装文件

下载地址:https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/

vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.241:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.241:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.241:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.241:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

2.创建TLS 密钥和证书
为了保证通信安全,客户端(如etcdctl)与etcd 集群、etcd 集群之间的通信需要使用TLS 加密。
创建etcd 证书签名请求:

cat > etcd-csr.json <{
  "CN": "etcd",
  "hosts": [
	"192.168.1.241",
	"192.168.1.242",
	"192.168.1.243"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF

hosts 字段指定授权使用该证书的etcd节点IP
生成etcd证书和私钥:

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
#此处可能会出现hosts值无效,不用理,直接跳过
ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

cp etcd* /k8s/etcd/ssl/

3.创建 etcd的 systemd unit 文件

#要注意下方的变量和证书路径是否和你本地的一致
vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--peer-cert-file=/k8s/etcd/ssl/etcd.pem \
--peer-key-file=/k8s/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/k8s/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

为了保证通信安全,需要指定etcd 的公私钥(cert-file和key-file)、Peers通信的公私钥和CA 证书(peer-cert-file、peer-key-file、peer-trusted-ca-file)、客户端的CA 证书(trusted-ca-file);

4.将启动文件、配置文件拷贝到 节点1、节点2:

cd /k8s/ 
scp -r etcd/ 192.168.1.242:/k8s/
scp -r etcd/ 192.168.1.243:/k8s/
scp /lib/systemd/system/etcd.service 192.168.1.242:/lib/systemd/system/etcd.service
scp /lib/systemd/system/etcd.service 192.168.1.243:/lib/systemd/system/etcd.service

修改对应节点的cfg/etcd文件:

[root@host1 ~]# cat /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd03"   #每个节点IP的角色都要修改
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.243:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.243:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.243:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.243:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@host2 ~]# cat /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.242:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.242:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.242:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.242:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

5.启动etcd 服务 注意启用3.4以上版本的时候service文件不需要加变量会自动读取以下是3.4以上版本的service文件

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

#部署到三台节点上

6.验证服务
部署完etcd 集群后,在任一etcd 节点上执行下面命令:

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
cluster-health
输出结果如下:
member 2e4d105025f61a1b is healthy: got healthy result from https://192.168.1.241:2379
member 8ad9da8a203d86d8 is healthy: got healthy result from https://192.168.1.242:2379
member c1b34b5ace31a23f is healthy: got healthy result from https://192.168.1.243:2379
cluster is healthy

3.4版本以上的测试命令

/k8s/etcd/bin/etcdctl \
--cacert=/k8s/kubernetes/ssl/CA/ca.pem \
--cert=/k8s/etcd/ssl/etcd.pem \
--key=/k8s/etcd/ssl/etcd-key.pem\ --endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" endpoint health

可以看到上面的信息3个节点上的etcd 均为healthy,则表示集群服务正常。

五、部署Flannel 网络

kubernetes 要求集群内各节点能通过Pod 网段互联互通,下面我们来使用Flannel 在所有节点上创建互联互通的Pod 网段的步骤。

1.创建TLS 密钥和证书
etcd 集群启用了双向TLS 认证,所以需要为flanneld 指定与etcd 集群通信的CA 和密钥。
创建flanneld 证书签名请求:

cat > flanneld-csr.json <{
  "CN": "flanneld",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF

生成flanneld 证书和私钥:

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld	
ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem	
mkdir -p /web/kubernetes/ssl/flanneld
cp flanneld*.pem /web/kubernetes/ssl/flanneld

2.向etcd 写入集群Pod 网段信息
该步骤只需在第一次部署Flannel 网络时执行,后续在其他节点上部署Flanneld 时无需再写入该信息

/web/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/web/kubernetes/ssl/CA/ca.pem \
--cert-file=/web/kubernetes/ssl/flanneld/flanneld.pem \
--key-file=/web/kubernetes/ssl/flanneld/flanneld-key.pem \
set /kubernetes/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
输出信息:
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}
ssl]# /k8s/etcd/bin/etcdctl --cacert=/k8s/kubernetes/ssl/CA/ca.pem --cert=/k8s/flannel/ssl/flanneld.pem --key=/k8s/flannel/ssl/flanneld-key.pem --endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" put /kubernetes/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
输出OK

以上是版本etcd3.4.4的命令

写入的 Pod 网段(${CLUSTER_CIDR},172.18.0.0/16)必须与kube-controller-manager 的 --cluster-cidr 选项值一致;

3.安装和配置flanneld

下载地址:https://github.com/coreos/flannel/releases
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /web/kubernetes/bin/

创建flanneld的systemd unit 文件

cat /lib/systemd/system/flanneld.service    
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
#EnvironmentFile=/k8s/kubernetes/cfg/flanneld
#ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStart=/web/kubernetes/bin/flanneld --etcd-cafile=/web/kubernetes/ssl/CA/ca.pem --etcd-certfile=/web/kubernetes/ssl/flanneld/flanneld.pem --etcd-keyfile=/web/kubernetes/ssl/flanneld/flanneld-key.pem --etcd-endpoints=https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379 --etcd-prefix=/kubernetes/network
ExecStartPost=/web/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • mk-docker-opts.sh脚本将分配给flanneld 的Pod 子网网段信息写入到/run/flannel/docker 文件中,后续docker 启动时使用这个文件中的参数值为 docker0 网桥
    flanneld 使用系统缺省路由所在的接口和其他节点通信,对于有多个网络接口的机器(内网和公网),可以用 --iface 选项值指定通信接口(上面的 systemd unit 文件没指定这个选项)
    配置Docker启动指定子网段
cat /lib/systemd/system/docker.service    
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
4.启动flanneld
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker

部署到三台节点上

5.检查flanneld 服务,检查分配给各flanneld 的Pod 网段信息l
ifconfig flannel.1

查看集群 Pod 网段(/16)

/web/k8s/etcd/bin/etcdctl
–endpoints=“https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379”
–ca-file=/web/kubernetes/ssl/CA/ca.pem
–cert-file=/web/kubernetes/ssl/flanneld/flanneld.pem
–key-file=/web/kubernetes/ssl/flanneld/flanneld-key.pem
get /kubernetes/network/config

{ “Network”: “172.18.0.0/16”, “Backend”: {“Type”: “vxlan”}}

查看已分配的 Pod 子网段列表(/24)

/k8s/etcd/bin/etcdctl \

–endpoints=“https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379”
–ca-file=/k8s/kubernetes/ssl/ca.pem
–cert-file=/k8s/flanneld/ssl/flanneld.pem
–key-file=/k8s/flanneld/ssl/flanneld-key.pem
ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.18.100.0-24

查看某一 Pod 网段对应的 flanneld 进程监听的 IP 和网络参数

/k8s/etcd/bin/etcdctl \

--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.1681.243:2379" \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/flanneld/ssl/flanneld.pem \
--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
get /kubernetes/network/subnets/172.18.100.0-24

{"PublicIP":"192.168.1.241","BackendType":"vxlan","BackendData":{"VtepMAC":"e2:29:21:80:5e:d1"}}

6.将配置文件复制到其他节点

scp -r /k8s/kubernetes/bin/* 192.168.1.242:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/bin/* 192.168.1.243:/k8s/kubernetes/bin/
scp -r /k8s/flanneld/ssl/* 192.168.1.242:/k8s/flanneld/ssl/
scp -r /k8s/flanneld/ssl/* 192.168.1.243:/k8s/flanneld/ssl/
scp /lib/systemd/system/flanneld.service 192.168.1.242:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/flanneld.service 192.168.1.243:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/docker.service 192.168.1.242:/lib/systemd/system/docker.service 
scp /lib/systemd/system/docker.service 192.168.1.243:/lib/systemd/system/docker.service 

7.确保各节点间Pod 网段能互联互通
在各个节点部署完Flanneld 后,查看已分配的Pod 子网段列表:

/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/flanneld/ssl/flanneld.pem \
--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.18.88.0-24
/kubernetes/network/subnets/172.18.85.0-24
/kubernetes/network/subnets/172.18.100.0-24

六、 部署master 节点

kubernetes master 节点包含的组件有:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler 和 kube-controller-manager 可以以集群模式运行,通过 leader 选举产生一个工作进程,其它进程处于阻塞模式。

下载解压二进制文件

wget https://dl.k8s.io/v1.15.3/kubernetes-server-linux-amd64.tar.gz  

下载不了就要自己找了,有时候需要

tar -xf kubernetes-server-linux-amd64.tar.gz
 cd kubernetes/server/bin/
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /web/kubernetes/bin/

创建kubernetes 证书
创建kubernetes 证书签名请求:

cat > kubernetes-csr.json <{
  "CN": "kubernetes",
  "hosts": [
	"127.0.0.1",
	"192.168.1.240",
	"k8s-api.virtual.local",
	"192.254.0.1",
	"kubernetes",
	"kubernetes.default",
	"kubernetes.default.svc",
	"kubernetes.default.svc.cluster",
	"kubernetes.default.svc.cluster.local"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF	
  • 如果 hosts 字段不为空则需要指定授权使用该证书的 IP 或域名列表,所以上面分别指定了当前部署的 master 节点主机 IP
    以及apiserver 负载的内部域名
  • 还需要添加 kube-apiserver 注册的名为 kubernetes 的服务 IP (Service Cluster IP),一般是 kube-apiserver --service-cluster-ip-range 选项值指定的网段的第一个IP,如 “10.254.0.1”
    生成kubernetes 证书和私钥:
cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | \ 
cfssljson -bare kubernetes

ls kub*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
cp kubernetes*.pem /web/kubernetes/ssl/kubernetes

七、配置和启动kube-apiserver

1.创建kube-apiserver 使用的客户端token 文件:
kubelet 首次启动时向kube-apiserver 发送TLS Bootstrapping 请求,kube-apiserver 验证请求中的token 是否与它配置的token.csv 一致,如果一致则自动为kubelet 生成证书和密钥。
TLS Bootstrapping 使用的Token,可以使用命令

head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 生成
9d3d0413211c8d92ed1b33a913154ce5

cat /web/kubernetes/cfg/token.csv
9d3d0413211c8d92ed1b33a913154ce5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"	

2.创建apiserver配置文件

cat /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379 \
--bind-address=192.168.1.240 \
--secure-port=6443 \
--advertise-address=192.168.1.240 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/web/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/web/kubernetes/ssl/kubernetes/kubernetes.pem  \
--tls-private-key-file=/web/kubernetes/ssl/kubernetes/kubernetes-key.pem \
--client-ca-file=/web/kubernetes/ssl/CA/ca.pem \
--service-account-key-file=/web/kubernetes/ssl/CA/ca-key.pem \
--etcd-cafile=/web/kubernetes/ssl/CA/ca.pem \
--etcd-certfile=/web/kubernetes/ssl/kubernetes/kubernetes.pem \
--etcd-keyfile=/web/kubernetes/ssl/kubernetes/kubernetes-key.pem"

3.创建kube-apiserver 的systemd unit文件

cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-apiserver
ExecStart=/web/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	
4.启动服务
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

1.创建kube-controller-manager(资源控制中心)配置文件

cat /web/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/24 \
--cluster-cidr=172.18.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/web/kubernetes/ssl/CA/ca.pem \
--cluster-signing-key-file=/web/kubernetes/ssl/CA/ca-key.pem  \
--root-ca-file=/web/kubernetes/ssl/CA/ca.pem \
--service-account-private-key-file=/web/kubernetes/ssl/CA/ca-key.pem"	

2.创建kube-controller-manager systemd unit 文件

cat /lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-controller-manager
ExecStart=/web/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3.启动服务

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

1.创建kube-scheduler(资源调度)配置文件

cat /web/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true"

2.创建kube-scheduler systemd unit 文件

cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-scheduler
ExecStart=/web/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	
3.启动服务
systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service

验证master 节点
将可执行文件添加到 PATH 变量中

echo "export PATH=$PATH:/web/kubernetes/bin/" >>/etc/profile
source /etc/profile

验证:

kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"} 

如果出现 ,暂时没啥问题

八、部署Node 节点

kubernetes Node 节点包含如下组件:

  • kubelet
  • kube-proxy
    安装和配置kubelet
    kubelet 启动时自动向 kube-apiserver 注册节点信息,内置的 cadvisor 统计和监控节点的资源使用情况;
    kubelet就是运行在Node节点上的,接收kube-apiserver发送的请求,管理Pod容器,执行交互式命令,如exec、run、logs等;所以这一步安装是在所有的Node节点上,如果你想把你的Master也当做Node节点的话,当然也可以在Master节点上安装的。
    kubelet 启动时向kube-apiserver 发送TLS bootstrapping 请求,需要先将bootstrap token 文件中的kubelet-bootstrap 用户赋予system:node-bootstrapper 角色,然后kubelet 才有权限创建认证请求(certificatesigningrequests):
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap	

将kubelet 二进制文件拷贝node节点

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.1.242:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.1.243:/k8s/kubernetes/bin/

创建 kubelet bootstrap kubeconfig 文件一下创建需要在同一目录运行

cat environment.sh 
BOOTSTRAP_TOKEN=9d3d0413211c8d92ed1b33a913154ce5
KUBE_APISERVER="https://192.168.1.241:6443"	

source environment.sh

设置集群参数

kubectl config set-cluster kubernetes \
--certificate-authority=/web/kubernetes/ssl/CA/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

设置客户端认证参数

kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

设置上下文参数

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

设置默认上下文

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig	
  • –embed-certs 为 true 时表示将 certificate-authority 证书写入到生成的 bootstrap.kubeconfig 文件中;
  • 设置 kubelet 客户端认证参数时没有指定秘钥和证书,后续由 kube-apiserver 自动生成;
    将bootstrap kubeconfig文件拷贝到所有 nodes节点
cp bootstrap.kubeconfig /web/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.1.241:/web/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.1.242:/web/kubernetes/cfg/	

创建kubelet 参数配置文件拷贝到所有 nodes节点
创建 kubelet 参数配置模板文件:

vim /web/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.241
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.2"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
	enabled: true

注意格式一定要正确,不然会认证失败

创建kubelet配置文件

vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.241 \
--kubeconfig=/web/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/web/kubernetes/cfg/bootstrap.kubeconfig \
--config=/web/kubernetes/cfg/kubelet.config \
--cert-dir=/web/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
创建kubelet systemd unit 文件
vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/web/kubernetes/cfg/kubelet
ExecStart=/web/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target	

拷贝文件:

scp /k8s/kubernetes/cfg/kubelet* 192.168.1.242:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kubelet* 192.168.1.243:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.1.242:/lib/systemd/system/kubelet.service 
scp /lib/systemd/system/kubelet.service 192.168.1.243:/lib/systemd/system/kubelet.service 

其他节点需要修改对应的address和hostname-override地址
启动kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

通过kubelet 的TLS 证书请求
kubelet 首次启动时向kube-apiserver 发送证书签名请求,必须通过后kubernetes 系统才会将该 Node 加入到集群。
查看未授权的CSR 请求:当授权了,过段时间就会消失

kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   2m37s   kubelet-bootstrap   Pending
kubectl get nodes
No resources found.	
通过CSR 请求:
kubectl certificate approve node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I
certificatesigningrequest.certificates.k8s.io/node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I approved
kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   4m9s   kubelet-bootstrap   Approved,Issued
kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.1.240   Ready       13s   v1.13.1	

其余两台节点启动后通过csr请求:

kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   3m22s   kubelet-bootstrap   Pending
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   12m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   3m35s   kubelet-bootstrap   Pending
kubectl certificate approve node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo
certificatesigningrequest.certificates.k8s.io/node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo approved
kubectl certificate approve node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U
certificatesigningrequest.certificates.k8s.io/node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U approved
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   4m40s   kubelet-bootstrap   Approved,Issued
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   14m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   4m53s   kubelet-bootstrap   Approved,Issued
kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.1.240   Ready       9s    v1.13.1
192.168.1.241   Ready       22s   v1.13.1
192.168.1.242   Ready       10m   v1.13.1	

配置kube-proxy
kube-proxy 运行在所有 node节点上,它监听 apiserver 中 service 和 Endpoint 的变化情况,创建路由规则来进行服务负载均衡。
创建kube-proxy 证书签名请求:

cat > 	 <{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}

kube-apiserver 预定义的 RoleBinding system:node-proxier 将User system:kube-proxy 与 Role system:node-proxier绑定,该 Role 授予了调用 kube-apiserver Proxy 相关 API 的权限
生成kube-proxy 客户端证书和私钥

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

cp kube-proxy*.pem /k8s/kubernetes/ssl/
scp kube-proxy*.pem 192.168.1.241:/k8s/kubernetes/ssl/
scp kube-proxy*.pem 192.168.1.242:/k8s/kubernetes/ssl/

创建kube-proxy kubeconfig 文件

设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

设置客户端认证参数

kubectl config set-credentials kube-proxy \
--client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
--client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

设置上下文参数

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

设置默认上下文

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

将kube-proxy kubeconfig文件拷贝到所有 nodes节点

cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.1.241:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.1.242:/k8s/kubernetes/cfg/

创建 kube-proxy 配置文件

cat /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.241 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"	
  • –cluster-cidr 必须与 kube-apiserver 的 --service-cluster-ip-range 选项值一致
    创建kube-proxy systemd unit 文件
cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

启动服务

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

启动其余两台节点服务:

scp /k8s/kubernetes/cfg/kube-proxy 192.168.1.242:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kube-proxy 192.168.1.243:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kube-proxy.service 192.168.1.242:/lib/systemd/system/kube-proxy.service
scp /lib/systemd/system/kube-proxy.service 192.168.1.243:/lib/systemd/system/kube-proxy.service

修改对应节点的hostname-override地址
集群状态
打node 或者master 节点的标签

kubectl label node 192.168.1.241  node-role.kubernetes.io/master='master'
kubectl label node 192.168.1.242  node-role.kubernetes.io/node='node'
kubectl label node 192.168.1.243  node-role.kubernetes.io/node='node'

查看集群状态:

kubectl get node,cs
NAME                  STATUS   ROLES    AGE   VERSION
node/192.168.1.243   Ready    node     42m   v1.13.1
node/192.168.1.242   Ready    node     42m   v1.13.1
node/192.168.1.241   Ready    master   52m   v1.13.1

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}

部署dashboard可视化插件

cat kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31620
  selector:
    k8s-app: kubernetes-dashboard    
cat dashboard-adminuser.yaml 
 apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system 

#查看service和端口

kubectl get service -n kube-system | grep dashboard

创建登录用户

kubectl create -f dashboard-adminuser.yaml

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

把获取到的Token复制到登录界面的Token输入框中:

部署仪表盘资料
https://blog.csdn.net/networken/article/details/85607593

部署k8s集群资料
https://blog.csdn.net/wfs1994/article/details/86408254

你可能感兴趣的:(devops,容器,docker)