Deploying a k8s cluster with the kubeadm tool
(There is also a binary deployment method, but it requires installing every k8s component separately, which is tedious.) This guide deploys Kubernetes v1.17.0.
Requirements:
1. One or more machines running a supported Linux system (CentOS 7 is used throughout this guide)
2. 2 GB or more of RAM per machine (less will leave little memory for your applications)
3. 2 or more CPU cores
4. Full network connectivity between all machines in the cluster (public or private network is fine), plus outbound internet access
5. No duplicate hostname, MAC address, or product_uuid among the nodes
6. Certain ports must be open on the machines; see the official documentation for details
7. Swap disabled. You MUST disable the swap partition for the kubelet to work properly
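To verify requirement 5, you can compare the MAC address and product_uuid across nodes (standard checks from the upstream docs, added here for convenience):
[root@hadoop300 ~]# ip link                              # network interfaces and their MAC addresses
[root@hadoop300 ~]# cat /sys/class/dmi/id/product_uuid   # this machine's product_uuid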
Server list
Host | Role | IP |
---|---|---|
hadoop300 | Master | 192.168.2.100 |
hadoop301 | Node-1 | 192.168.2.101 |
hadoop302 | Node-2 | 192.168.2.102 |
Components required on every host
Component | hadoop300 (master) | hadoop301 | hadoop302 |
---|---|---|---|
kubelet | V | V | V |
Docker | V | V | V |
kubeadm | V | V | V |
kubectl | V | V | V |
Component overview
Docker
: used as the Kubernetes CRI (container runtime)
tip
: reportedly the newest versions of k8s no longer use Docker (the dockershim has been deprecated)
kubelet
: runs on every node; mainly responsible for starting containers and Pods
kubeadm
: a tool that simplifies deploying k8s; rarely used after deployment, except to upgrade k8s
kubectl
: the k8s command-line tool; you use it to talk to k8s and manage all kinds of k8s resources
API Server
: essentially a web service that handles create/read/update/delete for the various k8s resources (Pod, Service, RC); also the middleman for data exchange between the cluster's components
etcd
: a distributed key-value database whose consistency is implemented with the Raft protocol; used to store resource information
Controller Manager
: the cluster's manager; manages replicas, nodes, resources, namespaces, services and so on (e.g. RC, RS)
Scheduler
: responsible for scheduling Pods onto Nodes; once scheduling is done, the kubelet takes over managing them on the Node
kubelet
: once the Scheduler has finished scheduling, the kubelet process registers the Node's information with the APIServer, periodically reports Node status to the Master, and manages Pods and the containers inside them
proxy
: implements Service communication and load balancing; creates a proxy for Pods and routes/forwards requests from a Service to its Pods, forming k8s's layer of virtual forwarding network
How the pieces communicate:
Node-to-node communication goes over the physical NIC.
Cross-node Pod-to-Pod communication goes through a virtual network layer (e.g. Flannel or Calico).
Pod-to-Pod communication on the same node goes through the docker virtual bridge.
Containers inside the same Pod communicate through the shared network namespace (the Pause container).
Traffic to and from the outside world goes through the Service layer.
The Flannel network scheme is chosen here. The architecture in brief:
Flannel
: a network fabric designed for Kubernetes; an overlay network that works at the application layer. A flanneld process is deployed on each node to listen for TCP packets on a port; when a packet arrives, flanneld wraps the original traffic in one more protocol layer, much like HTTP wraps another layer of packets on top of TCP (and can then be addressed by name).
flanneld process
: listens on a port; when it receives data from a Pod on node 1, it wraps the data in the Flannel protocol and forwards it to the flanneld process on node 2, which in turn hands it to the target Pod, so Pods on different nodes can communicate.
etcd
: stores the mapping between each docker bridge and all the Pod IP addresses under it. Through this mapping, Flannel can tell which host, which docker bridge, and which IP address a target Pod lives at, and it guarantees that the configuration flanneld sees on every node is consistent. Each node's flanneld also watches etcd for data changes, so it senses node changes in the cluster in real time.
What happens when you deploy something: running a kubectl command sends an HTTP request to the Kubernetes API server, which creates a new ReplicationController object in the cluster. The ReplicationController then creates a new pod, and the Scheduler assigns it to a worker node. The kubelet on that node sees that the pod has been scheduled to it and tells Docker to pull the specified image from the registry, since it is not available locally; once the image is downloaded, Docker starts the container.
The following assumes you have already installed Docker on every server in the cluster, configured unique hostnames and static IP addresses, disabled the firewall, and set up passwordless SSH between the hosts.
Disable SELinux (configure on every host)
Edit the /etc/selinux/config file and set: SELINUX=disabled
Or run the command below directly (xcall here is a custom helper script that executes the command on every host):
[root@hadoop300 ~]$ xcall "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
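The config change only takes effect after a reboot; to also switch SELinux off for the current session you can run (standard commands, an addition to the original):
[root@hadoop300 ~]$ setenforce 0     # set SELinux to permissive immediately
[root@hadoop300 ~]$ getenforce       # verify: prints Permissive (or Disabled after a reboot)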
Disable swap (configure on every host)
Edit the /etc/fstab file as follows:
# just comment the swap line out
#/dev/mapper/centos-swap swap swap defaults 0 0
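The fstab change only applies from the next boot; to turn swap off immediately as well, you can run (a standard command, added here as a convenience):
[root@hadoop300 ~]# swapoff -a    # disable all active swap devices right away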
After a reboot, check the swap status:
[root@hadoop300 ~]# free -mh
total used free shared buff/cache available
Mem: 3.8G 815M 2.2G 10M 795M 2.8G
Swap: 0B 0B 0B
(Optional) Configure a Docker registry mirror (on every host)
Edit the /etc/docker/daemon.json file as follows:
{
"registry-mirrors":["https://jccl15o4.mirror.aliyuncs.com"]
}
Then reload the configuration and restart docker:
[root@hadoop300 tmp]# systemctl daemon-reload
[root@hadoop300 tmp]# systemctl restart docker
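To confirm the mirror is in effect, docker info should list it under Registry Mirrors (a quick sanity check, not in the original text):
[root@hadoop300 tmp]# docker info | grep -A 1 'Registry Mirrors'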
Add the yum repository: create the /etc/yum.repos.d/kubernetes.repo file
[root@hadoop300 ~]$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl (on every host):
[root@hadoop300 ~]$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
Set kubelet to start at boot and start it (it will crash-loop until kubeadm init/join supplies its configuration; that is expected):
[root@hadoop300 ~] systemctl enable kubelet
[root@hadoop300 ~] systemctl start kubelet
Initialize the k8s cluster with kubeadm
Run on the hadoop300 host:
[root@hadoop300 ~]$ kubeadm init \
--apiserver-advertise-address=192.168.2.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0 \
--pod-network-cidr=10.244.0.0/16
[Parameter notes]
apiserver-advertise-address
: the IP address the API Server advertises
image-repository
: the default image source k8s.gcr.io is unreachable from inside China, so the Aliyun mirror registry is specified instead
kubernetes-version
: the k8s version being deployed
pod-network-cidr
: the IP address range for the Pod network; different network schemes expect different values. Flannel is used here, so fill in 10.244.0.0/16.
Wait for the command to complete and save the output log; it will be needed later.
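Optionally (as the preflight message in the log below also hints), you can pull the required images ahead of time so init itself runs faster; kubeadm's images subcommand accepts the same repository and version flags:
[root@hadoop300 ~]$ kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0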
[root@hadoop300 hadoop]$ kubeadm init \
> --apiserver-advertise-address=192.168.2.100 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.17.0 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hadoop300 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hadoop300 localhost] and IPs [192.168.2.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hadoop300 localhost] and IPs [192.168.2.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0125 01:09:06.409400 9705 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.503496 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[mark-control-plane] Marking the node hadoop300 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hadoop300 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ycni4f.ru3eby1og6qasmzc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.100:6443 --token ycni4f.ru3eby1og6qasmzc \
--discovery-token-ca-cert-hash sha256:d46b209f85303f3bffcdacd4ecc4f3856eb4198ce41c60f871f2c0a8d6ce162f
Configure kubectl access
The init log shows that the /etc/kubernetes/admin.conf file needs to be made globally reachable. You can either point an environment variable at it or store it the way the init log suggests:
[root@hadoop300 ~]# mkdir -p $HOME/.kube   # this directory mainly holds k8s config, cache, and the like
[root@hadoop300 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@hadoop300 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config   # hand ownership (user and group) to the current user
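The environment-variable alternative mentioned above would look roughly like this (a sketch; putting it in /etc/profile is an assumption, any shell profile works):
[root@hadoop300 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@hadoop300 ~]# source /etc/profile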
Use kubectl to check the k8s node status:
[root@hadoop300 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
hadoop300 NotReady master 17m v1.17.0
Deploy the Flannel network plugin
Set the /proc/sys/net/bridge/bridge-nf-call-iptables file to 1 (configure on every server):
[root@hadoop300 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
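Writing to /proc does not survive a reboot; to persist the setting you can drop it into a sysctl config file (a common approach, added here; the file name is arbitrary):
[root@hadoop300 ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@hadoop300 ~]# sysctl --system   # reload all sysctl configuration now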
[root@hadoop300 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Then wait a while and check whether the flannel pod is running successfully; after that the master node turns Ready:
[root@hadoop300 ~]$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-9d85f5447-2jq9g 1/1 Running 0 58m
coredns-9d85f5447-tlgbk 1/1 Running 0 58m
etcd-hadoop300 1/1 Running 1 58m
kube-apiserver-hadoop300 1/1 Running 1 58m
kube-controller-manager-hadoop300 1/1 Running 1 58m
kube-flannel-ds-gqj6v 1/1 Running 0 22m
kube-proxy-w6f49 1/1 Running 1 58m
kube-scheduler-hadoop300 1/1 Running 1 58m
# check the node status
[root@hadoop300 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
hadoop300 Ready master 60m v1.17.0
Join the worker nodes
The join command template:
kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
The token comes from the init log above; if you did not save it, you can retrieve it with the kubeadm tool.
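If the token has expired or was lost, kubeadm can mint a fresh token and print the full join command in one step:
[root@hadoop300 ~]# kubeadm token create --print-join-command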
Join the two nodes hadoop301 and hadoop302 to the k8s cluster as follows:
[hadoop@hadoop302 ~]$ kubeadm join 192.168.2.100:6443 \
--token ycni4f.ru3eby1og6qasmzc \
--discovery-token-ca-cert-hash sha256:d46b209f85303f3bffcdacd4ecc4f3856eb4198ce41c60f871f2c0a8d6ce162f
[hadoop@hadoop301 ~] # same as above
You may have to wait a while, because pulling the Flannel image on the worker nodes can be slow.
After that, all three nodes show as Ready:
[root@hadoop300 ~]$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hadoop300 Ready master 88m v1.17.0 192.168.2.100 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://1.13.1
hadoop301 Ready <none> 13m v1.17.0 192.168.2.101 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://1.13.1
hadoop302 Ready <none> 17m v1.17.0 192.168.2.102 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://1.13.1
Deploy a test application
The demo app is a Spring Boot service exposing a /user endpoint.
1. Write the Dockerfile
FROM java:8
VOLUME /tmp
ADD test.jar test.jar
# run the jar in the foreground: nohup/& do not work in an exec-form ENTRYPOINT
# (no shell is involved), and a container needs a foreground process anyway
ENTRYPOINT ["java","-jar","/test.jar"]
2. Build it into an image
[root@hadoop300 tmp]# pwd
/home/hadoop/tmp
[root@hadoop300 tmp]# ll
-rw-rw-r-- 1 hadoop hadoop 98 1月 31 21:01 Dockerfile
-rw-rw-r-- 1 hadoop hadoop 19329878 1月 31 21:00 test.jar
[root@hadoop300 tmp]# docker build -t springboot01 .
[root@hadoop300 tmp]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
springboot01 latest d04728534480 4 days ago 663 MB
3. Push to the Docker Hub registry
[root@hadoop300 tmp]# docker login
[root@hadoop300 tmp]# docker tag springboot01 burukeyou/springboot01   # retag under the Docker Hub account, otherwise the push has nowhere to go
[root@hadoop300 tmp]# docker push burukeyou/springboot01
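Before handing the image to k8s you can smoke-test it locally with docker (an optional check; the container name is made up, and the app is assumed to listen on 8080 as in the manifest below):
[root@hadoop300 tmp]# docker run -d -p 8080:8080 --name sb01-test burukeyou/springboot01
[root@hadoop300 tmp]# curl http://localhost:8080/user    # should print: user: {name: 30}
[root@hadoop300 tmp]# docker rm -f sb01-test             # clean up the test container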
4. Write the Pod manifest: vim demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod01              # pod name
  namespace: default            # namespace it belongs to
  labels:
    demo: java                  # label
spec:
  containers:
  - name: springboot01          # container name
    image: burukeyou/springboot01   # image (the one just pushed)
    imagePullPolicy: IfNotPresent   # use a local copy of the image if present; pull from the registry only if it is missing
    ports:
    - containerPort: 8080       # port the app listens on
5. Create the pod and call the /user endpoint, which returns the string "user: {name: 30}":
[root@hadoop300 tmp]# kubectl create -f demo.yaml
pod/demo-pod01 created
[root@hadoop300 tmp]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
demo-pod01 1/1 Running 0 42s 10.244.1.20 hadoop302
[root@hadoop300 tmp]# curl http://10.244.1.20:8080/user
user: {name: 30}
Deploy the Kubernetes Dashboard from its resource manifest
1. Download the resource manifest and modify it
[root@hadoop300 tmp]$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
[root@hadoop300 tmp]# vim recommended.yaml
2. Change the Service type to NodePort so the Dashboard can be reached from outside the cluster
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort                # changed to a NodePort Service
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443           # external port for the dashboard web page (your choice)
  selector:
    k8s-app: kubernetes-dashboard
3. Deploy the dashboard
[root@hadoop300 tmp]# kubectl create -f recommended.yaml
Check whether the Dashboard Pod and Service started successfully:
[root@hadoop300 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-mclvf 1/1 Running 0 7s
kubernetes-dashboard kubernetes-dashboard-5996555fd8-2xn6c 1/1 Running 0 8s
[root@hadoop300 ~]# kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.115.150 <none> 8000/TCP 41s
kubernetes-dashboard kubernetes-dashboard NodePort 10.96.54.85 <none> 443:30443/TCP 42s
Then open it in a browser at: https://<any-node-IP>:30443/
A login screen appears offering two auth methods; token login is used here, so first create a user to log in with.
1. Write a manifest that creates an admin user and binds it to a role: vim create-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hadoop                      # user name (your choice)
  namespace: kubernetes-dashboard
---
# bind the cluster-admin role to the hadoop user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hadoop                      # same as above
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: hadoop                      # same as above
  namespace: kubernetes-dashboard
2. Apply it
[root@hadoop300 tmp]# kubectl apply -f create-admin.yaml
3. Get the Token
# list all serviceaccounts and secrets in the namespace
[root@hadoop300 tmp]# kubectl get sa,secrets -n kubernetes-dashboard
NAME SECRETS AGE
serviceaccount/default 1 61m
serviceaccount/hadoop 1 31s
serviceaccount/kubernetes-dashboard 1 61m
NAME TYPE DATA AGE
secret/default-token-wdt66 kubernetes.io/service-account-token 3 61m
secret/hadoop-token-6g859 kubernetes.io/service-account-token 3 31s
secret/kubernetes-dashboard-certs Opaque 0 61m
secret/kubernetes-dashboard-csrf Opaque 1 61m
secret/kubernetes-dashboard-key-holder Opaque 2 61m
secret/kubernetes-dashboard-token-7thdm kubernetes.io/service-account-token 3 61m
# inspect the hadoop user's secret to read its Token
[root@hadoop300 tmp]# kubectl describe secret hadoop-token-6g859 -n kubernetes-dashboard
Name: hadoop-token-6g859
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: hadoop
kubernetes.io/service-account.uid: 6a63be78-5de6-4892-8123-6a66378df504
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjFNQzA0QXZhdXJTeUtVSHhPQ3pldkZ6NWdRM285cTlPUjdYTXoxWjJ1Q00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJoYWRvb3AtdG9rZW4tNmc4NTkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiaGFkb29wIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmE2M2JlNzgtNWRlNi00ODkyLTgxMjMtNmE2NjM3OGRmNTA0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmhhZG9vcCJ9.wr7g1CmIrgv9Jj-Ap1leow8zefIzotzS4LE_qO5VNziVqEcnhCYzP2q6GnkUYR4CJt7YtEBqF6OLvlB5mHBPmFtHgtp-LiFUujScKgDdx8jdTBcVmeb39Fw_knjBuSLBOd3fqdvXumBajlwKpDQL_gYnkhc7bxn5FICYfalf1PF3AoPq8WjR2VoCDnGBB1qeaT87e6xnflScx3l6NNSEN3Bl8Ymt8WJRi4Ch0nhUZPLAXZxgO3kt1-TWHo5wASYiMW4Xwb-kPv6yAgoNTm9h6jgGqimf2InEW9rGLbnRAR0O9ZelFI6G4bE5sXtdNL_YdaVTRcmUYUKusaMpEADquQ
A one-liner to grab the token quickly:
kubectl describe secret $(kubectl get secrets -n kubernetes-dashboard | grep hadoop | awk '{print $1}') -n kubernetes-dashboard | grep token:
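An equivalent that prints only the raw token value, decoding it straight out of the secret (jsonpath variant of the same lookup):
kubectl -n kubernetes-dashboard get secret $(kubectl get secrets -n kubernetes-dashboard | grep hadoop | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d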
Then log in with that token.
If the browser refuses the self-signed certificate, there are three workarounds:
1. Firefox can be forced to open the page
2. In Chrome, when the "not secure" page blocks you, type thisisunsafe directly on the keyboard while on that page (not in the address bar)
3. Redeploy the Dashboard with the certificate verification disabled
a) Remove the Dashboard deployment
[root@hadoop300 tmp]# kubectl delete -f recommended.yaml
b) Edit the resource manifest recommended.yaml and comment out the Secret so it is not created:
# ------------------- Dashboard Secret------------------- #
#apiVersion: v1
#kind: Secret
#metadata:
# labels:
# k8s-app: kubernetes-dashboard
# name: kubernetes-dashboard-certs
# namespace: kubernetes-dashboard
#type: Opaque
c) Then deploy the dashboard again
[root@hadoop300 tmp]# kubectl create -f recommended.yaml
d) Then generate the secret that was commented out, i.e. kubernetes-dashboard-certs, yourself:
# generate the dashboard.key file
[root@hadoop300 cert]# openssl genrsa -out dashboard.key 2048
[root@hadoop300 cert]# ll
-rw-r--r-- 1 root root 1675 2月 4 02:27 dashboard.key
# generate the dashboard.csr file
[root@hadoop300 cert]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key
[root@hadoop300 cert]# ll
-rw-r--r-- 1 root root 903 2月 4 02:28 dashboard.csr
-rw-r--r-- 1 root root 1675 2月 4 02:27 dashboard.key
# generate the self-signed certificate
[root@hadoop300 cert]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create a secret from the self-signed cert: a generic secret named kubernetes-dashboard-certs
# containing two entries, dashboard.key and dashboard.crt
[root@hadoop300 cert]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
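The running dashboard pod may still be holding the old, empty secret; deleting it lets the Deployment recreate it with the new certificate (the label selector comes from the manifest above):
[root@hadoop300 cert]# kubectl delete pod -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard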
# check that the secret was created successfully
[root@hadoop300 cert]# kubectl get sa,secrets -n kubernetes-dashboard
NAME SECRETS AGE
serviceaccount/default 1 23h
serviceaccount/kubernetes-dashboard 1 23h
NAME TYPE DATA AGE
secret/default-token-92k64 kubernetes.io/service-account-token 3 23h
secret/kubernetes-dashboard-certs Opaque 2 23h
secret/kubernetes-dashboard-csrf Opaque 1 23h
secret/kubernetes-dashboard-key-holder Opaque 2 23h
secret/kubernetes-dashboard-token-lcxrm kubernetes.io/service-account-token 3 23h