Setting up CI/CD: issues installing single-node k8s on CentOS 7

I wrote about installing k8s on Ubuntu 16.04 a while back. Since my day-to-day work is done on the development Linux servers, I had not started the local VM in a long time, and Ubuntu has gone through several major releases since then. Recently I wanted to build my own CI/CD platform, so I reinstalled the Ubuntu VM with CentOS 7 and set up Docker and k8s again. Docker installs easily; k8s is installed with kubeadm as before. This post records the problems I ran into during kubeadm initialization.

Running the initialization produced the following warnings and errors:

[root@localhost wxd]# kubeadm init --pod-network-cidr=10.244.0.0/16
W1002 21:46:55.554754    2413 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fixing the issues one by one.
1. Open the firewall ports

Check the firewall status:

[root@localhost wxd]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since 五 2020-10-02 20:31:21 CST; 1h 23min ago
     Docs: man:firewalld(1)
 Main PID: 696 (firewalld)
    Tasks: 2
   Memory: 34.0M
   CGroup: /system.slice/firewalld.service
           └─696 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER' failed: iptables: No chain/target...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER' failed: iptables: No chain/target...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptable...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptable...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptable...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptable...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION' failed: iptables: No ch...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No ch...hat name.
10月 02 20:31:28 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: ipta... chain?).
10月 02 20:31:29 localhost.localdomain firewalld[696]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: ipta... chain?).
Hint: Some lines were ellipsized, use -l to show in full.

List the ports currently open in the firewall:

[root@localhost wxd]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s8 enp0s9
  sources: 
  services: dhcpv6-client ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Open the port range:

[root@localhost wxd]# firewall-cmd --permanent --zone=public --add-port=6443-10250/tcp
success

Reload and check that the change took effect; without a reload, --list-all still shows the old configuration:

[root@localhost wxd]# firewall-cmd --reload
success
[root@localhost wxd]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s8 enp0s9
  sources: 
  services: dhcpv6-client ssh
  ports: 6443-10250/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
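
Strictly, only the two ports named in the warning need to be open on a single node: 6443 for the API server and 10250 for the kubelet. If you prefer not to open the whole range, a minimal alternative (assuming the default public zone) is:

firewall-cmd --permanent --zone=public --add-port=6443/tcp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --reload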

2. Fix the Docker cgroup driver
Configure this according to the guide linked in the warning; Docker was already installed, so its installation is skipped here.
Check whether /etc/docker/daemon.json exists and create it if it does not, setting the cgroup driver to systemd as the guide recommends:

[root@localhost wxd]# cat /etc/docker/daemon.json
cat: /etc/docker/daemon.json: 没有那个文件或目录
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@localhost wxd]# sudo mkdir -p /etc/systemd/system/docker.service.d
[root@localhost wxd]# sudo systemctl daemon-reload
[root@localhost wxd]# sudo systemctl restart docker
[root@localhost wxd]# sudo systemctl enable docker
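
To confirm the new driver actually took effect after the restart, check docker info; it should report "Cgroup Driver: systemd". (The same cgroupfs warning shows up again in the final kubeadm init output further down, so this check is worth doing.)

docker info | grep -i "cgroup driver"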

3. Enable kubelet to start on boot

[root@localhost wxd]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

OK, configuration done. Run the initialization again:

[root@localhost wxd]# kubeadm init --pod-network-cidr=10.244.0.0/16
W1002 22:05:47.522371    2952 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Now it complains that /proc/sys/net/bridge/bridge-nf-call-iptables is not set to 1.
The fix:

$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
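
Writing to /proc/sys only lasts until the next reboot. To make the settings persistent, something along these lines can be used (the file name k8s.conf is just a convention; the br_netfilter module must be loaded for these keys to exist, which was already the case here):

# load the bridge netfilter module and make the sysctls permanent
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system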

First confirm the current values; they are indeed 0:

[root@localhost wxd]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
[root@localhost wxd]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@localhost wxd]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
0
[root@localhost wxd]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
[root@localhost wxd]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@localhost wxd]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@localhost wxd]# kubeadm init --pod-network-cidr=10.244.0.0/16
W1002 22:46:24.955249    3309 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Run 'kubeadm config images pull' as the preflight hint suggests:

[root@localhost wxd]# kubeadm config images pull
W1002 22:48:37.843810    3610 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

The error message gives the required version number, so pull that image from the Aliyun mirror instead:

[root@localhost wxd]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
v1.19.2: Pulling from google_containers/kube-apiserver
b9cd0ea6c874: Pull complete 
a84ff2cd01b7: Pull complete 
f5db63e1da64: Pull complete 
Digest: sha256:b119baef2a60b537c264c0ea009f63095169af089e1a36fb4167693f1b60cd1e
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2

Retag it to the name kubeadm expects:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2 k8s.gcr.io/kube-apiserver:v1.19.2

To pull everything from the Aliyun mirror you first need to know each image's version, which the following command lists:

kubeadm config images list 

[root@localhost wxd]# kubeadm config images list
W1002 23:11:50.275162    4694 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

I found a set of generated one-liners online and adapted them slightly:

kubeadm config images list
kubeadm config images list 2>/dev/null | sed 's/k8s.gcr.io/docker pull mirrorgcrio/g' | sudo sh
kubeadm config images list 2>/dev/null | sed 's/k8s.gcr.io\(.*\)/docker tag mirrorgcrio\1 k8s.gcr.io\1/g' | sudo sh
kubeadm config images list 2>/dev/null | sed 's/k8s.gcr.io/docker image rm mirrorgcrio/g' | sudo sh

After adapting: because the Aliyun registry path contains '/', the slashes have to be escaped with '\' inside the sed expression:

[root@localhost wxd]# kubeadm config images list 2>/dev/null | sed 's/k8s.gcr.io/docker pull registry.cn-hangzhou.aliyuncs.com\/google_containers/g'
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
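
Piping the generated commands straight into the shell covers the pull, retag, and cleanup steps in one go. A sketch of the three adapted one-liners, using '#' as the sed delimiter so the slashes in the registry path need no escaping:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
kubeadm config images list 2>/dev/null | sed "s#k8s.gcr.io#docker pull ${MIRROR}#" | sudo sh
kubeadm config images list 2>/dev/null | sed "s#k8s.gcr.io\(.*\)#docker tag ${MIRROR}\1 k8s.gcr.io\1#" | sudo sh
kubeadm config images list 2>/dev/null | sed "s#k8s.gcr.io#docker rmi ${MIRROR}#" | sudo sh

The same steps are spelled out one by one below.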

Pull all the images from the Aliyun mirror:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Retag them:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2 k8s.gcr.io/kube-apiserver:v1.19.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2 k8s.gcr.io/kube-controller-manager:v1.19.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.2 k8s.gcr.io/kube-scheduler:v1.19.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.2 k8s.gcr.io/kube-proxy:v1.19.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

Remove the mirror-registry tags (this only drops the extra name; the image layers stay shared with the k8s.gcr.io tags):

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

The image list after cleanup:

[root@localhost wxd]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.2             d373dd5a8593        2 weeks ago         118MB
k8s.gcr.io/kube-apiserver            v1.19.2             607331163122        2 weeks ago         119MB
k8s.gcr.io/kube-controller-manager   v1.19.2             8603821e1a7a        2 weeks ago         111MB
k8s.gcr.io/kube-scheduler            v1.19.2             2f32d66b884f        2 weeks ago         45.7MB
k8s.gcr.io/etcd                      3.4.13-0            0369cf4303ff        5 weeks ago         253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        3 months ago        45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        7 months ago        683kB

Run kubeadm init again:

[root@localhost wxd]# kubeadm init --pod-network-cidr=10.244.0.0/16
W1002 23:38:45.961050    5257 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.508225 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xrt0xr.4jvurjewha8q8ab6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.3.15:6443 --token xrt0xr.4jvurjewha8q8ab6 \
    --discovery-token-ca-cert-hash sha256:14df112af5adda702b32fe0b318acf8f70df4a45a21c35bf9b0b20cd76881bb6 
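
Note that the bootstrap token embedded in this join command expires after 24 hours by default. If a worker node is added later, a fresh join command can be printed with:

kubeadm token create --print-join-command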

Follow the printed instructions in order:

[root@localhost wxd]# mkdir -p $HOME/.kube
[root@localhost wxd]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost wxd]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
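
Since everything here is run as root anyway, an alternative to copying admin.conf is to point KUBECONFIG at it directly (this only lasts for the current shell session):

export KUBECONFIG=/etc/kubernetes/admin.conf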

At this point the cluster status is:

[root@localhost wxd]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-nwhdq             0/1     Pending   0          46m
coredns-f9fd979d6-v9htm             0/1     Pending   0          46m
etcd-localhost                      1/1     Running   0          46m
kube-apiserver-localhost            1/1     Running   0          46m
kube-controller-manager-localhost   1/1     Running   0          46m
kube-proxy-mr6qk                    1/1     Running   0          46m
kube-scheduler-localhost            1/1     Running   0          46m

The CoreDNS pods are Pending: they cannot start until a CNI network plugin is installed. Flannel is used here, which is also why the init was run with --pod-network-cidr=10.244.0.0/16, the default network in kube-flannel.yml.

Download kube-flannel.yml:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Based on the image version specified in kube-flannel.yml:

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0-rc2

Pull the image in advance, then apply the manifest:

$ docker pull quay.io/coreos/flannel:v0.13.0-rc2
[root@localhost wxd]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the cluster status again:

[root@localhost wxd]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-nwhdq             1/1     Running   0          97m
coredns-f9fd979d6-v9htm             1/1     Running   0          97m
etcd-localhost                      1/1     Running   0          97m
kube-apiserver-localhost            1/1     Running   0          97m
kube-controller-manager-localhost   1/1     Running   0          97m
kube-flannel-ds-2g6ml               1/1     Running   0          33s
kube-proxy-mr6qk                    1/1     Running   0          97m
kube-scheduler-localhost            1/1     Running   0          97m

The node is in Ready state as well, so the cluster is fully up:

[root@localhost wxd]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
localhost   Ready    master   106m   v1.19.2
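
One last thing for a single-node setup: the control-plane node keeps the node-role.kubernetes.io/master:NoSchedule taint that kubeadm applied during init, so ordinary pods will not be scheduled onto it. To run workloads on this node, remove the taint:

kubectl taint nodes --all node-role.kubernetes.io/master-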
