Fully Manual Kubernetes Cluster Setup: Deploying the Master (Control) Node and Worker Nodes

Contents

Preface

I. Components to Install on the Master Node

II. Configuring the Master Node Components

1. Configuring kube-apiserver

2. Configuring kube-controller-manager

3. Configuring kube-scheduler

4. Checking Cluster Component Status

III. Components to Install on Worker Nodes

IV. Configuring the Worker Node Components

1. Installing Docker

2. Configuring Node Certificates

3. Configuring kubelet

4. Configuring kube-proxy

5. Deploying the Remaining Worker Nodes

6. Deploying the Cluster Network

7. Deploying the In-Cluster DNS Service


 

Preface

        In the previous article we finished the up-front cluster planning and deployed the Etcd database cluster. Now we will configure the core components on the Master (control) node and the worker (Node) nodes.


I. Components to Install on the Master Node

        The Master is the control node of a K8S cluster, responsible for managing and controlling the whole cluster. Essentially all K8S control commands are sent to it, and it takes care of carrying them out. The components on the Master node are introduced below:

kube-apiserver:

        Access to, and changes of, every resource in the K8S cluster go through kube-apiserver, the central core component.

        It is the unified entry point of the cluster and the coordinator between components, exposing its services as an HTTP REST interface. All add, delete, update, get, and watch operations on object resources are handled by kube-apiserver and then persisted to the Etcd database.

  • Serves as the hub for communication between users, the different parts of the cluster, and external components, and for data exchange between the modules
  • Queries and manipulates the state of objects in the cluster such as Pods, Namespaces, ConfigMaps, and Events
  • Provides the REST API for cluster management, covering authentication and authorization, data validation, and cluster state changes
  • Acts as the entry point for resource quota control
  • Provides a complete cluster security mechanism
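
        In practice, every kubectl command is one of these REST calls; raising kubectl's verbosity makes the underlying HTTP requests visible. A quick illustration, runnable once the cluster built in this article is up:

# -v=8 prints the HTTP requests kubectl sends to kube-apiserver
kubectl get namespaces -v=8    # GET .../api/v1/namespaces
kubectl get pods -v=8          # GET .../api/v1/namespaces/default/pods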

     

kube-controller-manager:

        It is the automation control center for all resource objects in K8S and handles the cluster's routine background tasks. Each resource has a corresponding controller, and kube-controller-manager is responsible for managing these controllers.

        The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state toward the desired state. Examples of controllers that currently ship with Kubernetes are the replication controller, endpoints controller, namespace controller, and serviceaccount controller.
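
        The reconcile pattern shared by these controllers can be sketched roughly as follows (illustrative pseudocode only; the helper names are made up, and real controllers use watches rather than polling):

# Sketch of a controller's reconcile loop
while true; do
  desired=$(read_desired_state_from_apiserver)    # e.g. a Deployment wants replicas: 3
  actual=$(read_actual_state_from_apiserver)      # e.g. only 2 Pods are running
  if [ "$actual" != "$desired" ]; then
    converge_toward_desired_state                 # e.g. create the missing Pod
  fi
  sleep 5
done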

kube-scheduler:

        The Kubernetes scheduler is a control-plane process that assigns Pods to worker nodes. For each Pod in the scheduling queue, the scheduler determines which nodes are valid placements according to constraints and available resources; it then ranks each valid node and binds the Pod to a suitable one.

        Below is a K8S architecture diagram found online:

[Image: Kubernetes architecture diagram]

 

II. Configuring the Master Node Components

1. Configuring kube-apiserver

        (1) Self-sign the apiserver SSL certificate:

        First issue an SSL certificate for the apiserver; the whole process is much the same as self-signing the Etcd certificates in the previous article.

        ① Create the required directories:

  • /k8s/kubernetes/ssl (self-signed certificates)
  • /k8s/kubernetes/cfg (configuration files)
  • /k8s/kubernetes/bin (executables)
  • /k8s/kubernetes/logs (log files)
[root@k8s-master-1 ~]# cd /
[root@k8s-master-1 /]# mkdir -p /k8s/kubernetes/{ssl,cfg,bin,logs}
[root@k8s-master-1 /]# cd k8s/kubernetes/
[root@k8s-master-1 kubernetes]# ll
total 0
drwxr-xr-x 2 root root 6 Mar 23 18:42 bin
drwxr-xr-x 2 root root 6 Mar 23 18:42 cfg
drwxr-xr-x 2 root root 6 Mar 23 18:42 logs
drwxr-xr-x 2 root root 6 Mar 23 18:42 ssl

        ② Enter the ssl directory and create the CA configuration file ca-config.json:

[root@k8s-master-1 ssl]# vim ca-config.json
[root@k8s-master-1 ssl]# cat ca-config.json
 {
   "signing": {
     "default": {
       "expiry": "87600h"
     },
     "profiles": {
       "kubernetes": {
         "usages": [
             "signing",
             "key encipherment",
             "server auth",
             "client auth"
         ],
         "expiry": "87600h"
       }
     }
   }
 }

       ③ Create the CA certificate signing request file ca-csr.json:

[root@k8s-master-1 ssl]# vim ca-csr.json
[root@k8s-master-1 ssl]# cat ca-csr.json
 {
   "CN": "kubernetes",
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "Shanghai",
       "L": "Shanghai",
       "O": "kubernetes",
       "OU": "System"
     }
   ],
     "ca": {
        "expiry": "87600h"
     }
 }


[root@k8s-master-1 ssl]# ll
total 8
-rw-r--r-- 1 root root 292 Mar 23 18:46 ca-config.json
-rw-r--r-- 1 root root 262 Mar 23 18:46 ca-csr.json

        ④ Generate the CA certificate and private key:

[root@k8s-master-1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2022/03/23 18:47:53 [INFO] generating a new CA key and certificate from CSR
2022/03/23 18:47:53 [INFO] generate received request
2022/03/23 18:47:53 [INFO] received CSR
2022/03/23 18:47:53 [INFO] generating key: rsa-2048
2022/03/23 18:47:53 [INFO] encoded CSR
2022/03/23 18:47:53 [INFO] signed certificate with serial number 72946279832304111272771850607707125708900150427

[root@k8s-master-1 ssl]# ll
total 20
-rw-r--r-- 1 root root  292 Mar 23 18:46 ca-config.json
-rw-r--r-- 1 root root 1013 Mar 23 18:47 ca.csr
-rw-r--r-- 1 root root  262 Mar 23 18:46 ca-csr.json
-rw------- 1 root root 1675 Mar 23 18:47 ca-key.pem
-rw-r--r-- 1 root root 1383 Mar 23 18:47 ca.pem

       ⑤ Create the certificate signing request file kubernetes-csr.json. The hosts field lists the IPs that will access the apiserver directly; it should generally include the etcd cluster members, the Kubernetes master IPs, and the cluster IP of the kubernetes Service:

[root@k8s-master-1 ssl]# vim kubernetes-csr.json
[root@k8s-master-1 ssl]# cat kubernetes-csr.json
{
    "CN": "kubernetes",
    "hosts": [    #指定会直接访问apiserver的IP列表,一般需指定etcd集群、kubernetes master 集群的主机IP和kubernetes服务的服务IP
      "127.0.0.1",
      "10.0.0.1",
      "192.168.61.161",
      "192.168.61.162",
      "192.168.61.163",
      "192.168.61.164",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "kubernetes",
            "OU": "System"
        }
    ]
}

        ⑥ Generate the certificate and private key for kubernetes:

[root@k8s-master-1 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2022/03/23 18:57:35 [INFO] generate received request
2022/03/23 18:57:35 [INFO] received CSR
2022/03/23 18:57:35 [INFO] generating key: rsa-2048
2022/03/23 18:57:36 [INFO] encoded CSR
2022/03/23 18:57:36 [INFO] signed certificate with serial number 692567500574363843249097959324089300840804006652
2022/03/23 18:57:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

 

        (2) Fetch the binary package and deploy kube-apiserver:

        ① Download the binary package: download link for kubernetes-v1.16.2-server-linux-amd64.zip (extraction code: n3as)

        After downloading, upload it to the Linux host with a remote tool such as Xshell, extract it into the target directory, and grant execute permission:

[root@k8s-master-1 ~]# rz

[root@k8s-master-1 ~]# ll
total 124368
-rw-r--r-- 1 root root 127350862 Apr 19 18:14 kubernetes-v1.16.2-server-linux-amd64.zip
[root@k8s-master-1 ~]# unzip -d /k8s/kubernetes/bin/ kubernetes-v1.16.2-server-linux-amd64.zip 

[root@k8s-master-1 ~]# cd /k8s/kubernetes/bin/
[root@k8s-master-1 bin]# ll
total 458404
-rw-r--r-- 1 root root 120672256 Mar 29 21:37 kube-apiserver
-rw-r--r-- 1 root root 110063616 Mar 29 21:37 kube-controller-manager
-rw-r--r-- 1 root root  44036096 Mar 29 21:37 kubectl
-rw-r--r-- 1 root root 113292024 Mar 29 21:37 kubelet
-rw-r--r-- 1 root root  38383616 Mar 29 21:37 kube-proxy
-rw-r--r-- 1 root root  42954752 Mar 29 21:37 kube-scheduler
[root@k8s-master-1 bin]# chmod +755 kube*
[root@k8s-master-1 bin]# ll
total 458404
-rwxr-xr-x 1 root root 120672256 Mar 29 21:37 kube-apiserver
-rwxr-xr-x 1 root root 110063616 Mar 29 21:37 kube-controller-manager
-rwxr-xr-x 1 root root  44036096 Mar 29 21:37 kubectl
-rwxr-xr-x 1 root root 113292024 Mar 29 21:37 kubelet
-rwxr-xr-x 1 root root  38383616 Mar 29 21:37 kube-proxy
-rwxr-xr-x 1 root root  42954752 Mar 29 21:37 kube-scheduler

        Copy kubectl to /usr/local/bin/:

[root@k8s-master-1 bin]# cp kubectl /usr/local/bin/
[root@k8s-master-1 bin]# cd /usr/local/bin/
[root@k8s-master-1 bin]# ll
total 61812
-rwxr-xr-x 1 root root 10376657 Jan 15 15:59 cfssl
-rw-r--r-- 1 root root  6595195 Jan 15 16:13 cfssl-certinfo
-rwxr-xr-x 1 root root  2277873 Jan 15 16:07 cfssljson
-rwxr-xr-x 1 root root 44036096 Mar 29 21:38 kubectl

        ② Create the node token file token.csv:

        Once the master's apiserver has TLS authentication enabled, a worker node's kubelet can only talk to the apiserver and join the cluster with a valid certificate issued by the CA. Signing certificates by hand becomes tedious as nodes are added, which is what the TLS Bootstrap mechanism simplifies: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. For this we generate a token file for the apiserver ahead of time; the worker nodes will use it in a later step.

        Generate a random token string:

[root@k8s-master-1 kubernetes]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e2b20b5979e898e33c644471e53ee8a2

Note: the token configured on the apiserver must match the bootstrap.kubeconfig configuration on the worker nodes!

        Create token.csv, in the format "token,user,UID,group":

[root@k8s-master-1 kubernetes]# vim /k8s/kubernetes/cfg/token.csv
[root@k8s-master-1 kubernetes]# cat /k8s/kubernetes/cfg/token.csv
e2b20b5979e898e33c644471e53ee8a2,kubelet-bootstrap,10001,"system:node-bootstrapper"


[root@k8s-master-1 kubernetes]# cd cfg/
[root@k8s-master-1 cfg]# cat token.csv 
e2b20b5979e898e33c644471e53ee8a2,kubelet-bootstrap,10001,"system:node-bootstrapper"
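
        The two steps above can also be combined, generating the token and writing token.csv in one go:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /k8s/kubernetes/cfg/token.csv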

        ③ Create the kube-apiserver configuration file kube-apiserver.conf:

        See the official documentation for what each configuration option does.

[root@k8s-master-1 cfg]# vim /k8s/kubernetes/cfg/kube-apiserver.conf
[root@k8s-master-1 cfg]# cat /k8s/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--etcd-servers=https://192.168.61.161:2379,https://192.168.61.162:2379,https://192.168.61.163:2379 \
  --bind-address=192.168.61.161 \
  --secure-port=6443 \
  --advertise-address=192.168.61.161 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.0.0.0/24 \
  --service-node-port-range=30000-32767 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/k8s/kubernetes/cfg/token.csv \
  --kubelet-client-certificate=/k8s/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/k8s/kubernetes/ssl/kubernetes-key.pem \
  --tls-cert-file=/k8s/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/k8s/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/k8s/kubernetes/ssl/ca.pem \
  --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/k8s/etcd/ssl/ca.pem \
  --etcd-certfile=/k8s/etcd/ssl/etcd.pem \
  --etcd-keyfile=/k8s/etcd/ssl/etcd-key.pem \
  --v=2 \
  --logtostderr=false \
  --log-dir=/k8s/kubernetes/logs \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/k8s/kubernetes/logs/k8s-audit.log"

[root@k8s-master-1 cfg]# ll
total 8
-rw-r--r-- 1 root root 1261 Mar 29 21:59 kube-apiserver.conf
-rw-r--r-- 1 root root   84 Mar 29 21:50 token.csv


Configuration parameter notes:
--etcd-servers: etcd cluster addresses
--bind-address: address the apiserver listens on, usually the host IP
--secure-port: port to listen on
--advertise-address: address advertised to the cluster; other worker nodes reach the apiserver through it; defaults to --bind-address when unset
--service-cluster-ip-range: virtual IP range for Services, in CIDR notation; must not overlap the physical hosts' real IP range
--service-node-port-range: range of host ports Services may be mapped to, default 30000-32767
--enable-admission-plugins: admission control settings for the cluster; the listed plugin modules take effect in order
--authorization-mode: authorization modes, here RBAC plus Node. Available modes include AlwaysAllow, AlwaysDeny, ABAC (attribute-based access control), Webhook, RBAC (role-based access control), and Node (authorizes only API requests made by kubelets); the default is AlwaysAllow.
--enable-bootstrap-token-auth: enables TLS bootstrapping
--token-auth-file: file used to secure the API server's secure port via token authentication
--v: log level, 0-8; higher means more verbose

④ Create the kube-apiserver.service unit:

[root@k8s-master-1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[root@k8s-master-1 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver.conf
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

        ⑤ Start the kube-apiserver service and enable it at boot:

[root@k8s-master-1 ~]# systemctl daemon-reload
[root@k8s-master-1 ~]# systemctl start kube-apiserver 
[root@k8s-master-1 ~]# systemctl enable kube-apiserver 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

[root@k8s-master-1 ~]# systemctl status kube-apiserver 
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-30 14:54:46 CST; 16s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2278 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─2278 /k8s/kubernetes/bin/kube-apiserver --etc...

Mar 30 14:54:46 k8s-master-1 systemd[1]: Started Kubernete...
Mar 30 14:54:46 k8s-master-1 systemd[1]: Starting Kubernet...
Mar 30 14:54:51 k8s-master-1 kube-apiserver[2278]: E0330 1...
Hint: Some lines were ellipsized, use -l to show in full.

        Check the startup log:

[root@k8s-master-1 ~]# less /k8s/kubernetes/logs/kube-apiserver.INFO

Log file created at: 2022/03/30 14:54:47
Running on machine: k8s-master-1
Binary: Built with gc go1.13.9 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0330 14:54:47.056533    2278 flags.go:33] FLAG: --add-dir-header="false"
I0330 14:54:47.056860    2278 flags.go:33] FLAG: --address="127.0.0.1"
I0330 14:54:47.056865    2278 flags.go:33] FLAG: --admission-control="[]"
I0330 14:54:47.056902    2278 flags.go:33] FLAG: --admission-control-config-file=""
I0330 14:54:47.056905    2278 flags.go:33] FLAG: --advertise-address="192.168.61.161"
I0330 14:54:47.056907    2278 flags.go:33] FLAG: --allow-privileged="true"
I0330 14:54:47.057821    2278 flags.go:33] FLAG: --alsologtostderr="false"
I0330 14:54:47.057827    2278 flags.go:33] FLAG: --anonymous-auth="true"
I0330 14:54:47.057831    2278 flags.go:33] FLAG: --api-audiences="[]"
I0330 14:54:47.057836    2278 flags.go:33] FLAG: --apiserver-count="1"
I0330 14:54:47.057896    2278 flags.go:33] FLAG: --audit-dynamic-configuration="false"
I0330 14:54:47.057898    2278 flags.go:33] FLAG: --audit-log-batch-buffer-size="10000"
I0330 14:54:47.057900    2278 flags.go:33] FLAG: --audit-log-batch-max-size="1"
I0330 14:54:47.057901    2278 flags.go:33] FLAG: --audit-log-batch-max-wait="0s"
(press q to quit)

        ⑥ Bind the kubelet-bootstrap user to the built-in cluster role, so that nodes can later request certificates with the token:

[root@k8s-master-1 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

        On success it prints: clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
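
        A quick smoke test of the secure port is possible at this point; /healthz and /version are readable without credentials by default (they are granted to unauthenticated users through the built-in system:public-info-viewer ClusterRole):

curl --cacert /k8s/kubernetes/ssl/ca.pem https://192.168.61.161:6443/healthz    # expected output: ok
curl --cacert /k8s/kubernetes/ssl/ca.pem https://192.168.61.161:6443/version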

 

2. Configuring kube-controller-manager

        (1) Create the kube-controller-manager.conf file:

[root@k8s-master-1 cfg]# vim kube-controller-manager.conf
[root@k8s-master-1 cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--leader-elect=true \
  --master=127.0.0.1:8080 \
  --address=127.0.0.1 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --service-cluster-ip-range=10.0.0.0/24 \
  --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/k8s/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem \
  --experimental-cluster-signing-duration=87600h0m0s \
  --v=2 \
  --logtostderr=false \
  --log-dir=/k8s/kubernetes/logs"


[root@k8s-master-1 ~]# cd /k8s/kubernetes/cfg/
[root@k8s-master-1 cfg]# ll
total 12
-rw-r--r-- 1 root root 1261 Mar 29 21:59 kube-apiserver.conf
-rw-r--r-- 1 root root  571 Mar 30 15:05 kube-controller-manager.conf

Parameter notes:
--leader-elect=true: when multiple instances of this component run, a leader is elected automatically; default true
--master: connect to the local apiserver, which listens on local port 8080 by default
--allocate-node-cidrs: whether to allocate and set Pod CIDRs
--service-cluster-ip-range: Service cluster IP range
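
        With --leader-elect=true, the instance currently holding the lock can be identified from the leader-election record (this version keeps it, among other places, on an Endpoints object in kube-system):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity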

        (2) Create the kube-controller-manager.service unit:

[root@k8s-master-1 cfg]# vim /usr/lib/systemd/system/kube-controller-manager.service
[root@k8s-master-1 cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

        (3) Start kube-controller-manager and enable it at boot:

[root@k8s-master-1 cfg]# systemctl daemon-reload
[root@k8s-master-1 cfg]# systemctl start kube-controller-manager
[root@k8s-master-1 cfg]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

[root@k8s-master-1 cfg]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-30 15:22:49 CST; 22s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2393 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─2393 /k8s/kubernetes/bin/kube-controller-mana...

Mar 30 15:22:49 k8s-master-1 systemd[1]: Started Kubernete...
Mar 30 15:22:49 k8s-master-1 systemd[1]: Starting Kubernet...
Mar 30 15:22:49 k8s-master-1 kube-controller-manager[2393]: ...
Mar 30 15:22:50 k8s-master-1 kube-controller-manager[2393]: ...
Mar 30 15:23:00 k8s-master-1 kube-controller-manager[2393]: ...
Hint: Some lines were ellipsized, use -l to show in full.

        Check the startup log:

[root@k8s-master-1 cfg]# less /k8s/kubernetes/logs/kube-controller-manager.INFO

Log file created at: 2022/03/30 15:22:49
Running on machine: k8s-master-1
Binary: Built with gc go1.13.9 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0330 15:22:49.585253    2393 flags.go:33] FLAG: --add-dir-header="false"
I0330 15:22:49.585473    2393 flags.go:33] FLAG: --address="127.0.0.1"
I0330 15:22:49.585477    2393 flags.go:33] FLAG: --allocate-node-cidrs="true"
I0330 15:22:49.585480    2393 flags.go:33] FLAG: --allow-untagged-cloud="false"
I0330 15:22:49.585481    2393 flags.go:33] FLAG: --alsologtostderr="false"
I0330 15:22:49.585483    2393 flags.go:33] FLAG: --attach-detach-reconcile-sync-period="1m0s"
I0330 15:22:49.585485    2393 flags.go:33] FLAG: --authentication-kubeconfig=""
I0330 15:22:49.585488    2393 flags.go:33] FLAG: --authentication-skip-lookup="false"
I0330 15:22:49.585489    2393 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0330 15:22:49.585490    2393 flags.go:33] FLAG: --authentication-tolerate-lookup-failure="false"
I0330 15:22:49.585492    2393 flags.go:33] FLAG: --authorization-always-allow-paths="[/healthz]"
I0330 15:22:49.585497    2393 flags.go:33] FLAG: --authorization-kubeconfig=""
I0330 15:22:49.585498    2393 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0330 15:22:49.585500    2393 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
/k8s/kubernetes/logs/kube-controller-manager.INFO

 

3. Configuring kube-scheduler

        (1) Create the kube-scheduler.conf file:

[root@k8s-master-1 cfg]# vim /k8s/kubernetes/cfg/kube-scheduler.conf
[root@k8s-master-1 cfg]# cat /k8s/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--leader-elect=true \
  --master=127.0.0.1:8080 \
  --address=127.0.0.1 \
  --v=2 \
  --logtostderr=false \
  --log-dir=/k8s/kubernetes/logs"

        (2) Create the kube-scheduler.service unit:

[root@k8s-master-1 cfg]# vim /usr/lib/systemd/system/kube-scheduler.service
[root@k8s-master-1 cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-scheduler.conf
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

        (3) Start the kube-scheduler service and enable it at boot:

[root@k8s-master-1 cfg]# systemctl daemon-reload
[root@k8s-master-1 cfg]# systemctl start kube-scheduler
[root@k8s-master-1 cfg]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

[root@k8s-master-1 cfg]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-30 15:27:57 CST; 18s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2454 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─2454 /k8s/kubernetes/bin/kube-scheduler --lea...

Mar 30 15:27:57 k8s-master-1 systemd[1]: Started Kubernete...
Mar 30 15:27:57 k8s-master-1 systemd[1]: Starting Kubernet...
Mar 30 15:27:57 k8s-master-1 kube-scheduler[2454]: I0330 1...
Mar 30 15:27:57 k8s-master-1 kube-scheduler[2454]: I0330 1...
Hint: Some lines were ellipsized, use -l to show in full.

        Check the log:

[root@k8s-master-1 cfg]# less /k8s/kubernetes/logs/kube-scheduler.INFO

Log file created at: 2022/03/30 15:27:57
Running on machine: k8s-master-1
Binary: Built with gc go1.13.9 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0330 15:27:57.866769    2454 flags.go:33] FLAG: --add-dir-header="false"
I0330 15:27:57.867105    2454 flags.go:33] FLAG: --address="127.0.0.1"
I0330 15:27:57.867111    2454 flags.go:33] FLAG: --algorithm-provider=""
I0330 15:27:57.867116    2454 flags.go:33] FLAG: --alsologtostderr="false"
I0330 15:27:57.867119    2454 flags.go:33] FLAG: --authentication-kubeconfig=""
I0330 15:27:57.867122    2454 flags.go:33] FLAG: --authentication-skip-lookup="false"
/k8s/kubernetes/logs/kube-scheduler.INFO

4. Checking Cluster Component Status

[root@k8s-master-1 cfg]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}

 

III. Components to Install on Worker Nodes

kubelet:

        An agent that runs on every node in the cluster. It manages the container lifecycle, monitors the running state of Pods, and reports container health.

        The kubelet works in terms of a PodSpec, a YAML or JSON object that describes a Pod. The kubelet takes a set of PodSpecs, provided through various mechanisms (primarily through the apiserver), and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

kube-proxy:

        The Kubernetes network proxy runs on each node. It reflects the Services defined in the Kubernetes API on every node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. An optional addon provides cluster DNS for these cluster IPs. Users must create a Service through the apiserver API to configure the proxy.

docker:

        The Docker engine takes care of creating and managing the containers on the local host.

 

IV. Configuring the Worker Node Components

1. Installing Docker

        Prerequisite: make sure a working network yum repository is available.

[root@k8s-node-1 ~]# vim docker-install.sh
[root@k8s-node-1 ~]# cat docker-install.sh 
#! /bin/bash
 
# Docker installation script
# Author: dorte
# Date: 2021.9.18
# Email: 1615360614qq.com
 
# Prepare the environment to avoid common problems later
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config &> /dev/null
setenforce 0
# Stop the firewall and disable it at boot
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
# Flush iptables rules
iptables -F
# Stop NetworkManager and disable it at boot
systemctl stop NetworkManager &> /dev/null
systemctl disable NetworkManager &> /dev/null
 
 
# Add the epel repository
yum install -y epel-release
 
# Add the docker-ce repository
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat /etc/yum.repos.d/docker-ce.repo
yum clean all && yum makecache
 
# Sync the time zone
unalias cp   # drop the cp alias so it does not prompt; temporary, restored after reboot
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
 
# Configure a pip mirror for faster Python package downloads.
# NOTE: the heredoc bodies below are reconstructed examples; substitute
# mirrors of your choice.
mkdir /.pip
cat > /.pip/pip.conf <<EOF
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
EOF
 
# Install docker-ce and start it
yum install -y docker-ce
systemctl start docker
systemctl enable docker
 
# Configure a Docker registry mirror
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker
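
        After the script finishes, it is worth confirming that Docker is running and that its cgroup driver matches the cgroupDriver that kubelet-config.yml will set later (cgroupfs in this article):

systemctl is-active docker
docker info | grep -i 'cgroup driver'    # expected: Cgroup Driver: cgroupfs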

2. Configuring Node Certificates

        (1) On the master node, create the certificate signing request file kube-proxy-csr.json for the worker nodes, using the CA certificate:

[root@k8s-master-1 ~]# cd /k8s/kubernetes/ssl/
[root@k8s-master-1 ssl]# vim kube-proxy-csr.json
[root@k8s-master-1 ssl]# cat kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "kubernetes",
            "OU": "System"
        }
    ]
}

        (2) Generate the certificate and private key for kube-proxy:

[root@k8s-master-1 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/03/30 15:37:34 [INFO] generate received request
2022/03/30 15:37:34 [INFO] received CSR
2022/03/30 15:37:34 [INFO] generating key: rsa-2048
2022/03/30 15:37:34 [INFO] encoded CSR
2022/03/30 15:37:34 [INFO] signed certificate with serial number 229259672626930233057876376397952991706752162876
2022/03/30 15:37:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


[root@k8s-master-1 ssl]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

        (3) On the worker node, create the directories K8S needs:

[root@k8s-node-1 ~]# mkdir -p /k8s/kubernetes/{bin,cfg,logs,ssl}
[root@k8s-node-1 ~]# cd /k8s/kubernetes/
[root@k8s-node-1 kubernetes]# ll
total 0
drwxr-xr-x 2 root root 6 Mar 30 15:41 bin
drwxr-xr-x 2 root root 6 Mar 30 15:41 cfg
drwxr-xr-x 2 root root 6 Mar 30 15:41 logs
drwxr-xr-x 2 root root 6 Mar 30 15:41 ssl

        (4) From the master node, copy the certificates to the worker node:

[root@k8s-master-1 ssl]# scp -r /k8s/kubernetes/ssl/{ca.pem,kube-proxy.pem,kube-proxy-key.pem} root@k8s-node-1:/k8s/kubernetes/ssl/
root@k8s-node-1's password: 
ca.pem                     100% 1383     1.4KB/s   00:00    
kube-proxy.pem             100% 1424     1.4KB/s   00:00    
kube-proxy-key.pem         100% 1679     1.6KB/s   00:00 

         (5) Copy the kubelet and kube-proxy binaries from the master node to the worker node:

[root@k8s-master-1 ~]# scp -r /k8s/kubernetes/bin/{kubelet,kube-proxy} root@k8s-node-1:/k8s/kubernetes/bin/
root@k8s-node-1's password: 
kubelet                    100%  108MB 108.0MB/s   00:01    
kube-proxy                 100%   37MB  36.6MB/s   00:00 

 

3. Configuring kubelet

        (1) Create the bootstrap.kubeconfig file:

                bootstrap.kubeconfig is used to request a certificate from the apiserver; the apiserver verifies that the token and certificate are valid and, if they pass, issues the certificate automatically.
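
        Rather than writing the file by hand as below, it can also be generated with kubectl config. A sketch using this article's values; run it wherever kubectl is available (for example on the master) and copy the result over:

KUBE_APISERVER="https://192.168.61.161:6443"
TOKEN="e2b20b5979e898e33c644471e53ee8a2"    # must match token.csv on the master

kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --server=${KUBE_APISERVER} \
  --kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig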

[root@k8s-node-1 ssl]# vim /k8s/kubernetes/cfg/bootstrap.kubeconfig
[root@k8s-node-1 ssl]# cat /k8s/kubernetes/cfg/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster: 
    certificate-authority: /k8s/kubernetes/ssl/ca.pem
    server: https://192.168.61.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: e2b20b5979e898e33c644471e53ee8a2    # note: must match the token.csv entry on the master

        (2) Create the kubelet-config.yml file:

                For security, kubelet access requires authorization and anonymous access is disabled; kubelet-config.yml authorizes the apiserver to access the kubelet.

[root@k8s-node-1 cfg]# vim /k8s/kubernetes/cfg/kubelet-config.yml 
[root@k8s-node-1 cfg]# cat /k8s/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2 
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509: 
    clientCAFile: /k8s/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 100000
maxPods: 110

        (3) Create the kubelet configuration file kubelet.conf:

[root@k8s-node-1 cfg]# vim /k8s/kubernetes/cfg/kubelet.conf
[root@k8s-node-1 cfg]# cat /k8s/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--hostname-override=k8s-node-1 \
  --network-plugin=cni \
  --cni-bin-dir=/opt/cni/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --cgroups-per-qos=false \
  --enforce-node-allocatable="" \
  --kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
  --config=/k8s/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/k8s/kubernetes/ssl \
  --pod-infra-container-image=kubernetes/pause:latest \
  --v=2 \
  --logtostderr=false \
  --log-dir=/k8s/kubernetes/logs"

[root@k8s-node-1 cfg]# ll
total 12
-rw-r--r-- 1 root root 378 Mar 30 15:55 bootstrap.kubeconfig
-rw-r--r-- 1 root root 534 Mar 30 16:00 kubelet.conf
-rw-r--r-- 1 root root 610 Mar 30 15:56 kubelet-config.yml


Parameter notes:
--hostname-override: name this node registers with in K8S; defaults to the host's hostname
--network-plugin: enables the CNI network plugin
--cni-bin-dir: location of the CNI plugin executables, default /opt/cni/bin
--cni-conf-dir: location of the CNI plugin configuration, default /etc/cni/net.d
--cgroups-per-qos: must be set together with --enforce-node-allocatable, otherwise kubelet fails with [Failed to start ContainerManager failed to initialize top level QOS containers.......]
--kubeconfig: path of the auto-generated kubelet.kubeconfig, used to connect to the apiserver
--bootstrap-kubeconfig: path to the bootstrap.kubeconfig file
--config: the kubelet configuration file
--cert-dir: certificate directory
--pod-infra-container-image: image of the Pause infrastructure container that holds the Pod network; default k8s.gcr.io/pause:3.1

        (4) Create the kubelet.service unit:

[root@k8s-node-1 cfg]# vim /usr/lib/systemd/system/kubelet.service
[root@k8s-node-1 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet.conf
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

        (5) Start kubelet and enable it at boot:

[root@k8s-node-1 cfg]# systemctl daemon-reload
[root@k8s-node-1 cfg]# systemctl start kubelet
[root@k8s-node-1 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

        Check the startup log:

[root@k8s-node-1 cfg]# tail -f /k8s/kubernetes/logs/kubelet.INFO
I0330 16:28:51.590365    2965 feature_gate.go:243] feature gates: &{map[]}
I0330 16:28:51.590397    2965 feature_gate.go:243] feature gates: &{map[]}
I0330 16:28:51.590463    2965 plugins.go:100] No cloud provider specified.
I0330 16:28:51.590473    2965 server.go:537] No cloud provider specified: "" from the config file: ""
I0330 16:28:51.590486    2965 bootstrap.go:119] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
I0330 16:28:51.591183    2965 bootstrap.go:150] No valid private key and/or certificate found, reusing existing private key or creating a new one
I0330 16:28:51.636787    2965 csr.go:70] csr for this node already exists, reusing
I0330 16:28:51.642276    2965 csr.go:78] csr for this node is still valid
I0330 16:28:51.642287    2965 bootstrap.go:355] Waiting for client certificate to be issued
I0330 16:28:51.642428    2965 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146

        (6) On the master node, authorize the worker node:

After kubelet starts, the node has not yet joined the cluster; it requests a certificate from the apiserver, which must be approved manually on k8s-master-1. Check for new client certificate signing requests with:

[root@k8s-master-1 cfg]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4cdu6Ig9ufwtoYPf8bY-L5C3R78YBTCr64NUmLgwHIE   8m7s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

        Issue the certificate, allowing the node to join the cluster:

[root@k8s-master-1 cfg]# kubectl certificate approve node-csr-4cdu6Ig9ufwtoYPf8bY-L5C3R78YBTCr64NUmLgwHIE
certificatesigningrequest.certificates.k8s.io/node-csr-4cdu6Ig9ufwtoYPf8bY-L5C3R78YBTCr64NUmLgwHIE approved

        (7) Once authorized, the node can be seen from the master:

[root@k8s-master-1 cfg]# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-node-1   NotReady   <none>   2m4s   v1.18.6

        The node is still NotReady at this point because no CNI plugin has been installed yet.

        After authorization, the certificate the master issued for the kubelet can be found on the node under /k8s/kubernetes/ssl:

[Image: certificates issued by the master for the kubelet, under /k8s/kubernetes/ssl on the node]

         Under /k8s/kubernetes/cfg the auto-generated kubelet.kubeconfig configuration file appears:

[Image: the auto-generated kubelet.kubeconfig under /k8s/kubernetes/cfg]

 

4. Configuring kube-proxy

        (1) Create the kube-proxy.kubeconfig file for connecting to the apiserver:

[root@k8s-node-1 cfg]# vim /k8s/kubernetes/cfg/kube-proxy.kubeconfig
[root@k8s-node-1 cfg]# cat /k8s/kubernetes/cfg/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /k8s/kubernetes/ssl/ca.pem
    server: https://192.168.61.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /k8s/kubernetes/ssl/kube-proxy.pem
    client-key: /k8s/kubernetes/ssl/kube-proxy-key.pem

        (2) Create the kube-proxy-config.yml file:

[root@k8s-node-1 cfg]# vim /k8s/kubernetes/cfg/kube-proxy-config.yml
[root@k8s-node-1 cfg]# cat /k8s/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /k8s/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node-1
clusterCIDR: 10.244.0.0/16   # Pod network CIDR; must match --cluster-cidr in kube-controller-manager.conf
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
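
        Note that mode: ipvs assumes the IPVS kernel modules can be loaded; if they are missing, kube-proxy falls back to iptables mode. They can be loaded up front on each node:

# Load the kernel modules required by ipvs mode
# (on kernels >= 4.19 use nf_conntrack instead of nf_conntrack_ipv4)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe $mod
done
lsmod | grep -e ip_vs -e nf_conntrack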

        (3) Create the kube-proxy.conf file:

[root@k8s-node-1 cfg]# vim /k8s/kubernetes/cfg/kube-proxy.conf
[root@k8s-node-1 cfg]# cat /k8s/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--config=/k8s/kubernetes/cfg/kube-proxy-config.yml \
  --v=2 \
  --logtostderr=false \
  --log-dir=/k8s/kubernetes/logs"

        (4) Create the kube-proxy.service unit:

[root@k8s-node-1 cfg]# vim /usr/lib/systemd/system/kube-proxy.service
[root@k8s-node-1 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-proxy.conf
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

        (5) Start kube-proxy and enable it at boot:

[root@k8s-node-1 cfg]# systemctl daemon-reload
[root@k8s-node-1 cfg]# systemctl start kube-proxy
[root@k8s-node-1 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

        Check the startup log:

[root@k8s-node-1 cfg]# tail -f /k8s/kubernetes/logs/kube-proxy.INFO
I0330 16:49:01.143514    5973 config.go:315] Starting service config controller
I0330 16:49:01.143521    5973 shared_informer.go:223] Waiting for caches to sync for service config
I0330 16:49:01.143551    5973 config.go:133] Starting endpoints config controller
I0330 16:49:01.143562    5973 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0330 16:49:01.144326    5973 reflector.go:175] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:135
I0330 16:49:01.144690    5973 reflector.go:175] Starting reflector *v1.Endpoints (15m0s) from k8s.io/client-go/informers/factory.go:135
I0330 16:49:01.245140    5973 shared_informer.go:230] Caches are synced for endpoints config 
I0330 16:49:01.245361    5973 proxier.go:997] Not syncing ipvs rules until Services and Endpoints have been received from master
I0330 16:49:01.245391    5973 shared_informer.go:230] Caches are synced for service config 
I0330 16:49:01.246317    5973 service.go:379] Adding new service port "default/kubernetes:https" at 10.0.0.1:443/TCP

 

5. Deploying the Remaining Worker Nodes

        Deploying the other nodes follows largely the same steps as above.

        (1) To speed things up, copy the files from node1 straight to the matching directories on the other nodes, then make a few small edits:

[root@k8s-node-1 ~]# cd /k8s/kubernetes/cfg/

[root@k8s-node-1 cfg]# scp bootstrap.kubeconfig root@k8s-node-2:/k8s/kubernetes/cfg/

[root@k8s-node-1 cfg]# scp kubelet.conf root@k8s-node-2:/k8s/kubernetes/cfg/

[root@k8s-node-1 cfg]# scp /usr/lib/systemd/system/kubelet.service root@k8s-node-3:/usr/lib/systemd/system/

[root@k8s-node-1 cfg]# scp kube-proxy.kubeconfig root@k8s-node-2:/k8s/kubernetes/cfg/

[root@k8s-node-1 cfg]# scp kube-proxy-config.yml root@k8s-node-3:/k8s/kubernetes/cfg/

        In kubelet.conf, change the hostname:

[Image: kubelet.conf with --hostname-override changed to the new node's name]

         In kube-proxy-config.yml, change hostnameOverride:

[Image: kube-proxy-config.yml with hostnameOverride changed to the new node's name]

         (2) Back on the master node, issue the certificates to allow the clients to join the cluster:

[root@k8s-master-1 cfg]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-3mFoQwQdlrcdnTk_z8V3qopdIIFoDlS38wmk-SOyAAU   91s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-4cdu6Ig9ufwtoYPf8bY-L5C3R78YBTCr64NUmLgwHIE   51m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-J0kEI0Ca6ktTCTO7kmdJ_nOBOFlTnJ-0MuiVGt_3t5Y   91s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

[root@k8s-master-1 cfg]# kubectl certificate approve node-csr-3mFoQwQdlrcdnTk_z8V3qopdIIFoDlS38wmk-SOyAAU
certificatesigningrequest.certificates.k8s.io/node-csr-3mFoQwQdlrcdnTk_z8V3qopdIIFoDlS38wmk-SOyAAU approved

[root@k8s-master-1 cfg]# kubectl certificate approve node-csr-J0kEI0Ca6ktTCTO7kmdJ_nOBOFlTnJ-0MuiVGt_3t5Y
certificatesigningrequest.certificates.k8s.io/node-csr-J0kEI0Ca6ktTCTO7kmdJ_nOBOFlTnJ-0MuiVGt_3t5Y approved
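
        When several nodes join at once, the pending CSRs can also be approved in one shot:

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r -n1 kubectl certificate approve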

        (3) Finally, check the state of the whole cluster:

[root@k8s-master-1 cfg]# kubectl get node -o wide
NAME         STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
k8s-node-1   NotReady   <none>   53m   v1.18.6   192.168.61.162   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14
k8s-node-2   NotReady   <none>   11m   v1.18.6   192.168.61.163   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14
k8s-node-3   NotReady   <none>   11m   v1.18.6   192.168.61.164   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14

[root@k8s-master-1 cfg]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   170m

[root@k8s-master-1 cfg]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"} 

 

6. Deploying the Cluster Network

        Kubernetes uses a container network service called flannel to give each container in the cluster a unique virtual IP, so containers on different nodes can communicate over internal IPs.

        (1) Before installing the CNI plugins, create their working directories:

When configuring the kubelet service component we specified the CNI working directories, so the matching directory paths must be created in this step.

CNI (Container Network Interface) is a container networking standard driven by Google and CoreOS. It is only an interface; the concrete functionality is implemented by the individual network plugins.

[root@k8s-node-1 ~]# mkdir -p /opt/cni/bin /etc/cni/net.d
[root@k8s-node-2 ~]# mkdir -p /opt/cni/bin /etc/cni/net.d
[root@k8s-node-3 ~]# mkdir -p /opt/cni/bin /etc/cni/net.d

        (2) Download the CNI plugins, upload them to the Linux hosts, and extract them into the target directory:

The CNI version used here: download link for cni-plugins-linux-amd64-v0.8.6.tgz (extraction code: h4em)

[root@k8s-node-1 ~]# rz

[root@k8s-node-1 ~]# tar zxf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
[root@k8s-node-1 ~]# cd /opt/cni/bin/
[root@k8s-node-1 bin]# ls
bandwidth  dhcp      flannel      host-local  loopback  portmap  sbr     tuning
bridge     firewall  host-device  ipvlan      macvlan   ptp      static  vlan


         (3) Deploy the flannel service:

        kube-flannel.yml configuration file: download link for kube-flannel.yml (extraction code: nynk)

        After downloading, upload it to the /k8s/kubernetes/cfg/ directory on the master node:

[root@k8s-master-1 ~]# cd /k8s/kubernetes/cfg/
[root@k8s-master-1 cfg]# ls
busybox.yaml  flanneld             kube-controller-manager.conf  kubernetes-dashboard.yaml  token.csv
coredns.yaml  kube-apiserver.conf  kube-flannel.yml              kube-scheduler.conf

        Check the kube-controller-manager.conf file:

[Image: kube-controller-manager.conf showing --cluster-cidr=10.244.0.0/16]

         Check the kube-flannel.yml file (partial screenshot):

The Network address must stay consistent with --cluster-cidr=10.244.0.0/16 in kube-controller-manager.conf.

[Image: kube-flannel.yml net-conf.json with "Network": "10.244.0.0/16"]

         (4) Deploy flannel with kubectl apply -f:

[root@k8s-master-1 cfg]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

        Flannel creates a flannel Pod on each worker node; check the Pod status to see whether flannel started successfully:

[root@k8s-master-1 cfg]# kubectl get pods -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-45z7d   1/1     Running   0          112s   192.168.61.164   k8s-node-3   <none>           <none>
kube-flannel-ds-amd64-8xdrs   1/1     Running   0          112s   192.168.61.163   k8s-node-2   <none>           <none>
kube-flannel-ds-amd64-9tm4g   1/1     Running   0          112s   192.168.61.162   k8s-node-1   <none>           <none>

        Once flannel is deployed, check whether the nodes are ready; STATUS now shows Ready:

[root@k8s-master-1 cfg]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
k8s-node-1   Ready    <none>   3d23h   v1.18.6   192.168.61.162   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14
k8s-node-2   Ready    <none>   3d23h   v1.18.6   192.168.61.163   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14
k8s-node-3   Ready    <none>   3d23h   v1.18.6   192.168.61.164   <none>        Red Hat Enterprise Linux Server 7.3 (Maipo)   3.10.0-514.el7.x86_64   docker://20.10.14

        (5) Checking the NICs on a worker node shows an extra virtual NIC named flannel.1, which receives and forwards Pod traffic:

[Image: node NIC list showing the flannel.1 VXLAN interface]
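
        For example, on any worker node:

ip -d link show flannel.1    # vxlan device details
ip route | grep 10.244       # per-node Pod subnets routed via flannel.1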

         (6) Create a test Pod and check its status:

[root@k8s-master-1 cfg]# kubectl create deployment web --image=nginx
deployment.apps/web created

[root@k8s-master-1 cfg]# kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
web-5dcb957ccc-c6w8m   1/1     Running   0          15m   10.244.1.2   k8s-node-2   <none>           <none>

        Going by the NODE column, the nginx container can indeed be found on the corresponding node, k8s-node-2:

[root@k8s-node-2 bin]# docker ps -a
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS                      PORTS     NAMES
d55521746d7c   nginx                     "/docker-entrypoint.…"   3 minutes ago    Up 3 minutes                          k8s_nginx_web-5dcb957ccc-c6w8m_default_b4ba6b2c-9f2a-4c9e-85fc-208dba3726b0_0
fa033fbfa0c9   kubernetes/pause:latest   "/pause"                 9 minutes ago    Up 9 minutes                          k8s_POD_web-5dcb957ccc-c6w8m_default_b4ba6b2c-9f2a-4c9e-85fc-208dba3726b0_0
464b9ce8b20a   4e9f801d2217              "/opt/bin/flanneld -…"   19 minutes ago   Up 19 minutes                         k8s_kube-flannel_kube-flannel-ds-amd64-8xdrs_kube-system_4f95ec57-fa76-4bab-b6ac-8de853ff4079_0
646f56c41d69   4e9f801d2217              "cp -f /etc/kube-fla…"   19 minutes ago   Exited (0) 19 minutes ago             k8s_install-cni_kube-flannel-ds-amd64-8xdrs_kube-system_4f95ec57-fa76-4bab-b6ac-8de853ff4079_0
8f89c29c95c1   kubernetes/pause:latest   "/pause"                 19 minutes ago   Up 19 minutes                         k8s_POD_kube-flannel-ds-amd64-8xdrs_kube-system_4f95ec57-fa76-4bab-b6ac-8de853ff4079_0

        Once the container is Running, check the NICs on k8s-node-2 again: there is an extra virtual NIC, cni0, which is used for local Pod communication:

[Image: node NIC list showing the cni0 bridge]

        (7) Expose a port and access the service:

[Image: exposing the web deployment and accessing the nginx service in a browser]
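
        The screenshot above corresponds to roughly the following steps (the NodePort value is assigned randomly from the 30000-32767 range configured earlier):

# Expose the web deployment as a NodePort service
kubectl expose deployment web --port=80 --type=NodePort
kubectl get svc web
# Then access it from outside via any node IP plus the assigned NodePort, e.g.:
# curl http://192.168.61.162:<NodePort>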

 

7. Deploying the In-Cluster DNS Service

        (1) Deploy CoreDNS:

Download the CoreDNS configuration file: download link for coredns.yaml (extraction code: 6sr5)

After downloading, upload it to the /k8s/kubernetes/cfg/ directory on the master node:

[root@k8s-master-1 cfg]# rz

[root@k8s-master-1 cfg]# ll
total 44
-rw-r--r-- 1 root root  5280 Apr  3 20:15 coredns.yaml
-rw-r--r-- 1 root root   233 Apr  2 21:14 flanneld
-rw-r--r-- 1 root root  1261 Mar 29 21:59 kube-apiserver.conf
-rw-r--r-- 1 root root   571 Mar 30 15:05 kube-controller-manager.conf
-rw-r--r-- 1 root root 14366 Mar 30 18:35 kube-flannel.yml
-rw-r--r-- 1 root root   163 Mar 30 15:25 kube-scheduler.conf
-rw-r--r-- 1 root root    84 Mar 29 21:50 token.csv

        (2) Edit coredns.yaml and make sure its clusterIP matches the clusterDNS value in kubelet-config.yml (10.0.0.2 in this article):

[Image: coredns.yaml kube-dns Service with clusterIP: 10.0.0.2]

        A few more places need editing:

[Image: coredns.yaml edit, part 1]

[Image: coredns.yaml edit, part 2]

 

        (3) With the edits done, deploy it:

[root@k8s-master-1 cfg]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

        Check the Pods:

[root@k8s-master-1 cfg]# kubectl get pod -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
coredns-8686db44f5-5khf9      1/1     Running   0          19m     10.244.1.5       k8s-node-2   <none>           <none>
kube-flannel-ds-amd64-45z7d   1/1     Running   1          4h56m   192.168.61.164   k8s-node-3   <none>           <none>
kube-flannel-ds-amd64-8xdrs   1/1     Running   2          4h56m   192.168.61.163   k8s-node-2   <none>           <none>
kube-flannel-ds-amd64-9tm4g   1/1     Running   1          4h56m   192.168.61.162   k8s-node-1   <none>           <none>
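
        A quick check that the DNS Service landed on the expected cluster IP:

kubectl get svc -n kube-system kube-dns    # CLUSTER-IP should be 10.0.0.2, matching clusterDNS in kubelet-config.yml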

(4) Create a busybox Pod to verify CoreDNS:

[root@k8s-master-1 cfg]# vim busybox.yaml
[root@k8s-master-1 cfg]# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

        Create the busybox Pod:

[root@k8s-master-1 cfg]# kubectl apply -f busybox.yaml
pod/busybox created

[root@k8s-master-1 cfg]# kubectl get pod -o wide
NAME                    READY   STATUS             RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
busybox                 1/1     Running            0          47s     10.244.0.5   k8s-node-1   <none>           <none>
web-5dcb957ccc-t5xpr    1/1     Running            0          15m     10.244.2.5   k8s-node-3   <none>           <none>

        Verify it:

[root@k8s-node-1 system]# docker ps -a | grep busybox
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS                        PORTS     NAMES
56a8d1c13ca7   8c811b4aec35              "sleep 3600"             15 minutes ago   Up 15 minutes                           k8s_busybox_busybox_default_b3ab6359-3e76-4933-afc7-fb108dd2e3c1_0

[root@k8s-node-1 system]# docker exec -it 56a8d1c13ca7 sh
/ #
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # nslookup web
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      web
Address 1: 10.0.0.42 web.default.svc.cluster.local
/ # 
/ # exit
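
        The same check can be run from the master without touching Docker directly:

kubectl exec -it busybox -- nslookup kubernetes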

That covers configuring the component modules on the Master node and the worker nodes. In the next article we will deploy the Kubernetes Dashboard.

 

 
