Binary Installation of a Kubernetes v1.20 High-Availability Cluster

Table of Contents

1. Environment Preparation
1.1 Two ways to deploy k8s
1.2 Preparing the environment
1.3 Notes on certificates
2. Deploying the Etcd Cluster
2.1 Preparing the cfssl certificate tool
2.2 Generating Etcd certificates
2.3 Deploying the Etcd cluster
    Troubleshooting: could not connect: x509
3. Installing Docker
4. Deploying the Master Node
4.1 Deploying kube-apiserver
4.2 Deploying kube-controller-manager
4.3 Deploying kube-scheduler
5. Deploying Worker Nodes
5.1 Creating the working directory and copying binaries
5.2 Deploying kubelet
5.3 Deploying kube-proxy
5.4 Deploying the Calico network component
5.5 Authorizing apiserver access to kubelet
5.6 Adding a Worker Node
6. Deploying Dashboard and CoreDNS
6.1 Deploying Dashboard
6.2 Deploying CoreDNS
7. Scaling Out to Multiple Masters (HA Architecture)
7.1 Deploying the Master02 node
7.2 Deploying the Nginx + Keepalived HA load balancer


1. Environment Preparation

1.1 Two ways to deploy k8s

1) Option 1: kubeadm

Kubeadm is a K8s deployment tool that provides the kubeadm init and kubeadm join commands for quickly standing up a Kubernetes cluster.

2) Option 2: binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

3) Comparison

Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to debug. If you want more control, deploying from binary packages is recommended: it is more work up front, but you learn how the pieces fit together along the way, which also pays off in later maintenance.

1.2 Preparing the Environment

1.2.1 Server requirements

1) Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 30 GB disk

2) Ideally the servers can reach the internet, since images will be pulled online. If they cannot, download the required images in advance and import them on each node.

1.2.2 Software environment

Software            Version
Operating system    CentOS 7.x x86_64 (minimal)
Container engine    Docker CE 19
Kubernetes          v1.20

1.2.3 Overall server plan

Role            IP               Components
k8s-master1     10.0.0.71        kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd, nginx, keepalived
k8s-master2     10.0.0.74        kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, nginx, keepalived
k8s-node1       10.0.0.72        kubelet, kube-proxy, docker, etcd
k8s-node2       10.0.0.73        kubelet, kube-proxy, docker, etcd
Load balancer   10.0.0.88 (VIP)  Nginx + Keepalived

This HA cluster is built in two stages: first a single-Master architecture (3 machines), which is then expanded into a multi-Master architecture (4 or 6 machines).

Single-Master architecture diagram:

Single-Master server plan:

Role         IP          Components
k8s-master   10.0.0.71   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1    10.0.0.72   kubelet, kube-proxy, docker, etcd
k8s-node2    10.0.0.73   kubelet, kube-proxy, docker, etcd

1.2.4 Operating system initialization

# 1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 2. Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # immediate

# 3. Disable swap
swapoff -a  # immediate
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# 4. Set the hostname according to the plan, e.g. on the first master:
hostnamectl set-hostname k8s-master1

# 5. Add hosts entries on the master
cat >> /etc/hosts << EOF
10.0.0.71 k8s-master1
10.0.0.74 k8s-master2
10.0.0.72 k8s-node1
10.0.0.73 k8s-node2
EOF

# 6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
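# Note: on a minimal CentOS install the two bridge sysctls above may fail with
# "No such file or directory" until the br_netfilter kernel module is loaded
# (an assumption worth checking in your environment):
modprobe br_netfilter
lsmod | grep br_netfilter  # verify the module is loaded, then re-run sysctl --system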
 
# 7. Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

# 8. Generate an SSH key pair (press Enter through the prompts)
ssh-keygen

# Distribute the public key to the other servers
ssh-copy-id root@k8s-master1
ssh-copy-id root@k8s-node1
ssh-copy-id root@k8s-node2

1.3 Notes on Certificates

A k8s cluster uses quite a few certificates; the diagram below gives a brief overview:

[Figure: overview of the certificates used in the cluster]

2. Deploying the Etcd Cluster

Etcd is a distributed key-value store that Kubernetes uses as its datastore, so prepare an Etcd database first. To avoid a single point of failure, deploy it as a cluster: quorum is floor(n/2)+1, so a 3-node cluster (2 of 3 for quorum) tolerates the loss of 1 machine, while a 5-node cluster tolerates the loss of 2.

Node name   IP
etcd-1      10.0.0.71
etcd-2      10.0.0.72
etcd-3      10.0.0.73

Note: to save machines, etcd is co-located with the K8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

2.1 Preparing the cfssl Certificate Tool

cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl. Run this on any one server; the Master node is used here.

# Download the packages
mkdir cfssl && cd cfssl/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

2.2 Generating Etcd Certificates

2.2.1 Self-signed certificate authority (CA)

# 1. Create the working directory
mkdir -p ~/TLS/{etcd,k8s} && cd ~/TLS/etcd

# 2. Self-sign the CA
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

# 3. Generate the CA certificate: produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2.2.2 Issuing the Etcd HTTPS certificate with the self-signed CA

# Create the certificate signing request file
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.71",
    "10.0.0.72",
    "10.0.0.73"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Note: the IPs in the hosts field above must include every etcd node's cluster-communication IP, without exception! To make later expansion easier, you can list a few spare IPs as well.

# Generate the certificate; produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
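To sanity-check the result, cfssl-certinfo (installed alongside cfssl above) can print the certificate back as JSON; a quick sketch for confirming all three etcd IPs made it into the SAN list:

cfssl-certinfo -cert server.pem | grep -A5 sans
# expected to list 10.0.0.71, 10.0.0.72 and 10.0.0.73

# openssl works too, if you prefer:
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"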

2.3 Deploying the Etcd Cluster

1) Download the etcd binaries

URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

2) Create the working directory and unpack the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

3) Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.71:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.0.71:2380,etcd-2=https://10.0.0.72:2380,etcd-3=https://10.0.0.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Configuration notes:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one

4) Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5) Copy the generated certificates into place

# Copy the certs generated earlier to the paths referenced in the config
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

6) Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Note: the first etcd node will appear to hang on start; that is because the other two members are not up yet. Check /var/log/messages for details.

7) Copy all the files generated on node 1 to nodes 2 and 3

scp -r /opt/etcd/ root@10.0.0.72:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.0.0.72:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@10.0.0.73:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.0.0.73:/usr/lib/systemd/system/

8) On nodes 2 and 3, edit etcd.conf to set the node name and the local server IP

vim /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-1"   # 修改此处,节点2改为etcd-2,节点3改为etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.71:2380"   # 修改此处为当前服务器IP
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.71:2379" # 修改此处为当前服务器IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.71:2380" # 修改此处为当前服务器IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.71:2379" # 修改此处为当前服务器IP
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.0.71:2380,etcd-2=https://10.0.0.72:2380,etcd-3=https://10.0.0.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

9) Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

10) Check cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.0.71:2379,https://10.0.0.72:2379,https://10.0.0.73:2379" endpoint health --write-out=table

# Output like the following means the deployment succeeded
+------------------------+--------+-------------+-------+
|        ENDPOINT        | HEALTH |    TOOK     | ERROR |
+------------------------+--------+-------------+-------+
| https://10.0.0.72:2379 |   true |  15.05078ms |       |
| https://10.0.0.71:2379 |   true | 15.120261ms |       |
| https://10.0.0.73:2379 |   true |  17.14033ms |       |
+------------------------+--------+-------------+-------+
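Beyond endpoint health, it can be useful to see which member is the leader; endpoint status (same etcdctl flags, just a different subcommand) shows that, along with DB size and raft term:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.0.71:2379,https://10.0.0.72:2379,https://10.0.0.73:2379" endpoint status --write-out=table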

If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd

############## etcd service troubleshooting ##############

After setting up the etcd service it stayed in the "start" state; systemctl status etcd.service -l showed the details below.
1. Error message

systemctl status etcd.service

[screenshot of the failing systemctl status output omitted]

2. My etcd service configuration

Directory layout:

[root@hdss7-21 opt]# tree etcd/
etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
├── etcd.service
├── ssl
│   ├── ca.pem
│   ├── etcd-peer-key.pem
│   └── etcd-peer.pem
└── ssl-bak
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

The main configuration files are etcd.conf and etcd.service.
etcd.conf holds the etcd service's startup parameters:

cat etcd.conf
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.6.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.6.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.6.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.6.21:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.6.12:2380,etcd-2=https://192.168.6.21:2380,etcd-3=https://192.168.6.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Parameter notes (see also the etcd configuration reference):
ETCD_NAME="etcd-2": node name
ETCD_DATA_DIR: directory where data is saved
ETCD_LISTEN_PEER_URLS: URL for listening to other etcd members
ETCD_LISTEN_CLIENT_URLS: address serving client requests

ETCD_INITIAL_ADVERTISE_PEER_URLS: address advertised to other nodes
ETCD_ADVERTISE_CLIENT_URLS: address advertised to etcd clients
ETCD_INITIAL_CLUSTER: information about all nodes in the cluster
ETCD_INITIAL_CLUSTER_TOKEN: token used to create the cluster; unique per cluster
ETCD_INITIAL_CLUSTER_STATE: initial cluster state
The etcd.service file:

Grant it execute permission and copy it to /etc/systemd/system/ (my understanding: this registers the service).
Its contents:

cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --peer-cert-file=/opt/etcd/ssl/server.pem \
        --peer-key-file=/opt/etcd/ssl/server-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3. Fixing the problem

The etcd package had been copied from another machine and the certificates had not been updated. After regenerating the certificates and pointing the service at the new ones (mainly editing /etc/systemd/system/etcd.service), the file looked like this:

cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/etcd-peer.pem \
        --key-file=/opt/etcd/ssl/etcd-peer-key.pem \
        --peer-cert-file=/opt/etcd/ssl/etcd-peer.pem \
        --peer-key-file=/opt/etcd/ssl/etcd-peer-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Restart the service

systemctl daemon-reload
systemctl start etcd

The service now starts fine.

5. Check the etcd cluster status

[root@hdss7-21 bin]# ./etcdctl cluster-health
member 44cf137c1267f893 is healthy: got healthy result from http://127.0.0.1:2379
member 47856ed020c3771a is healthy: got healthy result from http://127.0.0.1:2379
member 4f089e69d0c31399 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

Another error:
etcd: conflicting environment variable "ETCD_NAME" is shadowed by corresponding command-line flag (either unset environment variable or disable flag)
Fix: etcd.service and etcd.conf set the same option twice; keep only one of the two (drop either the environment variable or the command-line flag).


Troubleshooting: could not connect: x509

[screenshot of the x509 error omitted]

A single etcd node checks healthy:

 supervisorctl status
    etcd-server-1-12                 RUNNING   pid 13282, uptime 0:00:42

But the cluster-level check fails:

./etcdctl member list
    client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: getsockopt: connection refused
    ; error #1: client: endpoint http://127.0.0.1:2379 exceeded header timeout

Check the log:

tail -200f /data/logs/etcd-server/etcd.stdout.log

The error is "could not connect: x509", i.e. a certificate problem.

The fix is to regenerate the etcd-peer certificate and upload the files to the corresponding directory again.

Verify again:

 ./etcdctl cluster-health
    member 22cb69b2fd1bb417 is healthy: got healthy result from http://127.0.0.1:2379
    member 3c3bd4fd7d7e553e is healthy: got healthy result from http://127.0.0.1:2379
    member bb9bd2baaebf7d95 is healthy: got healthy result from http://127.0.0.1:2379
    cluster is healthy
 

3. Installing Docker

Docker is used as the container engine here; you can substitute another, e.g. containerd.

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Run the following on all nodes. A binary install is shown; a yum install works just as well.

1) Unpack the binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
docker version

2) Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

3) Create the daemon configuration

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

4) Start Docker and enable it at boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
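One thing worth checking before moving on: the kubelet configuration later in this guide sets cgroupDriver: cgroupfs, and Docker must use the same cgroup driver or the kubelet will fail to run pods. A quick check (cgroupfs happens to be Docker's default here, so no daemon.json change should be needed):

docker info --format '{{.CgroupDriver}}'
# expected output: cgroupfs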

4. Deploying the Master Node

4.1 Deploying kube-apiserver

4.1.1 Self-signed certificate authority (CA)

cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Generate the CA certificate: produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

4.1.2 Issuing the kube-apiserver HTTPS certificate with the self-signed CA

# Create the certificate signing request file
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.0.0.71",
      "10.0.0.72",
      "10.0.0.73",
      "10.0.0.74",
      "10.0.0.88",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: the hosts field above must contain the IPs of all Masters, the LB, and the VIP, without exception! You can list a few spare IPs to make later expansion easier.

Also:
If the hosts field is non-empty, it must list every IP or domain name authorized to use this certificate.
Because this certificate is later used by the kubernetes master cluster, it needs all master node IPs plus the first IP of the service network (generally the first IP of the range passed to kube-apiserver via --service-cluster-ip-range, e.g. 10.254.0.1; here it is 10.0.0.1).

# Generate the certificate; produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

4.1.3 Deploying kube-apiserver

Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

Note: the page lists many packages; downloading a single Server package is enough, as it contains the binaries for both the Master and the Worker Node.

1) Download and unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

2) Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.0.0.71:2379,https://10.0.0.72:2379,https://10.0.0.73:2379 \\
--bind-address=10.0.0.71 \\
--secure-port=6443 \\
--advertise-address=10.0.0.71 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note on the double backslashes (\\) above: the first is the escape character and the second is the line-continuation character; escaping is needed so the heredoc (EOF) writes a literal continuation backslash into the file.

Parameter notes:

--logtostderr: log to stderr (disabled here; logs go to files)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enables the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate for apiserver access to kubelet
--tls-xxx-file: apiserver https certificates
--service-account-issuer, --service-account-signing-key-file: mandatory as of v1.20
--etcd-xxxfile: certificates for connecting to the Etcd cluster
--audit-log-xxx: audit logging
Aggregation-layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

Alternatively, a fuller example (note: it uses different host IPs and file paths than the rest of this guide):

vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=172.10.1.11 \
  --secure-port=6443 \
  --advertise-address=172.10.1.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  # the following two parameters are mandatory on 1.20 and later
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"


3) Copy the generated certificates

# Copy the certs generated earlier to the paths referenced in the config
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

4) Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on each Node must use valid CA-issued certificates to communicate with it. When there are many Nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; it is currently used for kubelet only, while kube-proxy still uses a certificate we issue centrally.

TLS bootstrapping workflow:

[Figure: TLS bootstrapping workflow]

5) Create the token file

# Format: token,user,UID,group
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

You can also generate your own token and substitute it:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
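For example, the following sketch generates a fresh token and writes token.csv in one go; if you do this, remember to use the same token later in bootstrap.kubeconfig (section 5.2):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo ${TOKEN}  # note it down for the kubelet bootstrap kubeconfig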

6) Manage apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

7) Start apiserver and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver

Test it:
curl --insecure https://10.0.0.71:6443/
Any response means the apiserver is up.

4.2 Deploying kube-controller-manager

1) Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

Configuration notes:

--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue kubelet certificates; must match the apiserver's

2) Generate the kubeconfig file

Generate the kube-controller-manager certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],  #hosts 列表包含所有 kube-controller-manager 节点 IP;
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Notes:
The hosts list contains the IPs of all kube-controller-manager nodes (left empty here; JSON does not allow inline comments, so the note lives outside the file);
With the CN and O fields set for kube-controller-manager, the built-in ClusterRoleBinding system:kube-controller-manager grants the component the permissions it needs.

An example with hosts filled in (different IPs from this guide):
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generate the kubeconfig file (shell commands; run directly in the terminal):

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://10.0.0.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3) Manage controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

4) Start controller-manager and enable it at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

4.3 Deploying kube-scheduler and kubectl

1) Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

Parameter notes:

--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)

2) Generate the kubeconfig file

Generate the kube-scheduler certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],#hosts 列表包含所有 kube-scheduler 节点 IP;
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Notes:
The hosts list contains the IPs of all kube-scheduler nodes (left empty here);
With CN system:kube-scheduler and O system:kube-scheduler, the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
An example with hosts filled in (different IPs from this guide):
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

 

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig file (shell commands; run directly in the terminal):

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://10.0.0.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3) Manage scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

4) Start scheduler and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

5) Check cluster status

Generate the certificate kubectl uses to connect to the cluster (the original heredoc was truncated; reconstructed below from the notes that follow):

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate; produces admin.pem and admin-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Notes:
kube-apiserver later authorizes client requests (kubelet, kube-proxy, Pods) using RBAC;
kube-apiserver predefines some RoleBindings for RBAC, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
O sets this certificate's Group to system:masters: a client presenting it passes authentication because the certificate is CA-signed, and because the group is the pre-authorized system:masters, it is granted access to all APIs;
This admin certificate is what the administrator kubeconfig will be generated from. RBAC is now the recommended way to control kubernetes roles and permissions; kubernetes takes the certificate's CN field as the User and the O field as the Group.
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding fails.

Create the kubeconfig configuration file.
kubeconfig is kubectl's configuration file, containing everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

Generate the kubeconfig file:

mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://10.0.0.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Copy these two files to /opt/kubernetes/cfg on the Node machines (do not skip this; bootstrap.kubeconfig and kube-proxy.kubeconfig are generated in section 5):
[root@master k8s]# scp bootstrap.kubeconfig kube-proxy.kubeconfig node1:/opt/kubernetes/cfg
[root@master k8s]# scp bootstrap.kubeconfig kube-proxy.kubeconfig node2:/opt/kubernetes/cfg

Alternatively (different IPs from this guide):

# Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube.config
# Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
# Set the context parameters
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
# Set the default context
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube
[root@master1 work]# cp kube.config ~/.kube/config
# Grant the kubernetes certificate access to the kubelet API
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check the cluster component status.
After the steps above, kubectl can communicate with kube-apiserver:

[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces

Sync the kubectl config to the other nodes:

[root@master1 work]# rsync -vaz /root/.kube/config master2:/root/.kube/
[root@master1 work]# rsync -vaz /root/.kube/config master3:/root/.kube/

Configure kubectl subcommand completion:

[root@master1 work]# yum install -y bash-completion
[root@master1 work]# source /usr/share/bash-completion/bash_completion
[root@master1 work]# source <(kubectl completion bash)
[root@master1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master1 work]# source '/root/.kube/completion.bash.inc'
[root@master1 work]# source $HOME/.bash_profile

Check the current cluster component status with kubectl:

kubectl get cs

# Output like the following means the Master components are running normally
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

6) Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

5. Deploying Worker Nodes

The following is still done on the Master node, which doubles as a Worker Node.

5.1 Creating the Working Directory and Copying Binaries

# Create the working directory on every worker node (already created on the master; new nodes need it)
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar xvf kubernetes-server-linux-amd64.tar.gz

# Copy files from the unpacked k8s server package
cd /root/soft/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin

5.2 Deploying kubelet

1) Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Parameter notes:

--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically, later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image for the pod infrastructure (pause) container

2) Create the parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3) Generate the bootstrap kubeconfig used when the kubelet first joins the cluster

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://10.0.0.71:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # 与token.csv里保持一致

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4) Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5) Start kubelet and enable it at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

6) Approve the kubelet certificate request and join the cluster

# View the kubelet certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-jaqXhwxFBnD-1ui9omPdF__0SGovk2ZRhszz_QMGJxI   62s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-jaqXhwxFBnD-1ui9omPdF__0SGovk2ZRhszz_QMGJxI

# View nodes (NotReady until the network plugin is deployed)
kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   7s    v1.20.4

On the master node:
The Node must be approved on the Master before it joins the cluster; after starting, it is not yet a member and has to be admitted manually.
View the Nodes requesting a signature on the Master:
[root@master ~]# /opt/kubernetes/bin/kubectl get csr
Find the NAME, then:
[root@master ~]# /opt/kubernetes/bin/kubectl certificate approve <NAME obtained above>
[root@master ~]# /opt/kubernetes/bin/kubectl get node

5.3 Deploying kube-proxy

1) Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

2) Create the parameters file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF

3) Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Generate the kubeconfig file:

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.0.0.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4) Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5) Start kube-proxy and enable it at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
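A quick way to confirm kube-proxy is up and see which mode it chose (with no mode set in kube-proxy-config.yml it defaults to iptables) is the metrics port configured above:

curl -s 127.0.0.1:10249/proxyMode
# expected output: iptables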

5.4 Deploying the Calico Network Component

Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes.

wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml

Download source: the Project Calico documentation site

# Deploy Calico
kubectl apply -f calico.yaml
kubectl get pods -n kube-system

NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-q9thh   1/1     Running   0          2m12s
calico-node-pjxtr                         1/1     Running   0          2m12s

# Once the Calico Pods are all Running, the node becomes Ready
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   18m   v1.20.4

5.5 Authorizing apiserver Access to kubelet

Use case: commands such as kubectl logs

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
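After applying it, the apiserver-to-kubelet path can be verified by tailing logs from any running pod, e.g. a calico-node pod via its label (label assumed from the stock calico.yaml):

kubectl logs -n kube-system -l k8s-app=calico-node --tail=5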

5.6 Adding a Worker Node

1) Copy the deployed Node files to the new node

# On the Master, copy the Worker Node files to new node 10.0.0.72 (repeat for 10.0.0.73)
scp -r /opt/kubernetes root@10.0.0.72:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.0.0.72:/usr/lib/systemd/system

scp /opt/kubernetes/ssl/ca.pem root@10.0.0.72:/opt/kubernetes/ssl

2) Delete the kubelet certificate and kubeconfig

Note: these files were generated automatically when the certificate request was approved; they are unique per Node and must be deleted.

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

3) Change the hostname in the configuration files

vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

4) Start the services and enable them at boot

systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy

5) On the Master, approve the new Node's kubelet certificate request

# View certificate requests
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-E0cAzR_Tv7S1l6eThCsx5IeVG20AYSeTLOrgW5mAZFA   70s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-E0cAzR_Tv7S1l6eThCsx5IeVG20AYSeTLOrgW5mAZFA

6) Check Node status

kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   31m    v1.20.4
k8s-node1     Ready    <none>   111s   v1.20.4

Node2 (10.0.0.73) is the same as above; remember to change the hostname!
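When several nodes join at once, approving each CSR by name gets tedious; a small convenience sketch that approves everything currently Pending:

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve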

6. Deploying Dashboard and CoreDNS

6.1 Deploying Dashboard

Manifest source: https://github.com/luckylucky421/kubernetes1.17.3

kubectl apply -f kubernetes-dashboard.yaml

# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard

Access URL: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.

6.2 Deploying CoreDNS

CoreDNS provides Service name resolution inside the cluster:

kubectl apply -f coredns.yaml 
 
kubectl get pods -n kube-system  
NAME                          READY   STATUS    RESTARTS   AGE 
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works.

That completes the single-Master cluster. It is more than enough for study and experiments; if your servers have capacity to spare, continue below to expand it into a multi-Master cluster.

7. Scaling Out to Multiple Masters (HA Architecture)

As a container cluster system, Kubernetes achieves application-level high availability through health checks plus restart policies for Pod self-healing, scheduling that spreads Pods across machines while maintaining the desired replica count, and automatic rescheduling of Pods onto other Nodes when a Node fails.

For the cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available as a 3-node cluster; this section covers Master node high availability.

The Master node is the control center: it maintains the whole cluster's health by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible via kubectl or the API.

A Master runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. The latter two already achieve high availability through leader election, so Master HA mainly concerns kube-apiserver. Since it serves an HTTP API, making it highly available is like any web server: put a load balancer in front of it, and it can also be scaled horizontally.

Multi-Master architecture diagram:

7.1 Deploying the Master02 Node

Add one more server as the Master02 node, IP 10.0.0.74.

Master02 is configured exactly like the already-deployed Master01, so just copy all the K8s files from Master01, then change the server IP and hostname and start the services.

1) Install docker

scp /usr/bin/docker* root@10.0.0.74:/usr/bin
scp /usr/bin/runc root@10.0.0.74:/usr/bin
scp /usr/bin/containerd* root@10.0.0.74:/usr/bin
scp /usr/lib/systemd/system/docker.service root@10.0.0.74:/usr/lib/systemd/system
scp -r /etc/docker root@10.0.0.74:/etc

# Start Docker on Master02
systemctl daemon-reload
systemctl start docker
systemctl enable docker

2) Create the etcd certificate directory

# On Master02, create the etcd certificate directory
mkdir -p /opt/etcd/ssl

3) Copy files from Master01 to Master02

# Copy all K8s files and the etcd certificates from Master01 to Master02
scp -r /opt/kubernetes root@10.0.0.74:/opt
scp -r /opt/etcd/ssl root@10.0.0.74:/opt/etcd
scp /usr/lib/systemd/system/kube* root@10.0.0.74:/usr/lib/systemd/system
scp /usr/bin/kubectl root@10.0.0.74:/usr/bin
scp -r ~/.kube root@10.0.0.74:~

4) Delete certificate files

# Delete the kubelet certificate and kubeconfig
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

5) Change the IP and hostname in the configuration files

# Change the apiserver, kubelet, and kube-proxy configs to the local IP
vim /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=10.0.0.74 \
--advertise-address=10.0.0.74 \
...

vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

6) Start the services and enable them at boot

systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy

7) Check cluster status

# Point kubectl at the local master IP
vim ~/.kube/config
...
server: https://10.0.0.74:6443

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

8) Approve the kubelet certificate request

# View certificate requests
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Approve the request
kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

# View Nodes
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   34h   v1.20.4
k8s-master2   Ready    <none>   2m    v1.20.4
k8s-node1     Ready    <none>   33h   v1.20.4
k8s-node2     Ready    <none>   33h   v1.20.4

7.2 Deploying the Nginx + Keepalived HA Load Balancer

kube-apiserver high-availability architecture diagram:

Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4.

Keepalived is a mainstream high-availability tool that provides active/standby failover via a floating VIP. In this topology, Keepalived decides whether to fail over (move the VIP) based on Nginx's state: if the primary Nginx node dies, the VIP is automatically rebound on the standby Nginx node, so the VIP stays reachable and Nginx remains highly available.

Note 1: to save machines, the load balancer is co-located with the K8s Master nodes here. It can also be deployed outside the k8s cluster, as long as nginx can reach the apiservers.

Note 2: public clouds generally do not support keepalived; use the cloud provider's load balancer product instead, balancing directly across the Master kube-apiservers, with the same architecture as above.

Run on both Master nodes:

1) Install the packages (primary and backup)

yum install epel-release -y
yum install nginx keepalived -y

2) Nginx configuration file (identical on primary and backup)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 10.0.0.71:6443;   # Master1 APISERVER IP:PORT
       server 10.0.0.74:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443; # nginx shares the machine with the master, so this cannot be 6443 or it would conflict
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
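Before starting, it is worth validating the config. Note that the stream {} block requires nginx's stream module; the EPEL nginx on CentOS 7 loads modules from /usr/share/nginx/modules, but if nginx -t complains about an unknown "stream" directive, installing the module package (yum install nginx-mod-stream, an assumption depending on your repo) should fix it:

nginx -t
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful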

3) Keepalived configuration file (Nginx primary)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0 # change to the actual NIC name
    virtual_router_id 51 # VRRP route ID; unique per instance
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, default 1s
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        10.0.0.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF

Parameter notes:

vrrp_script: script that checks nginx's state (the failover decision is based on nginx status)

virtual_ipaddress: the virtual IP (VIP)

Prepare the nginx health-check script referenced above:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
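The script can be exercised by hand before handing it to keepalived; with nginx listening on 16443 it should exit 0:

/etc/keepalived/check_nginx.sh; echo $?
# 0 while nginx is running; 1 after systemctl stop nginx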

Note: keepalived treats the script's exit code as the health signal (0 = healthy, non-zero = unhealthy) when deciding whether to fail over.

4) Keepalived configuration file (Nginx backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface eth0
    virtual_router_id 51 # VRRP route ID; unique per instance
    priority 90
    advert_int 1
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        10.0.0.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF

Prepare the same nginx health-check script on the backup:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

5) Start the services and enable them at boot

systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived

6) Check that keepalived is working

ip addr should show the virtual IP 10.0.0.88 bound on the eth0 interface; if so, it is working correctly.

7) Nginx + Keepalived failover test

Stop Nginx on the primary node and verify that the VIP drifts to the backup server:

On the Nginx primary, run systemctl stop nginx;
On the Nginx backup, ip addr shows the VIP is now bound there.

8) Test access through the load balancer

From any node in the K8s cluster, query the K8s version through the VIP with curl:

curl -k https://10.0.0.88:16443/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.4",
  "gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
  "gitTreeState": "clean",
  "buildDate": "2021-02-18T16:03:00Z",
  "goVersion": "go1.15.8",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Correctly getting the K8s version info means the load balancer is working. The request flow for this call: curl -> VIP (nginx) -> apiserver

The Nginx log also shows which apiserver the request was forwarded to:

tail -f /var/log/nginx/k8s-access.log
10.0.0.73 10.0.0.71:6443 - [12/Apr/2021:18:08:43 +0800] 200 411
10.0.0.73 10.0.0.72:6443, 10.0.0.71:6443 - [12/Apr/2021:18:08:43 +0800] 200 0, 411

9) Point all Worker Nodes at the LB VIP

Consider: although Master02 and a load balancer were added, the cluster was expanded from a single-Master architecture, so all Worker Node components still connect to Master01. If they are not switched to the VIP behind the load balancer, the Master remains a single point of failure.

So the next step is to change the component configuration on all Worker Nodes (the nodes listed by kubectl get node) from the original 10.0.0.71 to 10.0.0.88 (the VIP).

Run on all Worker Nodes:

sed -i 's#10.0.0.71:6443#10.0.0.88:16443#' /opt/kubernetes/cfg/*
systemctl restart kubelet kube-proxy
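A quick grep confirms nothing still points at the old address (any remaining match means a file was missed by the sed above):

grep -r '10.0.0.71:6443' /opt/kubernetes/cfg/ || echo "all components now point at the VIP"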

Check node status:

kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master2   Ready    <none>   26m     v1.20.4
k8s-master1   Ready    <none>   3h21m   v1.20.4
k8s-node1     Ready    <none>   172m    v1.20.4
k8s-node2     Ready    <none>   167m    v1.20.4

And with that, a complete highly available Kubernetes cluster is deployed!
