1) Option 1: kubeadm
Kubeadm is a Kubernetes deployment tool that provides kubeadm init and kubeadm join for quickly bootstrapping a Kubernetes cluster.
2) Option 2: binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble the Kubernetes cluster.
3) Comparison
Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. If you want more control, deploying Kubernetes from binary packages is recommended: it is more manual work, but along the way you learn how the components fit together, which also helps with later maintenance.
1.2.1. Server requirements
1) Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 30 GB disk
2) The servers should ideally have Internet access, since images will be pulled from public registries; if they cannot go online, download the required images in advance and import them on each node.
1.2.2. Software environment
Software | Version |
---|---|
Operating system | CentOS 7.x x86_64 |
Container engine | Docker CE 19 |
Kubernetes | v1.20 |
1.2.3. Overall server plan
Role | IP | Components |
---|---|---|
k8s-master1 | 172.17.87.0 | kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd,nginx,keepalived |
k8s-master2 | 172.17.87.1 | kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd,nginx,keepalived |
k8s-master3 | 172.17.87.2 | kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd,nginx,keepalived |
k8s-node1 | 172.17.87.3 | kubelet,kube-proxy,docker |
k8s-node2 | 172.17.86.255 | kubelet,kube-proxy,docker |
k8s-node3 | 172.17.86.254 | kubelet,kube-proxy,docker |
Load balancer IP | 10.0.0.88 (VIP) | Virtual IP managed by keepalived; any free address works, but keeping the same one as this guide is recommended |
This HA cluster is built in two phases: first a single-Master architecture (3 machines), then an expansion to a multi-Master architecture (4 or 6 machines).
Single-Master architecture diagram:
Single-Master server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 172.17.87.0 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd(k8s-master1,k8s-master2,k8s-master3),kubelet,kube-proxy,docker |
k8s-node1 | 172.17.87.3 | kubelet,kube-proxy,docker |
k8s-node2 | 172.17.86.255 | kubelet,kube-proxy,docker |
1.2.4. Passwordless SSH between nodes (run on k8s-master1 only)
Goal: k8s-master1 can reach any node in the cluster without a password.
cd /root/.ssh/
ssh-keygen -t rsa -b 2048
# enter each node's root password when prompted by ssh-copy-id below
for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do ssh-copy-id -p 22 root@$i ;done
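To confirm the keys were distributed, a quick loop like the following should print each hostname without any password prompt (a minimal sketch; adjust the node list to your own plan):
for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do ssh -o BatchMode=yes root@$i hostname; done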
1.2.5. Operating system initialization (run on all nodes)
# 1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# 2. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# 3. Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# 4. Add hosts entries on the masters
cat >> /etc/hosts << EOF
172.17.87.0 k8s-master1
172.17.87.1 k8s-master2
172.17.87.2 k8s-master3
172.17.87.3 k8s-node1
172.17.86.255 k8s-node2
172.17.86.254 k8s-node3
EOF
# 5. Set the hostname according to the plan
hostnamectl set-hostname <hostname-from-the-plan>   # e.g. k8s-master1
bash
# 6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
# 7. Synchronize time
yum install ntpdate -y
ntpdate time.windows.com
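A quick sanity check that the settings above took effect (standard commands; expected results noted in the comments):
getenforce                                 # should print Permissive or Disabled
free -h | grep -i swap                     # swap total should be 0
sysctl net.bridge.bridge-nf-call-iptables  # should print 1 (assumes the br_netfilter module is loaded; modprobe br_netfilter if not)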
A K8s cluster relies on many certificates, so here is a brief overview before we begin.
Etcd is a distributed key-value store that Kubernetes uses for all of its data, so we prepare the etcd database first. To avoid a single point of failure, etcd should run as a cluster: the 3 nodes used here tolerate 1 machine failure; a 5-node cluster would tolerate 2.
Node name | IP |
---|---|
etcd-1 | 172.17.87.0 |
etcd-2 | 172.17.87.1 |
etcd-3 | 172.17.87.2 |
Note: to save machines, etcd is co-located with k8s-master1, k8s-master2 and k8s-master3. It can also be deployed outside the K8s cluster, as long as the apiserver can reach it.
cfssl is an open-source certificate tool that generates certificates from JSON files, which is more convenient than openssl. Run the following on any one server; here we use k8s-master1.
# Download the cfssl binaries
mkdir cfssl && cd cfssl/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
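As a quick check that the tools are installed and on the PATH (standard cfssl subcommand):
cfssl version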
2.2.1. Self-signed certificate authority (CA)
Online JSON validator: https://www.bejson.com/
# 1. Create the working directories
mkdir -p ~/TLS/{etcd,k8s} && cd ~/TLS/etcd
# 2. Self-signed CA configuration
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
# 3. Generate the CA certificate: produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
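To inspect what was just issued, cfssl-certinfo (installed above) can dump the certificate fields:
cfssl-certinfo -cert ca.pem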
2.2.2. Issue the etcd HTTPS certificate with the self-signed CA
# Create the certificate signing request file
cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"172.17.87.0",
"172.17.87.1",
"172.17.87.2"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the IPs in the hosts field above must include the cluster-internal IP of every etcd node (k8s-master1, k8s-master2, k8s-master3 — replace with your own); not one may be missing! To simplify future scaling, you can also list a few spare IPs.
# Generate the certificate: produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
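To verify that every etcd IP made it into the certificate (standard openssl usage):
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'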
1) Download the etcd binary release
URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
2) Create the working directory and unpack the binaries
cd ~/TLS/etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
3) Create the etcd configuration file (using k8s-master1, 172.17.87.0, as the example)
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.87.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.17.87.0:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.17.87.0:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.17.87.0:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.17.87.0:2380,etcd-2=https://172.17.87.1:2380,etcd-3=https://172.17.87.2:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Configuration file reference:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
4) Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Copy the generated certificates into place
# Copy the certificates generated earlier to the paths referenced in the config
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
6) Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd
Note: starting etcd on just this one node (k8s-master1) will appear to hang; it is waiting for the other two members (k8s-master2, k8s-master3) to start. Check /var/log/messages if in doubt.
7) Copy all generated files from k8s-master1 to k8s-master2 and k8s-master3
scp -r /opt/etcd/ root@k8s-master2:/opt/etcd/
scp /usr/lib/systemd/system/etcd.service root@k8s-master2:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@k8s-master3:/opt/etcd/
scp /usr/lib/systemd/system/etcd.service root@k8s-master3:/usr/lib/systemd/system/
8) On k8s-master2 and k8s-master3, edit etcd.conf to change the node name and the local server IP
vim /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # 修改此处,节点2改为etcd-2,节点3改为etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.87.0:2380" # 修改此处为当前服务器IP
ETCD_LISTEN_CLIENT_URLS="https://172.17.87.0:2379" # 修改此处为当前服务器IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.17.87.0:2380" # 修改此处为当前服务器IP
ETCD_ADVERTISE_CLIENT_URLS="https://172.17.87.0:2379" # 修改此处为当前服务器IP
ETCD_INITIAL_CLUSTER="etcd-1=https://172.17.87.0:2380,etcd-2=https://172.17.87.1:2380,etcd-3=https://172.17.87.2:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
9) Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
10) Check the cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.17.87.0:2379,https://172.17.87.1:2379,https://172.17.87.2:2379" endpoint health --write-out=table
# Output like the following means the deployment succeeded
+--------------------------+--------+------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+--------------------------+--------+------------+-------+
| https://172.17.87.1:2379 | true | 8.57318ms | |
| https://172.17.87.0:2379 | true | 8.398224ms | |
| https://172.17.87.2:2379 | true | 8.704974ms | |
+--------------------------+--------+------------+-------+
If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd
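Beyond health, the member table with leader and DB size can be checked the same way (standard etcdctl subcommand, same TLS flags as above):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.17.87.0:2379,https://172.17.87.1:2379,https://172.17.87.2:2379" endpoint status --write-out=table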
Docker is used as the container engine here; it can be swapped for another, e.g. containerd.
Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
Run the following on all nodes. This uses the binary install; a yum install works just as well.
1) Unpack the binary package
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
docker version
2) Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3) Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker
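Optionally, a registry mirror speeds up image pulls on slow links (a sketch; the mirror URL below is only an example — substitute one you actually use):
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker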
4.1.1. Self-signed certificate authority (CA)
cd ~/TLS/k8s
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the CA certificate: produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
4.1.2. Issue the kube-apiserver HTTPS certificate with the self-signed CA
# Create the certificate signing request file
cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"172.17.87.0",
"172.17.87.1",
"172.17.87.2",
"172.17.87.3",
"172.17.86.255",
"172.17.86.254",
"10.0.0.88",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Note: the hosts field above must contain the IPs of every node in the cluster (k8s-master1, k8s-master2, k8s-master3, k8s-node1, k8s-node2, k8s-node3). 10.0.0.1 is the first IP of the Service CIDR (the kubernetes Service ClusterIP); 127.0.0.1 is for local access; 10.0.0.88 is the virtual VIP used later by keepalived. Not one may be missing! To simplify future scaling, you can also list a few spare IPs.
# Generate the certificate: produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
4.1.3. Deploy kube-apiserver
Project: https://github.com/kubernetes/kubernetes
Path: CHANGELOG/CHANGELOG-1.20.md, section "Server Binaries"
Page: kubernetes/CHANGELOG-1.20.md at master · kubernetes/kubernetes · GitHub
Note: the page lists many packages; downloading the server package alone is enough, since it contains the binaries for both Master and Worker Nodes.
Download: https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz (reference only; the wget below fetches it)
1) Download and unpack the binary package
cd /root/TLS/k8s
wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
2) Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://172.17.87.0:2379,https://172.17.87.1:2379,https://172.17.87.2:2379 \\
--bind-address=172.17.87.0 \\
--secure-port=6443 \\
--advertise-address=172.17.87.0 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Note: of the two backslashes above, the first is an escape character and the second the line-continuation character; the escape is needed so that the heredoc (EOF) writes the continuation backslash into the file.
Parameter reference:
--logtostderr: log to stderr (false = write to log files)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC plus Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
Parameters required since v1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
Aggregation-layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
3) Copy the generated certificates
# Copy the certificates generated earlier to the paths referenced in the config
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
4) Enable the TLS Bootstrapping mechanism
TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every Node must present valid CA-signed certificates to talk to it. With many Nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes provides TLS bootstrapping: the kubelet authenticates as a low-privilege user and requests its own certificate, which the apiserver signs dynamically. This is the strongly recommended approach on Nodes; it is currently used for the kubelet, while kube-proxy still gets a single certificate issued by us.
TLS bootstrapping workflow: (diagram omitted)
5) Create the token file
# Format: token,user,UID,group
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
The token can also be regenerated and substituted:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
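For example, to generate a fresh token and write it into token.csv in one go (a sketch, assuming the token.csv created above):
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
sed -i "s/^[^,]*/${TOKEN}/" /opt/kubernetes/cfg/token.csv   # replace the first (token) field
Remember to use the same token later when generating bootstrap.kubeconfig.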
6) Manage apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
7) Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
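A quick check that the apiserver is up and listening on the secure port (standard ss usage):
ss -lntp | grep 6443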
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
Configuration reference:
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: leader election when multiple replicas run (HA)
--cluster-signing-cert-file/--cluster-signing-key-file: CA that automatically signs kubelet certificates; must match the apiserver's CA
2) Generate the kubeconfig file
Generate the kube-controller-manager certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig file (the following are shell commands; run them directly in a terminal):
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://172.17.87.0:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3) Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4) Start kube-controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
Parameter reference:
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: leader election when multiple replicas run (HA)
2) Generate the kubeconfig file
Generate the kube-scheduler certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig file (the following are shell commands; run them directly in a terminal):
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://172.17.87.0:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3) Manage scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4) Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
5) Check the cluster status
Generate the certificate that kubectl uses to connect to the cluster. The original heredoc was truncated here; the reconstruction below follows the same pattern as the other CSR files in this guide (CN admin with O system:masters is the conventional cluster-admin identity, and the kubeconfig step below references admin.pem):
cd ~/TLS/k8s
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Generate the kubeconfig file:
mkdir -p /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://172.17.87.0:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
--client-certificate=./admin.pem \
--client-key=./admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Check the component status of the cluster with kubectl:
kubectl get cs
# Output like the following means the Master components are running normally
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
6) Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Next, deploy the Worker Node components; k8s-master1 is configured as a worker first.
# If adding a new node, create the working directory first (already created on the masters; required on newly joined nodes)
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
# Copy the binaries from the unpacked k8s server package (run on k8s-master1)
cd /root/TLS/k8s/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin
[root@k8s-master1 bin]# pwd
/opt/kubernetes/bin
[root@k8s-master1 bin]# ll
total 416744
-rwxr-xr-x 1 root root 118128640 Mar 30 08:30 kube-apiserver
-rwxr-xr-x 1 root root 112308224 Mar 30 08:30 kube-controller-manager
-rwxr-xr-x 1 root root 113974120 Mar 30 09:18 kubelet
-rwxr-xr-x 1 root root 39485440 Mar 30 09:18 kube-proxy
-rwxr-xr-x 1 root root 42848256 Mar 30 08:30 kube-scheduler
1) Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
Parameter reference:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: path that does not exist yet; the file is generated automatically and used afterwards to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the pause container that manages the Pod network
2) Configuration parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3) Generate the bootstrap kubeconfig for the kubelet's initial join
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://172.17.87.0:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # 与token.csv里保持一致
# 生成 kubelet bootstrap kubeconfig 配置文件
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4) Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
6) Approve the kubelet certificate request and join the cluster
# List pending kubelet certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-jaqXhwxFBnD-1ui9omPdF__0SGovk2ZRhszz_QMGJxI 62s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-jaqXhwxFBnD-1ui9omPdF__0SGovk2ZRhszz_QMGJxI
# List nodes (the node shows NotReady because the network plugin is not deployed yet)
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady 7s v1.20.4
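When several nodes bootstrap at once, all outstanding requests can be approved in one pass (standard kubectl usage; review the list first outside of a lab environment):
kubectl get csr -o name | xargs -r kubectl certificate approve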
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2) Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF
3) Generate the kube-proxy.kubeconfig file
Generate the kube-proxy certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Generate the kubeconfig file:
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://172.17.87.0:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4) Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
Calico is a pure layer-3 data-center networking solution and currently the mainstream network plugin for Kubernetes.
Site: https://docs.projectcalico.org/
Download: https://docs.projectcalico.org/v3.20/manifests/calico.yaml
# Working directory
mkdir -p /opt/kubernetes/calico
cd /opt/kubernetes/calico
# Download
wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml --no-check-certificate
# Deploy Calico
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-97769f7c7-q9thh 1/1 Running 0 2m12s
calico-node-pjxtr 1/1 Running 0 2m12s
# Once the Calico Pods are all Running, the node becomes Ready
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready 18m v1.20.4
Authorize the apiserver to access the kubelet API (use case: commands such as kubectl logs):
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
1) Copy the relevant files from k8s-master1 to the new nodes k8s-node1 and k8s-node2
# On k8s-master1, copy the relevant files to k8s-node1 and k8s-node2
scp -r /opt/kubernetes root@k8s-node1:/opt/kubernetes
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@k8s-node1:/usr/lib/systemd/system
scp -r /opt/kubernetes root@k8s-node2:/opt/kubernetes
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@k8s-node2:/usr/lib/systemd/system
2) Delete the kubelet certificate and kubeconfig (run on k8s-node1 and k8s-node2)
Note: these files were generated automatically when the certificate request was approved; they differ per Node and must be deleted
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
3) Change the hostname in the configuration files
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
4) Start the services and enable them at boot
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
systemctl status kubelet kube-proxy
5) On k8s-master1, approve the new Node's kubelet certificate request
# List certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-E0cAzR_Tv7S1l6eThCsx5IeVG20AYSeTLOrgW5mAZFA 70s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-E0cAzR_Tv7S1l6eThCsx5IeVG20AYSeTLOrgW5mAZFA
6) Check Node status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready 31m v1.20.4
k8s-node01 Ready 111s v1.20.4
Repeat the same steps for k8s-node2 (172.17.86.255). Remember to change the hostname!
Next, deploy the Kubernetes Dashboard.
Docs: https://v1-20.docs.kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/
Download: https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Working directory
mkdir -p /opt/kubernetes/dashboard
cd /opt/kubernetes/dashboard
# Download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Rename
mv recommended.yaml kubernetes-dashboard.yaml
In the official kubernetes-dashboard.yaml, the Service type is ClusterIP, which requires a proxy to reach the dashboard. Change it to spec.type: NodePort with spec.ports.nodePort: 30001 so that after deployment the dashboard is reachable directly at nodeIP:port.
kubectl apply -f kubernetes-dashboard.yaml
# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the Kubernetes Dashboard token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token from the output.
Note: typing plain NodeIP:Port yields "Client sent an HTTP request to an HTTPS server."; explicitly entering https://NodeIP:Port works.
Next, deploy CoreDNS.
Official docs: https://kubernetes.io/zh/docs/tasks/administer-cluster/coredns/
For manually deploying or replacing kube-dns, see the CoreDNS GitHub project:
deployment/kubernetes at master · coredns/deployment · GitHub
Copy the contents of coredns.yaml.sed from there (paste the following into the vim coredns.yaml step further below):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values: ["kube-dns"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.9.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
Fields to adapt in the manifest above:
ConfigMap data.Corefile: the zone in "kubernetes cluster.local in-addr.arpa ip6.arpa" (cluster domain) and the upstream in "forward . /etc/resolv.conf"
Service spec.clusterIP: 10.0.0.2 (must match clusterDNS in the kubelet configuration file)
CoreDNS resolves Service names inside the cluster:
# Working directory
mkdir -p /opt/kubernetes/coredns
cd /opt/kubernetes/coredns
vim coredns.yaml # paste the manifest above
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5ffbfd976d-j6shb 1/1 Running 0 32s
DNS resolution test:
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Resolution works.
At this point a single-Master cluster is complete, which is entirely sufficient for learning and experiments. If your servers have enough capacity, continue and expand it into a multi-Master cluster!
As a container cluster system, Kubernetes achieves application-level high availability through health checks plus restart policies for Pod self-healing, scheduling that spreads Pods across Nodes while maintaining the desired replica count, and automatic rescheduling of Pods onto other Nodes when a Node fails.
For the cluster itself, high availability involves two further layers: the etcd database and the Kubernetes Master components. Etcd is already highly available thanks to its 3-node cluster; this section covers Master high availability.
The Master acts as the control center, continuously talking to the kubelet and kube-proxy on the worker nodes to keep the whole cluster healthy. If the Master fails, no cluster management is possible via kubectl or the API.
The Master runs three services: kube-apiserver, kube-controller-manager, and kube-scheduler. The latter two already achieve high availability through leader election, so Master HA is mainly about kube-apiserver. Since it serves an HTTP API, making it highly available works like any web service: put a load balancer in front of it, and it can also scale horizontally.
Multi-Master architecture diagram:
Now add one more server as the k8s-master2 node, with IP 172.17.87.1.
k8s-master2 is configured exactly like the already deployed k8s-master1, so we simply copy all K8s files from k8s-master1, then change the server IP and hostname before starting.
1) Install Docker (already done on these nodes earlier, no need to execute; useful when later adding nodes other than k8s-master2 and k8s-master3)
scp /usr/bin/docker* root@k8s-master2:/usr/bin
scp /usr/bin/runc root@k8s-master2:/usr/bin
scp /usr/bin/containerd* root@k8s-master2:/usr/bin
scp /usr/lib/systemd/system/docker.service root@k8s-master2:/usr/lib/systemd/system
scp -r /etc/docker root@k8s-master2:/etc
# Start Docker on k8s-master2
systemctl daemon-reload
systemctl start docker
systemctl enable docker
2) Create the etcd certificate directory (already done here, no need to execute; needed on nodes added later other than k8s-master2 and k8s-master3)
# Run when adding e.g. k8s-master4 or k8s-master5 to create the etcd certificate directory
mkdir -p /opt/etcd/ssl
3) Copy files from k8s-master1 to k8s-master2
# On k8s-master1, copy all K8s files and certificates to k8s-master2
scp -r /opt/kubernetes root@k8s-master2:/opt/kubernetes
scp /usr/lib/systemd/system/kube* root@k8s-master2:/usr/lib/systemd/system
scp /usr/bin/kubectl root@k8s-master2:/usr/bin
scp -r ~/.kube root@k8s-master2:~/.kube
4) Delete certificate files (on k8s-master2)
# Delete the kubelet certificate and kubeconfig
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
5) Change the IP and hostname in the configuration files (on k8s-master2)
# Set the apiserver, kubelet, and kube-proxy configuration files to the local IP/hostname
vim /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=172.17.87.1 \
--advertise-address=172.17.87.1 \
...
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
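The same edits can be made non-interactively (a sketch; the sed patterns assume the exact values written in the earlier steps):
sed -i 's#--bind-address=172.17.87.0#--bind-address=172.17.87.1#; s#--advertise-address=172.17.87.0#--advertise-address=172.17.87.1#' /opt/kubernetes/cfg/kube-apiserver.conf
sed -i 's#--hostname-override=k8s-master1#--hostname-override=k8s-master2#' /opt/kubernetes/cfg/kubelet.conf
sed -i 's#hostnameOverride: k8s-master1#hostnameOverride: k8s-master2#' /opt/kubernetes/cfg/kube-proxy-config.yml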
6) Start the services and enable them at boot
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
7) Check the cluster status
# Point kubectl at the local apiserver
vim ~/.kube/config
...
server: https://172.17.87.1:6443
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
8) Approve the kubelet certificate request
# List certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU 85m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU
# List Nodes
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready 34h v1.20.4
k8s-master2 Ready 2m v1.20.4
k8s-node1 Ready 33h v1.20.4
k8s-node2 Ready 33h v1.20.4
Optionally label the nodes so their roles show up in kubectl get node:
kubectl label node k8s-master2 node-role.kubernetes.io/master=
kubectl label node k8s-node1 k8s-node2 node-role.kubernetes.io/worker=worker
[root@k8s-master1 ]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 8h v1.20.0
k8s-master2 Ready master 89m v1.20.0
k8s-node1 Ready worker 7h43m v1.20.0
k8s-node2 Ready worker 7h26m v1.20.0
kube-apiserver high-availability architecture diagram:
Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4 (TCP).
Keepalived is a mainstream high-availability tool that implements active/standby failover through a floating VIP. In the topology above, keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node dies, the VIP automatically binds to the Nginx backup node, keeping the VIP reachable and Nginx highly available.
Note 1: to save machines, Nginx and keepalived are co-located with the K8s Master nodes. They can also run outside the cluster, as long as Nginx can reach the apiservers.
Note 2: on public clouds, keepalived is generally not supported; use the provider's load-balancer product instead to balance the Master kube-apiservers directly, with the same overall architecture.
Run the following on both Master nodes (k8s-master1, k8s-master2):
1) Install the packages (master/backup)
yum install epel-release -y
yum install nginx keepalived -y
2) Nginx configuration file (identical on master and backup; run on k8s-master1 and k8s-master2)
The config below uses a stream block so that Nginx forwards the apiserver's TLS traffic at layer 4 instead of terminating it, matching the layer-4 design stated above and the curl https:// test later (the stream module is provided by the nginx-mod-stream package in EPEL if it is not already installed):
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Four-layer (TCP) load balancing for the two Master apiservers
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 172.17.87.0:6443;   # Master1 APISERVER IP:PORT
        server 172.17.87.1:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;  # 6443 is taken by the local apiserver, since the masters are reused here
        proxy_pass k8s-apiserver;
    }
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
EOF
3) keepalived configuration (Nginx master; run on k8s-master1)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface eth0 # change to the actual NIC name
virtual_router_id 51 # VRRP route ID; unique per instance
priority 100 # priority; set 90 on the backup server
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
10.0.0.88/24
}
track_script {
check_nginx
}
}
EOF
Parameter reference:
vrrp_script: script that checks the state of nginx (the failover decision is based on nginx's state)
virtual_ipaddress: virtual IP (VIP)
Create the nginx health-check script referenced in the config above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 = healthy, non-zero = unhealthy).
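The script can be exercised by hand before relying on it (with nginx running it should print 0; after systemctl stop nginx, 1):
bash /etc/keepalived/check_nginx.sh; echo $?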
4) keepalived configuration (Nginx backup; run on k8s-master2)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51 # VRRP route ID; unique per instance
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.88/24
}
track_script {
check_nginx
}
}
EOF
Create the nginx health-check script referenced in the config above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
5) Start the services and enable them at boot
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
6) Check keepalived's working state
Run ip addr: the virtual IP 10.0.0.88 should be bound on the eth0 interface, which means keepalived is working.
7) Nginx + keepalived failover test
Stop Nginx on the master node and verify that the VIP floats to the backup server:
On the Nginx master (k8s-master1), run systemctl stop nginx;
On the Nginx backup (k8s-master2), ip addr shows the VIP is now bound there.
8) Test access through the load balancer
From any node in the K8s cluster, query the K8s version through the VIP with curl:
curl -k https://10.0.0.88:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.4",
"gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
"gitTreeState": "clean",
"buildDate": "2021-02-18T16:03:00Z",
"goVersion": "go1.15.8",
"compiler": "gc",
"platform": "linux/amd64"
}
The K8s version information comes back correctly, so the load balancer works. Request flow: curl -> VIP (nginx) -> apiserver
The Nginx log also shows which apiserver each request was forwarded to: /var/log/nginx/k8s-access.log
9) Point all Worker Nodes at the LB VIP
Next, change the component configuration on every Worker Node (every node listed by kubectl get node) from the original 172.17.87.0 to 10.0.0.88 (the VIP).
Run on all Worker Nodes (k8s-node1, k8s-node2 ...):
sed -i 's#172.17.87.0:6443#10.0.0.88:16443#' /opt/kubernetes/cfg/*
systemctl restart kubelet kube-proxy
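To confirm the kubeconfigs now point at the VIP (a simple grep over the config directory):
grep 'server:' /opt/kubernetes/cfg/*.kubeconfig   # every entry should show https://10.0.0.88:16443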
Check node status:
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master02 Ready 26m v1.20.4
k8s-master1 Ready 3h21m v1.20.4
k8s-node01 Ready 172m v1.20.4
k8s-node02 Ready 167m v1.20.4
With that, a complete highly available Kubernetes cluster is deployed!