Table of Contents
1. Prerequisites
1.1 Two ways to deploy a K8s cluster in production
1.2 Environment preparation
1.3 OS initialization (identical on all three machines)
2. Deploying the Etcd Cluster
2.1 Preparing the cfssl certificate tool
2.2 Generating Etcd certificates
2.2.1 Self-signed certificate authority (CA)
2.2.2 Issuing the Etcd HTTPS certificate with the self-signed CA
2.3 Downloading binaries from GitHub
2.4 Deploying the Etcd cluster
3. Installing Docker
4. Deploying the Master Node
4.1 Generating the kube-apiserver certificate
4.1.1 Self-signed certificate authority (CA)
4.1.2 Issuing the kube-apiserver HTTPS certificate with the self-signed CA
4.2 Downloading binaries from GitHub
4.3 Unpacking the binary package
4.4 Deploying kube-apiserver
4.4.1 Enabling the TLS Bootstrapping mechanism
4.4.2 Managing apiserver with systemd
4.5 Deploying kube-controller-manager
4.6 Deploying kube-scheduler
5. Deploying the Worker Node
5.1 Creating the working directory and copying binaries
5.2 Deploying kubelet
5.3 Approving the kubelet certificate request to join the cluster
5.4 Deploying kube-proxy
5.5 Deploying the network component
5.6 Authorizing apiserver access to kubelet
5.7 Adding Worker Nodes
6. Deploying Dashboard and CoreDNS
6.1 Deploying Dashboard
6.2 Deploying CoreDNS
7. Scaling out to multiple Masters (high-availability architecture)
7.1 Deploying the Master2 Node
7.2 Deploying the Nginx + Keepalived HA load balancer
kubeadm:
kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
Binary packages:
Download release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.
Summary: kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. For more control, deploying Kubernetes from binary packages is recommended: manual deployment is more work, but along the way you learn how the pieces fit together, which also helps with later maintenance.
Server requirements:
- Recommended minimum: 2 CPU cores, 2 GB RAM, 30 GB disk
- Servers should ideally have internet access, since images are pulled from the network; if a server cannot reach the internet, download the required images in advance and import them onto the nodes
Software         | Version
OS               | CentOS 7.x_x64 (minimal)
Container engine | Docker CE 19
Kubernetes       | Kubernetes v1.20
Role             | IP               | Components
k8s-master1      | 10.0.0.5         | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd, nginx, keepalived
k8s-master2      | 10.0.0.4         | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd, nginx, keepalived
k8s-node1        | 10.0.0.6         | kubelet, kube-proxy, docker, etcd
k8s-node2        | 10.0.0.7         | kubelet, kube-proxy, docker, etcd
Load balancer IP | 10.0.0.200 (VIP) |
Heads-up: since some readers' machines are low-spec and running four VMs at once may be too much, this HA K8s cluster is built in two stages: first deploy a single-Master architecture (three machines), then scale out to a multi-Master architecture (4 or 6 machines), getting familiar with the Master scale-out process along the way.
Single-Master architecture
Role       | IP       | Components
k8s-master | 10.0.0.5 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1  | 10.0.0.6 | kubelet, kube-proxy, docker, etcd
k8s-node2  | 10.0.0.7 | kubelet, kube-proxy, docker, etcd
1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent (takes effect after reboot)
setenforce 0 # temporary
3. Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
4. Set the hostname according to the plan
hostnamectl set-hostname <hostname> # e.g. k8s-master1
# For the new hostname to show up,
# reconnect your SSH session or run bash
5. Add hosts entries on the masters
cat >> /etc/hosts << EOF
10.0.0.5 k8s-master1
10.0.0.6 k8s-node1
10.0.0.7 k8s-node2
10.0.0.4 k8s-master2
EOF
6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
7. Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
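A quick sanity check that the initialization took effect (a sketch; the br_netfilter module must be loaded for the last line to report a value):
free -h | grep -i swap # the Swap line should show 0B
getenforce # Permissive now, Disabled after a reboot
sysctl net.bridge.bridge-nf-call-iptables # should print "= 1"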
Etcd is a distributed key-value store that Kubernetes uses for all of its data, so an Etcd database has to be prepared first. To avoid a single point of failure in Etcd, deploy it as a cluster: the 3-node cluster used here tolerates 1 machine failure; a 5-node cluster would tolerate 2.
Node name | IP
etcd-1    | 10.0.0.5
etcd-2    | 10.0.0.6
etcd-3    | 10.0.0.7
Note: to save machines, etcd shares the K8s nodes here. It can also be deployed outside the K8s cluster, as long as the apiserver can reach it.
- Understanding Kubernetes HTTPS certificates.
- All K8s components communicate over HTTPS encryption. Two separate root CAs are generally used: one for the K8s components (apiserver) and one for Etcd.
- Grouped by role, the certificates split into control-plane and worker-node certificates.
- Control plane: the client certificates that controller-manager and scheduler use to connect to the apiserver.
- Worker nodes: the client certificates that kubelet and kube-proxy use to connect to the apiserver. The Bootstrap TLS mechanism is usually enabled, so on first start kubelet requests its certificate from the apiserver, and the controller-manager component issues it automatically.
cfssl is an open-source certificate tool that generates certificates from JSON files, which is more convenient than openssl.
# Run on any server; the Master node is used here
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master1 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master1 ~]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
1. Create the working directories:
[root@k8s-master1 ~]# mkdir -p ~/TLS/{etcd,k8s}
[root@k8s-master1 ~]# cd ~/TLS/etcd/
2. Self-sign the CA:
[root@k8s-master1 ~]# cd ~/TLS/etcd/
[root@k8s-master1 etcd]# cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
[root@k8s-master1 etcd]# cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
3. Generate the certificate
[root@k8s-master1 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.
1. Create the certificate signing request file:
[root@k8s-master1 etcd]# cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"10.0.0.5",
"10.0.0.6",
"10.0.0.7"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the IPs in the hosts field above must cover the cluster-internal IP of every etcd node — not one can be missing! To make later scale-out easier, you can list a few spare IPs.
2. Generate the certificate:
[root@k8s-master1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.
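To double-check that every node IP made it into the certificate, the SANs can be inspected (either of these should work):
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
cfssl-certinfo -cert server.pem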
Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
The following is done on node 10.0.0.5; to keep things simple, all files generated on node 1 will be copied to nodes 10.0.0.6 and 10.0.0.7 afterwards.
1. Create the working directory and unpack the binary package
[root@k8s-master1 ~]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@k8s-master1 ~]# tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master1 ~]# mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
[root@k8s-master1 ~]# cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.5:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.5:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.5:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.0.5:2380,etcd-2=https://10.0.0.6:2380,etcd-3=https://10.0.0.7:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# Configuration reference
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a fresh cluster, "existing" to join one that already exists
3. Manage etcd with systemd
[root@k8s-master1 ~]# cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificates generated earlier
[root@k8s-master1 ~]# cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
5. Start and enable at boot
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start etcd # hangs here; the logs show it is waiting for the other etcd members to join
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
[root@k8s-master1 ~]# systemctl enable etcd
6. Copy all the files generated on node 1 to nodes 10.0.0.6 and 10.0.0.7
[root@k8s-master1 ~]# scp -r /opt/etcd/ root@k8s-node1:/opt/
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@k8s-node1:/usr/lib/systemd/system/
[root@k8s-master1 ~]# scp -r /opt/etcd/ root@k8s-node2:/opt/
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@k8s-node2:/usr/lib/systemd/system/
7. Then on nodes 10.0.0.6 and 10.0.0.7, change the node name and server IPs in etcd.conf:
# Run on k8s-node1
[root@k8s-node1 ~]# vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2" # 修改此处,节点2改为etcd-2,节点3改为etcd-3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.6:2380" # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.6:2379" # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.6:2380" # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.6:2379" # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.0.5:2380,etcd-2=https://10.0.0.6:2380,etcd-3=https://10.0.0.7:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Finally, start etcd and enable it at boot; k8s-node2 is the same.
8. Check the cluster status
[root@k8s-master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.0.5:2379,https://10.0.0.6:2379,https://10.0.0.7:2379" endpoint health --write-out=table
+-----------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+-----------------------+--------+-------------+-------+
| https://10.0.0.7:2379 | true | 35.569345ms | |
| https://10.0.0.5:2379 | true | 13.400931ms | |
| https://10.0.0.6:2379 | true | 46.552959ms | |
+-----------------------+--------+-------------+-------+
# Output like the above means the cluster deployed successfully.
# If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd
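Listing the members is another useful check; all three nodes should show up as started (same certificate flags as above):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.0.5:2379" member list --write-out=table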
Docker is used as the container engine here; it can be swapped for something else, e.g. containerd.
Download URL: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
Run the following on all nodes. A binary installation is used here; installing with yum works just as well.
1. Unpack the binary package
[root@k8s-master1 ~]# tar zxvf docker-19.03.9.tgz
[root@k8s-master1 ~]# mv docker/* /usr/bin
2. Manage docker with systemd
[root@k8s-master1 ~]# cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3. Create the configuration file
[root@k8s-master1 ~]# mkdir /etc/docker
[root@k8s-master1 ~]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
# registry-mirrors: Aliyun registry mirror accelerator
4. Start and enable at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
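To confirm Docker is up and the mirror is active (a quick optional check):
docker version
docker info | grep -A1 "Registry Mirrors"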
[root@k8s-master1 ~]# cd ~/TLS/k8s
[root@k8s-master1 k8s]# cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
[root@k8s-master1 k8s]# cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
2. Generate the certificate:
[root@k8s-master1 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.
1. Create the certificate signing request file:
[root@k8s-master1 k8s]# cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"20.0.0.1",
"127.0.0.1",
"10.0.0.5",
"10.0.0.4",
"10.0.0.200",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Note: the IPs in the hosts field above must cover every Master/LB/VIP IP — not one can be missing! You can list a few spare IPs for future scale-out.
2. Generate the certificate:
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.
Download: open the link in a browser and fetch the kubernetes-server-linux-amd64.tar.gz package under Server Binaries
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
Direct download: wget https://dl.k8s.io/v1.20.9/kubernetes-server-linux-amd64.tar.gz
Note: the page lists many packages; the server package alone is enough — it contains the binaries for both the Master and Worker Nodes.
[root@k8s-master1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
[root@k8s-master1 ~]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# cd kubernetes/server/bin
[root@k8s-master1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
[root@k8s-master1 bin]# cp kubectl /usr/bin/
1. Create the configuration file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.0.0.5:2379,https://10.0.0.6:2379,https://10.0.0.7:2379 \\
--bind-address=10.0.0.5 \\
--secure-port=6443 \\
--advertise-address=10.0.0.5 \\
--allow-privileged=true \\
--service-cluster-ip-range=20.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
# Note on the doubled backslashes above: the first \ is the escape character and the second is the line-continuation character; escaping is needed so the heredoc (EOF) writes the continuation into the file instead of interpreting it.
# Configuration reference
--logtostderr: log to stderr (false = write log files)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate for the apiserver to access kubelet
--tls-xxx-file: apiserver HTTPS certificates
Required in 1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the Etcd cluster
--audit-log-xxx: audit logging
Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
2. Copy the certificates generated earlier
# Copy them to the paths referenced in the configuration file:
[root@k8s-master1 ~]# cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on every Node must present valid CA-signed certificates to communicate with kube-apiserver. With many Nodes, issuing those client certificates by hand is a lot of work and also complicates cluster scale-out. To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: kubelet connects as a low-privilege user and requests a certificate from the apiserver, and the kubelet certificate is signed dynamically. This approach is strongly recommended on Nodes; it is currently used mainly for kubelet, while kube-proxy still uses a certificate we issue centrally.
TLS bootstrapping workflow (diagram not reproduced here): kubelet starts with the bootstrap token, submits a CSR to the apiserver, the request is approved, the certificate is signed automatically, and kubelet switches to its own kubeconfig.
1. Create the token file referenced in the configuration above:
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
Format: token,username,UID,user group
The token can also be generated yourself and substituted in:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
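For example (a sketch; if you regenerate the token, the same value must also be used later in bootstrap.kubeconfig):
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv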
1. Manage apiserver with systemd
[root@k8s-master1 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF # the \ escape keeps $KUBE_APISERVER_OPTS intact inside the heredoc
2. Start and enable at boot
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start kube-apiserver
[root@k8s-master1 ~]# systemctl enable kube-apiserver
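Two quick ways to confirm the apiserver came up (the log file name assumes the --log-dir and --logtostderr=false settings above):
ss -antlp | grep 6443 # listening on the secure port
tail -n 20 /opt/kubernetes/logs/kube-apiserver.INFO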
1. Create the configuration file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=20.244.0.0/16 \\
--service-cluster-ip-range=20.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
# Configuration reference
--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file/--cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must match the apiserver's
2. Generate the kubeconfig file
# Generate the kube-controller-manager certificate:
# Switch to the working directory
[root@k8s-master1 ~]# cd ~/TLS/k8s
# Create the certificate request file
[root@k8s-master1 k8s]# cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
3. Generate the certificate
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
4. Generate the kubeconfig file (the following are shell commands, run directly in the terminal):
[root@k8s-master1 k8s]# KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://10.0.0.5:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
5. Manage controller-manager with systemd
[root@k8s-master1 k8s]# cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6. Start and enable at boot
[root@k8s-master1 k8s]# systemctl daemon-reload
[root@k8s-master1 k8s]# systemctl start kube-controller-manager
[root@k8s-master1 k8s]# systemctl enable kube-controller-manager
1. Create the configuration file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
# Configuration reference
--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)
2. Generate the kubeconfig file
# Switch to the working directory
[root@k8s-master1 ~]# cd ~/TLS/k8s
# Create the certificate request file
[root@k8s-master1 k8s]# cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
3. Generate the certificate
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
4. Generate the kubeconfig file (shell commands, run directly in the terminal):
[root@k8s-master1 k8s]# KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://10.0.0.5:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
5. Manage scheduler with systemd
[root@k8s-master1 k8s]# cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6. Start and enable at boot
[root@k8s-master1 k8s]# systemctl daemon-reload
[root@k8s-master1 k8s]# systemctl start kube-scheduler
[root@k8s-master1 k8s]# systemctl enable kube-scheduler
7. Check the cluster status
Generate the certificate for kubectl to connect to the cluster (kubectl needs a certificate and a kubeconfig to reach the apiserver):
# Switch to the working directory
[root@k8s-master1 k8s]# cd ~/TLS/k8s
[root@k8s-master1 k8s]# cat > admin-csr.json << EOF
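The heredoc is cut off in the source at this point. The usual admin CSR and the remaining steps, following exactly the same pattern as the controller-manager and scheduler kubeconfigs above, would look like this (a reconstruction, not verbatim from the original):
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master1 k8s]# mkdir -p /root/.kube
[root@k8s-master1 k8s]# KUBE_CONFIG="/root/.kube/config"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://10.0.0.5:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl get cs # all components should report Healthy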
The steps below are still performed on the Master Node, which doubles as a Worker Node.
1. Create working directories on all worker nodes:
[root@k8s-master1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
2. Copy from the master node:
[root@k8s-master1 ~]# cd kubernetes/server/bin
[root@k8s-master1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin
1. Create the configuration file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
# Configuration reference
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory for the generated kubelet certificates
--pod-infra-container-image: image of the container that manages the Pod network
2. Configuration parameters file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 20.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap kubeconfig used when kubelet first joins the cluster
(the following are shell commands, run directly in the terminal):
[root@k8s-master1 ~]# KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
[root@k8s-master1 ~]# KUBE_APISERVER="https://10.0.0.5:6443"
[root@k8s-master1 ~]# TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
# Generate the kubelet bootstrap kubeconfig
[root@k8s-master1 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 ~]# kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 ~]# kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 ~]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start kubelet
[root@k8s-master1 ~]# systemctl enable kubelet
1. View the kubelet certificate request
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-1n2Vbxh8b378muwatZy6yRrD0PgmmgtBmD41qWUEmS8 2m1s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
2. Approve the request
[root@k8s-master1 ~]# kubectl certificate approve node-csr-1n2Vbxh8b378muwatZy6yRrD0PgmmgtBmD41qWUEmS8
3. View the nodes
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   28s   v1.20.9
# Note: the node is NotReady because the network plugin has not been deployed yet
1. Create the configuration file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Configuration parameters file
[root@k8s-master1 ~]# cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 20.0.0.0/24
EOF
3. Generate the kube-proxy.kubeconfig file
# Switch to the working directory
[root@k8s-master1 ~]# cd ~/TLS/k8s
# Create the certificate request file
[root@k8s-master1 k8s]# cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the certificate
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Generate the kubeconfig file (shell commands, run directly in the terminal):
[root@k8s-master1 k8s]# KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://10.0.0.5:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. Manage kube-proxy with systemd
[root@k8s-master1 k8s]# cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
[root@k8s-master1 k8s]# systemctl daemon-reload
[root@k8s-master1 k8s]# systemctl start kube-proxy
[root@k8s-master1 k8s]# systemctl enable kube-proxy
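A quick sanity check: kube-proxy's metrics endpoint (bound to 0.0.0.0:10249 in the parameters file above) should respond once the service is up (an optional sketch):
curl -s http://127.0.0.1:10249/metrics | head -n 3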
Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes.
Deploy Calico: this YAML uses a DaemonSet controller, so every Node starts one pod
[root@k8s-master1 k8s]# mkdir /root/yaml/Calico -p
cd /root/yaml/Calico
[root@k8s-master1 k8s]# wget --no-check-certificate https://docs.projectcalico.org/v3.9/manifests/calico.yaml # YAML download URL
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-56b44cd6d5-j2kch 1/1 Running 0 35m
calico-node-5rr2b 1/1 Running 0 35m
Once the Calico pods are all Running, the node becomes Ready:
[root@k8s-master1 k8s]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   52m   v1.20.9
Why this is needed: the apiserver must be authorized to call back into the kubelet API, e.g. for kubectl logs.
[root@k8s-master1 ~]# cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
[root@k8s-master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
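A quick check that the binding works — tail the log of any running pod (the pod name here comes from the earlier Calico output; yours will differ):
kubectl logs -n kube-system calico-node-5rr2b --tail=5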
1. Copy the deployed Node files to the new nodes
# On the Master node, copy the Worker Node files to the new nodes 10.0.0.6/7 (Calico does not need to be set up on the other nodes separately; the DaemonSet covers them)
[root@k8s-master1 ~]# scp -r /opt/kubernetes/ [email protected]:/opt
[root@k8s-master1 ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp /usr/bin/kubectl [email protected]:/usr/bin
2. Delete the kubelet certificate and kubeconfig files
[root@k8s-node1 ~]# rm -rf /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-node1 ~]# rm -rf /opt/kubernetes/ssl/kubelet*
# Note: these files are auto-generated when the certificate request is approved and are unique per Node, so they must be deleted
3. Change the hostname
[root@k8s-node1 ~]# vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1 \ # change to this node's hostname
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1 # change to this node's hostname
4. Start and enable at boot
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start kube-proxy
[root@k8s-node1 ~]# systemctl enable kube-proxy
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl enable kubelet
5. Approve the new Node's kubelet certificate request on the Master
# View certificate requests
[root@k8s-master1 ~]# kubectl get csr
node-csr-yzGTPunwmJ7xmmK4rhFO39svTXgo04UEQAcHnD280kw 3m42s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
[root@k8s-master1 ~]# kubectl certificate approve node-csr-yzGTPunwmJ7xmmK4rhFO39svTXgo04UEQAcHnD280kw (use the matching NAME; after a few minutes the node becomes Ready)
6. Check the Node status
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   4h58m   v1.20.9
k8s-node1     Ready    <none>   5m40s   v1.20.9
Node2 (10.0.0.7) is done the same way. Remember to change the hostname!
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   4d13h   v1.20.9
k8s-node1     Ready    <none>   29m     v1.20.9
k8s-node2     Ready    <none>   8m12s   v1.20.9
The single-Master deployment is complete.
[root@k8s-master1 ~]# mkdir /root/yaml/Dashboard
[root@k8s-master1 ~]# cd /root/yaml/Dashboard/
[root@k8s-master1 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
# Check the deployment
[root@k8s-master1 Dashboard]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-czqgg   1/1     Running   0          5m10s
pod/kubernetes-dashboard-5dbf55bd9d-mwjx2        1/1     Running   0          5m11s
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   20.0.0.125   <none>        8000/TCP   5m10s
service/kubernetes-dashboard        ClusterIP   20.0.0.7     <none>        443/TCP    5m11s
# Then change the Service type from ClusterIP to NodePort
[root@k8s-master1 Dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
In the editor, set type: NodePort and add nodePort: 30001 under the 443 port entry (the original screenshot is not reproduced here).
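If you prefer a non-interactive change, a strategic-merge patch achieves the same thing (the nodePort matches the access URL below):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30001}]}}'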
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
[root@k8s-master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') (the token in the output is the login credential)
Log in to the Dashboard with the token from the output.
CoreDNS resolves Service names inside the cluster.
[root@k8s-master1 ~]# mkdir /root/yaml/CoreDNS
[root@k8s-master1 ~]# cd /root/yaml/CoreDNS/
[root@k8s-master1 CoreDNS]# ls
coredns.yaml
[root@k8s-master1 CoreDNS]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - name: coredns
          image: lizhenliang/coredns:1.2.2
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 20.0.0.2
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
[root@k8s-master1 CoreDNS]# kubectl apply -f coredns.yaml
[root@k8s-master1 CoreDNS]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58d8cd457b-whlzb 1/1 Running 0 48s
DNS resolution test:
[root@k8s-master1 CoreDNS]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 20.0.0.2
Address 1: 20.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 20.0.0.1 kubernetes.default.svc.cluster.local
Resolution works.
At this point a single-Master cluster is complete — plenty for learning and experiments. If your servers have spare capacity, continue and scale out to a multi-Master cluster!
- As a container cluster system, Kubernetes provides application-level high availability on its own: health checks plus restart policies give Pods self-healing, the scheduler distributes Pods across Nodes and maintains the desired replica count, and Pods are automatically re-created on other Nodes when a Node fails.
- For the cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available as a 3-node cluster; this section explains and implements high availability for the Master nodes.
- The Master node acts as the control center, keeping the whole cluster healthy by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible via kubectl or the API.
- The Master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. The latter two already achieve high availability through their election mechanism, so Master HA is mainly about kube-apiserver. Since that component serves an HTTP API, making it highly available is like any web server: add a load balancer in front of it, and it can also be scaled horizontally.
Multi-Master architecture diagram (not reproduced here)
A new server is now needed as the Master2 Node, with IP 10.0.0.4.
To save resources you can also reuse the already-deployed Worker Node1 as the Master2 Node role (i.e. deploy the Master components onto it)
Master2's setup is identical to the already-deployed Master1, so we only need to copy all the K8s files from Master1, then change the server IP and hostname and start the services. (Do the OS initialization first; add master2 to /etc/hosts and keep it in sync with master1's hosts file.)
1. Install Docker
[root@k8s-master1 ~]# scp /usr/bin/docker* [email protected]:/usr/bin
[root@k8s-master1 ~]# scp /usr/bin/runc [email protected]:/usr/bin
[root@k8s-master1 ~]# scp /usr/bin/containerd* [email protected]:/usr/bin
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/docker.service [email protected]:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp -r /etc/docker [email protected]:/etc
Start Docker on Master2
[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start docker
[root@k8s-master2 ~]# systemctl enable docker
2. Create the etcd certificate directory
Create the etcd certificate directory on Master2
[root@k8s-master2 ~]# mkdir -p /opt/etcd/ssl
3. Copy files (run on Master1)
Copy all K8s files and etcd certificates from Master1 to Master2:
[root@k8s-master1 ~]# scp -r /opt/kubernetes [email protected]:/opt
[root@k8s-master1 ~]# scp -r /opt/etcd/ssl [email protected]:/opt/etcd
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/kube* [email protected]:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp /usr/bin/kubectl [email protected]:/usr/bin
[root@k8s-master1 ~]# scp -r ~/.kube [email protected]:~ # so kubectl can be used on this node
4. Delete certificate files
[root@k8s-master2 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-master2 ~]# rm -f /opt/kubernetes/ssl/kubelet*
Note: these files are auto-generated when the certificate request is approved and are unique per Node, so they must be deleted
5. Change the IPs and hostname in the configuration files
Change the apiserver, kubelet and kube-proxy configuration files to the local IP:
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=10.0.0.4 \
--advertise-address=10.0.0.4 \
...
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
server: https://10.0.0.4:6443
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
server: https://10.0.0.4:6443
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
[root@k8s-master2 ~]# vi ~/.kube/config
server: https://10.0.0.4:6443
6. Start and enable at boot
[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
[root@k8s-master2 ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
7. Check the cluster status
[root@k8s-master2 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
8. Approve the kubelet certificate request
# View certificate requests
[root@k8s-master2 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-n65w4pDDpZblvXo9ndOXfCNlMntOoyQ3-APwxtTg0Qw 39s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
[root@k8s-master2 ~]# kubectl certificate approve node-csr-n65w4pDDpZblvXo9ndOXfCNlMntOoyQ3-APwxtTg0Qw
# View the Nodes
[root@k8s-master2 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   28h     v1.20.9
k8s-master2   Ready    <none>   4m33s   v1.20.9
k8s-node1     Ready    <none>   23h     v1.20.9
k8s-node2     Ready    <none>   12h     v1.20.9
kube-apiserver HA architecture diagram (not reproduced here)
- Nginx is a mainstream web and reverse-proxy server; here its layer-4 (stream) proxying load-balances the apiservers.
- Keepalived is a mainstream high-availability tool that implements active/standby failover between two servers via a bound VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on Nginx's state: if the Nginx master node dies, the VIP automatically binds to the Nginx backup node, so the VIP stays reachable and Nginx itself is highly available.
- Note 1: to save machines, the load balancer shares the K8s Master nodes here. It can also be deployed outside the K8s cluster, as long as nginx can reach the apiservers.
Note 2: public clouds generally do not support keepalived; use the provider's load balancer product instead to balance the Master kube-apiservers directly — the architecture is otherwise the same as above.
Operate on both Master nodes.
1. Install the packages (master/backup)
[root@k8s-master1 ~]# yum install epel-release -y
[root@k8s-master1 ~]# yum install nginx keepalived -y
2. Nginx configuration file (identical on master and backup)
[root@k8s-master1 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer-4 load balancing for the two Masters' apiserver components
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.0.0.5:6443; # Master1 APISERVER IP:PORT
server 10.0.0.4:6443; # Master2 APISERVER IP:PORT
}
server {
listen 16443; # nginx shares the master node, so this listen port cannot be 6443 or it would conflict
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
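Before starting, the configuration can be validated (this also catches the missing stream module mentioned further below):
nginx -t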
3. keepalived configuration file (Nginx Master)
[root@k8s-master1 ~]# cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens33 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; must be identical on the master and backup of the same VRRP instance (51 here matches the backup below)
priority 100 # priority; set to 90 on the backup server
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200/8
}
track_script {
check_nginx
}
}
EOF
vrrp_script: the script that checks nginx's state (used to decide failover)
virtual_ipaddress: the virtual IP (VIP)
# Create the nginx health-check script referenced in the configuration above. Note: the heredoc delimiter below is quoted ('EOF') so that $(...) is written into the script literally instead of being expanded while the file is created, and systemctl is used because CentOS 7 has no /etc/init.d/keepalived.
[root@k8s-master1 ~]# cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ ${counter} = 0 ]; then
    /usr/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ ${counter} = 0 ]; then
        systemctl stop keepalived
    fi
fi
EOF
[root@k8s-master1 ~]# chmod +x /etc/keepalived/check_nginx.sh
4. keepalived configuration file (Nginx Backup)
[root@k8s-master2 ~]# cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens33 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; must match the master of the same VRRP instance
priority 90 # priority; the master server is set to 100
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200/8
}
track_script {
check_nginx
}
}
EOF
Create the nginx health-check script referenced in the configuration above (same heredoc quoting note as on the master):
[root@k8s-master2 ~]# cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ ${counter} = 0 ]; then
    /usr/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ ${counter} = 0 ]; then
        systemctl stop keepalived
    fi
fi
EOF
[root@k8s-master2 ~]# chmod +x /etc/keepalived/check_nginx.sh
5. Start and enable at boot (on both master and backup)
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start nginx keepalived
[root@k8s-master1 ~]# systemctl enable nginx keepalived
If nginx reports an unknown stream directive — unknown directive "stream" in /etc/nginx/nginx.conf:13 —
install the stream module: yum install nginx-mod-stream -y, then check the loaded modules with nginx -V
6. Check keepalived's working state
[root@k8s-master1 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:45:83:26 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.5/8 brd 10.255.255.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 10.0.0.200/8 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::2603:ad48:33fb:70f4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
You can see the virtual IP 10.0.0.200 bound on the ens33 NIC, so it is working correctly.
7. Stop Nginx on the master node and test whether the VIP floats to the backup server.
Run pkill nginx on the Nginx Master;
then on the Nginx Backup, the ip addr command shows the VIP successfully bound.
[root@k8s-master2 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:a2:e6:73 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.4/8 brd 10.255.255.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 10.0.0.200/8 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::b09a:5eb9:acc5:23ff/64 scope link noprefixroute
valid_lft forever preferred_lft forever
8. Test access through the load balancer
From any node in the K8s cluster, curl the K8s version endpoint via the VIP:
[root@k8s-master2 ~]# curl -k https://10.0.0.200:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.9",
"gitCommit": "7a576bc3935a6b555e33346fd73ad77c925e9e4a",
"gitTreeState": "clean",
"buildDate": "2021-07-15T20:56:38Z",
"goVersion": "go1.15.14",
"compiler": "gc",
"platform": "linux/amd64
The K8s version info comes back correctly, so the load balancer is set up properly. Request flow: curl -> VIP (nginx) -> apiserver
The Nginx log also shows the requests being forwarded to the apiserver IPs:
[root@k8s-master1 ~]# tail /var/log/nginx/k8s-access.log -f
10.0.0.5 10.0.0.5:6443 - [03/Sep/2022:22:42:58 +0800] 200 416
10.0.0.4 10.0.0.5:6443 - [03/Sep/2022:22:45:52 +0800] 200 422
We are not done yet — the most critical step follows.
7.3 Point all Worker Nodes at the LB VIP
Think about it: although Master2 and a load balancer were added, we scaled out from a single-Master architecture, so every Worker Node component still connects to Master1. If they are not switched to the VIP behind the load balancer, the Master remains a single point of failure.
So the next step is to change the component configuration files on every Worker Node (every node listed by kubectl get node) from 10.0.0.5 to 10.0.0.200 (the VIP).
[root@k8s-master1 ~]# sed -i 's#10.0.0.5:6443#10.0.0.200:16443#' /opt/kubernetes/cfg/*
[root@k8s-master1 ~]# systemctl restart kubelet kube-proxy
# Careful: run as-is on Master2, this bulk replace would also point the master components at 10.0.0.200:16443
The IP in the two configuration files below should remain Master2's local apiserver address, 10.0.0.4:6443:
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
[root@k8s-master2 ~]# vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
For the HA setup to actually be used, the kubectl configuration file /root/.kube/config on every node must also have its IP changed to the VIP, 10.0.0.200:16443.
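A quick way to confirm nothing still points at the old address (a sketch; run on each node):
grep -r "10.0.0.5:6443" /opt/kubernetes/cfg/ /root/.kube/config # should print nothing
grep -r "10.0.0.200:16443" /opt/kubernetes/cfg/ | head # worker configs now use the VIP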
8. Check node status:
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   29h   v1.20.9
k8s-master2   Ready    <none>   77m   v1.20.9
k8s-node1     Ready    <none>   24h   v1.20.9
k8s-node2     Ready    <none>   13h   v1.20.9
9. Verify access:
[root@k8s-master1 ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8s-master1 ~]# kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
service/web exposed
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   20.0.0.1     <none>        443/TCP        23h
web          NodePort    20.0.0.154   <none>        80:30189/TCP   30s
Browsing to http://10.0.0.200:30189/ or http://10.0.0.5(.6,.7,.4):30189/ all reach the nginx welcome page.
With that, a complete highly available Kubernetes cluster is deployed!