Four virtual machines, serving as master0~2 and node0
OS: CentOS 7 (1804 build)
Specs per VM: 4 CPU cores, 6 GB RAM, two 60 GB disks
192.168.20.196 —— master0
192.168.20.197 —— master1
192.168.20.198 —— master2
192.168.20.199 —— node0
192.168.20.200 —— VIP
[Note: this is a demonstration deployment, so static IPs were not configured; for real testing it is recommended to give all four VMs static IPs]
Configure the yum, Docker, and Kubernetes repositories on all four VMs; for personal use the NetEase mirrors are recommended, or the Aliyun mirrors if your network connection to them is good
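A sketch of the repository setup, assuming the Aliyun mirrors (the NetEase ones are configured analogously; the URLs below are the mirror addresses current at the time of writing):
# base repo from a domestic mirror
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# docker-ce repo
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF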
Prepare the host environment on all four VMs: firewall, SELinux, NTP, crontab, swap, transparent bridging (bridge-nf), hostname, the /etc/hosts file, and passwordless SSH between hosts
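A minimal sketch of that host preparation, to be run on every VM (the NTP server and NIC names are placeholders; adjust to your environment):
# firewall and SELinux off
systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# periodic time sync via crontab
yum install -y ntpdate && echo '*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com' | crontab -
# kubelet requires swap to be off
swapoff -a && sed -i '/ swap /s/^/#/' /etc/fstab
# let bridged (container) traffic traverse iptables
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# hostname (use the matching name on each VM) and name resolution
hostnamectl set-hostname master0
cat >> /etc/hosts <<'EOF'
192.168.20.196 master0
192.168.20.197 master1
192.168.20.198 master2
192.168.20.199 node0
EOF
# passwordless SSH between all four machines
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in master0 master1 master2 node0; do ssh-copy-id root@$h; done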
Install and configure Docker on all four VMs
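A sketch of the Docker setup, using the docker-ce repo added earlier; the systemd cgroup driver setting is a common adjustment so that Docker matches what kubelet expects, not a required one:
yum install -y docker-ce
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker && systemctl start docker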
Install kubeadm, kubelet, and kubectl on all four VMs
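The packages come from the kubernetes repo configured above:
yum install -y kubeadm kubelet kubectl   # optionally pin versions, e.g. kubeadm-1.18.2
systemctl enable kubelet                 # kubeadm init/join will start it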
Install keepalived+LVS on master0~2 and adjust /etc/keepalived/keepalived.conf as needed
Start keepalived on master0~2 one after another, in non-preempting mode
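A sketch of /etc/keepalived/keepalived.conf for master0; master1 and master2 differ only in router_id and priority. Declaring state BACKUP plus nopreempt on all three nodes is what gives the non-preempting behavior (a recovered node does not snatch the VIP back):
yum install -y keepalived ipvsadm
cat > /etc/keepalived/keepalived.conf <<'EOF'
global_defs {
    router_id master0
}
vrrp_instance VI_1 {
    state BACKUP              # BACKUP on all three masters
    nopreempt                 # do not reclaim the VIP after recovery
    interface eth0            # adjust to the actual NIC name
    virtual_router_id 51
    priority 100              # e.g. 90 on master1, 80 on master2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha
    }
    virtual_ipaddress {
        192.168.20.200
    }
}
virtual_server 192.168.20.200 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.20.196 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    # add matching real_server blocks for 192.168.20.197 and 192.168.20.198
}
EOF
systemctl enable keepalived && systemctl start keepalived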
Run kubeadm init on master0, using "--image-repository" to point at your preferred image registry
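A sketch of the init command; for an HA setup the control-plane endpoint must be the VIP, and the image repository shown (Aliyun's mirror of the official images) is only an example:
kubeadm init \
  --control-plane-endpoint 192.168.20.200:6443 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr 10.244.0.0/16   # the default CIDR Canal/Flannel expects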
Make a note of the "kubeadm join 192.168.20.200:6443 --token …" line in the output; when adding nodes later, the token can also be looked up with kubeadm token list
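If that output is lost, a fresh join command can be generated at any time on master0:
kubeadm token create --print-join-command
# or recompute just the --discovery-token-ca-cert-hash value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'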
For the network layer we use Canal, which supports Network Policy; deploy it with kubectl apply -f https://docs.projectcalico.org/manifests/canal.yaml [in a large network environment with BGP support, using the Calico plugin directly is recommended], then check the rollout with kubectl get --namespace=kube-system pod -o wide | grep canal
[Note: the Canal project itself is no longer maintained; Canal is really just the combination of Flannel and Calico, and its manifests are still documented at https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel]
After waiting a short while, run kubectl get nodes and kubectl get pods -n kube-system on master0
Copy the certificates from master0 to master1 and master2. First create the directories that will hold them, on both master1 and master2: cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
Then run on master0:
scp /etc/kubernetes/pki/ca.crt master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master1:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master1:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
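Equivalently, the sixteen scp commands above can be collapsed into one loop:
for h in master1 master2; do
    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} $h:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} $h:/etc/kubernetes/pki/etcd/
done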
Join master1 and master2 to the cluster as additional control-plane nodes; run on both:
kubeadm join 192.168.20.200:6443 --token 7de7h55rnwluq.x6nypjrhl \
  --discovery-token-ca-cert-hash sha256:fa75619ab50a9dbda9aa6c89828c2c0bb627312634650299fe1647ab510a7e6c --control-plane
Then run on master1 and master2:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run kubectl get nodes to check each node's role and status
Join node0 to the cluster; run on node0: kubeadm join 192.168.20.200:6443 --token 7de7h55rnwluq.x6nypjrhl --discovery-token-ca-cert-hash sha256:fa75619ab50a9dbda9aa6c89828c2c0bb627312634650299fe1647ab510a7e6c
On master0, run kubectl get nodes to check the status of all cluster nodes
Install Helm, then use it to deploy the reverse-proxy/load-balancing tool Traefik (a combined Helm sketch follows the next two steps)
Use Helm to deploy the GUI management tool kubernetes-dashboard
Use Helm to deploy the cluster monitoring stack Prometheus Operator
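A combined sketch of the three Helm deployments, assuming Helm 3; the repo URLs and chart names below were current at the time of writing and may have moved since (the Prometheus Operator chart, in particular, is now published as kube-prometheus-stack):
# traefik
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -n kube-system
# kubernetes-dashboard
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n kube-system
# Prometheus Operator / kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace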
[Note: in a large test environment or in production, configuring container health checks with liveness and readiness probes is recommended]
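For illustration, a hypothetical deployment with both probe types; the name, image, paths, and timings are placeholders, not part of the cluster deployed above:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
      - name: demo-app
        image: nginx:1.19
        ports:
        - containerPort: 80
        livenessProbe:                   # failing probe restarts the container
          httpGet: {path: /, port: 80}
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:                  # failing probe removes the pod from Service endpoints
          httpGet: {path: /, port: 80}
          periodSeconds: 5
EOF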

孟伯, 2020-05-22

Contact: WeChat 1807479153, QQ 1807479153