1. Ansible Automated Deployment of a Kubernetes Binary Cluster
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployment and rolling updates, and it is well suited to managing enterprise IT infrastructure. Here I use Ansible to automate the deployment of a Kubernetes v1.16 high-availability cluster (offline edition). Note that internet access is still required, because the flannel, coredns, ingress, and dashboard add-ons need to pull images.
2. Overview
Use Ansible to automatically deploy a Kubernetes cluster, offline edition (both single-master and multi-master deployments are supported).
2.1 Software Architecture
2.1.1 Single-master architecture
2.1.2 Multi-master architecture
3. Installation Guide
3.1 Environment (single master)
192.168.33.151 ansible-server
192.168.33.161 k8s-master-1
192.168.33.162 k8s-node-1
192.168.33.163 k8s-node-2
3.2 Usage Notes
Single master: 4 vCPU / 8 GB per machine (1 master, 2 nodes, 1 ansible host)
Multi master: 4 vCPU / 8 GB per machine (2 masters, 2 nodes, 1 ansible host, 2 nginx hosts)
For a multi-master deployment you also need keepalived running on the nginx hosts to provide the VIP; on cloud hosts an SLB (cloud load balancer) can be used instead.
3.3 Install Ansible
yum install epel-release -y
yum install git ansible -y
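Once the packages are installed, confirm that Ansible is available (any EPEL-provided 2.x release should work here):
ansible --version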
3.4 Passwordless SSH setup (from the ansible machine to all machines)
ssh-keygen -t rsa -P ''
ssh-copy-id -i 192.168.33.151
ssh-copy-id -i 192.168.33.161
ssh-copy-id -i 192.168.33.162
ssh-copy-id -i 192.168.33.163
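Optionally, verify that key-based login works before moving on (a quick check, assuming root is the remote user, as used by the playbooks later):
for ip in 192.168.33.161 192.168.33.162 192.168.33.163; do ssh root@$ip hostname; done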
3.5 Synchronize system time on all nodes
ntpdate -u ntp.api.bz   # run on all nodes
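To keep the clocks in sync afterwards, you can optionally add a cron entry on every node (a sketch; adjust the interval, the ntpdate path, and the NTP server to your environment):
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate -u ntp.api.bz >/dev/null 2>&1") | crontab -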
4. Automation Script Overview
5. Download the Deployment Files
5.1 Download the Ansible deployment files and image files
git clone [email protected]:ljx321/ansible_deployment_k8s.git
Note: to be able to clone the repository, please send me your public key first. WeChat: ljx97609760
Download the binary package:
Link: https://pan.baidu.com/s/1Xh1eocWwGdiOGs7KVaeDdw   Extraction code: fy68
5.2 Extract the files
tar zxf binary_pkg.tar.gz
Note: extract both files onto the ansible server. My working directory is /opt/, so place both extracted directories under /opt.
Then edit the hosts file to choose a single-master or multi-master deployment, and edit the variables in group_vars/all.yml, changing the IPs to match your environment (details in the next step).
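Roughly, the layout steps above amount to the following (the extracted directory names are assumptions; use whatever your clone and archive actually produce):
cd /opt
git clone [email protected]:ljx321/ansible_deployment_k8s.git
tar zxf binary_pkg.tar.gz          # assuming the archive was downloaded to /opt
cd ansible_deployment_k8s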
5.3 Edit the Ansible files
Edit the hosts file and adjust the IPs and names according to your plan.
[root@ansible-server ansible-install-k8s]# cat hosts
[master]
# For a single-master deployment, keep only one Master node
# By default the Master node also runs the Node components
192.168.33.161 node_name=k8s-master-1
#192.168.33.162 node_name=k8s-master2
[node]
192.168.33.162 node_name=k8s-node-1
192.168.33.163 node_name=k8s-node-2
[etcd]
192.168.33.161 etcd_name=etcd-1
192.168.33.162 etcd_name=etcd-2
192.168.33.163 etcd_name=etcd-3
#[lb]
# For a single-master deployment, leave this group commented out
#192.168.33.163 lb_name=lb-master
#192.168.33.171 lb_name=lb-backup
[k8s:children]
master
node
#[newnode]
#192.168.33.191 node_name=k8s-node3
Edit the group_vars/all.yml file: set the nic variable to your network interface and update the certificate trusted IPs.
vim group_vars/all.yml
nic: eth0        # set this to the name of your own network interface
k8s: ...         # the IPs that should be trusted by (included in) the cluster certificates
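A minimal sketch of what these two settings might look like in group_vars/all.yml (the key name under k8s: is an assumption; keep whatever structure the file already uses):
nic: ens33                      # the host network interface used by the cluster
k8s:
  cert_hosts:                   # hypothetical key name; list every IP the certificates should trust
    - 192.168.33.161
    - 192.168.33.162
    - 192.168.33.163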
5.4 One-click deployment
5.4.1 Single-master version
ansible-playbook -i hosts single-master-deploy.yml -uroot -k
5.4.2 Multi-master version:
ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
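Before a real run, you can optionally have ansible-playbook validate the playbook without executing anything:
ansible-playbook -i hosts single-master-deploy.yml --syntax-check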
5.5 Deployment control
If a particular stage fails, you can rerun just that stage.
For example, to run only the tasks tagged master:
ansible-playbook -i hosts single-master-deploy.yml -uroot -k --tags master
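To see which tags the playbook defines (and therefore which stages can be rerun selectively):
ansible-playbook -i hosts single-master-deploy.yml --list-tags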
5.6 Result after deploying a single master
[root@k8s-master-1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-1 Ready <none> 5d8h v1.16.0
k8s-node-1 Ready <none> 5d8h v1.16.0
k8s-node-2 Ready <none> 5d8h v1.16.0
[root@k8s-master-1 ~]# kubectl get cs
NAME AGE
controller-manager <unknown>
scheduler <unknown>
etcd-2 <unknown>
etcd-0 <unknown>
etcd-1 <unknown>
[root@k8s-master-1 ~]# kubectl get pod,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx pod/nginx-ingress-controller-8zp8r 1/1 Running 0 2d3h
ingress-nginx pod/nginx-ingress-controller-bfgj6 1/1 Running 0 2d3h
ingress-nginx pod/nginx-ingress-controller-n5k22 1/1 Running 0 2d3h
kube-system pod/coredns-59fb8d54d6-n6m5w 1/1 Running 0 2d3h
kube-system pod/kube-flannel-ds-amd64-jwvw6 1/1 Running 0 2d3h
kube-system pod/kube-flannel-ds-amd64-m92sg 1/1 Running 0 2d3h
kube-system pod/kube-flannel-ds-amd64-xwf2h 1/1 Running 0 2d3h
kubernetes-dashboard pod/dashboard-metrics-scraper-566cddb686-smw6p 1/1 Running 0 2d3h
kubernetes-dashboard pod/kubernetes-dashboard-c4bc5bd44-zgd82 1/1 Running 0 2d3h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d3h
ingress-nginx service/ingress-nginx ClusterIP 10.0.0.22 <none> 80/TCP,443/TCP 2d3h
kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 2d3h
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.176 <none> 8000/TCP 2d3h
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.72 <none> 443:30001/TCP 2d3h
5.7 Result after deploying multiple masters
[root@k8s-master-1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 6m18s v1.16.0
k8s-master2 Ready <none> 6m17s v1.16.0
k8s-node1 Ready <none> 6m10s v1.16.0
k8s-node2 Ready <none> 6m16s v1.16.0
[root@k8s-master-1 ~]# kubectl get cs
NAME AGE
controller-manager <unknown>
scheduler <unknown>
etcd-2 <unknown>
etcd-1 <unknown>
etcd-0 <unknown>
[root@k8s-master-1 ~]# kubectl get pod,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx pod/nginx-ingress-controller-4nf6j 1/1 Running 0 45s
ingress-nginx pod/nginx-ingress-controller-5fknt 1/1 Running 0 45s
ingress-nginx pod/nginx-ingress-controller-lwbkz 1/1 Running 0 45s
ingress-nginx pod/nginx-ingress-controller-v8k8n 1/1 Running 0 45s
kube-system pod/coredns-59fb8d54d6-959xj 1/1 Running 0 6m44s
kube-system pod/kube-flannel-ds-amd64-2hnzq 1/1 Running 0 6m31s
kube-system pod/kube-flannel-ds-amd64-64hqc 1/1 Running 0 6m25s
kube-system pod/kube-flannel-ds-amd64-p9d8w 1/1 Running 0 6m32s
kube-system pod/kube-flannel-ds-amd64-pchp5 1/1 Running 0 6m33s
kubernetes-dashboard pod/dashboard-metrics-scraper-566cddb686-kf4qq 1/1 Running 0 32s
kubernetes-dashboard pod/kubernetes-dashboard-c4bc5bd44-dqfb8 1/1 Running 0 32s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 19m
ingress-nginx service/ingress-nginx ClusterIP 10.0.0.53 <none> 80/TCP,443/TCP 45s
kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 6m47s
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.147 <none> 8000/TCP 32s
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.176 <none> 443:30001/TCP 32s
5.8 Scaling Out Nodes
To simulate scaling out, I create more replicas than the cluster can schedule: the resource requests exceed the available capacity, so some pods stay in the Pending state.
[root@k8s-master-1 ~]# kubectl run web --image=nginx --replicas=8 --requests="cpu=1,memory=256Mi"
[root@k8s-master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-944cddf48-6qhcl 1/1 Running 0 15m
web-944cddf48-7ldsv 1/1 Running 0 15m
web-944cddf48-7nv9p 0/1 Pending 0 2s
web-944cddf48-b299n 1/1 Running 0 15m
web-944cddf48-nsxgg 0/1 Pending 0 15m
web-944cddf48-pl4zt 1/1 Running 0 15m
web-944cddf48-t8fqt 1/1 Running 0 15m
The Pending pods cannot be scheduled because the existing nodes no longer have enough free resources, so we need to scale out by adding a Node.
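To confirm why a pod is Pending, describe it and check the Events section (pod name taken from the listing above):
kubectl describe pod web-944cddf48-7nv9p
# expect a FailedScheduling event along the lines of "0/3 nodes are available: 3 Insufficient cpu."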
6. Run the Playbook with the New Node Specified
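Before running the playbook, uncomment the [newnode] group at the end of the hosts file and list the node to be added, for example:
[newnode]
192.168.33.191 node_name=k8s-node3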
[root@ansible-server ansible-install-k8s-master]# ansible-playbook -i hosts add-node.yml -uroot -k
6.1 Check that the new node's join request (CSR) has been received and approved
[root@k8s-master-1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0i7BzFaf8NyG_cdx_hqDmWg8nd4FHQOqIxKa45x3BJU 45m kubelet-bootstrap Approved,Issued
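If the CSR shows Pending rather than Approved,Issued, you can approve it manually (using the CSR name from the output above):
kubectl certificate approve node-csr-0i7BzFaf8NyG_cdx_hqDmWg8nd4FHQOqIxKa45x3BJU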
6.2 Check the node status
[root@k8s-master-1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-1 Ready <none> 7d v1.16.0
k8s-node-1 Ready <none> 7d v1.16.0
k8s-node-2 Ready <none> 7d v1.16.0
k8s-node-3 Ready <none> 2m52s v1.16.0
6.3 Verify that the Pending pods have been scheduled automatically onto the new node
[root@k8s-master-1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-944cddf48-6qhcl 1/1 Running 0 80m
web-944cddf48-7ldsv 1/1 Running 0 80m
web-944cddf48-7nv9p 1/1 Running 0 65m
web-944cddf48-b299n 1/1 Running 0 80m
web-944cddf48-nsxgg 1/1 Running 0 80m
web-944cddf48-pl4zt 1/1 Running 0 80m
web-944cddf48-t8fqt 1/1 Running 0 80m
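To confirm which pods actually landed on the new node, include the node column in the output:
kubectl get pod -o wide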