This part uses nginx's layer-4 transparent proxy to give the K8S nodes (both master and worker nodes) highly available access to kube-apiserver. Before deploying, plan the node layout of the kubernetes cluster; the nodes used in this tutorial are as follows:
Hostname | Host IP
---|---
k8s-master | 172.24.211.217
k8s-master-1 | 172.24.211.220
k8s-node-1 | 172.24.211.218
k8s-node-2 | 172.24.211.219
(Note: the previous parts of this series used only 3 hosts. To avoid having a single master node, the k8s-master-1 node was added temporarily. For that node, install docker and flannel and complete the corresponding configuration in advance, as described in the earlier articles; for etcd, the existing cluster can be used as-is.)
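The commands below reach the nodes by hostname over ssh/scp. If name resolution was not already set up in the earlier parts, one simple option is /etc/hosts entries matching the table above (a sketch; adjust to your own environment and skip it if DNS or /etc/hosts is already configured):
# run on every node
cat >> /etc/hosts << EOF
172.24.211.217 k8s-master
172.24.211.220 k8s-master-1
172.24.211.218 k8s-node-1
172.24.211.219 k8s-node-2
EOF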
Before deploying the k8s cluster, first install the client tool kubectl on every node that will connect to the cluster; this makes testing easier later on.
Download and extract:
cd /opt/k8s/work
wget https://dl.k8s.io/v1.15.0/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
chmod +x kubernetes/client/bin/kubectl
Distribute the binary (copy it to every node that needs to run kubectl commands):
scp kubernetes/client/bin/kubectl root@k8s-master:/opt/k8s/bin/
scp kubernetes/client/bin/kubectl root@k8s-master-1:/opt/k8s/bin/
scp kubernetes/client/bin/kubectl root@k8s-node-1:/opt/k8s/bin/
scp kubernetes/client/bin/kubectl root@k8s-node-2:/opt/k8s/bin/
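Optionally, confirm that the distributed binary runs on a node; k8s-node-1 is used here only as an example:
ssh root@k8s-node-1 "/opt/k8s/bin/kubectl version --client"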
kubectl talks to the apiserver over its https secure port, and the apiserver authenticates and authorizes the certificate presented by the client. As the cluster administration tool, kubectl needs to be granted the highest privileges, so here we create an admin certificate with those privileges. The O field system:masters places the certificate in the group that kube-apiserver treats as cluster administrators; the C/ST/L/OU values are only descriptive. Create the admin certificate signing request and private key:
cd /opt/k8s/work
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "opsnull"
    }
  ]
}
EOF
Generate the certificate admin.pem and the private key admin-key.pem:
cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
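If you want to double-check the result, the subject of the generated certificate can be inspected with openssl; it should show CN=admin and O=system:masters:
openssl x509 -in admin.pem -noout -subject -dates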
Create the kubeconfig file:
cd /opt/k8s/work
# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/work/ca.pem --embed-certs=true --server=https://127.0.0.1:8443 --kubeconfig=kubectl.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials admin --client-certificate=/opt/k8s/work/admin.pem --client-key=/opt/k8s/work/admin-key.pem --embed-certs=true --kubeconfig=kubectl.kubeconfig
# Set the context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kubectl.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
After running the commands above, the configuration file kubectl.kubeconfig is generated in the work directory. Because --embed-certs=true embeds the certificates into the file, they do not need to be copied to other nodes separately; copying this one file is enough. The address https://127.0.0.1:8443 is the local reverse proxy (kube-nginx) in front of kube-apiserver.
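To inspect the generated file and confirm that the certificates are embedded, kubectl can print it (certificate data is redacted in the output):
kubectl config view --kubeconfig=kubectl.kubeconfig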
Distribute the configuration file to each node's ~/.kube directory (create the directory first with mkdir -p ~/.kube if it does not exist):
scp kubectl.kubeconfig root@k8s-master:~/.kube/config
scp kubectl.kubeconfig root@k8s-master-1:~/.kube/config
scp kubectl.kubeconfig root@k8s-node-1:~/.kube/config
scp kubectl.kubeconfig root@k8s-node-2:~/.kube/config
Configure the PATH environment variable so that kubectl can be found:
vim /etc/profile
# Append at the end of the file
export PATH=/opt/k8s/bin:$PATH
# Apply the change
source /etc/profile
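To confirm that the change took effect, check which kubectl the shell now picks up; it should print /opt/k8s/bin/kubectl:
which kubectl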
Each node now needs to run an nginx process whose backends are the multiple apiserver instances; nginx performs health checks and load balancing across them. When the cluster is deployed later, both master and worker nodes can then reach the apiserver directly through the entry point nginx provides (https://127.0.0.1:8443), which is both convenient and highly available.
Download the Nginx source code:
cd /opt/k8s/work
wget http://nginx.org/download/nginx-1.15.3.tar.gz
tar -xzvf nginx-1.15.3.tar.gz
Configure the build options:
--with-stream: enables layer-4 transparent forwarding (TCP proxy);
--without-xxx: disables all other modules, so the resulting binary has the smallest possible set of dynamic-link dependencies.
cd /opt/k8s/work/nginx-1.15.3
mkdir nginx-prefix
./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
Build and verify:
cd /opt/k8s/work/nginx-1.15.3
make && make install
./nginx-prefix/sbin/nginx -v
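Besides the version, the full configure arguments can be printed to confirm that the stream module was compiled in and the http modules were left out:
./nginx-prefix/sbin/nginx -V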
Distribute the compiled binary to each node:
# Create the directories
mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}
ssh root@k8s-master-1 "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
ssh root@k8s-node-1 "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
ssh root@k8s-node-2 "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
# Make the binary executable, then distribute it
chmod a+x /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx
scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx root@k8s-master:/opt/k8s/kube-nginx/sbin/kube-nginx
scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx root@k8s-master-1:/opt/k8s/kube-nginx/sbin/kube-nginx
scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx root@k8s-node-1:/opt/k8s/kube-nginx/sbin/kube-nginx
scp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx root@k8s-node-2:/opt/k8s/kube-nginx/sbin/kube-nginx
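Optionally, verify on one of the remote nodes that the distributed binary runs (k8s-node-1 as an example):
ssh root@k8s-node-1 "/opt/k8s/kube-nginx/sbin/kube-nginx -v"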
Configure nginx with layer-4 transparent forwarding enabled. The "upstream backend" block lists the IPs and ports that the kube-apiserver instances on the master nodes listen on, and the "server" block defines the IP and port that nginx itself listens on. Based on the cluster plan, the configuration is as follows:
cd /opt/k8s/work
cat > kube-nginx.conf << \EOF
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.24.211.220:6443 max_fails=3 fail_timeout=30s;
        server 172.24.211.217:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
Distribute the configuration file:
scp kube-nginx.conf root@k8s-master:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp kube-nginx.conf root@k8s-master-1:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp kube-nginx.conf root@k8s-node-1:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp kube-nginx.conf root@k8s-node-2:/opt/k8s/kube-nginx/conf/kube-nginx.conf
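Before creating the service, the configuration can be syntax-checked on a node with nginx's -t option, using the prefix and paths set up above (k8s-master as an example):
ssh root@k8s-master "/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t"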
Create the systemd unit file; note that the paths in it must match the /opt/k8s/kube-nginx layout created above:
cat > kube-nginx.service << EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Distribute the unit file:
scp kube-nginx.service root@k8s-master:/usr/lib/systemd/system/kube-nginx.service
scp kube-nginx.service root@k8s-master-1:/usr/lib/systemd/system/kube-nginx.service
scp kube-nginx.service root@k8s-node-1:/usr/lib/systemd/system/kube-nginx.service
scp kube-nginx.service root@k8s-node-2:/usr/lib/systemd/system/kube-nginx.service
The configuration is complete. Run the start-up commands on each node:
systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx
Verify that the service started successfully:
systemctl status kube-nginx | grep 'Active:'
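As an extra check, confirm that nginx is listening on the local proxy port; note that kubectl requests through https://127.0.0.1:8443 will only succeed once kube-apiserver itself is deployed in the next part:
ss -lntp | grep 8443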
At this point, the preparation for installing and deploying the k8s cluster is complete. The next part will cover how to deploy a highly available k8s cluster.
[1] https://github.com/opsnull/follow-me-install-kubernetes-cluster, "Follow me step by step to deploy a kubernetes cluster".