K8S kube-proxy Deployment

1. Issuing the Certificate

Issue the certificate on the certificate server at 172.16.1.55. kube-proxy authenticates as its own user, kube-proxy, so it cannot reuse the generic client certificate; a dedicated certificate must be signed for it.
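
The signing command below reads kube-proxy-csr.json, whose contents are not shown in this walkthrough. For reference, a minimal sketch of that file follows; the CN of system:kube-proxy is what Kubernetes maps to the built-in system:node-proxier role, while the names fields are placeholder assumptions:

[root@hdss1-55 certs]# cat > /opt/certs/kube-proxy-csr.json <<'EOF'
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF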

[root@hdss1-55 certs]# pwd
/opt/certs
[root@hdss1-55 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json  -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2020/11/01 14:30:26 [INFO] generate received request
2020/11/01 14:30:26 [INFO] received CSR
2020/11/01 14:30:26 [INFO] generating key: rsa-2048
2020/11/01 14:30:27 [INFO] encoded CSR
2020/11/01 14:30:27 [INFO] signed certificate with serial number 344546744694132201411655906972161047625818051099
2020/11/01 14:30:27 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@hdss1-55 certs]# ls kube-proxy-c* -l
-rw-r--r-- 1 root root 1005 Nov  1 14:30 kube-proxy-client.csr
-rw------- 1 root root 1679 Nov  1 14:30 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Nov  1 14:30 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Nov  1 14:24 kube-proxy-csr.json
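
To double-check that the signed certificate carries the CN kube-proxy will authenticate as, inspect it (this assumes the cfssl-certinfo binary is installed alongside cfssl and cfssl-json):

[root@hdss1-55 certs]# cfssl-certinfo -cert kube-proxy-client.pem | grep common_name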

2. Distributing the Certificates

Distribute the generated kube-proxy-client-key.pem and kube-proxy-client.pem to 172.16.1.53 and 172.16.1.54.

[root@hdss1-53 cert]# pwd
/opt/kubernetes/server/bin/cert
[root@hdss1-53 cert]# scp -P22 172.16.1.55:/opt/certs/kube-proxy-client.pem .
[root@hdss1-53 cert]# scp -P22 172.16.1.55:/opt/certs/kube-proxy-client-key.pem .
[root@hdss1-53 cert]# ll |grep kube-proxy
-rw------- 1 root root 1679 Nov  1 14:44 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Nov  1 14:43 kube-proxy-client.pem
[root@hdss1-54 cert]#  scp -P22 172.16.1.55:/opt/certs/kube-proxy-client.pem .
[root@hdss1-54 cert]#  scp -P22 172.16.1.55:/opt/certs/kube-proxy-client-key.pem .
[root@hdss1-54 cert]# ll |grep proxy
-rw------- 1 root root 1679 Nov  1 14:46 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Nov  1 14:46 kube-proxy-client.pem
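
As a quick sanity check that the copies match the originals, compare checksums on each node against the certificate server (a minimal sketch; any digest tool will do):

[root@hdss1-53 cert]# md5sum kube-proxy-client*.pem
[root@hdss1-53 cert]# ssh 172.16.1.55 md5sum /opt/certs/kube-proxy-client*.pem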

3. Creating the kube-proxy kubeconfig

Create the kubeconfig for all node servers; servers involved: 172.16.1.53 and 172.16.1.54. Below, it is generated once on 172.16.1.53 and then copied to 172.16.1.54.

[root@hdss1-53 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://172.16.1.60:7443 \
--kubeconfig=kube-proxy.kubeconfig
[root@hdss1-53 conf]# kubectl config set-credentials kube-proxy \
> --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
> --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
> --embed-certs=true \
> --kubeconfig=kube-proxy.kubeconfig
[root@hdss1-53 conf]# kubectl config set-context myk8s-context \
> --cluster=myk8s \
> --user=kube-proxy \
> --kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.
[root@hdss1-53 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
[root@hdss1-53 conf]# ll |grep proxy
-rw------- 1 root root 6221 Nov  1 15:33 kube-proxy.kubeconfig
[root@hdss1-54 conf]# scp -P22 172.16.1.53:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
[root@hdss1-54 conf]#  ll |grep proxy
-rw------- 1 root root 6221 Nov  1 15:35 kube-proxy.kubeconfig
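
Before moving on, you can confirm the kubeconfig embeds the expected cluster endpoint and user (kubectl redacts the embedded certificate material in this view):

[root@hdss1-53 conf]# kubectl config view --kubeconfig=kube-proxy.kubeconfig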

4. Loading the ipvs Kernel Modules

kube-proxy offers three traffic scheduling modes: userspace, iptables, and ipvs, of which ipvs performs best. Perform the same steps on both 172.16.1.53 and 172.16.1.54.

[root@hdss1-53 ~]# vim ipvs.sh
#!/bin/bash
# All ipvs kernel modules shipped with the running kernel live here
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
# Strip the .ko(.xz) extension to get bare module names
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  # Load only the modules that modinfo can resolve for this kernel
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
[root@hdss1-53 ~]# chmod +x ipvs.sh 
[root@hdss1-53 ~]# ./ipvs.sh 
[root@hdss1-53 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_ftp              13079  0 
ip_vs_dh               12688  0 
ip_vs                 141432  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack          133053  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@hdss1-54 ~]# vim ipvs.sh    # same script content as on hdss1-53
[root@hdss1-54 ~]# chmod +x ipvs.sh 
[root@hdss1-54 ~]# ./ipvs.sh 
[root@hdss1-54 ~]# lsmod |grep ip_vs
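
Note that modules loaded with modprobe do not survive a reboot. A minimal sketch for reloading them at boot, assuming the rc-local service is enabled on these CentOS 7 hosts (an /etc/modules-load.d/ drop-in would work equally well):

[root@hdss1-53 ~]# cp /root/ipvs.sh /usr/local/sbin/ipvs.sh
[root@hdss1-53 ~]# echo '/usr/local/sbin/ipvs.sh' >> /etc/rc.d/rc.local
[root@hdss1-53 ~]# chmod +x /etc/rc.d/rc.local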

The ten LVS scheduling algorithms
Five are in common use: Round Robin, Weighted Round Robin, Least Connections, Weighted Least Connections, and Source Hashing.
Round Robin (RR): distributes client requests evenly across the real servers.
Weighted Round Robin (WRR): schedules in round-robin order according to each real server's weight.
Least Connections (LC): picks the server with the fewest connections.
Weighted Least Connections (WLC): picks the server with the fewest connections relative to its real-server weight.
Source Hashing (SH): uses the request's source IP address as the hash key to look up the target server in a statically assigned hash table.
Destination Hashing (DH): the converse of SH; uses the request's destination IP address as the hash key into a statically assigned hash table.
Other scheduling algorithms:
Locality-Based Least Connections (LBLC): load balancing keyed on the request's destination IP address, used mainly in cache cluster systems. LBLC first finds the server most recently used for that destination IP; if it is available and not overloaded, the request is sent there. If no such server exists, or it is overloaded while some server sits at half of its workload, a usable server is chosen by the Least Connections rule and the request is sent there instead.
Locality-Based Least Connections with Replication (LBLCR): also keyed on the destination IP address and also used mainly in cache cluster systems. It differs from LBLC in that it maintains a mapping from a destination IP to a set of servers, whereas LBLC maps a destination IP to a single server.
Shortest Expected Delay (SED): built on WLC, SED sends the request to the server with the shortest expected delay, computing each real server's load as overhead = (active + 1) * 256 / weight (a worked example follows this list).
Never Queue (NQ): if a real server has zero connections, the request goes straight to it, even though that server is not necessarily the fastest; if all servers are busy, it falls back to the Shortest Expected Delay rule.
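
To make the SED formula concrete, here is a hypothetical comparison of two real servers using plain shell arithmetic (the connection counts and weights are invented for illustration):

# server A: 10 active connections, weight 4 -> (10+1)*256/4
[root@hdss1-53 ~]# echo $(( (10 + 1) * 256 / 4 ))
704
# server B: 3 active connections, weight 1 -> (3+1)*256/1
[root@hdss1-53 ~]# echo $(( (3 + 1) * 256 / 1 ))
1024

Despite handling more connections, server A has the lower overhead, so SED would route the next request to it.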

5. Creating the Startup Script

[root@hdss1-53 bin]# pwd
/opt/kubernetes/server/bin
[root@hdss1-53 bin]# vim kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss1-53.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@hdss1-53 bin]# chmod +x kube-proxy.sh
[root@hdss1-54 bin]# pwd
/opt/kubernetes/server/bin
[root@hdss1-54 bin]# vim kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss1-54.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@hdss1-54 bin]# chmod +x kube-proxy.sh
[root@hdss1-53 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
[root@hdss1-54 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
[root@hdss1-53 bin]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-1-53]
command=/opt/kubernetes/server/bin/kube-proxy.sh                
numprocs=1                                                      
directory=/opt/kubernetes/server/bin                            
autostart=true                                                  
autorestart=true                                                
startsecs=30                                                    
startretries=3                                                  
exitcodes=0,2                                                   
stopsignal=QUIT                                                 
stopwaitsecs=10                                                 
user=root                                                       
redirect_stderr=true                                            
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB                                    
stdout_logfile_backups=5                                       
stdout_capture_maxbytes=1MB                                     
stdout_events_enabled=false
[root@hdss1-54 bin]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-1-54]
command=/opt/kubernetes/server/bin/kube-proxy.sh                
numprocs=1                                                      
directory=/opt/kubernetes/server/bin                            
autostart=true                                                  
autorestart=true                                                
startsecs=30                                                    
startretries=3                                                  
exitcodes=0,2                                                   
stopsignal=QUIT                                                 
stopwaitsecs=10                                                 
user=root                                                       
redirect_stderr=true                                            
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB                                    
stdout_logfile_backups=5                                       
stdout_capture_maxbytes=1MB                                     
stdout_events_enabled=false

6. Starting kube-proxy

[root@hdss1-53 bin]# supervisorctl  update
kube-proxy-1-53: added process group
[root@hdss1-53 bin]# supervisorctl  status
kube-proxy-1-53                  RUNNING   pid 2102, uptime 0:02:18
[root@hdss1-54 bin]# supervisorctl update
kube-proxy-1-54: added process group
[root@hdss1-54 bin]# supervisorctl status
kube-proxy-1-54                  RUNNING   pid 27119, uptime 0:01:32
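
To confirm kube-proxy actually came up in ipvs mode, grep its log for the proxier it selected (the exact log wording differs between kube-proxy versions, so no sample output is shown):

[root@hdss1-53 bin]# grep -i ipvs /data/logs/kubernetes/kube-proxy/proxy.stdout.log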

7. Installing ipvsadm

[root@hdss1-53 bin]# yum install -y ipvsadm
[root@hdss1-54 bin]# yum install -y ipvsadm
[root@hdss1-53 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 172.16.1.53:6443             Masq    1      0          0         
  -> 172.16.1.54:6443             Masq    1      0          0   
[root@hdss1-54 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 172.16.1.53:6443             Masq    1      0          0         
  -> 172.16.1.54:6443             Masq    1      0          0  
[root@hdss1-53 bin]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   2d1h
[root@hdss1-54 bin]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   2d1h
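
The 192.168.0.1:443 virtual server shown by ipvsadm fronts the two apiservers. As a quick end-to-end check, curl the VIP from a node; an unauthenticated request should at least get a JSON error response back from an apiserver rather than a connection failure (the exact status code depends on the apiserver's anonymous-auth setting):

[root@hdss1-53 bin]# curl -k https://192.168.0.1:443/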

8. Verifying the Cluster

[root@hdss1-53 ~]# vi /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:1.7.9
        ports:
        - containerPort: 80
[root@hdss1-53 ~]# kubectl create -f nginx-ds.yaml 
daemonset.extensions/nginx-ds created
[root@hdss1-53 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-2dq5c   1/1     Running   0          9s    172.17.1.1   hdss1-53.host.com   <none>           <none>
nginx-ds-vjr8m   1/1     Running   0          9s    172.17.1.1   hdss1-54.host.com   <none>           <none>
Verify:
[root@hdss1-53 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
[root@hdss1-53 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-2dq5c   1/1     Running   0          3m23s
nginx-ds-vjr8m   1/1     Running   0          3m23s
[root@hdss1-53 ~]# kubectl get node
NAME                STATUS   ROLES         AGE     VERSION
hdss1-53.host.com   Ready    master,node   4h12m   v1.15.2
hdss1-54.host.com   Ready    master,node   4h11m   v1.15.2
[root@hdss1-53 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-2dq5c   1/1     Running   0          3m50s
nginx-ds-vjr8m   1/1     Running   0          3m50s
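
The DaemonSet by itself only exercises pod networking; ipvs comes into play once a Service fronts the pods. A minimal sketch for wrapping nginx-ds in a ClusterIP Service and checking that kube-proxy programs a virtual server for it (the Service name nginx-ds is an assumption, not part of the original walkthrough):

[root@hdss1-53 ~]# kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
spec:
  selector:
    app: nginx-ds
  ports:
  - port: 80
    targetPort: 80
EOF
[root@hdss1-53 ~]# ipvsadm -Ln | grep -A2 ':80 '
[root@hdss1-53 ~]# curl -I $(kubectl get svc nginx-ds -o jsonpath='{.spec.clusterIP}')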
