K8s Cluster Optimization of Routing and Forwarding: Replacing iptables with IPVS

1. Why Use IPVS

Starting with Kubernetes 1.8, kube-proxy introduced an IPVS mode. Like the iptables mode, IPVS is built on Netfilter, but it looks up forwarding rules in hash tables rather than walking sequential rule chains. Once the number of Services reaches a certain scale, the hash-lookup advantage becomes apparent and Service forwarding performance improves.
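As a rough, informal way to see why this matters, you can compare the size of the rule set each mode maintains on a node. The commands below are only an illustration; the exact counts depend on how many Services and Endpoints the cluster has.

# Number of kube-proxy iptables rules on a node running in iptables mode
iptables-save | grep -c KUBE-

# Number of IPVS virtual/real server entries on a node running in IPVS mode
ipvsadm -Ln | wc -l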

2. Steps

2.1 Enable Kernel Parameters

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl -p
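Note that the two bridge-nf-call keys only exist once the br_netfilter kernel module is loaded; if sysctl -p complains about unknown keys, load the module first and re-apply. A quick check, as a sketch:

# Load the bridge netfilter module if it is not already present
modprobe br_netfilter

# Confirm the values took effect
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables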

2.2 Enable IPVS Support

yum -y install ipvsadm  ipset

# Take effect for the current boot only
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# Persist across reboots
cat > /etc/sysconfig/modules/ipvs.modules << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
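To confirm the modules are actually loaded (for example after a reboot), a check along these lines can be used:

# List the IPVS-related modules currently loaded in the kernel
lsmod | grep -e ip_vs -e nf_conntrack_ipv4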

2.3 Configure kube-proxy

# Add the following two lines to the kube-proxy startup arguments
  --proxy-mode=ipvs  \
  --masquerade-all=true \

# Edit the service unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/kube-proxy
ExecStart=/data/k8s/bin/kube-proxy \
  --bind-address=192.168.1.145 \
  --hostname-override=192.168.1.145 \
  --cluster-cidr=10.254.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=ipvs  \
  --masquerade-all=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target 
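The unit file above applies to a binary deployment of kube-proxy. If kube-proxy instead runs as a DaemonSet driven by a configuration file (as in kubeadm clusters), the equivalent change is setting mode: "ipvs" in the KubeProxyConfiguration. A sketch of the usual procedure, assuming the standard k8s-app=kube-proxy pod label:

# Edit the kube-proxy ConfigMap and set  mode: "ipvs"  in the embedded KubeProxyConfiguration
kubectl -n kube-system edit configmap kube-proxy

# Restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy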

2.4 Restart kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl status kube-proxy
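If the switch worked, kube-proxy's startup log should mention that the IPVS proxier is in use; a quick check along these lines:

# Look for IPVS-related lines in the most recent kube-proxy logs
journalctl -u kube-proxy --no-pager -n 100 | grep -i ipvs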

3. Test

Verify that IPVS rules have been created:

[root@k8sNode01 docker]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.1.142:6443           Masq    1      0          0         
  -> 192.168.1.143:6443           Masq    1      1          0         
  -> 192.168.1.144:6443           Masq    1      1          0         
TCP  10.254.27.38:80 rr
  -> 172.30.36.4:9090             Masq    1      0          0         
TCP  10.254.72.60:80 rr
  -> 172.30.90.4:8080             Masq    1      0          0         
TCP  10.254.72.247:80 rr
  -> 172.30.36.5:3000             Masq    1      0          0         
TCP  127.0.0.1:27841 rr
  -> 172.30.36.2:80               Masq    1      0          0         
  -> 172.30.90.2:80               Masq    1      0          0         
TCP  127.0.0.1:28453 rr
  -> 172.30.36.5:3000             Masq    1      0          0         
TCP  127.0.0.1:36018 rr
  -> 172.30.36.4:9090             Masq    1      0          0         
TCP  172.30.90.0:27841 rr
  -> 172.30.36.2:80               Masq    1      0          0         
  -> 172.30.90.2:80               Masq    1      0          0         
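As a further check, you can hit one of the ClusterIPs listed above from a node and confirm the request is forwarded to a backend Pod. The address 10.254.27.38 is taken from the sample output; substitute a Service IP from your own cluster.

# Request the Service through its ClusterIP; IPVS should forward it to the backend at 172.30.36.4:9090
curl -I http://10.254.27.38:80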
