Limiting Pod Bandwidth in Kubernetes

I recently got a requirement: limit the bandwidth of the pods that upload logs.

# Prerequisites

A Kubernetes cluster, with Calico installed.

# Kubernetes Configuration

Calico does not enable traffic shaping by default, so it has to be turned on in the Calico CNI configuration: add the `bandwidth` plugin to `/etc/cni/net.d/10-calico.conflist` on every node. This is best done once when installing Calico, so you don't have to touch each node by hand; hopefully Calico will enable it by default one day (https://github.com/projectcalico/calico/issues/2815). Since my test environment has only one node, I edited the file manually.

```
vagrant@ubuntu:~$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 1410,
      "nodename_file_optional": false,
      "log_file_path": "/var/log/calico/cni/cni.log",
      "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "false"
      },
      "container_settings": {
        "allow_ip_forwarding": false
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}},
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}
```

# How the Rate Limiting Works

The principle is simple: when a pod starts, the CNI plugin attaches tc rules to the pod's virtual network device, so the actual rate limiting is done by tc (traffic control). If you are unfamiliar with tc, see https://cloud.tencent.com/developer/article/1409664 or similar material.
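Conceptually, the plugin does something equivalent to the following on the pod's host-side veth device at startup (the interface name and values here are illustrative placeholders, not the plugin's actual code; the real plugin also redirects one traffic direction to a dedicated device, which is the `bwp...` interface visible in the tc output later in this post):

```shell
# Illustrative sketch only (run as root on the node): attach a token
# bucket filter (tbf) qdisc to the pod's host-side veth to cap the rate.
# "cali51d8b092aa9" is a placeholder interface name.
tc qdisc add dev cali51d8b092aa9 root tbf \
    rate 10mbit burst 256mb latency 25ms

# Inspect the result -- this is the kind of rule the plugin creates:
tc qdisc show dev cali51d8b092aa9
```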

# Experiment 1

Baseline: traffic between two pods with no bandwidth limit.

Start two pods with the following YAML:

```
---
apiVersion: v1
kind: Pod
metadata:
  name: perf1
  labels:
    app: perf1
#  annotations:
#    kubernetes.io/ingress-bandwidth: 10M
#    kubernetes.io/egress-bandwidth: 10M
spec:
  containers:
  - name: perf-server
    image: elynn/pperf:latest
    imagePullPolicy: Always
    command:
    - "/opt/runserver.sh"
    ports:
    - containerPort: 5201
    - containerPort: 5203
---
apiVersion: v1
kind: Pod
metadata:
  name: perf2
  labels:
    app: perf2
#  annotations:
#    kubernetes.io/ingress-bandwidth: 1M
#    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
  - name: perf-server
    image: elynn/pperf:latest
    imagePullPolicy: Always
    command:
    - "/opt/runserver.sh"
    ports:
    - containerPort: 5201
    - containerPort: 5203
```

After the pods are up, look at the tc rules on the node: there is no rate limiting yet.

```
$ tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc pfifo_fast 0: dev ens33 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev cali51d8b092aa9 root refcnt 2
qdisc noqueue 0: dev cali40347405ff0 root refcnt 2
qdisc noqueue 0: dev vxlan.calico root refcnt 2
qdisc noqueue 0: dev cali3fe07939e27 root refcnt 2
qdisc noqueue 0: dev califde8991e611 root refcnt 2
qdisc noqueue 0: dev calieec63a8d445 root refcnt 2
```

Now run iperf inside one pod: on a single node, pod-to-pod traffic reaches about 30 Gbit/s.

```
$ kubectl exec -it perf1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@perf1:/opt# iperf3 -c 192.168.243.203
Connecting to host 192.168.243.203, port 5201
[  4] local 192.168.243.202 port 57728 connected to 192.168.243.203 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  3.62 GBytes  31.1 Gbits/sec  283    893 KBytes
[  4]   1.00-2.00   sec  3.64 GBytes  31.2 Gbits/sec    0    896 KBytes
[  4]   2.00-3.00   sec  3.57 GBytes  30.7 Gbits/sec    0   1.38 MBytes
[  4]   3.00-4.00   sec  3.61 GBytes  31.0 Gbits/sec    0   1.38 MBytes
[  4]   4.00-5.00   sec  3.55 GBytes  30.5 Gbits/sec    0   1.38 MBytes
[  4]   5.00-6.00   sec  3.64 GBytes  31.2 Gbits/sec    0   1.39 MBytes
[  4]   6.00-7.00   sec  3.55 GBytes  30.5 Gbits/sec    0   1.39 MBytes
[  4]   7.00-8.00   sec  3.59 GBytes  30.8 Gbits/sec    0   1.39 MBytes
[  4]   8.00-9.00   sec  3.50 GBytes  30.1 Gbits/sec    0   1.49 MBytes
^C[  4]   9.00-9.46   sec  1.67 GBytes  31.4 Gbits/sec    0   1.52 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-9.46   sec  33.9 GBytes  30.8 Gbits/sec  283             sender
[  4]   0.00-9.46   sec  0.00 Bytes   0.00 bits/sec                   receiver
```

# Experiment 2

Now add the bandwidth limit. Uncomment the following annotations in the YAML from experiment 1; this limits the pod's ingress and egress traffic to 10 Mbit/s:

```
  annotations:
    kubernetes.io/ingress-bandwidth: 10M
    kubernetes.io/egress-bandwidth: 10M
```
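The annotation values are Kubernetes resource quantities, so `10M` means 10,000,000 bits per second, which matches the `rate 10Mbit` shown by tc. A rough sketch of that interpretation (the function name and suffix table are illustrative, not the bandwidth plugin's actual code):

```shell
# Sketch: convert an annotation quantity like "10M" to bits per second.
# Decimal suffixes are powers of 10, binary suffixes (Ki/Mi/Gi) powers of 2.
parse_bandwidth() {
  local v=$1
  case "$v" in
    *Ki) echo $(( ${v%Ki} * 1024 )) ;;
    *Mi) echo $(( ${v%Mi} * 1024 * 1024 )) ;;
    *Gi) echo $(( ${v%Gi} * 1024 * 1024 * 1024 )) ;;
    *k)  echo $(( ${v%k} * 1000 )) ;;
    *M)  echo $(( ${v%M} * 1000000 )) ;;
    *G)  echo $(( ${v%G} * 1000000000 )) ;;
    *)   echo "$v" ;;
  esac
}

parse_bandwidth 10M   # 10000000 -> matches "rate 10Mbit" in the tc output
```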

Delete the old pods and create them again. Once they are up, the tc rules appear:

```
vagrant@ubuntu:~$ tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc pfifo_fast 0: dev ens33 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev cali51d8b092aa9 root refcnt 2
qdisc noqueue 0: dev cali40347405ff0 root refcnt 2
qdisc noqueue 0: dev vxlan.calico root refcnt 2
qdisc noqueue 0: dev cali3fe07939e27 root refcnt 2
qdisc noqueue 0: dev calieec63a8d445 root refcnt 2
qdisc tbf 1: dev califde8991e611 root refcnt 2 rate 10Mbit burst 256Mb lat 25.0ms
qdisc ingress ffff: dev califde8991e611 parent ffff:fff1 ----------------
qdisc tbf 1: dev bwp9524c730aa56 root refcnt 2 rate 10Mbit burst 256Mb lat 25.0ms
```

Run the bandwidth test again: the traffic is now capped at about 10 Mbit/s.

```
vagrant@ubuntu:~$ kubectl exec -it perf1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@perf1:/opt# iperf3 -c 192.168.243.204
Connecting to host 192.168.243.204, port 5201
[  4] local 192.168.243.205 port 35668 connected to 192.168.243.204 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   246 MBytes  2.06 Gbits/sec    4    263 KBytes
[  4]   1.00-2.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
[  4]   2.00-3.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
[  4]   3.00-4.00   sec  1.18 MBytes  9.91 Mbits/sec    0    263 KBytes
[  4]   4.00-5.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
[  4]   5.00-6.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
[  4]   6.00-7.00   sec  1.18 MBytes  9.91 Mbits/sec    0    263 KBytes
[  4]   7.00-8.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
[  4]   8.00-9.00   sec  1.12 MBytes  9.39 Mbits/sec    0    263 KBytes
^C[  4]   9.00-10.00  sec   955 KBytes  7.86 Mbits/sec    0    264 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   256 MBytes   215 Mbits/sec    4             sender
[  4]   0.00-10.00  sec  0.00 Bytes   0.00 bits/sec                   receiver
iperf3: interrupt - the client has terminated
```

# Limitations

Some limitations found so far:

If containerd rather than Docker is used as the container runtime, containerd 1.4 or later is required.

The bandwidth limits in the annotations cannot be updated dynamically; after changing them, the pod must be deleted and recreated.
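Because the annotations only take effect when the pod's network is set up, changing a limit in practice means recreating the pod. A sketch using the pod from the experiments above (the manifest file name is assumed):

```shell
# Patching the annotation on a running pod does NOT update the tc rules.
# Delete the pod and recreate it from the edited manifest instead.
kubectl delete pod perf1
kubectl apply -f perf1.yaml   # assumed name of the manifest from experiment 2
```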
