Controlling Traffic with TC

Introduction to Simulating Transmission Delay

netem

netem is a network emulation module provided by Linux kernels 2.6 and later. It can reproduce complex wide-area network behavior, such as low bandwidth, transmission delay, and packet loss, on an otherwise well-behaved LAN. Most distributions shipping a 2.6+ kernel enable it, including Fedora, Ubuntu, Red Hat, openSUSE, CentOS, and Debian.

tc

tc is a Linux tool whose full name is traffic control. tc drives how netem operates, which means that using netem has two prerequisites: netem support must be built into the kernel, and the tc userspace tool must be installed.
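Before emulating anything, it is worth verifying both prerequisites. A minimal sketch, assuming a POSIX shell; check_bin is an illustrative helper name, not part of tc or the kernel:

```shell
#!/bin/sh
# Sketch: verify the two prerequisites for using netem.
# check_bin is a hypothetical helper, not a standard utility.
check_bin() {
    command -v "$1" >/dev/null 2>&1
}

if check_bin tc; then
    echo "tc is available"
else
    echo "tc is missing: install the iproute2 (or iproute) package" >&2
fi

# The netem qdisc is usually built as the sch_netem kernel module and is
# loaded on demand; as root you could check it with: lsmod | grep sch_netem
```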

Viewing Network Interface Information

The usual command for inspecting network information is ifconfig.

On a Linux machine:
[root@localhost ~]# ifconfig
enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.216.8.16  netmask 255.255.248.0  broadcast 10.216.15.255
        inet6 fe80::2a19:67f:fd7b:f6f6  prefixlen 64  scopeid 0x20<link>
        ether c8:d3:ff:ba:c8:6e  txqueuelen 1000  (Ethernet)
        RX packets 303591  bytes 24430143 (23.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2383  bytes 422970 (413.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

On a Mac:
jc@jc:~$ ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    ether a4:5e:60:ed:37:3d
    inet 10.242.23.215 netmask 0xffff0000 broadcast 10.242.255.255
    media: autoselect
    status: active

As the output above shows, running ifconfig prints the network details: the interface on the Linux machine is enp4s0, and the interface on the Mac is en0. Note: to identify the right interface, look for the entry whose inet field carries the machine's IP address.

Simulating Transmission Delay

Basic command form:
tc qdisc add dev DEV root netem delay 100ms
where tc qdisc add is the fixed command prefix and DEV is the interface name.
To add a transmission delay on the enp4s0 interface, configure it as follows:

[root@localhost ~]# tc qdisc show 
qdisc noqueue 0: dev lo root refcnt 2
qdisc pfifo_fast 0: dev enp4s0 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev virbr0 root refcnt 2
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev br-4bc03776afc3 root refcnt 2
qdisc noqueue 0: dev br-69d033945dee root refcnt 2
qdisc noqueue 0: dev br-9440cbbd9b61 root refcnt 2
qdisc noqueue 0: dev br-a07abf9888d5 root refcnt 2
qdisc noqueue 0: dev veth1aa4b64 root refcnt 2
qdisc noqueue 0: dev veth0d4b251 root refcnt 2
qdisc noqueue 0: dev vethe73ce26 root refcnt 2
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# tc qdisc add dev enp4s0 root netem delay 100ms
[root@localhost ~]#
[root@localhost ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc netem 8003: dev enp4s0 root refcnt 2 limit 1000 delay 100.0ms
qdisc noqueue 0: dev virbr0 root refcnt 2
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev br-4bc03776afc3 root refcnt 2
qdisc noqueue 0: dev br-69d033945dee root refcnt 2
qdisc noqueue 0: dev br-9440cbbd9b61 root refcnt 2
qdisc noqueue 0: dev br-a07abf9888d5 root refcnt 2
qdisc noqueue 0: dev veth1aa4b64 root refcnt 2
qdisc noqueue 0: dev veth0d4b251 root refcnt 2
qdisc noqueue 0: dev vethe73ce26 root refcnt 2
  • tc qdisc show: list the qdisc configuration of every interface
  • tc qdisc add dev enp4s0 root netem delay 100ms: configure the interface to simulate a 100 ms delay
  • qdisc netem 8003: dev enp4s0 root refcnt 2 limit 1000 delay 100.0ms: the resulting delay configuration

Comparing latency before and after the change
The Linux server's network behaves as follows:
#before the delay is configured
jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=3.109 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=4.086 ms
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=3.001 ms
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=4.364 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=3.627 ms
64 bytes from 10.216.8.16: icmp_seq=5 ttl=60 time=3.662 ms
64 bytes from 10.216.8.16: icmp_seq=6 ttl=60 time=2.217 ms
^C
--- 10.216.8.16 ping statistics ---
7 packets transmitted, 7 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.217/3.438/4.364/0.671 ms

#after the delay is configured
jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=103.129 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=102.081 ms
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=102.759 ms
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=103.481 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=103.044 ms
64 bytes from 10.216.8.16: icmp_seq=5 ttl=60 time=102.210 ms
64 bytes from 10.216.8.16: icmp_seq=6 ttl=60 time=102.389 ms
64 bytes from 10.216.8.16: icmp_seq=7 ttl=60 time=103.228 ms
^C
--- 10.216.8.16 ping statistics ---
8 packets transmitted, 8 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 102.081/102.790/103.481/0.481 ms

The ping output shows a clear additional delay of about 100 ms.
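The add and del commands used in this article differ only in the action word, so they can be generated by a small wrapper. A sketch, assuming a POSIX shell; netem_cmd is an illustrative helper, not a tc feature:

```shell
#!/bin/sh
# Sketch: build a tc/netem command line from an action, a device, and the
# netem options. netem_cmd is a hypothetical helper; run its output as root.
netem_cmd() {
    action=$1   # add | change | del
    dev=$2      # interface name, e.g. enp4s0
    shift 2
    echo "tc qdisc $action dev $dev root netem $*"
}

netem_cmd add enp4s0 delay 100ms
# tc qdisc add dev enp4s0 root netem delay 100ms
```

For example, `netem_cmd add enp4s0 delay 100ms | sh` (as root) would apply the same 100 ms delay shown above.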

Removing the Configuration

Basic command form:
tc qdisc del dev DEV root netem delay 100ms
where tc qdisc del is the fixed command prefix. The trailing netem parameters are actually optional here: tc qdisc del dev DEV root is enough to remove the root qdisc.

[root@localhost ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc netem 8003: dev enp4s0 root refcnt 2 limit 1000 delay 100.0ms
qdisc noqueue 0: dev virbr0 root refcnt 2
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev br-4bc03776afc3 root refcnt 2
qdisc noqueue 0: dev br-69d033945dee root refcnt 2
qdisc noqueue 0: dev br-9440cbbd9b61 root refcnt 2
qdisc noqueue 0: dev br-a07abf9888d5 root refcnt 2
qdisc noqueue 0: dev veth1aa4b64 root refcnt 2
qdisc noqueue 0: dev veth0d4b251 root refcnt 2
qdisc noqueue 0: dev vethe73ce26 root refcnt 2
[root@localhost ~]# tc qdisc del dev enp4s0 root  netem delay 100ms
[root@localhost ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc pfifo_fast 0: dev enp4s0 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev virbr0 root refcnt 2
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc noqueue 0: dev docker0 root refcnt 2
qdisc noqueue 0: dev br-4bc03776afc3 root refcnt 2
qdisc noqueue 0: dev br-69d033945dee root refcnt 2
qdisc noqueue 0: dev br-9440cbbd9b61 root refcnt 2
qdisc noqueue 0: dev br-a07abf9888d5 root refcnt 2
qdisc noqueue 0: dev veth1aa4b64 root refcnt 2
qdisc noqueue 0: dev veth0d4b251 root refcnt 2
qdisc noqueue 0: dev vethe73ce26 root refcnt 2
  • tc qdisc show: list the qdisc configuration of every interface
  • tc qdisc del dev enp4s0 root netem delay 100ms: remove the 100 ms delay configuration from enp4s0

Simulating Packet Loss

Basic command form:
tc qdisc add dev enp4s0 root netem loss 50%
Configuring packet loss:

#set a 50% packet loss rate
[root@localhost ~]# tc qdisc add dev enp4s0 root netem loss 50%
#check that the 50% loss rate was configured correctly
[root@localhost ~]# tc qdisc show dev enp4s0
qdisc netem 8005: root refcnt 2 limit 1000 loss 50%

Verifying the configuration with ping

#before configuring packet loss
jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=1.680 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=5.192 ms
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=3.715 ms
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=4.588 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=2.856 ms
64 bytes from 10.216.8.16: icmp_seq=5 ttl=60 time=2.206 ms
64 bytes from 10.216.8.16: icmp_seq=6 ttl=60 time=4.099 ms
64 bytes from 10.216.8.16: icmp_seq=7 ttl=60 time=3.201 ms
^C
--- 10.216.8.16 ping statistics ---
8 packets transmitted, 8 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.680/3.442/5.192/1.113 ms

#after configuring packet loss
jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=2.771 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=5.864 ms
Request timeout for icmp_seq 2
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=2.954 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=3.155 ms
Request timeout for icmp_seq 5
64 bytes from 10.216.8.16: icmp_seq=6 ttl=60 time=2.051 ms
64 bytes from 10.216.8.16: icmp_seq=7 ttl=60 time=3.847 ms
Request timeout for icmp_seq 8
64 bytes from 10.216.8.16: icmp_seq=9 ttl=60 time=2.982 ms
^C
--- 10.216.8.16 ping statistics ---
10 packets transmitted, 7 packets received, 30.0% packet loss
round-trip min/avg/max/stddev = 2.051/3.375/5.864/1.129 ms

With loss configured, about 30% of the requests time out, i.e. are dropped. This differs from the configured 50% because the loss is probabilistic and the sample is small, but the goal of simulating dropped packets is clearly achieved.
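The observed loss rate can be read mechanically from ping's summary line. A sketch assuming a POSIX shell with sed; loss_pct is an illustrative helper, and the sample line is copied from the output above:

```shell
#!/bin/sh
# Sketch: extract the packet-loss percentage from a ping summary line.
# loss_pct is a hypothetical helper; it handles both the BSD/macOS format
# ("30.0% packet loss") and the GNU/Linux format ("0% packet loss").
loss_pct() {
    sed -n 's/.*[ ,]\([0-9.]*\)% packet loss.*/\1/p'
}

echo "10 packets transmitted, 7 packets received, 30.0% packet loss" | loss_pct
# 30.0
```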

Simulating Duplicate Packets

Basic command form:
tc qdisc add dev enp4s0 root netem duplicate 50%

[root@localhost ~]# tc qdisc add dev enp4s0 root netem duplicate 20%
[root@localhost ~]# tc qdisc show dev enp4s0
qdisc netem 8006: root refcnt 2 limit 1000 duplicate 20%
  • tc qdisc add dev enp4s0 root netem duplicate 20%: set a 20% packet duplication rate

With a 20% duplication rate configured, pinging the corresponding server gives:
jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=2.860 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=6.321 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=6.341 ms (DUP!)
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=3.431 ms
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=3.446 ms (DUP!)
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=4.553 ms
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=4.572 ms (DUP!)
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=2.474 ms
^C
--- 10.216.8.16 ping statistics ---
5 packets transmitted, 5 packets received, +3 duplicates, 0.0% packet loss
round-trip min/avg/max/stddev = 2.474/4.250/6.341/1.381 ms
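The duplicated replies above are easy to count from captured ping output, since each one is tagged with (DUP!). A sketch assuming a POSIX shell; dup_count is an illustrative helper:

```shell
#!/bin/sh
# Sketch: count duplicated replies in captured ping output by counting
# lines tagged "(DUP!)". dup_count is a hypothetical helper.
dup_count() {
    grep -c '(DUP!)' || true   # grep -c still prints 0, but exits non-zero
}

printf '%s\n' \
  "64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=6.321 ms" \
  "64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=6.341 ms (DUP!)" | dup_count
# 1
```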

Simulating Packet Corruption

Basic command form:
tc qdisc add dev enp4s0 root netem corrupt 20%
The following sets a 20% corruption rate and then confirms the interface configuration:

[root@localhost ~]# tc qdisc add dev enp4s0 root netem corrupt 20%
[root@localhost ~]# tc qdisc show dev enp4s0
qdisc netem 8007: root refcnt 2 limit 1000 corrupt 20%

Pinging the IP now shows 27% of the requests timing out; corrupted packets are discarded on receipt (for example, due to checksum failures), which shows the packet-corruption simulation is working.

jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=1.924 ms
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=2.925 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=2.816 ms
64 bytes from 10.216.8.16: icmp_seq=5 ttl=60 time=3.075 ms
Request timeout for icmp_seq 6
64 bytes from 10.216.8.16: icmp_seq=7 ttl=60 time=2.878 ms
64 bytes from 10.216.8.16: icmp_seq=8 ttl=60 time=2.699 ms
64 bytes from 10.216.8.16: icmp_seq=9 ttl=60 time=2.969 ms
64 bytes from 10.216.8.16: icmp_seq=10 ttl=60 time=2.782 ms
^C
--- 10.216.8.16 ping statistics ---
11 packets transmitted, 8 packets received, 27.3% packet loss
round-trip min/avg/max/stddev = 1.924/2.759/3.075/0.334 ms

Simulating Packet Reordering

Basic command form:
tc qdisc add dev enp4s0 root netem delay 100ms reorder 25% 50%
This configures enp4s0 so that 25% of packets (with a correlation of 50%) are sent immediately, while the rest are delayed by 100 ms.
Configuration:

[root@localhost ~]# tc qdisc add dev enp4s0 root netem delay 100ms reorder 25% 50%
[root@localhost ~]# tc qdisc show dev enp4s0
qdisc netem 8008: root refcnt 2 limit 1000 delay 100.0ms reorder 25% 50% gap 1

Pinging the IP gives the results below: roughly 4/16 ≈ 25% of the requests return almost immediately (within 5 ms), while the remaining ~75% take 100 ms or more.

jc@jc:~$ ping 10.216.8.16
PING 10.216.8.16 (10.216.8.16): 56 data bytes
64 bytes from 10.216.8.16: icmp_seq=0 ttl=60 time=104.439 ms
64 bytes from 10.216.8.16: icmp_seq=1 ttl=60 time=103.857 ms
64 bytes from 10.216.8.16: icmp_seq=2 ttl=60 time=103.842 ms
64 bytes from 10.216.8.16: icmp_seq=3 ttl=60 time=102.541 ms
64 bytes from 10.216.8.16: icmp_seq=4 ttl=60 time=103.499 ms
64 bytes from 10.216.8.16: icmp_seq=5 ttl=60 time=103.394 ms
64 bytes from 10.216.8.16: icmp_seq=6 ttl=60 time=103.950 ms
64 bytes from 10.216.8.16: icmp_seq=7 ttl=60 time=104.413 ms
64 bytes from 10.216.8.16: icmp_seq=8 ttl=60 time=103.290 ms
64 bytes from 10.216.8.16: icmp_seq=9 ttl=60 time=189.894 ms
64 bytes from 10.216.8.16: icmp_seq=10 ttl=60 time=103.297 ms
64 bytes from 10.216.8.16: icmp_seq=11 ttl=60 time=3.160 ms
64 bytes from 10.216.8.16: icmp_seq=12 ttl=60 time=2.708 ms
64 bytes from 10.216.8.16: icmp_seq=13 ttl=60 time=4.405 ms
64 bytes from 10.216.8.16: icmp_seq=14 ttl=60 time=103.261 ms
64 bytes from 10.216.8.16: icmp_seq=15 ttl=60 time=104.344 ms
64 bytes from 10.216.8.16: icmp_seq=16 ttl=60 time=2.856 ms
^C
--- 10.216.8.16 ping statistics ---
17 packets transmitted, 17 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.708/85.126/189.894/49.649 ms
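Counting the "fast" replies in the capture above confirms the roughly 25% immediate-send rate. A sketch assuming a POSIX shell with awk; fast_count and the 50 ms threshold are illustrative choices, not part of tc:

```shell
#!/bin/sh
# Sketch: count replies that came back in under 50 ms, i.e. packets the
# reorder rule sent immediately. fast_count is a hypothetical helper.
fast_count() {
    awk -F'time=' '/time=/ { split($2, a, " "); if (a[1] + 0 < 50) n++ }
                   END { print n + 0 }'
}

printf '%s\n' \
  "64 bytes from 10.216.8.16: icmp_seq=11 ttl=60 time=3.160 ms" \
  "64 bytes from 10.216.8.16: icmp_seq=14 ttl=60 time=103.261 ms" | fast_count
# 1
```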

Delaying a Specific IP

Linux traffic control involves three building blocks: creating queues, creating classes, and creating filters.

  • 1) Bind a queueing discipline (qdisc) to the physical network device (e.g. eth0, enp4s0)
  • 2) Create classes under that qdisc
  • 3) Attach a route-based filter to each class
  • 4) Finally, build the specific routing table that works together with the filters
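The tc part of the steps above can be sketched as a single dry-run script that only prints the commands it would execute; pipe its output to sh as root to apply them. plan_per_ip_delay is an illustrative helper; the device, handles, delay, and IP are this article's example values:

```shell
#!/bin/sh
# Sketch (dry run): print the tc command sequence that delays traffic to a
# single destination IP. plan_per_ip_delay is a hypothetical helper.
plan_per_ip_delay() {
    dev=$1 ip=$2 delay=$3
    # 1) root prio qdisc with 4 bands (the bands become classes 1:1..1:4)
    echo "tc qdisc add dev $dev root handle 1: prio bands 4"
    # 2) hang a netem delay qdisc under the 4th class
    echo "tc qdisc add dev $dev parent 1:4 handle 40: netem delay $delay"
    # 3) filter: steer packets destined for $ip into class 1:4
    echo "tc filter add dev $dev protocol ip parent 1:0 prio 4 u32 match ip dst $ip flowid 1:4"
}

plan_per_ip_delay enp4s0 10.242.23.215 200ms
```

To apply it for real: `plan_per_ip_delay enp4s0 10.242.23.215 200ms | sh` as root; `tc qdisc del dev enp4s0 root` removes the whole setup afterwards.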

Creating a root queue with 4 classes

Create a queue on the enp4s0 interface at the root, with handle 1:. (This qdisc gets 4 classes under handle 1:. Without filters, packets arriving from the IP layer are distributed into classes 1 through 3 according to the TOS (Type of Service) field in the IP header, following the same rules as pfifo_fast; the 4th class is unused until the next command attaches a tc rule to it.)
add dev enp4s0 root handle 1:: create a root queue with handle 1:
bands 4: create 4 classes

tc qdisc add dev enp4s0 root handle 1: prio bands 4

Setting the policy for a class

Attach a qdisc under class 1:4 that delays matched packets by 200 ms.
parent 1:4: the class this qdisc hangs under; the new qdisc's own handle is 40:
netem delay 200ms: delay packets by 200 ms

tc qdisc add dev enp4s0 parent 1:4 handle 40: netem delay 200ms

Binding the filter

Add a filter to the root qdisc that directs packets destined for the specified IP into the 4th class:
protocol ip: the filter inspects the protocol field of the packets
prio 4: the priority with which matching packets are processed
u32 match: the u32 selector (everything after u32 in the command) matches a particular flow
ip dst 10.242.23.215: match the destination IP
flowid 1:4: send matching traffic to class 1:4 for processing

tc filter add dev enp4s0  protocol ip parent 1:0 prio 4 u32 match ip dst 10.242.23.215 flowid 1:4

Practice

Steps
#set up the queue and its classes
[root@localhost ~]# tc qdisc add dev enp4s0 root handle 1: prio bands 4
[root@localhost ~]# tc qdisc show dev enp4s0
qdisc prio 1: root refcnt 2 bands 4 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

#add the 200 ms delay policy to class 4
[root@localhost ~]# tc qdisc add dev enp4s0 parent 1:4 handle 40: netem delay 200ms
[root@localhost ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc prio 1: dev enp4s0 root refcnt 2 bands 4 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc netem 40: dev enp4s0 parent 1:4 limit 1000 delay 200ms

#add the filter: requests to the specified IP are steered to class 4
[root@localhost ~]# tc filter add dev enp4s0 protocol ip parent 1:0 prio 4 u32 match ip dst 10.242.23.215 flowid 1:4
Results

ping 10.242.23.215: round-trip time is 200 ms+
ping 10.216.8.202: round-trip time is about 1 ms
Conclusion: the per-IP delay simulation works as intended.

[root@localhost ~]# ping 10.242.23.215
PING 10.242.23.215 (10.242.23.215) 56(84) bytes of data.
64 bytes from 10.242.23.215: icmp_seq=1 ttl=60 time=207 ms
64 bytes from 10.242.23.215: icmp_seq=2 ttl=60 time=294 ms
64 bytes from 10.242.23.215: icmp_seq=3 ttl=60 time=213 ms
64 bytes from 10.242.23.215: icmp_seq=4 ttl=60 time=235 ms
64 bytes from 10.242.23.215: icmp_seq=5 ttl=60 time=257 ms
64 bytes from 10.242.23.215: icmp_seq=6 ttl=60 time=279 ms
^C
--- 10.242.23.215 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 207.490/248.075/294.789/32.445 ms
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# ping 10.216.8.202
PING 10.216.8.202 (10.216.8.202) 56(84) bytes of data.
64 bytes from 10.216.8.202: icmp_seq=1 ttl=64 time=0.873 ms
64 bytes from 10.216.8.202: icmp_seq=2 ttl=64 time=0.434 ms
64 bytes from 10.216.8.202: icmp_seq=3 ttl=64 time=0.472 ms
64 bytes from 10.216.8.202: icmp_seq=4 ttl=64 time=0.472 ms
^C
--- 10.216.8.202 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.434/0.562/0.873/0.182 ms

