In computer networks, TCP and UDP are the transport-layer protocols. TCP is connection-oriented and the most widely used of the two; since it carries the bulk of application data, the quality of TCP transmission matters greatly, which makes it well worth studying. In a large network, all users share the common resources of the backbone, and TCP's congestion control algorithms manage and constrain transmission at a global level. This article explores how different congestion control algorithms affect TCP transfers, together with some packet-capture work.
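The "global management" mentioned above is, for classic Reno-style algorithms, additive-increase/multiplicative-decrease (AIMD): the congestion window grows steadily while the path is clear and is cut sharply on loss. A toy sketch of that behavior (illustrative only, not any real kernel implementation; the loss at round 5 is simulated):

```shell
#!/bin/sh
# Toy AIMD sketch: additive increase of cwnd per RTT,
# multiplicative decrease (halving) when a loss is detected.
cwnd=1
for rtt in 1 2 3 4 5 6 7 8; do
  if [ "$rtt" -eq 5 ]; then
    cwnd=$((cwnd / 2))   # simulated loss: halve the window
  else
    cwnd=$((cwnd + 1))   # no loss: grow by one segment per RTT
  fi
  echo "rtt=$rtt cwnd=$cwnd"
done
# cwnd climbs to 5, drops to 2 at the loss, then climbs back to 5
```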
This experiment requires the following environment and software:
The cloud server must be configured with a public IP.
List the congestion control algorithms currently available:
sysctl net.ipv4.tcp_available_congestion_control
Show the congestion control algorithm currently in use:
sysctl net.ipv4.tcp_congestion_control
Set the congestion control algorithm (requires root; note the -w flag and no spaces around the equals sign):
sysctl -w net.ipv4.tcp_congestion_control=reno
Load additional congestion control algorithm modules into the kernel:
modprobe -a tcp_westwood
modprobe -a tcp_vegas
modprobe -a tcp_bic
modprobe -a tcp_htcp
modprobe -a tcp_bbr
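The steps above can be combined into a small guard script that only switches algorithms after confirming the kernel offers the one requested (a sketch assuming a Linux host; loading modules and writing the sysctl both require root):

```shell
#!/bin/sh
# Enable a congestion control algorithm if the kernel offers it.
algo="${1:-bbr}"                    # algorithm to enable (default: bbr)
modprobe "tcp_$algo" 2>/dev/null    # try to load the module (no-op if built in)
available=$(sysctl -n net.ipv4.tcp_available_congestion_control)
case " $available " in
  *" $algo "*)
    # Algorithm is listed as available: make it the system default.
    sysctl -w net.ipv4.tcp_congestion_control="$algo" ;;
  *)
    echo "$algo is not available on this kernel" >&2
    exit 1 ;;
esac
```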
Iperf is a widely used tool for measuring and tuning network performance. Its value lies in being cross-platform: it provides standardized performance measurements for any network. Iperf has both client and server roles and can create data streams to measure one-way or bidirectional throughput between the two endpoints. Typical iperf output is a time-stamped report of the amount of data transferred and the measured throughput.
iPerf3 is a tool for actively measuring the maximum achievable bandwidth on IP networks. It supports tuning various parameters related to timing, buffers, and protocols (TCP, UDP, and SCTP, over IPv4 and IPv6). For each test it reports bandwidth, loss, and other metrics. iPerf3 is a new implementation that shares no code with the original iPerf and is not backward compatible with it. iPerf was originally developed by NLANR/DAST; iPerf3 is developed mainly by ESnet / Lawrence Berkeley National Laboratory and is released under a three-clause BSD license.
Run the following in a terminal:
iperf --help
Usage: iperf [-s|-c host] [options]
iperf [-h|--help] [-v|--version]
Client/Server:
-b, --bandwidth #[kmgKMG | pps] bandwidth to read/send at in bits/sec or packets/sec
-e, --enhanced use enhanced reporting giving more tcp/udp and traffic information
-f, --format [kmgKMG] format to report: Kbits, Mbits, KBytes, MBytes
--hide-ips hide ip addresses and host names within outputs
-i, --interval # seconds between periodic bandwidth reports
-l, --len #[kmKM] length of buffer in bytes to read or write (Defaults: TCP=128K, v4 UDP=1470, v6 UDP=1450)
-m, --print_mss print TCP maximum segment size
-o, --output <filename> output the report or error message to this specified file
-p, --port # client/server port to listen/send on and to connect
--permit-key permit key to be used to verify client and server (TCP only)
--sum-only output sum only reports
-u, --udp use UDP rather than TCP
-w, --window #[KM] TCP window size (socket buffer size)
-B, --bind <host>[:<port>][%<dev>] bind to <host>, ip addr (including multicast address) and optional port and device
-C, --compatibility for use with older versions does not sent extra msgs
-M, --mss # set TCP maximum segment size using TCP_MAXSEG
-N, --nodelay set TCP no delay, disabling Nagle's Algorithm
-S, --tos # set the socket's IP_TOS (byte) field
-Z, --tcp-congestion <algo> set TCP congestion control algorithm (Linux only)
Server specific:
-p, --port #[-#] server port(s) to listen on/connect to
-s, --server run in server mode
-1, --singleclient run one server at a time
--histograms enable latency histograms
--permit-key-timeout set the timeout for a permit key in seconds
--tcp-rx-window-clamp set the TCP receive window clamp size in bytes
--tap-dev #[] use TAP device to receive at L2 layer
-t, --time # time in seconds to listen for new connections as well as to receive traffic (default not set)
--udp-histogram #,# enable UDP latency histogram(s) with bin width and count, e.g. 1,1000=1(ms),1000(bins)
-B, --bind <ip>[%<dev>] bind to multicast address and optional device
-U, --single_udp run in single threaded UDP mode
--sum-dstip sum traffic threads based upon destination ip address (default is src ip)
-D, --daemon run the server as a daemon
-V, --ipv6_domain Enable IPv6 reception by setting the domain and socket to AF_INET6 (Can receive on both IPv4 and IPv6)
Client specific:
--bounceback request a bounceback test (use -l for size, defaults to 100 bytes)
--bounceback-congest request a concurrent full-duplex TCP stream
--bounceback-hold request the server to insert a delay of n milliseconds between its read and write
--bounceback-period request the client schedule a send every n milliseconds
--bounceback-no-quickack request the server not set the TCP_QUICKACK socket option (disabling TCP ACK delays) during a bounceback test
-c, --client <host> run in client mode, connecting to <host>
--connect-only run a connect only test
--connect-retries # number of times to retry tcp connect
-d, --dualtest Do a bidirectional test simultaneously (multiple sockets)
--fq-rate #[kmgKMG] bandwidth to socket pacing
--full-duplex run full duplex test using same socket
--ipg set the the interpacket gap (milliseconds) for packets within an isochronous frame
--isochronous <frames-per-second>:<mean>,<stddev> send traffic in bursts (frames - emulate video traffic)
--incr-dstip Increment the destination ip with parallel (-P) traffic threads
--incr-dstport Increment the destination port with parallel (-P) traffic threads
--incr-srcip Increment the source ip with parallel (-P) traffic threads
--incr-srcport Increment the source port with parallel (-P) traffic threads
--local-only Set don't route on socket
--near-congestion=[w] Use a weighted write delay per the sampled TCP RTT (experimental)
--no-connect-sync No sychronization after connect when -P or parallel traffic threads
--no-udp-fin No final server to client stats at end of UDP test
-n, --num #[kmgKMG] number of bytes to transmit (instead of -t)
-r, --tradeoff Do a fullduplexectional test individually
--tcp-quickack set the socket's TCP_QUICKACK option (off by default)
--tcp-write-prefetch set the socket's TCP_NOTSENT_LOWAT value in bytes and use event based writes
--tcp-write-times measure the socket write times at the application level
-t, --time # time in seconds to transmit for (default 10 secs)
--trip-times enable end to end measurements (requires client and server clock sync)
--txdelay-time time in seconds to hold back after connect and before first write
--txstart-time unix epoch time to schedule first write and start traffic
-B, --bind [<ip> | <ip:port>] bind ip (and optional port) from which to source traffic
-F, --fileinput <name> input the data to be transmitted from a file
-H, --ssm-host <ip> set the SSM source, use with -B for (S,G)
-I, --stdin input the data to be transmitted from stdin
-L, --listenport # port to receive fullduplexectional tests back on
-P, --parallel # number of parallel client threads to run
-R, --reverse reverse the test (client receives, server sends)
-S, --tos IP DSCP or tos settings
-T, --ttl # time-to-live, for multicast (default 1)
-V, --ipv6_domain Set the domain to IPv6 (send packets over IPv6)
-X, --peer-detect perform server version detection and version exchange
Miscellaneous:
-x, --reportexclude [CDMSV] exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
-y, --reportstyle C report as a Comma-Separated Values
-h, --help print this message and quit
-v, --version print version information and quit
[kmgKMG] Indicates options that support a k,m,g,K,M or G suffix
Lowercase format characters are 10^3 based and uppercase are 2^n based
(e.g. 1k = 1000, 1K = 1024, 1m = 1,000,000 and 1M = 1,048,576)
The TCP window size option can be set by the environment variable
TCP_WINDOW_SIZE. Most other options can be set by an environment variable
IPERF_<long option name>, such as IPERF_BANDWIDTH.
Source at <http://sourceforge.net/projects/iperf2/>
Report bugs to <[email protected]>
tcpdump can capture packets traveling over the network in full for later analysis. It supports filtering by network layer, protocol, host, network, or port, and provides logical operators such as and, or, and not to help strip away irrelevant traffic.
$ tcpdump --help
tcpdump version 4.9.3 -- Apple version 114.100.1
libpcap version 1.9.1
LibreSSL 3.3.6
Usage: tcpdump [-aAbdDefhHIJKlLnNOpqStuUvxX#] [ -B size ] [ -c count ]
[ -C file_size ] [ -E algo:secret ] [ -F file ] [ -G seconds ]
[ -i interface ] [ -j tstamptype ] [ -M secret ] [ --number ]
[ -Q in|out|inout ]
[ -r file ] [ -s snaplen ] [ --time-stamp-precision precision ]
[ --immediate-mode ] [ -T type ] [ --version ] [ -V file ]
[ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ]
[ -g ] [ -k (flags) ] [ -o ] [ -P ] [ -Q meta-data-expression ]
[ --apple-tzo offset ] [--apple-truncate ]
[ -Z user ] [ expression ]
The cloud server has a public address and acts as the server. Note: a port must be opened in the cloud provider's console to carry the iperf traffic; I opened port 5001.
The local machine has no public address and acts as the client.
Run on the server:
# Start iperf as a server, run it in the background, listening on port 5001
iperf -s -D -p 5001
Run on the client:
iperf -c xx.xx.xx.xx -p 5001 -t 20
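To compare algorithms, the client test can be repeated once per algorithm using iperf's -Z flag shown in the help output above (Linux only). This is a sketch: xx.xx.xx.xx stands for the server's public IP as elsewhere in this article, and the result filenames are my own choice.

```shell
#!/bin/sh
# Run one 20-second throughput test per congestion control algorithm
# and save each report to its own file via iperf's -o option.
server="xx.xx.xx.xx"
for algo in reno cubic bbr; do
  echo "testing $algo"
  iperf -c "$server" -p 5001 -t 20 -Z "$algo" -o "result_$algo.txt"
done
```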
Start tcpdump on the client or server to capture traffic for the given IP address and port 5001, writing the captured packets to file.pcap. Note that port filtering uses the "port" filter primitive; tcpdump's -p flag means something else entirely (do not put the interface into promiscuous mode).
tcpdump host xx.xx.xx.xx and port 5001 -w file.pcap
You can design further commands yourself; both iperf and iperf3 can measure network bandwidth, and --help lists the available options.
That covers today's content. This article has only briefly introduced adding congestion control algorithm modules to the Linux kernel, the network measurement tools iperf/iperf3, and the packet capture tool tcpdump, along with an example of throughput testing and packet capture.
Note that iperf and iperf3 are two different measurement tools (not packet capture tools), and some of their options are incompatible. Both expose a congestion control parameter (-Z in iperf, -C in iperf3), though examples of its use online are scarce, so it is worth experimenting with; alternatively, change the kernel's default congestion control algorithm with sysctl as shown earlier.