Link Aggregation is a computer networking term for combining multiple physical ports into one logical port so that inbound and outbound traffic is load-shared across the member ports. The switch decides which member port a packet is sent out of toward the peer switch according to the user-configured load-sharing policy. When the switch detects a link failure on one member port, it stops sending packets on that port and recomputes the outgoing port for traffic among the remaining links according to the load-sharing policy; once the failed port recovers, it resumes sending and receiving. Link aggregation is an important technique for increasing link bandwidth and providing link resilience and redundancy.
Benefits of link aggregation: NIC link aggregation binds several network cards together, so if one card fails the network keeps working. This effectively guards against the damage a failed NIC would otherwise cause and can also improve network throughput. It is more stable than a single NIC; on heavily loaded servers, link aggregation prevents a NIC failure from making the server unresponsive.
NIC link aggregation commonly comes in two flavors, "bond" and "team": in this kind of setup a bond is usually built from two NICs, while a team can aggregate as many as eight NICs.
This article focuses mainly on the bond mode.
Bond, i.e. bonding, is a Linux kernel module that binds several physical NICs to one virtual NIC. By hooking into the network drivers it makes multiple NICs look like a single Ethernet interface, so the outside world sees only one IP address. It is typically used to eliminate a NIC as a single point of failure or to relieve a heavily loaded NIC.
Bond requires the physical NICs to run in promiscuous mode. In promiscuous mode a NIC does not only accept Ethernet frames whose destination MAC address is its own; it accepts every frame it sees on the wire. To make several NICs cooperate, the bond copies its own MAC address onto each physical NIC so that they all share one MAC address. This in turn requires that the NICs allow the operating system to overwrite their MAC address.
For a bond that contains a single physical NIC, the bond interface's MAC address is the same as that NIC's hardware address. With multiple physical NICs, the bond interface acts as the master and the physical NICs are enslaved to it; the bond takes its MAC address from the first slave that is attached and copies it to the remaining slaves. When configuring the interfaces we therefore create the bond (master) interface first and then attach each physical NIC to it as a slave.
The bonding driver supports seven modes in total; the most commonly used are the load-balancing mode (mode 0) and the active-backup mode (mode 1). Mode 0 is recommended where traffic volume is high, while mode 1 is recommended where reliability matters most.
1) Mode 0
This mode uses a round-robin policy: packets are sent out sequentially over each bonded NIC, which provides both load balancing and fault tolerance. In mode 0 the bond virtual interface and the two or more bonded physical NICs all share the same MAC address; the bond interface's MAC address is that of one of the physical NICs, chosen by an algorithm inside the bonding driver.
In mode 0, if the packets of one connection or session leave through different ports and then travel over different links, they may well arrive at the client out of order. Out-of-order packets usually have to be retransmitted, which lowers network throughput. In addition, if the two or more bonded NICs are connected to the same switch, the corresponding switch ports must also be configured as an aggregation group.
2) Mode 1
This mode uses an active-backup policy: of all the NICs in the bond, only one is active at any given moment, and another NIC is activated only when the active one fails. In this mode the bonded NICs and the bond virtual interface all share the same MAC address, namely the MAC address of the NIC that is active when the bond is brought up.
This mode requires a fast switchover from the active NIC to a backup NIC: when the active NIC fails, traffic must fail over to a backup promptly. During the switchover the upper-layer applications are barely affected, because the bonding driver temporarily holds their packets in a buffer and sends them once the backup NIC is up. If the switchover takes too long, however, the buffer overflows and packets are lost.
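Either mode can be selected when the bond connection is created, as in the experiments below, and the bonding driver exposes the mode actually in use through sysfs. A minimal sketch, assuming the bond interface is called bond0 as in the rest of this article:
[root@server ~]# modinfo bonding | grep -i mode   # list the modes the bonding module supports
[root@server ~]# cat /sys/class/net/bond0/bonding/mode   # mode of an existing bond, e.g. "balance-rr 0"
[root@server ~]# nmcli connection modify bond0 bond.options "mode=active-backup,miimon=100"   # change the mode through NetworkManager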
Experiment setup:
[root@server ~]# ifconfig # list all network devices
# the first NIC
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 52:54:00:00:05:0a txqueuelen 1000 (Ethernet)
RX packets 40 bytes 4968 (4.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# the second NIC; neither NIC has an IP address yet
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 52:54:00:d3:36:54 txqueuelen 1000 (Ethernet)
RX packets 40 bytes 4968 (4.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
1) Now we start the experiment by first creating a bond0 interface:

Option | Meaning
---|---
con-name | connection name
ifname | device name
type | connection type
ip4 | IPv4 address to assign
[root@server ~]# nmcli connection add con-name bond0 ifname bond0 type bond ip4 123.0.0.1/24
Connection 'bond0' (c10ed0f1-4818-4646-a0a5-6884baa6acb5) successfully added.
[root@server ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 123.0.0.1 netmask 255.255.255.0 broadcast 123.0.0.255
ether f2:c0:b3:c1:37:77 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
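NetworkManager normally activates the new connection on its own (the ifconfig output above shows bond0 already up); if it does not, the bond can be brought up by hand, for example:
[root@server ~]# nmcli connection up bond0   # activate the bond0 connection
[root@server ~]# nmcli device status         # bond0 should now be listed as connected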
2) Check that a configuration file was created for the device:
[root@server network-scripts]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0 # device name, as shown by ifconfig
BONDING_OPTS=mode=balance-rr # mode: round-robin
TYPE=Bond # device type
BONDING_MASTER=yes
BOOTPROTO=none # static IP
IPADDR0=123.0.0.1 # IP address
PREFIX0=24 # subnet prefix length
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=bond0 # connection name, reflected in the file name
UUID=c10ed0f1-4818-4646-a0a5-6884baa6acb5
ONBOOT=yes # bring up at boot
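The same settings can also be read back through nmcli instead of opening the file, for example:
[root@server ~]# nmcli connection show bond0 | grep bond.options     # expected: mode=balance-rr
[root@server ~]# nmcli connection show bond0 | grep ipv4.addresses   # expected: 123.0.0.1/24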
3) Add the two NICs as slaves:

Option | Meaning
---|---
con-name | connection name
ifname | device name
type | connection type
bond-slave | a slave of a bond
master | which interface is its master
[root@server ~]# nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0
Connection 'eth0' (8b289485-10e4-4e8d-90ff-64b70c5611f2) successfully added.
[root@server ~]# nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0
Connection 'eth1' (3d9d64de-8a77-430f-b9f4-6c49bf8b3fdd) successfully added.
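If the slave connections are not activated automatically, they can be brought up explicitly, for example:
[root@server ~]# nmcli connection up eth0   # activate the eth0 slave connection
[root@server ~]# nmcli connection up eth1   # activate the eth1 slave connection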
4) View the NIC configuration files (eth0 and eth1 look the same):
[root@server network-scripts]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
UUID=8b289485-10e4-4e8d-90ff-64b70c5611f2
DEVICE=eth0
ONBOOT=yes
MASTER=bond0 # its master is bond0
SLAVE=yes # it is a slave
5) Now we can list the connections:
[root@server ~]# nmcli connection show
NAME UUID TYPE DEVICE
eth0 8b289485-10e4-4e8d-90ff-64b70c5611f2 802-3-ethernet eth0
bond0 c10ed0f1-4818-4646-a0a5-6884baa6acb5 bond bond0
eth1 3d9d64de-8a77-430f-b9f4-6c49bf8b3fdd 802-3-ethernet eth1
6) Test connectivity:
[root@server ~]# ping 123.0.0.2 # the connection works
PING 123.0.0.2 (123.0.0.2) 56(84) bytes of data.
64 bytes from 123.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms
64 bytes from 123.0.0.2: icmp_seq=2 ttl=64 time=0.286 ms
^C
--- 123.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.169/0.227/0.286/0.060 ms
7) Check the bond0 runtime status file:
[root@server ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin) # round-robin mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0 # slave eth0
MII Status: up # link up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:05:0a
Slave queue ID: 0
Slave Interface: eth1 # slave eth1
MII Status: up # link up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:d3:36:54
Slave queue ID: 0
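To see the round-robin policy spreading traffic across the slaves, the per-slave transmit counters can be watched while some traffic is generated; a sketch, assuming the peer 123.0.0.2 used above is reachable:
[root@server ~]# ping 123.0.0.2 > /dev/null &         # generate traffic in the background
[root@server ~]# ip -s link show eth0 | grep -A1 TX:  # TX byte/packet counters for eth0
[root@server ~]# ip -s link show eth1 | grep -A1 TX:  # eth1's counters should grow at a similar rate
[root@server ~]# kill %1                              # stop the background ping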
Next we rebuild the bond in active-backup mode (mode 1).
1) Delete all the connections created above:
[root@server ~]# nmcli connection delete bond0 # delete the bond0 connection
[root@server ~]# nmcli connection delete eth0 # delete its slave connection eth0
[root@server ~]# nmcli connection delete eth1 # delete its slave connection eth1
[root@server ~]# cd /etc/sysconfig/network-scripts/
[root@server network-scripts]# ls # the ifcfg files are gone, deletion succeeded
ifcfg-Ethernet_connection_1 ifdown-isdn ifdown-tunnel ifup-isdn ifup-Team
ifcfg-lo ifdown-post ifup ifup-plip ifup-TeamPort
ifdown ifdown-ppp ifup-aliases ifup-plusb ifup-tunnel
ifdown-bnep ifdown-routes ifup-bnep ifup-post ifup-wireless
ifdown-eth ifdown-sit ifup-eth ifup-ppp init.ipv6-global
ifdown-ippp ifdown-Team ifup-ippp ifup-routes network-functions
ifdown-ipv6 ifdown-TeamPort ifup-ipv6 ifup-sit network-functions-ipv
2) Verify the deletion:
[root@server ~]# nmcli connection show
NAME UUID TYPE DEVICE
# empty, so the deletion succeeded
3) Create an active-backup bond connection:
If mode is not specified, the default is mode 0 (round-robin); here we explicitly set active-backup.
[root@server ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode active-backup ip4 123.0.0.1/24
Connection 'bond0' (5696d59c-9f40-4212-91c0-0e757fe5dd44) successfully added.
4) Add the two NICs as slaves of bond0:
[root@server ~]# nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0
Connection 'eth0' (207acbab-c0dc-4166-9401-18c9e623fe38) successfully added.
[root@server ~]# nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0
Connection 'eth1' (2ed5e51d-5ead-4083-8b76-255983223ff3) successfully added.
5) View the configuration files:
The bond connection file:
[root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS=mode=active-backup # mode: active-backup
TYPE=Bond # bond type
BONDING_MASTER=yes # this is the master interface
BOOTPROTO=none # static IP
IPADDR0=123.0.0.1
PREFIX0=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=bond0 # connection name
UUID=5696d59c-9f40-4212-91c0-0e757fe5dd44
ONBOOT=yes
The eth0 connection file:
[root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
UUID=207acbab-c0dc-4166-9401-18c9e623fe38
DEVICE=eth0
ONBOOT=yes
MASTER=bond0 # its master is bond0
SLAVE=yes # it is a slave
6) Test: disable eth0 and check whether eth1 takes over:
[root@server ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup) # active-backup mode
Primary Slave: None
Currently Active Slave: eth0 # eth0 is currently active
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up # eth0 is up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:00:05:0a
Slave queue ID: 0
Slave Interface: eth1
MII Status: up # eth1 is up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:d3:36:54
Slave queue ID: 0
Disable the eth0 NIC:
[root@server ~]# ifconfig eth0 down # take eth0 down
[root@server ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1 # eth1 is now active
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: down # eth0 is down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 52:54:00:00:05:0a
Slave queue ID: 0
Slave Interface: eth1
MII Status: up # eth1 is up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:d3:36:54
Slave queue ID: 0
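To complete the failover test, eth0 can be brought back up. Since no primary slave is configured, the bond is expected to keep using eth1 rather than failing back; a sketch:
[root@server ~]# ifconfig eth0 up                                        # bring eth0 back
[root@server ~]# grep "Currently Active Slave" /proc/net/bonding/bond0   # expected to still report eth1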
That completes the active-backup bond network configuration.
Link aggregation with team is almost the same as dual-NIC bonding: it binds multiple NICs together for active/backup redundancy and load balancing and increases the available network throughput. The difference from the bond setup above is that the bond was built from two NICs, whereas this aggregation mode, called team, can aggregate as many as eight NICs.

Team runner | Meaning
---|---
broadcast | broadcast, fault tolerance
roundrobin | round-robin
activebackup | active/backup
loadbalance | load balancing

Load balancing here means traffic is handed to whichever port is idle.
Note that this load-balancing runner is the advantage team aggregation has over bond aggregation.
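As a sketch, a load-balancing team would be created the same way as the active-backup team configured below, only with a different runner name in the JSON config:
[root@server ~]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"loadbalance"}}' ip4 123.0.0.1/24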
Environment setup:
1) Check the environment:
[root@server ~]# nmcli connection show # no connections
NAME UUID TYPE DEVICE
[root@server ~]# ifconfig # two NICs
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 52:54:00:00:05:0a txqueuelen 1000 (Ethernet)
RX packets 493 bytes 30680 (29.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 67 bytes 6208 (6.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 52:54:00:d3:36:54 txqueuelen 1000 (Ethernet)
RX packets 660 bytes 46078 (44.9 KiB)
RX errors 0 dropped 3 overruns 0 frame 0
TX packets 75 bytes 4414 (4.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
2) Create an active-backup team0 connection:
Fixed syntax:

Option | Meaning
---|---
config | sets the runner configuration
'{"runner":{"name":"NAME"}}' | which team runner (mode) to use
[root@server ~]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"activebackup"}}' ip4 123.0.0.1/24
Connection 'team0' (c2bc8a9c-c9b1-445e-915a-7f6bd93f5315) successfully added.
[root@server ~]# ifconfig team0
team0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 123.0.0.1 netmask 255.255.255.0 broadcast 123.0.0.255
ether ce:76:42:5b:af:2a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
3) Add the specified NICs:
Add eth0 and eth1 to team0.
[root@server ~]# nmcli connection add con-name eth0 ifname eth0 type team-slave master team0
Connection 'eth0' (37131fe3-5038-4013-89a8-1f870c609859) successfully added.
[root@server ~]# nmcli connection add con-name eth1 ifname eth1 type team-slave master team0
Connection 'eth1' (adbfe350-206d-4e48-acea-63416aee4c74) successfully added.
4) List the connections:
[root@server ~]# nmcli connection show # created successfully
NAME UUID TYPE DEVICE
eth1 adbfe350-206d-4e48-acea-63416aee4c74 802-3-ethernet eth1
team0 c2bc8a9c-c9b1-445e-915a-7f6bd93f5315 team team0
eth0 37131fe3-5038-4013-89a8-1f870c609859 802-3-ethernet eth0
5) View the team0 configuration file:
[root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=team0 # device name team0
TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\"}}" # active-backup runner
DEVICETYPE=Team # team device type
BOOTPROTO=none # static addressing
IPADDR0=123.0.0.1
PREFIX0=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=team0
UUID=c2bc8a9c-c9b1-445e-915a-7f6bd93f5315
ONBOOT=yes
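Once team0 is up, the configuration actually in use by the running teamd instance can also be dumped directly, for example:
[root@server ~]# teamdctl team0 config dump   # JSON config the running teamd instance is using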
6) View eth0's configuration file:
[root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=37131fe3-5038-4013-89a8-1f870c609859
DEVICE=eth0
ONBOOT=yes
TEAM_MASTER=team0 # its team master is team0
DEVICETYPE=TeamPort
7) Check the working state of the team0 interface:
[root@server ~]# teamdctl team0 stat # show team0's state
setup:
runner: activebackup # active-backup
ports:
eth0
link watches:
link summary: up # eth0 is up
instance[link_watch_0]:
name: ethtool
link: up
eth1
link watches:
link summary: up # eth1 is up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eth0 # eth0 is currently the active port
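Besides teamdctl, the lower-level teamnl utility that ships with libteam can query the ports and options straight from the kernel; a sketch:
[root@server ~]# teamnl team0 ports     # list the ports attached to team0
[root@server ~]# teamnl team0 options   # dump the team options, including the activebackup runner's active port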
8) Just as with bond, when eth0 is taken down the team switches to the next NIC, eth1:
[root@server ~]# ifconfig eth0 down # take eth0 down
[root@server ~]# teamdctl team0 stat
setup:
runner: activebackup
ports:
eth0
link watches:
link summary: down # eth0 is down
instance[link_watch_0]:
name: ethtool
link: down
eth1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eth1 # eth1 is now the active port
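Just as with the bond, eth0 can then be re-enabled and the team state checked again; a sketch:
[root@server ~]# ifconfig eth0 up        # bring eth0 back up
[root@server ~]# teamdctl team0 state    # check which port the activebackup runner is using now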
9) Check connectivity:
[root@server ~]# ping 123.0.0.2
PING 123.0.0.2 (123.0.0.2) 56(84) bytes of data.
64 bytes from 123.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms
64 bytes from 123.0.0.2: icmp_seq=2 ttl=64 time=0.208 ms
^C
--- 123.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.173/0.190/0.208/0.022 ms
Differences: