How do I configure a bonding device on Red Hat Enterprise Linux (RHEL)?

https://access.redhat.com/articles/172483#Bonding_modes_on_Red_Hat_Enterprise_Linux

Updated September 20, 2017 08:46

  • Introduction
  • Automated configuration
  • Manual configuration
    • Configuring bonded devices on Red Hat Enterprise Linux 7
    • Configuring bonded devices on Red Hat Enterprise Linux 6
    • Configuring bonded devices on Red Hat Enterprise Linux 5
      • Single bonded device on RHEL5
      • Multiple bonded devices on RHEL5
    • Configuring bonded devices on Red Hat Enterprise Linux 4
      • Single bonded device on RHEL4
      • Multiple bonded devices on RHEL4
    • Bonding modes on Red Hat Enterprise Linux
      • balance-rr (mode 0)
      • active-backup (mode 1)
      • balance-xor (mode 2)
      • broadcast (mode 3)
      • 802.3ad (mode 4)
      • balance-tlb (mode 5)
      • balance-alb (mode 6)
    • Bonding Parameters
    • Link Monitoring Modes
      • ARP monitoring parameters
      • MII monitoring parameters
  • FAQ
  • Known and Resolved issues

Introduction

Bonding (or channel bonding) is a technology, enabled by the Linux kernel and Red Hat Enterprise Linux, that allows administrators to combine two or more network interfaces to form a single, logical "bonded" interface for redundancy or increased throughput. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, they may provide link-integrity monitoring.

Automated configuration

Red Hat Customer Portal Labs provides a Network Bonding Helper for automatically generating a network bond based on your environment and deployment goals. The Network Bonding Helper incorporates the information included in this document and makes it easier to generate valid, support-recommended configurations.

Manual configuration

Configuring bonded devices on Red Hat Enterprise Linux 7

  • The nmcli-examples(7) man page is a good reference. Here are the detailed steps:

    • Add a bond device:

    Raw

    # nmcli con add type bond ifname <bond-name> mode active-backup
    (There are seven bonding modes: balance-rr/active-backup/balance-xor/broadcast/802.3ad/balance-tlb/balance-alb.)
    
    • Set an IP address on the bond device.

    Raw

    # nmcli connection modify <bond-connection> ipv4.addresses <address/prefix>
    
    • Set the bond device to use static (manual) IPv4 addressing.

    Raw

    # nmcli connection modify <bond-connection> ipv4.method manual
    
    • Add a slave to the bond device.

    Raw

    # nmcli con add type bond-slave ifname <slave-ifname> master <bond-name>
    
    • Add another slave to the bond device.

    Raw

    # nmcli con add type bond-slave ifname <slave-ifname> master <bond-name>
    
    • Show configuration.

    Raw

    # nmcli connection show
    
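    • As a worked example, the full sequence might look like the following. This is a sketch: the names bond0, eth1, and eth2 and the address 192.168.0.10/24 are illustrative assumptions, and by default nmcli names the bond connection bond-bond0 and the slave connections bond-slave-ethX.

    Raw

    # nmcli con add type bond ifname bond0 mode active-backup
    # nmcli connection modify bond-bond0 ipv4.addresses 192.168.0.10/24
    # nmcli connection modify bond-bond0 ipv4.method manual
    # nmcli con add type bond-slave ifname eth1 master bond0
    # nmcli con add type bond-slave ifname eth2 master bond0
    # nmcli connection up bond-slave-eth1
    # nmcli connection up bond-slave-eth2
    # nmcli connection show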

Configuring bonded devices on Red Hat Enterprise Linux 6

  • For detailed documentation on bonding configuration for RHEL 6, refer to:

    • Deployment Guide - Channel Bonding Interfaces
    • Deployment Guide - Using Channel Bonding
  • In RHEL 6.5 and earlier, NetworkManager cannot control bonded interfaces, so the NetworkManager service must be instructed not to manage bonding and related interfaces. Therefore, all ifcfg files for bondX and ethY must contain this line:

    Raw

    NM_CONTROLLED=no
    
  • To create a channel bonding interface, create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bondX, replacing X with the number for the interface, such as 0. The contents of the file can be identical to those for whatever type of interface is being bonded, such as an Ethernet interface. The only difference is that the DEVICE= directive must be bondX, replacing X with the number for the interface.

    The following is a sample channel bonding configuration file:

    Raw

    DEVICE=bond0
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    HOTPLUG=no
    BOOTPROTO=none
    USERCTL=no
    BONDING_OPTS="bonding parameters separated by spaces"  # Such as BONDING_OPTS="miimon=100 mode=1"
    NM_CONTROLLED=no
    
  • Bonding Parameters: The behavior of the bonded interfaces depends upon the mode. Mode 0 (balance-rr) is the default; it transmits packets in sequential order from the first available slave through the last. For more information about the bonding modes, refer to Bonding modes on Red Hat Enterprise Linux below.

  • After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER= and SLAVE= directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical. For example, if two Ethernet interfaces are being channel bonded, both eth0 and eth1 may look like the following example:

    Raw

    DEVICE=ethX
    BOOTPROTO=none
    ONBOOT=yes
    HOTPLUG=no
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    NM_CONTROLLED=no
    

    In this example, replace X with the numerical value for the interface.

  • Restart the network service to bring the bond up:

    Raw

    # service network restart
    
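  • To verify the new bond, check its status file (shown in more detail in the RHEL 5 sections below):

    Raw

    # cat /proc/net/bonding/bond0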

Configuring bonded devices on Red Hat Enterprise Linux 5

Single bonded device on RHEL5

  • For the detailed manual of bonding configuration on RHEL5, please refer to:

    • Deployment Guide - Channel Bonding Interfaces
    • Deployment Guide - The Channel Bonding Module

To configure the bond0 device with the network interfaces eth0 and eth1, perform the following steps:

  • Add the following line to /etc/modprobe.conf:

    Raw

    alias bond0 bonding
    
  • Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:

    Raw

    DEVICE=bond0
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    HOTPLUG=no
    BONDING_OPTS="bonding parameters separated by spaces"  # Such as BONDING_OPTS="miimon=100 mode=1"
    
  • Bonding Parameters: The behavior of the bonded interfaces depends upon the mode. Mode 0 (balance-rr) is the default; it transmits packets in sequential order from the first available slave through the last. For more information about the bonding modes, refer to Bonding modes on Red Hat Enterprise Linux below.

  • Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-ethX. Both eth0 and eth1 should look like the following example:

    Raw

    DEVICE=ethX
    BOOTPROTO=none
    HWADDR=your mac address here
    ONBOOT=yes
    HOTPLUG=no
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    

    Note:

    • Replace X with the numerical value for the interface, such as 0 and 1 in this example. Replace the HWADDR value with the MAC address of the interface.
    • Red Hat suggests configuring the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-ethX.
  • Restart the network service to bring the bond up:

    Raw

    # service network restart
    
  • To view the status of the bond, check the following file:

    Raw

    # cat /proc/net/bonding/bondX
    

Multiple bonded devices on RHEL5

In Red Hat Enterprise Linux 5.3 (initscripts-8.45.25-1.el5) and later, configuring multiple bonding channels is similar to configuring a single bonding channel. Set up the ifcfg-bondX and ifcfg-ethX files as if there were only one bonding channel. You can specify different BONDING_OPTS for different bonding channels so that they can have different modes and other settings. Refer to section 15.2.3, Channel Bonding Interfaces, in the Red Hat Enterprise Linux 5 Deployment Guide for more information.

To configure the bond0 device with the Ethernet interfaces eth0 and eth1, and the bond1 device with the Ethernet interfaces eth2 and eth3, perform the following steps:

  • Add the following line to /etc/modprobe.conf:

    Raw

    alias bond0 bonding
    alias bond1 bonding
    
  • Create the channel bonding interface files ifcfg-bond0 and ifcfg-bond1, in the /etc/sysconfig/network-scripts/ directory:

    Raw

    -- /etc/sysconfig/network-scripts/ifcfg-bond0  --
    DEVICE=bond0
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    HOTPLUG=no
    BONDING_OPTS="bonding parameters separated by spaces"  # Such as BONDING_OPTS="miimon=100 mode=1"
    
    -- /etc/sysconfig/network-scripts/ifcfg-bond1  --
    DEVICE=bond1
    IPADDR=192.168.1.1
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    HOTPLUG=no
    BONDING_OPTS="bonding parameters separated by spaces"  # Such as BONDING_OPTS="miimon=100 mode=1"
    
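  • For example, to run bond0 in active-backup mode and bond1 in round-robin mode (the specific values here are illustrative), only the BONDING_OPTS lines need to differ:

    Raw

    BONDING_OPTS="miimon=100 mode=1"    # in ifcfg-bond0
    BONDING_OPTS="miimon=100 mode=0"    # in ifcfg-bond1
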
  • Bonding Parameters: The behavior of the bonded interfaces depends upon the mode. Mode 0 (balance-rr) is the default; it transmits packets in sequential order from the first available slave through the last. For more information about the bonding modes, refer to Bonding modes on Red Hat Enterprise Linux below.

  • Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-ethX. Both eth0 and eth1 (the slaves of bond0) should look like the following example; eth2 and eth3 (the slaves of bond1) are configured the same way with MASTER=bond1:

    Raw

    DEVICE=ethX
    BOOTPROTO=none
    HWADDR=your mac address here
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    

    Note:

    • Replace X with the numerical value for the interface, such as 0 and 1 in this example. Replace the HWADDR value with the MAC address of the interface.
    • Red Hat suggests configuring the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-ethX.
  • Restart the network service to bring the bonds up:

    Raw

    # service network restart
    
  • To view the status of the bond, check the following file:

    Raw

    # cat /proc/net/bonding/bond0
    

Configuring bonded devices on Red Hat Enterprise Linux 4

Single bonded device on RHEL4

  • For detailed documentation on bonding configuration for RHEL 4, refer to:

    • Section 8.2.3 "Channel Bonding Interface"
    • Section 22.5.2. "The Channel Bonding Module"

To configure the bond0 device with the network interfaces eth0 and eth1, perform the following steps:

  • Add the following line to /etc/modprobe.conf:

    Raw

    alias bond0 bonding
    options bonding mode=1 miimon=100
    

    Note:

    • Configure the bonding parameters in the file /etc/modprobe.conf. This differs from RHEL 5 and RHEL 6, where bonding parameters are configured in the ifcfg-bondX file via BONDING_OPTS. On RHEL 4, the options must be passed in modprobe.conf using the options syntax.
    • Bonding Parameters: The behavior of the bonded interfaces depends upon the mode. Mode 0 (balance-rr) is the default; it transmits packets in sequential order from the first available slave through the last. For more information about the bonding modes, refer to Bonding modes on Red Hat Enterprise Linux below.
  • Create the channel bonding interface file in the /etc/sysconfig/network-scripts/ directory, ifcfg-bond0:

    Raw

    DEVICE=bond0
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    HOTPLUG=no
    
  • Configure the Ethernet interface in the file /etc/sysconfig/network-scripts/ifcfg-ethX. In this example, eth0 should look like this:

    Raw

    DEVICE=ethX
    BOOTPROTO=none
    HWADDR=your mac address here
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    HOTPLUG=no
    

    Note:

    • Replace the X with the numerical value for the interface, such as 0 in this example. Replace the HWADDR value with the MAC address of the interface.
    • Red Hat suggests configuring the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-ethX.
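  • Restart the network service to bring the bond up, and check its status as in the RHEL 5 procedure above:

    Raw

    # service network restart
    # cat /proc/net/bonding/bond0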

Multiple bonded devices on RHEL4

  • To configure multiple bonding channels on RHEL4, first set up the ifcfg-bondX and ifcfg-ethX files as you would for a single bonding channel, shown in the previous section.

  • Configuring multiple channels requires a different setup for /etc/modprobe.conf. If the two bonding channels have the same bonding options, such as bonding mode, link monitoring frequency and so on, add the max_bonds option. For example:

    Raw

    alias bond0 bonding
    alias bond1 bonding
    options bonding max_bonds=2 mode=1 miimon=100
    
  • If the two bonding channels have different bonding options (for example, one using round-robin mode and one using active-backup mode), the bonding module has to be loaded twice with different options. For example, in /etc/modprobe.conf, use:

    Raw

    install bond0 /sbin/modprobe --ignore-install bonding -o bonding0 mode=0 miimon=100 primary=eth0
    install bond1 /sbin/modprobe --ignore-install bonding -o bonding1 mode=1 miimon=50 primary=eth2
    

    If there are more bonding channels, add one install bondX /sbin/modprobe --ignore-install bonding -o bondingX options line per bonding channel.

    Note: For this last scenario, there should be no "max_bonds=" or "alias bond0 bonding" types of lines.
    Note: The use of -o bondingX to set different options for multiple bonds was not possible in Red Hat Enterprise Linux 4 GA and 4 Update 1.

  • After the file /etc/modprobe.conf is modified, restart the network service:

    Raw

    # service network restart
    
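  • To confirm that each bond came up with its intended mode, check the per-bond status files, as shown above for RHEL 5:

    Raw

    # cat /proc/net/bonding/bond0
    # cat /proc/net/bonding/bond1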

Bonding modes on Red Hat Enterprise Linux

For information on the bonding modes supported in Red Hat Enterprise Linux, please refer to the Bonding guide within the kernel documentation at /usr/share/doc/kernel-doc-*/Documentation/networking/bonding.txt

To read this file, you will need to install the kernel-doc package.
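
For example:

Raw

    # yum install kernel-doc
    # less /usr/share/doc/kernel-doc-*/Documentation/networking/bonding.txt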

Red Hat Enterprise Linux 4, 5, and 6

balance-rr (mode 0)

  • Round-robin policy

    Transmits packets in sequential order from the first available slave through the last.

  • This mode provides load balancing and fault tolerance

active-backup (mode 1)

  • Active-backup policy

    Only one slave in the bond is active. A different slave becomes active only if the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch.

  • In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will issue one or more gratuitous ARPs on the newly-active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, assuming that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN id.

  • This mode provides fault tolerance

  • The primary option affects the failover behavior of this mode
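
  • For example, on RHEL 5 or 6 an active-backup bond that prefers eth0 as the active slave could be configured with the following (a sketch using the BONDING_OPTS syntax shown above; the choice of eth0 is illustrative):

    Raw

    BONDING_OPTS="mode=1 miimon=100 primary=eth0"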

balance-xor (mode 2)

  • XOR policy

    Transmits based on the selected transmit hash policy. The default policy is a simple ((source MAC address XOR'd with destination MAC address) modulo slave count). Alternate transmit policies may be selected via the xmit_hash_policy option.

  • This mode provides load balancing and fault tolerance

broadcast (mode 3)

  • Broadcast policy

    Transmits everything on all slave interfaces.

  • This mode provides fault tolerance

802.3ad (mode 4)

  • IEEE 802.3ad (LACP) dynamic link aggregation

    This mode creates aggregation groups that share the same speed and duplex settings, and uses all slaves in the active aggregator according to the 802.3ad (LACP) specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy via the xmit_hash_policy option. Note that not all transmit policies may be 802.3ad compliant, particularly with regard to the packet misordering requirements described in section 43.2.4 of the 802.3ad standard. Differing peer implementations will have varying tolerances for noncompliance. It does not require that all slaves use the same NIC driver.

  • This mode provides load balancing and fault tolerance

  • Prerequisites:

    • ethtool support in the base drivers for retrieving the speed and duplex of each slave
    • A switch that supports IEEE 802.3ad (LACP) Dynamic link aggregation
    • Most switches will require some type of configuration to enable 802.3ad mode
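
  • For example, once the switch ports have been configured for LACP, the bond itself could be set up with the following (a minimal sketch using the BONDING_OPTS syntax shown above):

    Raw

    BONDING_OPTS="mode=4 miimon=100"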

balance-tlb (mode 5)

  • Adaptive transmit load balancing

    Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

  • This mode provides load balancing and fault tolerance

  • Prerequisite:

    • ethtool support in the base drivers for retrieving the speed of each slave.

balance-alb (mode 6)

  • Adaptive load balancing

    Includes balance-tlb and receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.

  • Receive traffic from connections created by the server is also balanced. When the local system sends an ARP request the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, its hardware address is retrieved and the bonding driver initiates an ARP reply to this peer assigning it to one of the slaves in the bond. A problematic outcome of using ARP negotiation for balancing is that each time that an ARP request is broadcast it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP Replies) to all the peers with their individually assigned hardware address such that the traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond and when an inactive slave is reactivated. The receive load is distributed sequentially (round-robin) among the group of highest-speed slaves in the bond.

  • When a link is reconnected or a new slave joins the bond the receive traffic is redistributed among all active slaves in the bond by initiating ARP Replies with the selected MAC address to each of the clients. The updelay parameter (detailed below) must be set to a value equal or greater than the switch's forwarding delay so that the ARP Replies sent to the peers will not be blocked by the switch.

  • This mode provides load balancing and fault tolerance

  • Prerequisites:

    • ethtool support in the base drivers for retrieving the speed of each slave
    • Base driver support for setting the hardware address of a device while it is open. This is required so that there will always be one slave in the team using the bond hardware address (the curr_active_slave) while having a unique hardware address for each slave in the bond. If the curr_active_slave fails, its hardware address is swapped with the new curr_active_slave that was chosen.

Bonding Parameters

  • max_bonds: Specifies the number of bonding devices to create for this instance of the bonding driver. For example, if max_bonds is 3, and the bonding driver is not already loaded, then bond0, bond1, and bond2 will be created. The default value is 1.
  • xmit_hash_policy: Selects the transmit hash policy to use for slave selection in balance-xor and 802.3ad modes. Possible values are:
    • layer2 (default): Uses XOR of hardware MAC addresses to generate the hash. This algorithm will place all traffic to a particular network peer on the same slave. This algorithm is 802.3ad compliant.
    • layer2+3: This policy uses a combination of layer2 and layer3 protocol information to generate the hash. Uses XOR of hardware MAC addresses and IP addresses to generate the hash. This algorithm will place all traffic to a particular network peer on the same slave. For non-IP traffic, it works in the same way as in layer2 policy. This policy is intended to provide a more balanced distribution of traffic than layer2 alone, especially in environments where a layer3 gateway device is required to reach most destinations. This algorithm is 802.3ad compliant.
    • layer3+4: This policy uses upper layer protocol information, when available, to generate the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves. For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non-IP traffic, the formula is the same as for the layer2 transmit hash policy. This is not fully 802.3ad compliant because a single TCP or UDP conversation containing both fragmented and unfragmented packets will see packets striped across two interfaces.
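
For example, a bond in 802.3ad mode hashing on upper-layer protocol information could use the following (a sketch; the parameter values are illustrative):

Raw

    BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"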

It is critical that a link monitoring mode be specified: either the miimon parameter, or the arp_interval and arp_ip_target parameters. Configuring a bond without a link monitoring mode is not a valid use of the bonding driver.

ARP monitoring parameters

  • arp_interval: Specifies the ARP link monitoring frequency in milliseconds.
  • arp_ip_target: Specifies the IP addresses to use as ARP-monitoring peers when arp_interval is > 0. Multiple IP addresses can be separated by a comma. At least one IP address must be given for ARP monitoring to function. The maximum number of targets that can be specified is 16.
  • arp_validate: Specifies whether or not ARP probes and replies should be validated in the active-backup mode. This causes the ARP monitor to examine the incoming ARP requests and replies, and only consider a slave to be up if it is receiving the appropriate ARP traffic. This parameter can have the following values:
    • none (or 0): This is the default.
    • active (or 1): Validation is performed only for the active slave.
    • backup (or 2): Validation is performed only for backup slaves.
    • all (or 3): Validation is performed for all slaves.

For the active slave, the validation checks ARP replies to confirm that they were generated by an arp_ip_target. Since backup slaves do not typically receive these replies, the validation performed for backup slaves is on the ARP request sent out via the active slave. It is possible that some switch or network configurations may result in situations wherein the backup slaves do not receive the ARP requests; in such a situation, validation of backup slaves must be disabled.
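
For example, ARP monitoring for an active-backup bond could be configured with the following (a sketch combining the parameters above; the interval and target addresses are illustrative assumptions):

Raw

    BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=192.168.0.254,192.168.0.1 arp_validate=active"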

MII monitoring parameters

  • miimon: Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. A value of 0 disables MII link monitoring. A value of 100 is a good starting point. The use_carrier option, listed below, affects how the link state is determined. The default value is 0.
  • updelay: Specifies the time, in milliseconds, to wait before enabling a slave after a link recovery has been detected.
  • downdelay: Specifies the time, in milliseconds, to wait before disabling a slave after a link failure has been detected.
  • use_carrier: Specifies whether miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to determine the link status. A value of 1 enables the use of netif_carrier_ok() (faster and better, but not always supported); a value of 0 uses the deprecated MII/ETHTOOL ioctls. The default value is 1.
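
For example, MII monitoring that polls every 100 ms and waits five seconds before re-enabling a recovered slave could be configured with the following (a sketch; the updelay and downdelay values are illustrative, and updelay should be a multiple of miimon):

Raw

    BONDING_OPTS="mode=1 miimon=100 updelay=5000 downdelay=0"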

FAQ

  • Troubleshooting -- Bonding Problems on Red Hat Enterprise Linux
  • Is it possible to configure bonding over bonded interface in Red Hat Enterprise Linux?
  • Configuring a VLAN device over a bonded interface on RHEL
  • What are the supported network bonding modes for Red Hat High Availability clusters?
  • How to set up Kdump with bonding and VLAN?
  • How do I configure bonding via sysfs in Red Hat Enterprise Linux 5?

Known and Resolved issues

  • Bonding does not switch to slave
  • The server issues a bonding warning message to enable miimon or arp monitoring, even though it is configured
  • Why do I get a "Permission denied" error when enabling debug for the bonding kernel module?
  • Ping gets duplicate replies for the bond interface
  • What does the message "bonding_init(): either miimon or arp_interval and arp_ip_target module parameters must be specified" mean when bringing up a bonded interface on Red Hat Enterprise Linux?
  • Why does mii-tool or ethtool show "10Mbit half duplex" on a bonded interface?
