Ethernet bonding is a method of combining (joining) two or more network interfaces into a single virtual interface, which can increase bandwidth and provides redundancy if a NIC fails.
Linux supports bonding of multiple network interfaces through a special kernel module named bonding. Once the feature is enabled, we can create a new virtual interface called a bond. We have two NIC cards, ens33 and ens34. This post shows the procedure on RHEL 7 and CentOS 7.
1) Enable bonding module
As a first step, you need to check whether the bonding module is enabled. You can check with the command below:
# modinfo bonding
modinfo: ERROR: Module alias bonding not found.
If it's not present, you can use the command below:
# modprobe --first-time bonding
Check again:
# modinfo bonding
This time the command returns the module information; note the description line for the bonding driver.
2) Create a bonding channel interface
We will first create a new file named bonding.conf in the
/etc/modprobe.d/ directory. The name can be anything you like as long as it ends with a .conf extension; it is a configuration file for the driver named bonding.
# vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
Insert the content above, then save and exit. For each configured channel bonding interface, there must be a corresponding entry in your /etc/modprobe.d/bonding.conf file.
Now we can create the channel bonding interface. To do so, create a file named
ifcfg-bond0 (matching the alias created earlier) in the
/etc/sysconfig/network-scripts/ directory, which contains the network interface configuration files.
# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.43.100
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"
Note the BONDING_OPTS directive: the bonding driver accepts a variety of options, including the bonding mode. The modes can be:
- mode 0 or balance-rr: Sets a round-robin policy for fault tolerance and load balancing.
- mode 1 or active-backup: Sets an active-backup policy for fault tolerance.
- mode 2 or balance-xor: Sets an XOR (exclusive-or) mode for fault tolerance and load balancing.
- mode 3 or broadcast: Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
- mode 4 or 802.3ad: Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings.
- mode 5 or balance-tlb: Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines.
- mode 6 or balance-alb: Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic, achieved through ARP negotiation, and does not require any special switch support.
Options can be:
- miimon=time_in_milliseconds: Specifies (in milliseconds) how often the link status is checked for link failure. This is useful when high availability is required, because MII is used to verify that the NIC is active.
- arp_interval=time_in_milliseconds: Specifies (in milliseconds) how often ARP monitoring occurs. If using this setting while in mode 0 or mode 2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs.
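As an example, to use 802.3ad link aggregation (mode 4) instead of active-backup, the BONDING_OPTS line in ifcfg-bond0 could be changed as sketched below. This assumes the switch ports are configured for LACP; the lacp_rate=fast setting is optional.

```shell
# Hypothetical alternative for ifcfg-bond0: 802.3ad (LACP) aggregation
# instead of active-backup; requires LACP-capable switch ports.
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast"
```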
3) Configure physical interfaces
The next step is to edit the physical interfaces intended for bonding by adding the
SLAVE=yes directive to their configuration files. The channel bonding interface is the master, and the interfaces to be bonded are referred to as slaves. The configuration files for the channel-bonded interfaces can be nearly identical; comment out or remove the IP address, netmask, gateway and hardware address from each one, since these settings should come only from the ifcfg-bond0 file above. Make sure you add the MASTER and SLAVE directives to these files.
For ens33 interface, we will have the configuration below
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
DEVICE=ens33
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
For ens34 interface, the file should look like below
# vim /etc/sysconfig/network-scripts/ifcfg-ens34
DEVICE=ens34
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
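Since the two slave files differ only in the DEVICE line, a small loop can generate them. The sketch below is an assumption on my part, not part of the original procedure: OUTDIR stands in for /etc/sysconfig/network-scripts so you can dry-run it in a scratch directory before touching the real files.

```shell
# Write an ifcfg file for each slave NIC; only DEVICE differs.
# OUTDIR defaults to the current directory for a safe dry run;
# point it at /etc/sysconfig/network-scripts on the real system.
OUTDIR=${OUTDIR:-.}
for dev in ens33 ens34; do
  cat > "$OUTDIR/ifcfg-$dev" <<EOF
DEVICE=$dev
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
done
```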
4) Activate bonding channel
To activate the bond, bring up all the slaves. If the interfaces were already up during the modifications, you need to deactivate them first:
# ifdown ifcfg-ens33
Device 'ens33' successfully disconnected.
# ifdown ifcfg-ens34
Device 'ens34' successfully disconnected.
Now reactivate the interfaces:
# ifup ifcfg-ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
# ifup ifcfg-ens34
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
or reload all connection profiles with:
# nmcli con reload
You can check the configuration with the command below:
# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.43.100  netmask 255.255.255.0  broadcast 192.168.43.255
        inet6 fe80::20c:29ff:feb4:f30a  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b4:f3:0a  txqueuelen 1000  (Ethernet)
        RX packets 26  bytes 4705 (4.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 3711 (3.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0c:29:b4:f3:0a  txqueuelen 1000  (Ethernet)
        RX packets 13  bytes 2196 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 2072 (2.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens34: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0c:29:b4:f3:0a  txqueuelen 1000  (Ethernet)
        RX packets 13  bytes 2509 (2.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 1639 (1.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1172  bytes 86468 (84.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1172  bytes 86468 (84.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
You can see that only the bond interface has an IP address. You can also check the bonding status with the command below:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens33
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b4:f3:0a
Slave queue ID: 0

Slave Interface: ens34
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b4:f3:14
Slave queue ID: 0
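To script a failover check, the "Currently Active Slave" line can be extracted from that status file. The sketch below parses a hard-coded sample of the output with awk; on a live system you would read /proc/net/bonding/bond0 directly instead.

```shell
# Sample of the bonding status text, standing in for the live
# contents of /proc/net/bonding/bond0.
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: ens33
MII Status: up'
# Split fields on ": " and print the value of the matching line.
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"
```

On a real system, after pulling the cable on ens33 (or running ifdown on it), the active slave reported this way should switch to ens34.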
Refer to the Linux Foundation page for more details on Ethernet bonding.
Link monitoring can be enabled via either the miimon or arp_interval parameter: miimon monitors the carrier state as sensed by the underlying network device, while the ARP monitor (arp_interval) monitors connectivity to another host on the local network. If no link monitoring is configured, the bonding driver cannot detect link failures and will assume that all links are always available. When link monitoring is enabled, a failing device will be disabled: active-backup mode fails over to a backup link, and the other modes stop using the failed link.
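For example, to switch bond0 from MII to ARP monitoring, the BONDING_OPTS line in ifcfg-bond0 could be rewritten as below. This is only a sketch: 192.168.43.1 is an assumed gateway on the local network, and the 1000 ms interval is an arbitrary choice.

```shell
# Hypothetical ARP-monitoring variant for ifcfg-bond0: probe the
# (assumed) gateway every second instead of polling MII link state.
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=192.168.43.1"
```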