
Basics of Windows NIC Teaming & Linux NIC Bonding

Posted on December 31, 2019

You may have heard about NIC teaming (NIC: network interface card) or NIC bonding from different vendors. Both bind several physical NICs into one virtual NIC that presents a single MAC address to the outside, and their ultimate goal is the same: higher performance and better redundancy. This guide walks through Windows NIC teaming and Linux NIC bonding respectively, two methods that provide load balancing and fault tolerance.

A Closer Look at NIC Teaming on Windows OS

What Is Windows NIC Teaming?

NIC teaming, also known as Load Balancing/Failover (LBFO) in the Microsoft world, is a mechanism that enables multiple physical network adapters in the same physical host/server to be bound together and placed into a "team" in the form of a single logical NIC. The connected network adapters, shown as one or more virtual adapters (also called team NICs, "tNICs" or team interfaces) to the Windows operating system, share the same IP address.

How Many Modes Does Windows NIC Teaming Have?

A variety of network adapters are available on the market, and Intel NICs are among the most common mainstream choices. Intel NIC teaming can be classified into the following teaming modes:

Adapter Fault Tolerance (AFT) provides automatic redundancy for a server's network connection. The backup adapter will take over if the primary adapter fails.

Requirements and limitations: works with any switch; all the team members must be connected to the same subnet. 2-8 network adapters per team.

Switch Fault Tolerance (SFT) provides failover between two network adapter cards when each adapter is connected to a separate switch.

Requirements and limitations: works with separate switches; all the team members must be connected to the same subnet. Spanning Tree Protocol (STP) needs to be enabled when creating an SFT team. 2 network adapters per team.

Adaptive Load Balancing (ALB) provides load balancing and adapter fault tolerance. Receive Load Balancing (RLB), enabled by default, can be toggled on and off in ALB teams.

Requirements and limitations: works with any switch.

Virtual Machine Load Balancing (VMLB) provides transmit and receive traffic load balancing across virtual machines (VMs) bound to the team interface, as well as fault tolerance in the event of a switch port, cable, or network card failure.

Requirements and limitations: works with any switch.

Fast EtherChannel/Link Aggregation (FEC) provides load balancing and adapter fault tolerance (with routed protocols only) and helps to increase transmission and reception throughput.

Requirements and limitations: requires a switch with FEC/link aggregation capability. 2-8 network adapters per team.

Gigabit EtherChannel/Link Aggregation (GEC) is a gigabit extension of FEC/Link Aggregation/802.3ad.

Requirements and limitations: all the team members must operate at gigabit speed.

Static Link Aggregation (SLA) replaces its two predecessors, FEC and GEC.

Requirements and limitations: the network adapters in static mode should all run at the same speed and connect to a switch with static link aggregation capability. If the network cards' speeds vary, the team speed falls to the lowest common denominator. 2-8 network adapters per team.

Dynamic Link Aggregation (IEEE 802.3ad) creates one or more team(s) by using dynamic link aggregation with mixed-speed network adapters, which provides fault tolerance and helps to increase transmission and reception throughput.

Requirements and limitations: requires a switch that fully supports the IEEE 802.3ad standard.

Multi-vendor Teaming (MVT) makes it possible for network adapters from different vendors to work in the same team.

A Closer Look at NIC Bonding on Linux OS

What Is Linux NIC Bonding?

In Linux OS, NIC bonding refers to the process of aggregating multiple network interfaces into a single logical "bonded" interface. That is to say, two or more network cards are combined to act as one. Note that for some bonding modes, a prerequisite is a network switch that supports link aggregation (EtherChannel/LACP), which is true of almost all managed switches.
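As a concrete sketch, here is one way to assemble an active-backup bond at runtime with iproute2. This is a minimal, non-persistent example that assumes root privileges, the bonding kernel module, and placeholder names (eth0, eth1, and the 192.168.1.10/24 address are illustrative); your distribution's network manager (netplan, NetworkManager, etc.) is the usual place to make such a setup permanent.

```shell
# Minimal sketch: build an active-backup bond with iproute2.
# Assumes root, the "bonding" kernel module, and placeholder
# interface names/addresses -- adjust to your environment.
modprobe bonding

# Create bond0 in active-backup mode with 100 ms link monitoring.
ip link add bond0 type bond mode active-backup miimon 100

# Slaves must be down before they can be enslaved.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and give it the shared IP address.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

From this point the system sees only bond0; the slaves carry traffic but no longer hold their own addresses.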

How Many Modes Does Linux NIC Bonding Have?

The behavior of the bonded NICs depends on the bonding mode adopted. Generally speaking, a mode provides fault tolerance, load balancing, or both. The seven modes are explained in detail below.

mode=0 (balance-rr), round-robin policy. Fault tolerance: yes; load balancing: yes.
The default mode. Packets are transmitted sequentially from the first available slave through the last, in round-robin fashion.

mode=1 (active-backup), active-backup policy. Fault tolerance: yes; load balancing: no.
Only one slave is active at a time; a standby slave takes over if the active NIC fails. With N interfaces, resource utilization is 1/N.

mode=2 (balance-xor), XOR (exclusive OR) policy. Fault tolerance: yes; load balancing: yes.
Selects a slave based on an XOR hash, so once the path to a peer is established, the same NIC is used to transmit/receive for that destination MAC address.

mode=3 (broadcast), broadcast policy. Fault tolerance: yes; load balancing: no.
All packets are sent on all slave interfaces, at the expense of resource utilization. Usually reserved for special cases, such as financial networks that need ultra-high reliability.

mode=4 (802.3ad), IEEE 802.3ad dynamic link aggregation. Fault tolerance: yes; load balancing: yes.
Creates aggregation groups that share the same speed and duplex settings. Requires a switch that supports IEEE 802.3ad dynamic link aggregation.

mode=5 (balance-tlb), adaptive transmit load balancing (TLB). Fault tolerance: yes; load balancing: yes.
Outgoing traffic is distributed according to the current load on each slave interface; incoming traffic is received by the current slave. Requires no special switch support.

mode=6 (balance-alb), adaptive load balancing (ALB). Fault tolerance: yes; load balancing: yes.
Adds receive load balancing on top of mode=5, achieved through ARP (Address Resolution Protocol) negotiation. Requires no special switch support.
NOTE:
1. One bonding interface can only use one mode at a time.
2. mode=0, mode=2, and mode=3 require static link aggregation on the switch, in theory.
3. mode=4 requires a switch that supports IEEE 802.3ad (LACP).
4. mode=1, mode=5, and mode=6 require no configuration on the switch.
5. The choice of mode depends on the network topology, the required bonding behavior, and the characteristics of the slave devices. Generally speaking, mode=0 (balance-rr), mode=2 (balance-xor), mode=4 (802.3ad), mode=5 (balance-tlb), and mode=6 (balance-alb) suit single-switch topologies, while the remaining two, mode=1 (active-backup) and mode=3 (broadcast), naturally fit multiple-switch topologies.
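On distributions that still load the bonding driver through modprobe, the chosen mode can be pinned with a module-options file. A hedged sketch follows; the file path and values are illustrative, and netplan- or NetworkManager-based systems configure this differently.

```shell
# /etc/modprobe.d/bonding.conf (illustrative)
# mode=4 selects IEEE 802.3ad; miimon is the link-check interval in ms;
# lacp_rate=fast asks the partner switch for 1-second LACPDUs.
options bonding mode=4 miimon=100 lacp_rate=fast
```

These options take effect the next time the bonding module is loaded, so they survive reboots where runtime ip commands do not.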

FAQs

Will NIC teaming or bonding improve the bandwidth of the server-switch connection?

Many may believe that "link aggregation" increases bandwidth: if three NICs run at 1Gbps each and the NIC team lists a speed of 3Gbps, hasn't the speed tripled? No! Ethernet defines 1Gbps and 10Gbps link speeds, but there is no 3Gbps standard, so there is no way to realize a single 3Gbps link. What we really have is three separate 1Gbps links.

More importantly, it's advisable to think of link aggregation in terms of network link resilience rather than total available throughput. For instance, when transferring a file from one PC to another over a 2Gbps aggregated link, the maximum transfer rate is still 1Gbps. The benefit of aggregated bandwidth shows when transferring two files at once, since each can use a different link. In other words, link aggregation increases the number of "lanes," not the speed limit of each lane.

What will NIC teaming/NIC bonding actually bring to users?

The answer, in short, is load balancing and fault tolerance.

Load balancing—Outgoing traffic is automatically balanced across the available physical NICs based on destination address. Incoming traffic is controlled by the switch that routes traffic to the server; the host/server cannot choose which physical NIC receives it.

Fault tolerance—If one of the underlying physical NICs fails or its cable is unplugged, the host/server detects the fault and automatically moves traffic to another NIC in the bond, preventing a single point of failure from breaking the overall network connection.
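On Linux, the kernel exposes the live bond state through /proc/net/bonding/&lt;interface&gt;, which is a convenient place to verify that failover actually has healthy links to fall back on. Below is a sketch that greps a sample of that file's typical contents; the values are illustrative, and on a real host you would read /proc/net/bonding/bond0 directly.

```shell
# Write an abridged, illustrative copy of what
# /proc/net/bonding/bond0 typically reports.
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up
EOF

# Count healthy links (the bond itself plus each slave).
# On a real host, point grep at /proc/net/bonding/bond0 instead.
grep -c "MII Status: up" /tmp/bond0.sample
```

A count lower than the number of slaves plus one means at least one link is down and the bond is running degraded.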


Figure 1: Load balancing & fault tolerance

By taking advantage of load balancing and fault tolerance, the NIC team members work jointly to optimize bandwidth and prevent connectivity loss in the event of a network adapter failure.