Data center and cloud computing networks demand both high bandwidth and high reliability. MLAG (Multi-Chassis Link Aggregation) delivers both by extending the benefits of link aggregation across a pair of data center switches, providing system-level redundancy as well as network-level resiliency. It improves network efficiency, simplifies management, and can be deployed at various points in the network to eliminate bottlenecks and improve scalability. This article describes the MLAG feature and its benefits, and explains MLAG implementation on FS N-series switches running Cumulus Linux.
Simply put, MLAG can be thought of as a LAG that spans more than one node. Two or more MLAG-enabled switches can act as a single switch when forming link bundles. A host can therefore uplink to two switches for physical diversity while managing only a single bundle interface. Similarly, two switches can connect to two other switches using MLAG, with all links forwarding. A typical MLAG topology is shown in the figure below. Switch A and Switch B are MLAG peer switches, connected through an IPL (Inter Peer Link) port-channel interface. The server is connected to both MLAG peer switches through a regular bonding or teaming (LACP) interface on the server side. On the switch side, the ports connected to the server are configured with the same MLAG-enabled port-channel number.
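On Cumulus Linux, "same MLAG-enabled port-channel number" means the server-facing ports on both peers are placed into bonds that carry the same clag-id. A minimal sketch; the bond name, port swp1 and clag-id value are illustrative:

```
# /etc/network/interfaces -- identical on Switch A and Switch B
# swp1 is the (assumed) server-facing port; clag-id must match on both peers
auto bond1
iface bond1
    bond-slaves swp1
    clag-id 1
```

The matching clag-id is what tells the two peers that their local bonds are two halves of one dual-connected link.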
Most MLAG implementations are proprietary: vendors support MLAG only on their own hardware, and often only within a specific product line, so you cannot form an MLAG between switches from different vendors. Cumulus Linux implements MLAG using standard Linux building blocks (PROTO_DOWN, ebtables and more), so MLAG can be configured across different hardware platforms built on Cumulus Linux.
MLAG on FS N-series network switches creates active-active redundant connections for traffic coming from the south (server to switch), letting users logically aggregate ports across two switches. This provides increased bandwidth and redundancy, low-latency switching and linear scalability. We will illustrate the MLAG implementation process with a simple demo: two FS N-series switches and one server.
Before implementing MLAG, keep in mind these requirements:
To prevent MAC address conflicts with other interfaces in the same bridged network, Cumulus Networks has reserved a range of MAC addresses specifically for MLAG: 44:38:39:ff:00:00 through 44:38:39:ff:ff:ff. It is recommended to use an address from this range when configuring MLAG. Each MLAG pair must use a unique address: specify a different clagd-sys-mac setting for each MLAG pair in the network.
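For example, two MLAG pairs in the same network might be assigned distinct addresses from the reserved range (the specific values below are illustrative):

```
# On both switches of pair 1 (in the peerlink subinterface stanza):
    clagd-sys-mac 44:38:39:ff:00:01

# On both switches of pair 2:
    clagd-sys-mac 44:38:39:ff:00:02
```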
Step One: Create a direct connection “peerlink” that uses LACP, between the two peer switches:
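On Cumulus Linux this is typically done in /etc/network/interfaces on both peers. A minimal sketch, assuming swp49 and swp50 are the ports cabled between the two switches (bonds on Cumulus Linux run LACP, 802.3ad, by default):

```
# /etc/network/interfaces -- on both Switch A and Switch B
auto peerlink
iface peerlink
    bond-slaves swp49 swp50
```

Apply the change with `sudo ifreload -a`.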
Step Two: Configure the peerlink interface on Switch A and B
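The clagd daemon communicates over a subinterface of the peerlink (VLAN 4094 is the convention used in the Cumulus documentation). A sketch with illustrative link-local addresses; the clagd-sys-mac must come from the reserved range and must match on both peers:

```
# Switch A: /etc/network/interfaces
auto peerlink.4094
iface peerlink.4094
    address 169.254.1.1/30
    clagd-peer-ip 169.254.1.2
    clagd-sys-mac 44:38:39:ff:40:94

# Switch B: mirror image
auto peerlink.4094
iface peerlink.4094
    address 169.254.1.2/30
    clagd-peer-ip 169.254.1.1
    clagd-sys-mac 44:38:39:ff:40:94
```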
Step Three: MLAG configuration on server
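On the server side this is a standard Linux 802.3ad (LACP) bond; no MLAG-specific configuration is needed. A sketch using iproute2, assuming the NICs are named eth0 and eth1 (the address is illustrative):

```
# run as root on the server
modprobe bonding
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0
```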
Step Four: Set the MLAG priority. By default, the switch with the lower MAC address assumes the primary role; you can change this by setting the clagd-priority option for the peerlink:
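The priority is set in the peerlink.4094 stanza; the switch with the lower clagd-priority value is preferred as primary (the value below is illustrative):

```
# in the peerlink.4094 stanza on the switch that should be primary
    clagd-priority 1000
```

Reload with `sudo ifreload -a` after editing.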
Step Five: Check the MLAG configuration status using the following commands:
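Two commands commonly used for this on Cumulus Linux:

```
sudo clagctl     # summary: peer alive, role (primary/secondary), per-bond status
net show clag    # NCLU equivalent
```

In the output, verify that the peer is reported alive, that one switch holds the primary role, and that each dual-connected bond is listed without conflicts.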
MLAG solves the problem of insufficient uplink bandwidth from each rack, removing the bottleneck and allowing all interconnects to be used in active/active mode. This lets you scale your network without changing its topology. If you're ready for your MLAG implementation, contact us to learn more about FS N-series switches.