
Using 10GbE Leaf Switch to Build Data Center Architecture

Posted by FS.COM

As business needs have evolved, so has data center networking architecture. As servers multiply and switching tiers stretch in data centers, the leaf-spine architecture has emerged as a rising star that is gradually replacing the traditional three-tier architecture. The FS 10GbE data center leaf switch is built on a network fabric that combines time-tested protocols with new innovations to create a highly flexible, scalable, and resilient architecture. This post looks at why it makes sense to move to a leaf-spine architecture and how to build one with FS 10GbE data center leaf switches.

What Is a Leaf Switch and Leaf-Spine Architecture?

In a hyper-scale data center, hundreds or thousands of servers may be connected to the network. In this case, the leaf switch is deployed as a bridge between the servers and the core network. Where there is a leaf switch, there is a spine: leaf switches work in conjunction with spine switches to aggregate traffic from the server nodes and connect it to the core of the network. This is the so-called “leaf-spine” architecture, in which there are only two tiers of switches between the servers and the core network. The leaf-spine architecture helps prevent network bottlenecks, reduces latency, and lowers deployment and maintenance costs. It has therefore become a popular data center design, especially as data centers have grown in scale and in switching tiers.
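To make the topology concrete, here is a minimal sketch in Python of the leaf-spine wiring rule. The switch names and counts are purely illustrative assumptions, not FS specifics: each leaf gets one uplink to every spine, so any two leaves are always exactly one spine apart.

```python
# Minimal sketch of leaf-spine wiring (illustrative names/counts only):
# every leaf switch gets one uplink to every spine switch, so traffic
# between any two leaves always crosses exactly one spine.

SPINES = [f"spine-{i}" for i in range(1, 3)]  # e.g. 2 spine switches
LEAVES = [f"leaf-{i}" for i in range(1, 5)]   # e.g. 4 leaf switches

# Full mesh between the two tiers: one link per (leaf, spine) pair.
fabric_links = [(leaf, spine) for leaf in LEAVES for spine in SPINES]

for leaf, spine in fabric_links:
    print(f"{leaf} <--> {spine}")

# Any leaf-to-leaf path is leaf -> (any) spine -> leaf, which yields
# equal-cost paths across all spines (the basis for ECMP load sharing).
```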

Traditional Three-Tier vs. Leaf-Spine Architecture

For any data center application, overall network performance depends heavily on the design approach used. The traditional three-tier and leaf-spine architectures are two classic ways to design data center networks, oriented toward north-south and east-west traffic flows respectively. So what’s the difference between them, and which one is better for you? As shown in the figure below, the traditional three-tier network architecture consists of three layers: core, aggregation, and access. This topology is designed primarily for north-south traffic. If massive east-west traffic is pushed through a three-tier network, devices connected to the same access switch contend for bandwidth, congesting the uplinks between the access and aggregation layers.


Figure 1: Traditional Three-Tier Architecture

So how can the bottleneck of the traditional three-tier architecture be solved at the root? A feasible solution is the leaf-spine topology, which adds east-west capacity in parallel to the north-south backbone: a switching layer is added beneath the aggregation level, and traffic between two server nodes is forwarded directly at this layer, offloading the backbone network. Compared with the traditional three-tier architecture, the leaf-spine architecture provides a connection through the spine with a single hop between any two leaf switches, minimizing latency and bottlenecks. Because the switch roles in a leaf-spine design are fixed, no network changes are required as the server environment grows and shrinks. The following is a typical leaf-spine architecture with only two layers: spine and leaf.


Figure 2: Leaf-Spine Architecture

Use FS 10GbE Leaf Switch to Build Data Center Leaf-Spine Architecture

The leaf-spine architecture provides a loop-free mesh between the spine and leaf switches, which can be accomplished with either a Layer 2 or a Layer 3 design. Here we take the FS.COM S5850-32S2Q L2/L3 10GbE data center leaf switch and the S8050-20Q4C 40GbE spine/aggregation switch as examples to show how to build a data center leaf-spine architecture. The FS S8050-20Q4C is a high-performance 40G/100G Ethernet switch for the data center spine/aggregation layer. It packs 20 40GbE QSFP+ ports and 4 100GbE QSFP28 ports into a compact 1RU form factor and supports L2/L3, data center, and metro applications. The FS S5850-32S2Q is a 10GbE switch with 32 10GbE SFP+ ports and 2 40GbE QSFP+ ports in a compact 1RU form factor. With low-latency L2/L3 features, it can serve as a leaf switch to meet next-generation metro, data center, and enterprise network requirements.

Now suppose we need to build a data center fabric whose primary goal is to connect at least 480 10G servers. In this case, we can use the FS.COM S8050-20Q4C as the spine switch and the S5850-32S2Q as the leaf switch, with every leaf switch connected to every spine. In the following leaf-spine network architecture, the connections between the spine switches (FS S8050-20Q4C) and the leaf switches (FS S5850-32S2Q) are 40G, while the connections between the leaf switches and the servers are 10G. For most redundancy protocols, including VRRP, GLBP, RRPP, and MLAG, two spine switches are enough to make the fabric reliable. The number of 40G ports on each spine switch determines how many leaf switches we can have (20 leaf switches here). And since a reasonable oversubscription ratio between a leaf’s downlink and uplink bandwidth should not exceed 3:1, the maximum number of 10G servers we can connect to each leaf switch is 24 (3 × 2 × 40G = 240G, or 24 × 10G ports). The total server-facing bandwidth of the fabric is therefore 480 × 10G. The short calculation below walks through these numbers.
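As a sanity check, here is a minimal sketch (in Python) of the sizing arithmetic. The port counts are the ones stated above for the two switches, and the 3:1 cap is the design rule from the text; everything else is an illustrative assumption.

```python
# Sizing the example fabric from this post. Port counts are those stated
# above (S8050-20Q4C: 20 x 40G QSFP+; S5850-32S2Q: 32 x 10G SFP+ and
# 2 x 40G QSFP+); the 3:1 cap is the stated design rule.

SPINE_40G_PORTS = 20          # S8050-20Q4C: 20 x 40G QSFP+ (leaf-facing)
LEAF_SFP_PLUS_PORTS = 32      # S5850-32S2Q: 32 x 10G SFP+ (server-facing)
LEAF_UPLINK_GBPS = 2 * 40     # S5850-32S2Q: 2 x 40G QSFP+ uplinks
MAX_OVERSUBSCRIPTION = 3      # downlink:uplink bandwidth must stay <= 3:1

num_spines = 2                # one uplink per spine uses both leaf QSFP+ ports
num_leaves = SPINE_40G_PORTS  # each spine dedicates one 40G port per leaf

# 3:1 rule: server-facing bandwidth <= 3 x uplink bandwidth,
# i.e. 3 x 80G = 240G -> at most 24 x 10G server ports per leaf.
servers_per_leaf = min(LEAF_SFP_PLUS_PORTS,
                       MAX_OVERSUBSCRIPTION * LEAF_UPLINK_GBPS // 10)

total_servers = num_leaves * servers_per_leaf
print(f"{num_leaves} leaves x {servers_per_leaf} servers = {total_servers} x 10G")
# -> 20 leaves x 24 servers = 480 x 10G server ports
```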


Figure 3: Use FS 10GbE Leaf Switch to Build Leaf-Spine Architecture

FAQs About FS 10GbE Data Center Leaf Switch

1. Does the FS 10GbE data center leaf switch support ONIE and OpenFlow? Can it run another NOS?

The FS 10GbE data center leaf switch portfolio includes the S5850-32S2Q, S5850-48S6Q, and N5850-48S6Q. Among them, the S5850-32S2Q and S5850-48S6Q don’t support ONIE or OpenFlow, but they can run another NOS. For ONIE and OpenFlow, we recommend the FS N5850-48S6Q switch, which comes with an ONIE loader for third-party network operating systems and supports software-defined networking via OpenFlow 1.3.11.

2. Is G.8032 or Ethernet Ring Protection Switching (ERPS) supported by the FS S5850/N5850 series data center switches?

The FS S5850 series data center leaf switches support G.8032 and ERPS, but the N5850 does not. Both are loop-protection technologies, but they differ: ERPS achieves sub-50ms switchover only on optical interfaces, whereas G.8032 combined with CFM achieves sub-50ms switchover on electrical ports as well. Both can be applied to ring networks, and their convergence time is not affected by the number of nodes in the ring.

Summary

Leaf switches and leaf-spine architectures support the most demanding and flexible data center environments, improving the efficiency of data center networks while delivering low-latency, high-bandwidth links. FS S5850/N5850 series switches are highly compatible 10GbE data center switches that can work alongside Broadcom, Cisco, Juniper, Arista, and other major brands. With FS 10GbE data center leaf switches, you can scale the data center network architecture more easily, and their adaptable configurations and designs make oversubscription and scalability far easier to manage.

Related Article: What Is Leaf-Spine Architecture and How to Design It

Related Article: How to Use 10GBASE-T Switch to Build Spine-Leaf Network
