What Is Leaf-Spine Architecture and How to Design It



Posted on May 23, 2017 (updated December 3, 2020)

For many years, data centers have been built on a three-tier architecture. But with data center consolidation, virtualization, and hyper-converged systems on the rise, a new networking architecture, leaf-spine, is gradually becoming the mainstream in today's data center network deployments, overcoming several limitations of the traditional three-tier design. So how much do you know about leaf-spine architecture, and how do you build one? This article explains what leaf-spine architecture is and how to design it.

What Is Spine-leaf Architecture?

Leaf-spine network architecture is catching on in large data center and cloud networks due to its scalability, reliability, and better performance. As shown below, the leaf-spine design consists of only two layers: the leaf layer and the spine layer. The spine layer is made up of switches that perform routing and serve as the backbone of the network. The leaf layer consists of access switches that connect to endpoints such as servers and storage devices. In leaf-spine architecture, every leaf switch is interconnected with every spine switch. With this design, any server can communicate with any other server across no more than one spine switch between any two leaf switches.


Figure 1: Leaf-Spine Architecture.

The traditional three-tier architecture, by contrast, consists of three layers: the core, aggregation/distribution, and access layers. These devices are interconnected by redundant pathways, which can create loops in the network. This model was designed for traditional north-south traffic, so when massive east-west traffic runs through it, devices connected to the same switch port may contend for bandwidth, resulting in poor response times for end users. Thus, the three-tier architecture is not well suited to the modern virtualized data center, where compute and storage servers may be located anywhere within the facility.


Figure 2: Traditional Three-tier Architecture.

Advantages of Leaf-Spine Architecture

The advantages of the leaf-spine model are improved latency, reduced bottlenecks, and expanded bandwidth. Firstly, leaf-spine uses all interconnection links: each leaf connects to every spine, with no interconnections among the spines themselves or among the leafs, which creates a large non-blocking fabric. In a three-tier network, by contrast, traffic between two servers may need to traverse a hierarchical path through two aggregation switches and one core switch, which adds latency and creates traffic bottlenecks. Another advantage is the ease of adding hardware and capacity. Leaf-spine architectures can be built at either layer 2 or layer 3, so an additional spine switch can be added and uplinks extended to every leaf switch, expanding the interlayer bandwidth and reducing oversubscription.

How to Design Spine-leaf Architecture?

Before designing a leaf-spine architecture, you need to work through several important factors: oversubscription ratios, leaf and spine scale, uplinks from leaf to spine, and whether to build at layer 2 or layer 3.

Oversubscription Ratios — Oversubscription is the ratio of contention when all devices send traffic at the same time. It can be measured in the north-south direction (traffic entering and leaving the data center) as well as east-west (traffic between devices within the data center). Modern network designs target oversubscription ratios of 3:1 or less, measured as the ratio of downlink bandwidth (to servers/storage) to uplink bandwidth (to spine switches). The figure below illustrates how to measure the oversubscription ratio of the leaf and spine layers.


Figure 3: Oversubscription Ratio.
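The ratio described above is straightforward to compute. The sketch below assumes a hypothetical leaf switch with 48x 10G server-facing downlinks and 4x 40G spine-facing uplinks; the port counts are illustrative, not a specific product.

```python
def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Leaf-layer oversubscription: total downlink bandwidth / total uplink bandwidth."""
    downlink_bw = downlink_ports * downlink_gbps  # e.g. 48 x 10G = 480G toward servers
    uplink_bw = uplink_ports * uplink_gbps        # e.g. 4 x 40G = 160G toward spines
    return downlink_bw / uplink_bw

ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"{ratio:.0f}:1")  # 480G down / 160G up -> 3:1
```

Adding uplinks (or raising their speed) is how the ratio is driven down: with all six 40G uplinks of such a leaf in use, the same calculation would give 2:1.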

Leaf and Spine Scale — As the endpoints in the network connect only to the leaf switches, the number of leaf switches depends on the number of interfaces required to connect all the endpoints, including multihomed endpoints. Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches in the topology. The number of spine switches, in turn, is governed by a combination of the throughput required between the leaf switches, the number of redundant/ECMP (equal-cost multi-path) paths between the leafs, and the port density of the spine switches.
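The sizing rules above can be sketched as a few one-line calculations. The function names and example port counts are assumptions for illustration, not part of any vendor formula.

```python
def leaf_count(server_ports_needed, downlinks_per_leaf):
    """Leaves needed to terminate all endpoint-facing ports (rounded up)."""
    return -(-server_ports_needed // downlinks_per_leaf)  # ceiling division

def max_leaves(spine_port_density):
    """Each leaf consumes one port on every spine, so spine port density caps the leaf count."""
    return spine_port_density

def spine_count(uplinks_per_leaf):
    """With each leaf uplink landing on a distinct spine (for ECMP), spines = uplinks per leaf."""
    return uplinks_per_leaf

print(leaf_count(960, 48))  # 960 server ports / 48 downlinks per leaf -> 20 leaves
print(max_leaves(20))       # a 20-port spine supports at most 20 leaves
print(spine_count(4))       # 4 uplinks per leaf -> 4 spines
```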

10G/40G/100G Uplinks from Leaf to Spine — In a leaf-spine network, the uplinks from leaf to spine are typically 10G or 40G and can migrate over time from a starting point of 10G (N x 10G) to 40G (N x 40G). An ideal design always has the uplinks operating at a faster speed than the downlinks, to ensure there is no blocking caused by micro-bursts of a single host transmitting at line rate.

Layer 2 or Layer 3 — Two-tier leaf-spine networks can be built at either layer 2 (VLAN everywhere) or layer 3 (subnets). Layer 2 designs provide the most flexibility, allowing VLANs to span everywhere and MAC addresses to migrate anywhere. Layer 3 designs provide the fastest convergence times and the largest scale, with ECMP fan-out supporting up to 32 or more active spine switches.

Build Leaf-Spine Architecture With FS Switches

Here we take FS leaf-spine switches as an example to show how to build a leaf-spine architecture. Suppose we want to build a data center fabric supporting at least 960 10G servers at 3:1 oversubscription. In this case, we will use the FS S8050-20Q4C as the spine switch and the S5850-48S6Q as the leaf switch. The S8050-20Q4C is a high-performance L2/L3 40G/100G Ethernet switch with advanced network visibility features, and the S5850-48S6Q is an L2/L3 10G/40G Ethernet switch.


Figure 4: FS Leaf-Spine Switches.

Using these two switches to build a 40G leaf-spine network, the connections between the spine and leaf switches are 40G, while the connections between the leaf switches and servers are 10G. The 40G QSFP+ ports of the S5850-48S6Q connect to the spine switch S8050-20Q4C, and its 10G SFP+ ports connect to servers and routers. Every leaf switch is connected to every spine. Working through the port counts, we arrive at 4 spine switches and 20 leaf switches, so this leaf-spine architecture supports a maximum of 960 10G servers at 3:1 oversubscription.


Figure 5: FS Leaf-Spine Architecture.
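The arithmetic behind this design can be verified in a few lines. Port counts are taken from the example above (S5850-48S6Q leaf: 48x 10G downlinks; S8050-20Q4C spine: 20x 40G leaf-facing ports); the check itself is a sketch.

```python
SPINES = 4             # spine switches, one 40G leaf uplink to each
SPINE_LEAF_PORTS = 20  # 40G ports per S8050-20Q4C spine available for leaves
LEAF_DOWNLINKS = 48    # 10G server-facing ports per S5850-48S6Q leaf

leaves = SPINE_LEAF_PORTS          # every leaf uses one port on every spine
servers = leaves * LEAF_DOWNLINKS  # 20 leaves x 48 ports
oversub = (LEAF_DOWNLINKS * 10) / (SPINES * 40)  # 480G down vs 160G up per leaf

print(servers)             # 960
print(f"{oversub:.0f}:1")  # 3:1
```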

Leaf-Spine Switch Recommendation

FS S5850 series Ethernet switches are a good fit for use as leaf switches. These switches come with complete system software and applications to facilitate rapid service deployment and management in both traditional and fully virtualized data centers.

| FS P/N | S5850-32S2Q | S5850-48T4Q | S5850-48S6Q |
| --- | --- | --- | --- |
| Ports | 32x 10GbE SFP+ and 2x 40GbE QSFP+ | 48x 10GBase-T and 4x 40GbE | 48x 10GbE SFP+ and 6x 40GbE QSFP+ |
| Throughput | 595.24 Mpps | 952.32 Mpps | 1071.43 Mpps |
| Switching Capacity | 800 Gbps | 1.28 Tbps | 1.44 Tbps |
| Latency | 612 ns | 612 ns | 612 ns |
| Max Power Draw | 120W/150W | 350W | 150W/190W |

For spine switches, we recommend the FS N8500 series. They are built with advanced feature sets including MLAG, VXLAN, sFlow, BGP, and OSPF. With support for layer 2, layer 3, and overlay architectures, the N8500 series is an ideal choice for data center core switches.

| FS P/N | N8500-48B6C | N8000-32Q | N8500-32C |
| --- | --- | --- | --- |
| Ports | 48x SFP28 and 6x QSFP28 | 32x QSFP+ | 32x QSFP28 |
| Switching Capacity | 3.6 Tbps full-duplex | 2.56 Tbps full-duplex | 6.4 Tbps full-duplex |
| Forwarding Performance | 4.7 Bpps | 1.44 Bpps | 4.7 Bpps |
| Latency | 500 ns | 480 ns | 500 ns |
| Max Power Consumption | 550W | 300W | 550W |


It is important to understand the two-tier leaf-spine architecture, as it offers clear benefits over the traditional three-tier model. For data center managers, deploying a leaf-spine topology with high-performance data center switches allows the network to scale smoothly with the needs of the business.

Related Article

How to Use 10GBASE-T Switch to Build Spine-Leaf Network