
What Is Spine-leaf Architecture and How to Design It

Updated on May 24, 2022

With the exponential growth of servers and the expansion of the data center switching layer, spine-leaf architecture is gradually replacing the traditional three-tier architecture. So how much do you know about spine-leaf architecture, and how do you build one? This article explains what a spine-leaf architecture is and how to design it.

What Is Spine-leaf Architecture?

The spine-leaf architecture consists of only two layers of switches: spine and leaf. The spine layer is made up of switches that perform routing and form the core of the network. The leaf layer consists of access switches that connect to servers, storage devices, and other endpoints. This structure reduces hop count and lowers network latency in data center networks.

In the spine-leaf architecture, every leaf switch connects to every spine switch. With this design, any server can communicate with any other server, and the path between any two leaf switches never traverses more than one spine switch.
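To make the full-mesh wiring concrete, here is a minimal sketch (all names are illustrative) that enumerates the links in a small fabric and shows why any leaf-to-leaf path crosses exactly one spine:

```python
# A minimal sketch (all names illustrative) that enumerates the links in a
# small spine-leaf fabric where every leaf connects to every spine.
from itertools import product

def fabric_links(num_spines: int, num_leaves: int) -> list:
    """Return every (spine, leaf) link in a full-mesh spine-leaf fabric."""
    spines = [f"spine{i}" for i in range(1, num_spines + 1)]
    leaves = [f"leaf{j}" for j in range(1, num_leaves + 1)]
    return list(product(spines, leaves))

links = fabric_links(num_spines=2, num_leaves=4)
print(len(links))  # 2 spines x 4 leaves = 8 links
# Traffic between servers on different leaves always follows
# leaf -> spine -> leaf, i.e. it crosses exactly one spine switch.
```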

[Figure 1: Spine-leaf architecture]

Why Use Spine-leaf Architecture?

The spine-leaf architecture has become a popular data center architecture, bringing advantages such as scalability and higher network performance. Its benefits in modern networks are summarized here in five points.

Increased redundancy: The spine-leaf architecture connects servers to the core network with a high degree of flexibility, which matters in hyperscale data centers. Each leaf switch acts as a bridge between the servers and the core, and connects to all spine switches, creating a large non-blocking fabric that increases redundancy and reduces traffic bottlenecks.

Increased bandwidth: The spine-leaf architecture can effectively avoid traffic congestion by applying protocols such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB). The fabric can be built at Layer 2 or Layer 3, so uplinks can be added to the spine switches to expand inter-layer bandwidth, reduce oversubscription, and keep the network stable.

Improved scalability: The spine-leaf architecture provides multiple links that can carry traffic, and adding switches increases capacity, making it easy to scale the network as the business grows.

Reduced expenses: A spine-leaf architecture increases the number of connections each switch can handle, so a data center needs fewer devices. Many data center networks adopt spine-leaf architecture to minimize costs.

Minimal latency and congestion: Limiting the path between any source and destination to a maximum of two hops creates a more direct traffic path, improving overall performance and reducing bottlenecks. The only exception is when both endpoints sit on the same leaf switch.

Spine-leaf vs. Traditional Three-Tier Architecture

The main differences between spine-leaf architecture and three-tier architecture lie in the number of network layers and in whether the traffic they carry is primarily north-south or east-west.

As shown in the following figure, the traditional three-tier network architecture consists of three layers: core, aggregation, and access. The access switches connect to servers and storage devices; the aggregation layer aggregates access-layer traffic and provides redundant connections to the access layer; and the core layer provides network transmission. However, this three-tier topology is typically designed for north-south traffic and relies on the Spanning Tree Protocol (STP), supporting up to about 100 switches. As network traffic keeps growing, this inevitably leads to blocked ports and limited scalability.

The spine-leaf architecture adds parallel east-west paths to the north-south backbone, fundamentally solving the bottleneck problem of the traditional three-tier architecture. Data transmitted between two nodes travels directly across the fabric, offloading the backbone. Compared with the traditional three-tier architecture, the spine-leaf architecture connects any two leaves through a spine with a single hop in between, minimizing latency and bottlenecks. The switch configuration is fixed, so no network changes are required for a dynamic server environment.

[Figure 2: Spine-leaf vs. traditional three-tier architecture]

How to Design Spine-leaf Architecture?

Before designing a spine-leaf architecture, you need to work through some important considerations, especially the oversubscription ratio and the sizing of the spine switches. A detailed example is also given below for reference.

Design Considerations of Spine-leaf Architecture

Oversubscription ratio: This is the contention ratio when all devices send traffic at the same time. It can be measured in the north-south direction (traffic entering and leaving the data center) and in the east-west direction (traffic between devices within the data center). For modern network architectures, an oversubscription ratio of 3:1 or less is appropriate, measured as the ratio of downstream capacity (to servers and storage) to upstream bandwidth (to spine switches).

For example, a leaf switch with 48x 10G ports has 480 Gb/s of downstream port capacity. If you connect 4x 40G uplink ports from that leaf switch to 40G spine switches, it has 160 Gb/s of uplink capacity. The ratio is 480:160, or 3:1. Data center uplinks are typically 40G or 100G and can be migrated over time from a starting point of Nx 40G to Nx 100G. Note that the uplinks should always run faster than the downlinks so that individual ports do not become blocking.
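The arithmetic above can be captured in a small sketch; the function and port counts are illustrative, not taken from any vendor tool:

```python
# A minimal sketch of the oversubscription math above; names are illustrative.
def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    """Ratio of downstream capacity (to servers) to upstream capacity (to spines)."""
    return downlink_gbps / uplink_gbps

downlink = 48 * 10   # 48x 10G server-facing ports -> 480 Gb/s
uplink = 4 * 40      # 4x 40G uplinks to the spine  -> 160 Gb/s
print(f"{oversubscription(downlink, uplink):.0f}:1")  # -> 3:1
```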

Leaf and spine sizing: The maximum number of leaf switches in the topology is determined by the port density of the spine switches. The number of spine switches is governed by the combination of the throughput required between the leaf switches, the number of redundant/ECMP (equal-cost multipath) paths, and spine port density. Both switch counts and port density must therefore be taken into account to prevent network problems.
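As a rough sketch of these sizing rules, assuming each leaf dedicates one uplink port to every spine (names and counts here are illustrative):

```python
# A rough sketch of the sizing rules above, assuming each leaf dedicates
# one uplink port to every spine.
def max_leaves(spine_port_count: int) -> int:
    """Each leaf consumes one port on every spine, so spine port density caps the leaf count."""
    return spine_port_count

def spines_needed(uplinks_per_leaf: int) -> int:
    """Each leaf uplink lands on a different spine, giving one ECMP path per spine."""
    return uplinks_per_leaf

print(max_leaves(64))    # a 64-port spine supports up to 64 leaf switches
print(spines_needed(4))  # 4 uplinks per leaf -> 4 spines and 4 ECMP paths
```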

Layer 2 or Layer 3 design: A two-tier spine-leaf fabric can be built at either Layer 2 (configuring VLANs) or Layer 3 (subnetting). Layer 2 designs provide maximum flexibility, allowing VLANs to span anywhere and MAC addresses to migrate anywhere. Layer 3 designs provide the fastest convergence times and the largest scale, with fan-out ECMP supporting up to 32 or more active spine switches.

How to Deploy Switches in a Spine-leaf Architecture?

With these points in mind, suppose we need to build a data center whose main goal is to connect at least 480x 10G servers. The following example shows how to complete the spine-leaf design quickly.

We use the NC8200-4TD, which provides 40G ports, as the spine switch, and the N5850-48S6Q, which provides 40G uplink and 10G downlink ports, as the leaf switch. The uplink bandwidth is therefore 40G per port and the downlink bandwidth 10G per port. Since a reasonable oversubscription ratio between leaf and spine should not exceed 3:1, each leaf switch connects up to 24x 10G servers, giving a total capacity of 480x 10G across the fabric. Throughout the fabric, the switches support PFC, MLAG, VXLAN, EVPN-VXLAN, and other related virtualization technologies, which is sufficient to ensure the reliability of the design.
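As a hedged sketch, the sizing of this example can be reproduced as follows; the 3:1 cap and the per-leaf uplink count are inferred from the text above, not taken from a datasheet:

```python
# A sketch that reproduces the sizing of this example. The 3:1 cap and the
# per-leaf uplink count are inferred from the text, not from a datasheet.
import math

TARGET_SERVERS = 480    # design goal: 480x 10G servers
SERVERS_PER_LEAF = 24   # capped so each leaf stays within 3:1
SERVER_PORT_GBPS = 10
UPLINK_PORT_GBPS = 40

leaves = math.ceil(TARGET_SERVERS / SERVERS_PER_LEAF)    # 20 leaf switches
downlink = SERVERS_PER_LEAF * SERVER_PORT_GBPS           # 240 Gb/s per leaf
uplinks = math.ceil(downlink / (3 * UPLINK_PORT_GBPS))   # 2x 40G per leaf
ratio = downlink / (uplinks * UPLINK_PORT_GBPS)          # 3.0 -> 3:1
print(leaves, uplinks, ratio)  # -> 20 2 3.0
```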

[Figure 3: Spine-leaf deployment example]

Data Center Spine-leaf Switch Recommendations

First, understand the performance characteristics of spine and leaf switches, such as port density, virtualization technology, and redundant hardware. Then select the appropriate switches for your deployment needs to complete the network architecture. FS N-series data center switches come with a complete virtualization software system to help you achieve higher network performance and rapid deployment.

| Product | Application | Ports | Virtualization Technology | Forwarding Rate | Switching Capacity | Latency | Max Power Consumption |
|---|---|---|---|---|---|---|---|
| NC8200-4TD | Spine Layer | 128x 10G/25G, 64x 40G, or 32x 100G | MLAG, Stack, EVPN-VXLAN | 4.76 Bpps | 6.4 Tbps | <1μs | <650W |
| NC8400-4TH | Spine Layer | 32x 100G, or 16x 100G, 4x 400G | MLAG | 1.905 Bpps | 25.6 Tbps | <1μs | <1950W |
| N9510-64D | Spine Layer | 64x 400G | / | 10.3 Bpps | 51.2 Tbps | <1μs | <2524W |
| N8560-64C | Spine Layer | 64x 100G | MLAG, Stack | 4.288 Bpps | 12.8 Tbps | <1μs | <600W |
| N8560-32C | Spine Layer | 32x 100G | MLAG, Stack, EVPN-VXLAN | 2.98 Bpps | 6.4 Tbps | <1μs | <450W |
| N8550-32C | Spine Layer | 32x 100G QSFP28, 2x 10Gb SFP+ | MLAG, VXLAN | 4.7 Bpps | 6.4 Tbps full duplex | / | 550W |
| N5850-48S6Q | Leaf Layer | 48x 10G SFP+, 6x 40G QSFP+ | VXLAN | 1 Bpps | 1.44 Tbps full duplex | / | 282W |
| N5860-48SC | Leaf Layer | 48x 10G SFP+, 8x 100G QSFP28 | MLAG, Stack, EVPN-VXLAN | 1.90 Bpps | 2.56 Tbps | <1μs | <300W |
| N8550-48B8C | Leaf Layer | 48x 25G SFP28, 2x 10Gb SFP+, 8x 100G QSFP28 | / | 2.9 Bpps | 4 Tbps | / | 550W |
| N8560-48BC | Leaf Layer | 48x 25G SFP28, 8x 100G QSFP28 | MLAG, Stack, EVPN-VXLAN | 1.929 Bpps | 4 Tbps | <1μs | <300W |
