Easily Unlock Data Center Leaf-Spine Architecture

Posted by FS.COM

As virtualization, cloud computing, and distributed cloud computing become more popular in the data center, a shift away from the traditional three-tier networking model is taking place as well. A newer architecture, leaf-spine, promises to overcome some of the limitations of the three-tier fabric and to create a fast, predictable, scalable, and efficient communication architecture in the data center. It is gradually becoming mainstream in today’s data center deployments. Let’s take a closer look at it in this article.

Traditional Three-Tier Architecture
A traditional three-tier network architecture, as shown in the picture below, is composed of three layers. At the bottom of the architecture is the access layer, where hosts connect to the network. The middle layer is the aggregation layer, to which the access layer is redundantly connected. The aggregation layer provides connectivity to adjacent access layer switches and data center rows, and in turn to the top of the architecture, known as the core layer. This architecture is well suited for traffic flowing from servers to external destinations. However, it is not ideal for large virtualized data centers where compute and storage servers may be located anywhere within the facility. For one server to communicate with another, traffic may need to traverse a hierarchical path through two aggregation switches and one core switch, as shown in the picture, which adds latency and can create traffic bottlenecks.

Data Center Three-Tier Architecture

Emerging Leaf-Spine Architecture
Leaf-spine network architecture is catching on in large data center and cloud networks thanks to its scalability, reliability, and better performance, and it may be time for enterprises and smaller networks to consider implementing it as well. A basic architecture diagram for leaf-spine networks is shown below. As you can see, the top layer contains spine switches and the layer below contains leaf switches. You can think of spine switches as interconnection switches and leaf switches as access switches. In the leaf-spine architecture, every access switch is connected to every interconnection switch, so any server can communicate with any other server over a path that crosses no more than one interconnection switch between any two access switches. The architecture can be made non-blocking by providing sufficient bandwidth from each access switch to the interconnection switches.
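To make the "every leaf connects to every spine" rule concrete, here is a minimal Python sketch that enumerates the links in a small, hypothetical fabric (the switch counts and names are illustrative only, not part of the design above):

# Hypothetical example: 4 spine switches and 6 leaf switches (names are illustrative).
spines = [f"spine-{i}" for i in range(1, 5)]
leaves = [f"leaf-{j}" for j in range(1, 7)]

# In a leaf-spine fabric every leaf switch has one link to every spine switch,
# so any two leaf switches are always exactly one spine hop apart.
links = [(leaf, spine) for leaf in leaves for spine in spines]

print(len(links))  # 6 leaves x 4 spines = 24 leaf-to-spine links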

Data Center Leaf-Spine Architecture

Design Your Leaf-Spine Architecture — Start Small and Scale Up Massively
Before you design a leaf-spine architecture, you need to know what your current and future needs are. For example, if you have 100 servers today and will eventually scale up to 1,000 servers, you need to make sure your fabric can grow to accommodate that future need. The sizing formulas are as follows:

  • No. of Spine Switches = No. of uplink ports on the Leaf Switches
  • No. of Leaf Switches = No. of downlink ports on the Spine Switches
  • No. of Servers = No. of downlink ports on the Leaf Switches x No. of Leaf Switches

If we plan on using a 24-port 10Gbps switch at the leaf layer, with 20 ports for servers and 4 ports for uplinks, we can have a total of 4 spine switches. If each spine switch has 64 10Gbps ports, we can scale out to a maximum of 64 leaf switches. 64 leaf switches x 20 servers per switch = 1,280 servers maximum in this fabric. This is a theoretical maximum, and you will also need to reserve capacity for connecting the fabric to the rest of the data center. You can start with 5 leaf switches and 4 spine switches to meet the current need of 100 servers (5 leaf switches x 20 servers per switch) and add leaf switches as more servers are needed.
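As a quick sanity check, here is a minimal Python sketch of the sizing formulas above, using the example port counts from this section (assumed values, not product recommendations):

import math

# Hypothetical port counts taken from the example above (not product recommendations).
leaf_ports_total = 24      # 10Gbps ports per leaf switch
leaf_uplink_ports = 4      # leaf ports reserved for spine uplinks
spine_downlink_ports = 64  # 10Gbps ports per spine switch

leaf_downlink_ports = leaf_ports_total - leaf_uplink_ports    # 20 server-facing ports

# Sizing formulas from the list above
max_spine_switches = leaf_uplink_ports                        # 4 spines
max_leaf_switches = spine_downlink_ports                      # 64 leaves
max_servers = leaf_downlink_ports * max_leaf_switches         # 1,280 servers (theoretical)

# Starting point for roughly 100 servers
initial_leaf_switches = math.ceil(100 / leaf_downlink_ports)  # 5 leaves

print(max_spine_switches, max_leaf_switches, max_servers, initial_leaf_switches)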

Leaf-Spine Architecture

  • Oversubscription Ratio

Another factor to keep in mind when designing your fabric is the oversubscription ratio. In a leaf-spine design, oversubscription is measured as the ratio of downlink bandwidth (to servers and storage) to uplink bandwidth (to spine switches). If each leaf switch has 20 servers connected with 10Gbps links and 4 10Gbps uplinks to the spine switches, the oversubscription ratio is 5:1 (200Gbps/40Gbps). Significant increases in the use of multi-core CPUs, server virtualization, flash storage, big data, and cloud computing have driven modern networks toward lower oversubscription, and current network designs typically target ratios of 3:1 or less.
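The same calculation expressed as a short Python sketch, using the example link counts and speeds from the paragraph above (assumed values, not a general rule):

# Example values from the paragraph above: 20 x 10Gbps server links, 4 x 10Gbps uplinks.
server_links, server_link_gbps = 20, 10
uplinks, uplink_gbps = 4, 10

downlink_bw = server_links * server_link_gbps   # 200 Gbps toward servers
uplink_bw = uplinks * uplink_gbps               # 40 Gbps toward the spine layer

ratio = downlink_bw / uplink_bw                 # 5.0, i.e. a 5:1 oversubscription ratio
print(f"{ratio:.0f}:1")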

Conclusion
Leaf-spine network architecture offers many unique benefits over the traditional three-tier model. With easily adaptable configurations and designs, leaf-spine gives the IT department better control over oversubscription and scalability. In today’s leaf-spine designs, the most common deployment uses 40G leaf-to-spine uplinks and 10G leaf-to-server downlinks. FS.COM provides a full range of 40G and 10G cabling infrastructure for your network:

40G Cabling Infrastructure:
  • 40G QSFP+ Transceivers
  • 40G QSFP+ to QSFP+ DAC
  • 40G QSFP+ to QSFP+ AOC
  • MTP/MPO Trunk Cables
  • Fiber Enclosures

10G Cabling Infrastructure:
  • 10G SFP+ Transceivers
  • 10G SFP+ to SFP+ DAC
  • 10G SFP+ to SFP+ AOC
  • LC Uniboot Fiber Patch Cables
  • Cat6 & Cat6A Network Cables