As virtualization becomes popular in the data center, a shift away from the traditional three-tier networking model is taking place as well. A new networking architecture, leaf-spine, is gradually becoming mainstream in today's data center network deployments. It overcomes some of the limitations of the traditional three-tier architecture and creates a fast, predictable, scalable, and efficient communication architecture for data center switching.
A traditional three-tier network architecture, as shown in the picture below, is composed of three layers. This topology is typically designed for north-south traffic. When massive east-west traffic runs through this conventional architecture, devices connected to the same switch port may contend for bandwidth, resulting in poor response times for end users. Moreover, for one server to communicate with another, traffic may need to traverse a hierarchical path through two aggregation switches and one core switch, as shown in the picture, which adds latency and creates traffic bottlenecks. Thus, the three-tier architecture is not well suited to the modern virtualized data center, where compute and storage servers may be located anywhere within the facility.
Leaf-spine network architecture is gaining ground in large data center and cloud networks thanks to its scalability, reliability, and better performance. As shown below, the leaf-spine design consists of only two layers: the leaf layer and the spine layer. The spine layer is made up of switches that perform routing and form the backbone of the network. The leaf layer consists of access switches that connect to endpoints such as servers, storage devices, firewalls, load balancers, and edge routers. In a leaf-spine architecture, every leaf switch is interconnected with every spine switch. With this design, any server can communicate with any other server over a path that crosses no more than one spine switch between any two leaf switches. The architecture can be made non-blocking by provisioning sufficient bandwidth from each leaf switch to the spine switches.
Before designing a leaf-spine architecture, you need to know the current and future requirements. For example, if you have 100 servers today and will eventually scale up to 1,000 servers, you need to make sure your fabric can accommodate that growth. The formulas are as follows:
Number of Spine Switches = Number of uplink ports on the Leaf Switches
Number of Leaf Switches = Number of downlink ports on the Spine Switches
Number of Servers = Number of downlink ports on the Leaf Switches x Number of Leaf Switches
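The three formulas above can be expressed as a short sketch. The function name and port-count arguments below are illustrative, not from any vendor tool; it simply applies the formulas to a full-mesh leaf-spine fabric.

```python
def fabric_capacity(leaf_uplink_ports, spine_downlink_ports, leaf_downlink_ports):
    """Return (spines, leaves, servers) for a full-mesh leaf-spine fabric."""
    spines = leaf_uplink_ports              # each leaf uplink goes to a distinct spine
    leaves = spine_downlink_ports           # each spine downlink goes to a distinct leaf
    servers = leaf_downlink_ports * leaves  # server ports across all leaves
    return spines, leaves, servers

# Example: leaves with 4 uplinks and 48 downlinks, spines with 20 downlinks
print(fabric_capacity(4, 20, 48))  # (4, 20, 960)
```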
Another factor to keep in mind when designing your fabric is the oversubscription ratio. In a leaf-spine design, oversubscription is measured as the ratio of downlink bandwidth (to servers/storage) to uplink bandwidth (to spine switches). If you have 20 servers each connected with a 10Gbps link and four 10Gbps uplinks to your spine switches, you have a 5:1 oversubscription ratio (200Gbps/40Gbps). Significant increases in the use of multi-core CPUs, server virtualization, flash storage, Big Data, and cloud computing have driven modern networks toward lower oversubscription. Current network designs target oversubscription ratios of 3:1 or less.
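The ratio works out as total downlink bandwidth divided by total uplink bandwidth on a leaf. A minimal sketch (the helper name is illustrative) reproducing the 5:1 example from the text:

```python
def oversubscription(server_count, server_gbps, uplink_count, uplink_gbps):
    """Downlink bandwidth divided by uplink bandwidth for one leaf switch."""
    return (server_count * server_gbps) / (uplink_count * uplink_gbps)

# 20 servers at 10Gbps, four 10Gbps uplinks: 200Gbps / 40Gbps
ratio = oversubscription(20, 10, 4, 10)
print(f"{ratio:g}:1")  # 5:1
```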
Here we take FS.COM leaf-spine switches as an example of how to build a leaf-spine architecture. Suppose we want to build a data center fabric with the primary goal of supporting at least 900 10G servers in one fabric at 3:1 oversubscription. In this case, we will use the FS.COM S8050-20Q4C as the spine switch and the S5850-48S6Q as the leaf switch. The S8050-20Q4C is a high-performance 40G/100G Ethernet switch designed for data center spine/leaf deployments, with strong network visibility features and support for L2/L3, data center, and metro features. It has 20 40G ports and 4 100G ports. The S5850-48S6Q Ethernet switch is built to meet next-generation metro, data center, and enterprise network requirements, offering 48 10G ports and 6 40G ports. The details of these two switches are presented as follows.
In a spine-leaf network architecture for 40G applications, the connections between the spine switches and leaf switches are 40G, while the connections between the leaf switches and servers are usually 1/10G. Thus the 40G QSFP+ ports of the S5850-48S6Q can be used to connect to the S8050-20Q4C spine switches, and its 10G SFP+ ports can connect to servers and routers. Every leaf switch is connected to every spine switch. Therefore, the number of uplink ports used on each leaf determines the number of spine switches we can have (4 ports here, for four spine switches), and the number of downlink ports on each spine switch determines the number of leaf switches we can have (20 leaf switches here). So this leaf-spine architecture supports a maximum of 960 10G servers (48 × 20) at 3:1 oversubscription (480Gbps of downlink per leaf against 4 × 40Gbps = 160Gbps of uplink).
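The worked example above can be checked with a few lines of arithmetic. The port counts below are taken from the text (48 × 10G downlinks and 4 × 40G uplinks per S5850-48S6Q leaf, 20 × 40G downlinks per S8050-20Q4C spine); the variable names are illustrative.

```python
# Leaf: S5850-48S6Q (48 x 10G server ports, 4 of its 40G ports used as uplinks)
leaf_downlinks, leaf_down_gbps = 48, 10
leaf_uplinks, leaf_up_gbps = 4, 40   # 4 uplinks -> 4 spine switches

# Spine: S8050-20Q4C (20 x 40G ports facing the leaves)
spine_downlinks = 20                 # 20 downlinks -> 20 leaf switches

leaves = spine_downlinks
servers = leaf_downlinks * leaves
oversub = (leaf_downlinks * leaf_down_gbps) / (leaf_uplinks * leaf_up_gbps)

print(servers, oversub)  # 960 3.0 -> 960 servers at 3:1 oversubscription
```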
Leaf-spine network architecture offers many benefits over the traditional three-tier architecture. With an easily adaptable configuration and design, it increases total available bandwidth, simplifies network configuration, and eases management for the IT department. Leaf-spine design also greatly enhances network stability and flexibility.