How to Choose Data Center Spine and Leaf Switches?
Leaf-spine architecture was developed to cope with the rapid growth of data center traffic and the continuous expansion of data center scale, while also meeting the need for high-speed interconnection within the data center. The architecture, however, depends on properly chosen switches: how should spine and leaf switches be selected and configured so the fabric performs at its best? This article answers that question.
Why Do You Need Spine-leaf Architecture?
Spine-leaf architecture makes network latency predictable. In a leaf-spine fabric, you always know how many hops each packet traverses: any two leaf switches are exactly one spine apart. Because every path crosses the same number of devices and covers the same distance, latency between leaf switches stays consistent and low. In addition, leaf-spine architecture removes the need for the Spanning Tree Protocol (STP) and relies on layer 3 routing between the layers, which yields a more stable network environment.
Leaf-spine architecture also greatly improves network efficiency, especially in high-performance data centers and high-bandwidth cloud networks. When links become congested, additional spine and leaf switches can be added, or VXLAN can be deployed to extend coverage and balance traffic across the fabric. Leaf-spine architecture not only removes the bottleneck on east-west (server-to-server) traffic but is also highly scalable, so it can be adapted to large, medium, and small data centers alike.
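To make the load-balancing idea concrete, here is a minimal Python sketch of how traffic can be spread across equal-cost spine paths by hashing a flow's 5-tuple. The spine names and the flow tuple are hypothetical, and real switches perform this in hardware with vendor-specific hash inputs; the point is simply that adding a spine adds one more equal-cost path.

```python
import hashlib

# Hypothetical spine layer: adding a spine simply adds one more equal-cost path.
SPINES = ["spine-1", "spine-2", "spine-3", "spine-4"]

def pick_spine(src_ip, dst_ip, src_port, dst_port, proto, spines=SPINES):
    """Hash the flow 5-tuple and map it onto one equal-cost spine path (ECMP-style)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(spines)
    return spines[index]

# Packets of the same flow always take the same spine; different flows spread out.
print(pick_spine("10.0.1.10", "10.0.2.20", 49152, 443, "tcp"))
```

Because the hash is deterministic, packets of one flow stay on one path (no reordering), while many flows spread roughly evenly across however many spines the fabric has.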
What Are Spine and Leaf Switches?
Spine and leaf switches are the two main components of the leaf-spine architecture. Spine switches play the role that core switches play in a traditional design, with the difference that high-port-density 100G switches are generally sufficient for the spine layer. Leaf switches are the equivalent of access layer switches: they provide network connections to servers and other endpoints and connect upward to every spine switch.
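As a rough illustration of these roles (not tied to any particular product), the sketch below models a small fabric in Python. Every leaf connects to every spine, so any leaf-to-leaf path is leaf -> spine -> leaf, exactly two hops, which is what gives the architecture its predictable latency. The switch names and counts are hypothetical.

```python
# Hypothetical fabric: 4 spines, 8 leaves, full mesh between the two layers.
spines = [f"spine-{i}" for i in range(1, 5)]
leaves = [f"leaf-{i}" for i in range(1, 9)]

# Adjacency: each leaf has an uplink to every spine; spines never connect to servers directly.
links = {leaf: set(spines) for leaf in leaves}

def leaf_to_leaf_paths(src, dst):
    """Return all paths between two leaves; each one crosses exactly one spine (two hops)."""
    return [(src, spine, dst) for spine in links[src] & links[dst]]

paths = leaf_to_leaf_paths("leaf-1", "leaf-5")
print(len(paths), "equal-cost paths, each", len(paths[0]) - 1, "hops long")
```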
Data Center Spine Switches
Spine switches handle layer 3 traffic and offer high port density for scalability. Each of their L3 ports is dedicated to a downlink toward an L2 leaf switch; servers, access points, and firewalls never attach to the spine directly. The number of ports on a spine switch therefore limits how many leaf switches the fabric can hold, which in turn determines the maximum number of servers that can be connected to the network.
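The sketch below turns that sizing rule into simple arithmetic. The port counts are hypothetical examples, not the specification of any particular switch: with 64 spine ports, at most 64 leaves can attach, and the server count follows from each leaf's downlink ports.

```python
def fabric_capacity(spine_ports, leaf_uplinks, leaf_downlinks):
    """Estimate maximum fabric size from port counts alone (cabling and optics ignored).

    Each leaf consumes one port on every spine, so the spine port count caps the
    number of leaves; servers then attach only to leaf downlink ports.
    """
    max_leaves = spine_ports          # one spine port per leaf
    max_spines = leaf_uplinks         # one leaf uplink per spine
    max_servers = max_leaves * leaf_downlinks
    return max_spines, max_leaves, max_servers

# Example: 64-port spines, leaves with 8 uplinks and 48 downlinks.
spines, leaves, servers = fabric_capacity(spine_ports=64, leaf_uplinks=8, leaf_downlinks=48)
print(f"up to {spines} spines, {leaves} leaves, {servers} servers")
```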
In a high-density data center network, evenly distributing leaf uplinks across the line cards of the spine switches and minimizing cross-module traffic can significantly improve spine performance. Data center spine switches generally need large buffers, high capacity, and strong virtualization support. They are typically equipped with 10G/25G/40G/100G ports and a complete software stack with protocols and features such as EVPN-VXLAN, stacking, and MLAG to facilitate rapid network deployment.
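As a rough illustration of the even-distribution advice, the sketch below assigns leaf uplinks to a chassis spine's line cards in round-robin order so that no single card carries a disproportionate share of the uplinks. The line-card and leaf counts are hypothetical.

```python
from collections import defaultdict
from itertools import cycle

def spread_uplinks(leaf_names, line_cards):
    """Assign each leaf uplink to a spine line card in round-robin order."""
    assignment = defaultdict(list)
    cards = cycle(line_cards)
    for leaf in leaf_names:
        assignment[next(cards)].append(leaf)
    return dict(assignment)

# Hypothetical chassis spine with 4 line cards and 16 leaf uplinks.
leaves = [f"leaf-{i}" for i in range(1, 17)]
plan = spread_uplinks(leaves, ["card-1", "card-2", "card-3", "card-4"])
for card, attached in plan.items():
    print(card, "->", attached)
```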
Data Center Leaf Switches
Leaf switches are among the most commonly deployed devices in a data center: they mainly carry traffic between servers and forward both layer 2 and layer 3 traffic. The number of uplink ports on a leaf switch limits how many spine switches it can connect to, while the number of downlink ports determines how many devices can attach to it. Uplink ports generally run at 40G or 100G, while downlink ports range across 10G/25G/40G/50G/100G depending on the model you plan to use.
As the number of servers grows, leaf switches with higher port speeds and higher port counts become necessary. To prevent link congestion, the oversubscription ratio of downlink bandwidth to uplink bandwidth on a leaf switch should be kept below 3:1, or traffic should be balanced with the appropriate technologies. Like spine switches, leaf switches can add VXLAN, PFC, stacking, or MLAG, and should support both IPv4 and IPv6 for easier network management and expansion.
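To make the 3:1 guideline concrete, the short calculation below uses a hypothetical leaf with 48x 25G downlinks and 8x 100G uplinks; the ratio works out to 1.5:1, comfortably inside the guideline.

```python
def oversubscription(downlink_count, downlink_gbps, uplink_count, uplink_gbps):
    """Ratio of total downlink bandwidth to total uplink bandwidth on a leaf switch."""
    downlink_bw = downlink_count * downlink_gbps
    uplink_bw = uplink_count * uplink_gbps
    return downlink_bw / uplink_bw

# Hypothetical leaf: 48x 25G server-facing ports, 8x 100G spine-facing ports.
ratio = oversubscription(48, 25, 8, 100)
print(f"oversubscription ratio: {ratio:.1f}:1")   # 1.5:1, within the 3:1 guideline
```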
Data Center Spine and Leaf Switches Recommendation
Given the advantages that leaf-spine architecture brings to the network environment, the market offers a steadily growing choice of spine and leaf switches, so it is critical to select models that match your business needs. For example, FS data center spine switches are available in 1U and 2U form factors and combine strong performance with virtualization technologies to deliver low latency, zero packet loss, and high throughput and forwarding rates for growing data center environments. Their features include industry-leading chips, redundant hot-swappable power supplies and fans, VXLAN, MLAG (VAP), PFC, ECN, and more. FS data center spine switches are also offered with different port counts and speeds, so you can choose the configuration that fits your business needs.
Products | N8560-32C | N8550-32C | N8560-64C | NC8200-4TD |
---|---|---|---|---|
Ports | 32x 100G QSFP28 | 32x 100G QSFP28, 2x 10Gb SFP+ | 64x 100G QSFP28 | 128x 10G/25G, 64x 40G, or 32x 100G |
CPU | Intel® Xeon D-1527 (Quad-core, 2.2 GHz) | Intel® Xeon® D-1518 processor quad-core 2.2 GHz | Cavium CN7230 (Quad-core, 1.5 GHz) | / |
Switch Chip | BCM56870 | Broadcom BCM56870 Trident III | BCM56970 | BCM56870 |
Switching Capacity | 6.4 Tbps | 6.4 Tbps full duplex | 12.8 Tbps | 6.4 Tbps |
Forwarding Rate | 4.76 Bpps | 4.7 Bpps | 9.52 Bpps | 4.76 Bpps |
Number of VLANs | 4K | 4K | 4K | 4K |
Memory | SDRAM 8GB | DRAM 2x 8 GB SO-DIMM DDR4 | SDRAM 4GB | SDRAM 4GB |
Flash Memory | 240GB | 2x 16MB | 8GB | 8GB |
FS data center leaf switches offer high-performance models with 10G/25G/40G/100G ports to meet diverse requirements for uplink and downlink port counts. Like the spine switches, they apply virtualization technologies to handle growing traffic, delivering zero packet loss, low latency, and non-blocking lossless Ethernet for a reliable network. Their software supports layer 3 IPv4 and IPv6 routing protocols, VXLAN, MLAG, and more. The FS N8550 and N5850 series run the self-developed, scalable open network operating system FSOS for higher reliability, making them especially suitable for small and medium data centers as well as medium and large campus networks.
Products | N5860-48SC | N8560-48BC | N8550-48B8C | N5850-48S6Q | NC8200-4TD |
---|---|---|---|---|---|
Ports | 48x 10G SFP+, 8x 100G QSFP28 | 48x 25G SFP28, 8x 100G QSFP28 | 48x 25G SFP28, 2x 10Gb SFP+, 8x 100G QSFP28 | 48x 10G SFP+, 6x 40G QSFP+ | 128x 10G/25G, 64x 40G, or 32x 100G |
CPU | Cavium CN7130 (Quad-core, 1.2 GHz) | Cavium CN7130 (Quad-core, 1.2 GHz) | Intel® Xeon® D-1518 processor quad-core 2.2 GHz | Intel Atom C2538 processor quad-core 2.4GHz | / |
Switch Chip | BCM56770 | BCM56873 | Broadcom BCM56873 Trident III | Broadcom BCM56864 Trident II+ | BCM56870 |
Switching Capacity | 2.56 Tbps | 4 Tbps | 4 Tbps full duplex | 1.44 Tbps full duplex | 6.4 Tbps |
Forwarding Rate | 1.90 Bpps | 2.98 Bpps | 2.9 Bpps | 1 Bpps | 4.76 Bpps |
Number of VLANs | 4K | 4K | 4K | 4K | 4K |
Memory | SDRAM 4GB | SDRAM 4GB | DRAM 2x 8 GB SO-DIMM DDR4 | DRAM 8GB SO-DIMM DDR3 RAM with ECC | SDRAM 4GB |
Flash Memory | 8GB | 8GB | 2x 16MB | 16MB | 8GB |
System | / | / | FSOS | FSOS | / |
Summary
High-density data center networks require higher-performance switches that also scale with the network. Leaf-spine architecture delivers low latency and easy expansion through predictable east-west traffic paths, while FS data center switches provide complete protocol support and virtualization features that match the capabilities spine and leaf switches need to offer.