Top of rack (ToR) is a common architecture for switch-to-server connections. According to a 2015 survey, ToR was the most widely used architecture in both colocation data centers and enterprise data centers, and current trends suggest it will remain widely deployed.
“But, wait, what does a ToR look like? Is a top-of-rack switch placed at the top of the rack?”
Good questions. A ToR switch can indeed sit at the top of the rack, but its physical location does not have to be there; it can also be placed at the bottom or in the middle of the rack. In practice, however, engineers have found that the top of the rack works best, thanks to easier accessibility and cleaner cable management.
The benefits of ToR are many; in brief:
- Copper stays “In Rack”.
- Lower cabling costs.
- Modular and flexible “per rack” architecture.
- Future-proofing for higher speeds.
In a ToR design, at least one switch is placed in each rack, and the servers within the rack connect to that switch, typically over copper cabling. The switches in each rack then connect to top-tier switches.
In today’s leaf-spine topology, the ToR switches are the leaf switches, and they attach to the spine switches. For example, 10G servers connect to a 10G ToR/leaf switch (which also has 40G ports) via 10G SFP+ DAC (direct attach copper) cables, or via Cat6a/Cat7 cable with 10GBASE-T transceivers. The 10G leaf switch then connects to a 40G spine switch.
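A design point worth checking in this layout is the oversubscription ratio: how much server-facing bandwidth shares the spine-facing uplinks. As a hedged sketch (the 48-port/6-uplink counts below are an assumed configuration, not a spec from any particular switch):

```python
# Hypothetical leaf switch: 48 x 10G server-facing ports
# uplinked through 6 x 40G spine-facing ports.
downlink_gbps = 48 * 10   # 10G SFP+ server ports
uplink_gbps = 6 * 40      # 40G QSFP+ uplinks to the spine
ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription ratio: {ratio:.1f}:1")  # prints "Oversubscription ratio: 2.0:1"
```

A 2:1 ratio means that if every server transmits at line rate toward the spine simultaneously, the uplinks carry half the offered load; many designs accept a modest ratio like this because servers rarely all burst at once.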
The combination of ToR and leaf-spine solves some problems of the traditional three-tier (access-aggregation-core) topology, such as the “traffic jam” at the top-tier switch. In a three-tier topology, all data traffic takes a single “best path” chosen from a set of alternatives, until that path becomes congested and packets are dropped.
In a leaf-spine topology, instead of funneling all traffic onto one chosen uplink, each flow is hashed onto one of the equal-cost uplink paths (equal-cost multipath, or ECMP), so the traffic load is distributed evenly across the top-tier switches. If one of the top-tier switches fails, performance through the data center degrades only slightly.
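The flow-hashing idea behind ECMP can be sketched in a few lines. This is an illustrative model, not any vendor’s actual hash (real switches use hardware hash functions over the packet 5-tuple); the IPs and ports are made-up examples:

```python
import hashlib

def ecmp_pick_uplink(src_ip, dst_ip, src_port, dst_port, proto, n_uplinks):
    """Hash the flow's 5-tuple so every flow deterministically maps to
    one uplink, while many different flows spread across all uplinks."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_uplinks

# The same flow always lands on the same spine, which avoids
# packet reordering; different flows land on different spines.
path_a = ecmp_pick_uplink("10.0.1.5", "10.0.9.7", 49152, 443, "tcp", 4)
path_b = ecmp_pick_uplink("10.0.1.5", "10.0.9.7", 49152, 443, "tcp", 4)
assert path_a == path_b  # same flow -> same path
```

Hashing per flow (rather than per packet) is the reason ECMP spreads load without reordering TCP streams.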
Since ToR is the most popular data center design, ToR switches have naturally become popular as well. Below are some high-performance ToR switches at different switch-to-server data rates, ranging from 1G to 100G.
| Dominant Switch-to-Server Data Rate | Switch Model | Port Type | Switching Capacity | Latency | Packet Forwarding Rate |
|---|---|---|---|---|---|
| 1G | Dell Networking S3048-ON | 48× SFP, 4× SFP+ | 260 Gbps (full duplex), 130 Gbps (half duplex) | 1000BASE-T: < 3.7 µs; 10G: < 1.8 µs | 131 Mpps |
| 1G | HPE 5900AF-48G-4XG-2QSFP+ | 48× SFP, 4× SFP+, 2× QSFP+ | 336 Gbps | 10G: < 1.5 µs | 250 Mpps |
| 10G | Cisco Nexus 5672UP | 48× SFP+, 6× QSFP+ | 1.44 Tbps | ~1 µs at any packet size | 1.07 Bpps |
| 10G | Juniper QFX5100-48S | 48× SFP+, 6× QSFP+ | 1.44 Tbps | 550 ns to 3 µs | 1.08 Bpps |
| 25G | Arista 7280SR2-48YC6 | 48× SFP28, 6× QSFP28 | 1.8 Tbps | 3.8 µs | 1.6 Bpps |
| 25G | Mellanox SN2410 | 48× SFP28, 8× QSFP28 | 2 Tbps | 300 ns | 2.98 Bpps |
| 40G | FS S8050-20Q4C | 20× QSFP+, 4× QSFP28, 4× SFP+ | 2.4 Tbps | 612 ns | 1.2 Bpps |
| 100G | Arista 7280CR-48 | 48× QSFP28, 8× QSFP+ | 10.24 Tbps | ~3.8 µs | 5.76 Bpps |
All of these ToR switches support L2/L3 features, IPv4/IPv6 dual stack, data center bridging, and FCoE. ToR switches are typically required to offer high port counts and low latency, since they handle traffic across different layers.
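The packet forwarding rates in the table follow from the switch's front-panel bandwidth: a wire-speed switch must forward minimum-size (64-byte) Ethernet frames, each of which occupies 64 B of frame plus 8 B of preamble and 12 B of inter-frame gap on the wire. A quick sanity check (the 88 Gbps figure is the sum of 48× 1G + 4× 10G ports on the Dell S3048-ON, assumed unidirectional):

```python
# A minimum-size Ethernet frame occupies 64 + 8 (preamble) + 12 (IFG)
# = 84 bytes = 672 bits on the wire.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8  # 672

def wire_rate_mpps(aggregate_gbps):
    """Wire-speed forwarding rate, in millions of packets per second,
    for 64-byte frames at the given aggregate bandwidth."""
    return aggregate_gbps * 1e9 / BITS_PER_MIN_FRAME / 1e6

# Dell S3048-ON front-panel bandwidth: 48 x 1G + 4 x 10G = 88 Gbps
print(round(wire_rate_mpps(88)))  # prints 131, matching the 131 Mpps in the table
```

The same rule of thumb (about 1.488 Mpps per Gbps) reproduces the forwarding rates quoted for the other switches as well.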
At present, 1G and 10G still account for the largest share of all switch-to-server connections, and 40G and 100G ToR switches that support multiple data rates remain relatively few. The 40G and 100G examples listed above are among the few multiport, high-speed ToR switches that combine low latency with high performance.