
FS NVIDIA Mellanox InfiniBand Switch Overview

Posted on Mar 25, 2024

In today's digital era, fast data transmission is crucial in the fields of modern computing and communication. NVIDIA Mellanox InfiniBand switches play a key role in data center networks to meet the demands of large-scale data transfer and high-performance computing. This article will introduce the fundamentals of InfiniBand technology, the functionalities and features of Mellanox InfiniBand switches, and their applications in various domains.

What Is InfiniBand Architecture?

InfiniBand is an I/O architecture designed to support connectivity for Internet-scale infrastructure, bringing fabric consolidation to the data center: storage networking can run concurrently with clustering, communication, and management traffic over the same infrastructure, while preserving the behavior of each individual fabric.

InfiniBand is an open-standard network interconnect technology offering high bandwidth, low latency, and high reliability. The technology is defined by the InfiniBand Trade Association (IBTA) and is widely used in supercomputer clusters. With the rise of artificial intelligence, it is also becoming the primary network interconnect for GPU servers. For more details about InfiniBand, please refer to Getting to Know About InfiniBand.
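
To make "high bandwidth" concrete, here is a back-of-the-envelope sketch (plain Python, assuming an ideal line rate with no protocol overhead) of the serialization time for a 1 GiB payload at the HDR and NDR per-port speeds discussed later in this article:

```python
# Ideal serialization time for a payload at HDR/NDR line rates.
# Real-world goodput is lower once encoding and protocol overhead are included.

LINK_RATES_GBPS = {"HDR (200G)": 200, "NDR (400G)": 400}

def transfer_time_ms(payload_bytes: int, rate_gbps: float) -> float:
    """Ideal time, in milliseconds, to push payload_bytes through one link."""
    return payload_bytes * 8 / (rate_gbps * 1e9) * 1e3

payload = 1 << 30  # 1 GiB
for name, rate in LINK_RATES_GBPS.items():
    print(f"{name}: {transfer_time_ms(payload, rate):.1f} ms per GiB")
# HDR (200G): 42.9 ms per GiB
# NDR (400G): 21.5 ms per GiB
```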

Figure: Interconnects in the TOP100 supercomputers.

FS Mellanox InfiniBand Switches at a Glance

With its expertise and technological strength in the field of InfiniBand, NVIDIA offers a range of powerful InfiniBand switches. These NVIDIA/Mellanox InfiniBand switches, equipped with core functionalities such as routing, forwarding, and data flow management, enable efficient data transmission and communication. NVIDIA's product line covers a wide range of scalability, performance, and functional requirements, providing users with flexible choices. The table below showcases the Mellanox InfiniBand switches available from FS.

 

|                    | MQM8790-HS2F                     | MQM8700-HS2F                     | MQM9700-NS2F                      | MQM9790-NS2F                      |
|--------------------|----------------------------------|----------------------------------|-----------------------------------|-----------------------------------|
| Product Series     | QM8790 series                    | QM8700 series                    | QM9700 series                     | QM9790 series                     |
| Port Speeds        | 40x 200Gb/s or 80x 100Gb/s       | 40x 200Gb/s or 80x 100Gb/s       | 64x 400Gb/s on 32 OSFP connectors | 64x 400Gb/s on 32 OSFP connectors |
| Port Type          | QSFP56                           | QSFP56                           | OSFP                              | OSFP                              |
| Switching Capacity | 16Tb/s                           | 16Tb/s                           | 51.2Tb/s                          | 51.2Tb/s                          |
| Airflow            | Back-to-Front (P2C)              | Back-to-Front (P2C)              | Back-to-Front (P2C)               | Back-to-Front (P2C)               |
| Switch Chip        | NVIDIA Quantum                   | NVIDIA Quantum                   | NVIDIA Quantum-2                  | NVIDIA Quantum-2                  |
| Power Supplies     | 1+1 hot-swappable                | 1+1 hot-swappable                | 1+1 hot-swappable                 | 1+1 hot-swappable                 |
| Fans               | 5+1 hot-swappable                | 5+1 hot-swappable                | 6+1 hot-swappable                 | 6+1 hot-swappable                 |
| Management         | Unmanaged                        | Managed                          | Managed                           | Unmanaged                         |
| Rack Units         | 1 RU                             | 1 RU                             | 1 RU                              | 1 RU                              |

Note: In the airflow row, P2C (Power-to-Cable) and C2P (Cable-to-Power) describe airflow direction, where P refers to the power-supply side of the chassis and C to the cable (line-interface) side. The convention here is that the cable side is the front of the switch and the power side is the back, so Back-to-Front (P2C) means air enters at the power side and exhausts at the cable side.
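
The switching-capacity figures in the table follow directly from the port configuration: aggregate capacity is conventionally quoted as the sum of all port speeds counted bidirectionally. A minimal sketch reproducing the table's numbers:

```python
# Reproduce the table's switching-capacity figures from port count and speed.
# Aggregate capacity is conventionally quoted bidirectionally (factor of 2).

def switching_capacity_tbps(ports: int, speed_gbps: int) -> float:
    return ports * speed_gbps * 2 / 1000

print(switching_capacity_tbps(40, 200))  # QM8700/QM8790: 16.0 Tb/s
print(switching_capacity_tbps(64, 400))  # QM9700/QM9790: 51.2 Tb/s
```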

There are two options each for the 200G HDR and 400G NDR InfiniBand switches: QM8700 and QM8790 at 200G, and QM9700 and QM9790 at 400G. The only difference within each pair is the management mode: the QM8700 and QM9700 provide a control interface for out-of-band management, while the externally managed QM8790 and QM9790 rely on the NVIDIA Unified Fabric Manager (UFM®) platform for management. Also note the port form factors: the Mellanox 200G HDR InfiniBand switches use QSFP56 ports, while the 400G NDR switches use OSFP ports.
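
As an illustration of what "externally managed" means in practice, an unmanaged switch is typically discovered and monitored from a fabric-attached host rather than through its own management interface. The sketch below simply shells out to the standard ibswitches tool from the infiniband-diags package (assumed to be installed on a host with an active InfiniBand port); exact output formatting varies by OFED version:

```python
# Minimal sketch: list the InfiniBand switches visible on the fabric by
# calling the standard `ibswitches` diagnostic (infiniband-diags package).
# Assumes it runs on a host with an active IB port and the tool installed.
import subprocess

def list_fabric_switches() -> list[str]:
    result = subprocess.run(
        ["ibswitches"], capture_output=True, text=True, check=True
    )
    # ibswitches prints one switch per line (GUID plus a description string).
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for switch in list_fabric_switches():
        print(switch)
```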

Find more details about the Mellanox Switches and Adapters here.

Key Features of Mellanox InfiniBand Switches

InfiniBand switches play a crucial role in HPC and data center environments by facilitating efficient communication between servers, storage systems, and networks. They provide the foundation for high-speed data transfer, enabling organizations to process complex workloads and unlock new levels of performance.

  • High-speed interconnectivity: Mellanox InfiniBand switches offer exceptionally low latency, enabling rapid data transmission between nodes. Their high bandwidth capacity allows seamless communication within large-scale computing clusters, maximizing throughput and reducing processing time. In addition, they provide the infrastructure needed to build and manage high-performance computing clusters efficiently.

  • Scalability and flexibility: Mellanox InfiniBand switches are designed for massive data transfers, with an architecture that scales seamlessly to thousands of nodes while maintaining optimal performance under growing workloads. They also integrate readily with existing infrastructure, making them a flexible choice for organizations (the fat-tree sizing sketch after this list shows how switch port count translates into cluster scale).

  • Enhanced storage networking: InfiniBand switches deliver exceptional data transfer rates, making them ideal for storage networking. They enable rapid access to distributed file systems, flash arrays, and high-performance storage systems, ensuring efficient data retrieval and reducing latency for critical storage applications.
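
On the scalability point above: InfiniBand clusters are usually sized as non-blocking fat trees built from fixed-radix switches, where a two-level tree of k-port switches connects k²/2 hosts and a three-level tree connects k³/4. A short sketch applying these textbook formulas to the radices from the table (40 HDR ports for the QM87xx series, 64 NDR ports for the QM97xx series):

```python
# Textbook non-blocking fat-tree sizing from switch radix (port count).
# Two levels: each leaf uses half its ports for hosts, half up -> k*k/2 hosts.
# Three levels: k**3 / 4 hosts.

def fat_tree_hosts(radix: int, levels: int) -> int:
    if levels == 2:
        return radix * radix // 2
    if levels == 3:
        return radix ** 3 // 4
    raise ValueError("sketch covers only 2- and 3-level trees")

for radix in (40, 64):  # QM87xx (HDR) and QM97xx (NDR) port counts
    print(f"{radix}-port: {fat_tree_hosts(radix, 2)} hosts (2-level), "
          f"{fat_tree_hosts(radix, 3)} hosts (3-level)")
# 40-port: 800 hosts (2-level), 16000 hosts (3-level)
# 64-port: 2048 hosts (2-level), 65536 hosts (3-level)
```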

Applications of Mellanox InfiniBand Switches

InfiniBand switches find extensive applications in various domains.

  • High-performance computing (HPC): InfiniBand switches are widely used in the field of HPC. They enable low-latency, high-bandwidth interconnection, allowing for efficient data interchange and communication across huge parallel computing clusters.

  • Scientific research: InfiniBand switches are widely adopted in scientific research, supporting advanced simulations, computational fluid dynamics, and molecular dynamics. They provide the infrastructure researchers need to process vast amounts of data and accelerate scientific discoveries.

  • Data center interconnectivity: As a backbone of the data center, InfiniBand switches provide high-speed interconnectivity between servers, storage devices, and networks, ensuring smooth data transfers, seamless virtualization, and efficient resource utilization.

  • Cloud computing and big data: With the rise of cloud computing and big data applications, InfiniBand switches play a vital role in meeting the demands of these data-intensive environments. They enable rapid data ingestion, processing, and analysis, facilitating real-time insights and driving innovation.

Conclusion

Mellanox InfiniBand switches are a key technology for HPC and data center interconnects, providing high-speed connectivity, scalability, and enhanced storage networking. They accelerate the processing of complex workloads, improving computing performance and efficiency. In HPC and beyond, InfiniBand switches have broad application prospects and will continue to drive progress in scientific research and innovation.
