
End-to-End InfiniBand Solutions for the LLM Training Bottleneck

Posted on Jan 25, 2024

ChatGPT's impact on the technology industry has fueled speculation about AI's future. Multimodality in particular has gained attention, and OpenAI's GPT-4, a multimodal model that accepts both text and image input, represents a remarkable advance across a wide range of tasks.

These impressive strides in AI are the result of extensive model training, which requires substantial computational resources and high-speed data transmission networks. The end-to-end InfiniBand (IB) network stands out as an ideal choice for high-performance computing and AI model training. In this article, we delve into the concept of large language model (LLM) training and explore why the end-to-end InfiniBand network is needed to address the LLM training bottleneck.


Are There Any Connections Between LLM and ChatGPT?

ChatGPT is an application built on top of large language models (LLMs) from the GPT family, so its capabilities ultimately depend on how efficiently those underlying models can be trained. Training LLMs faces several bottlenecks, primarily related to data transfer and communication within GPU computing clusters. As models grow, the need for high-speed, reliable networks becomes crucial. A model like GPT-3, with 175 billion parameters, cannot be trained on a single machine and relies heavily on GPU clusters; the main bottleneck lies in efficiently communicating data among the nodes of the training cluster.

Two Stages of the LLM Training Bottleneck

Stage 1: Ring-Allreduce

One commonly used GPU communication algorithm is Ring-Allreduce, where GPUs form a ring, allowing data to flow within it. Each GPU has a left and right neighbor, with data only being sent to the right neighbor and received from the left neighbor. The algorithm consists of two steps: scatter-reduce and allgather. In the scatter-reduce step, GPUs exchange data to obtain a block of the final result. In the allgather step, GPUs swap these blocks to ensure all GPUs have the complete final result.
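To make the two phases concrete, below is a minimal single-process simulation of Ring-Allreduce in Python with NumPy. It is an illustrative sketch only, with no real GPUs or network; the four simulated workers and the chunk layout are chosen purely for clarity.

```python
import numpy as np

def ring_allreduce(buffers):
    """Simulate Ring-Allreduce (sum) over equal-length arrays, one per GPU."""
    n = len(buffers)
    # Each GPU splits its buffer into n chunks.
    chunks = [np.array_split(buf.astype(float), n) for buf in buffers]

    # Scatter-reduce: after n-1 steps, GPU i holds the fully reduced
    # chunk (i + 1) % n.
    for step in range(n - 1):
        for gpu in range(n):
            idx = (gpu - step) % n
            right = (gpu + 1) % n
            # "Send" chunk idx to the right neighbour, which adds it in.
            chunks[right][idx] += chunks[gpu][idx]

    # Allgather: circulate the reduced chunks until every GPU has all of them.
    for step in range(n - 1):
        for gpu in range(n):
            idx = (gpu + 1 - step) % n
            right = (gpu + 1) % n
            chunks[right][idx] = chunks[gpu][idx].copy()

    return [np.concatenate(c) for c in chunks]

# Four simulated GPUs, each holding its own gradient vector.
grads = [np.full(8, i, dtype=float) for i in range(4)]
result = ring_allreduce(grads)
assert all(np.allclose(r, sum(grads)) for r in result)
```

Each of the 2(n-1) steps moves only 1/n of the data per GPU, which is why the per-GPU bandwidth cost of Ring-Allreduce is nearly independent of the number of GPUs in the ring.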


Stage 2: Two-Stage Ring

In the past, with limited bandwidth and no NVLink or RDMA, a single large ring sufficed for both single-machine and multi-machine training. Once NVLink is available within a machine, however, the same approach no longer works well: inter-node network bandwidth is far lower than NVLink bandwidth, so a single large ring drags NVLink's effective throughput down to the network's level. Moreover, in today's multi-NIC (network interface card) servers, a single ring cannot keep all the NICs busy. A two-stage ring approach addresses both problems.

In a two-stage ring scenario, data synchronization occurs between GPUs within a single machine, leveraging the high bandwidth advantage of NVLink. Subsequently, GPUs across multiple machines establish multiple rings using multiple NICs to synchronize data from different segments. Finally, GPUs within a single machine synchronize once more, completing data synchronization across all GPUs. Notably, the NVIDIA Collective Communication Library (NCCL) plays a crucial role in this process.
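NCCL builds these intra-node (NVLink) and inter-node (NIC) rings automatically, but the idea can be sketched manually with torch.distributed process groups. The snippet below is a simplified, leader-based variant of the two-stage scheme rather than NCCL's actual ring schedule; the GPUS_PER_NODE value and the torchrun launch are assumptions about the cluster layout.

```python
import os
import torch
import torch.distributed as dist

GPUS_PER_NODE = int(os.environ.get("GPUS_PER_NODE", 8))  # assumed topology

def hierarchical_allreduce(grad, intra_groups, leader_group):
    rank = dist.get_rank()
    node, local = rank // GPUS_PER_NODE, rank % GPUS_PER_NODE

    # Stage 1: reduce within the node over NVLink.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM, group=intra_groups[node])
    # Stage 2: one GPU per node reduces across nodes over the NICs.
    if local == 0:
        dist.all_reduce(grad, op=dist.ReduceOp.SUM, group=leader_group)
    # Stage 3: share the global result within the node over NVLink again.
    dist.broadcast(grad, src=node * GPUS_PER_NODE, group=intra_groups[node])
    return grad

if __name__ == "__main__":
    dist.init_process_group(backend="nccl")  # e.g. launched with torchrun
    world = dist.get_world_size()
    torch.cuda.set_device(dist.get_rank() % GPUS_PER_NODE)

    # Every process must create every group, even ones it does not belong to.
    intra_groups = [dist.new_group(list(range(n * GPUS_PER_NODE,
                                               (n + 1) * GPUS_PER_NODE)))
                    for n in range(world // GPUS_PER_NODE)]
    leader_group = dist.new_group(list(range(0, world, GPUS_PER_NODE)))

    grad = torch.ones(1 << 20, device="cuda")
    hierarchical_allreduce(grad, intra_groups, leader_group)
```

In real training, NCCL performs the intra-node and inter-node phases as reduce-scatter and allgather rings so that all NICs are used in parallel; the leader-based version above trades that parallelism for readability.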


The NVIDIA Collective Communication Library (NCCL) provides optimized routines for multi-GPU and multi-node communication, specifically designed for NVIDIA GPUs and networking. NCCL offers efficient primitives for all-gather, all-reduce, broadcast, reduce, reduce-scatter, and point-to-point send and receive operations. These routines are optimized for high bandwidth and low latency over PCIe and NVLink high-speed interconnects within a node, and over NVIDIA Mellanox networks such as InfiniBand across nodes.
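In PyTorch, these primitives are exposed through torch.distributed once the process group is initialized with the NCCL backend (as in the sketch above); the mapping is roughly one-to-one. The tensor shapes below are placeholders.

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group(backend="nccl") has already been called
# and each rank has selected its own CUDA device.
t = torch.ones(4, device="cuda")
world = dist.get_world_size()

dist.all_reduce(t, op=dist.ReduceOp.SUM)            # all-reduce
dist.broadcast(t, src=0)                            # broadcast
dist.reduce(t, dst=0, op=dist.ReduceOp.SUM)         # reduce

gathered = [torch.empty_like(t) for _ in range(world)]
dist.all_gather(gathered, t)                        # all-gather

out = torch.empty_like(t)
dist.reduce_scatter(out, [t.clone() for _ in range(world)])  # reduce-scatter

if dist.get_rank() == 0 and world > 1:              # point-to-point
    dist.send(t, dst=1)
elif dist.get_rank() == 1:
    dist.recv(t, src=0)
```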


By addressing the bottlenecks in data transfer and communication, advancements in GPU computing clusters and the utilization of tools like NCCL contribute to overcoming challenges in training large language models, paving the way for further breakthroughs in AI research and development.

How Does the End-to-End InfiniBand Network Solution Help?

When it comes to large model training, Ethernet falls short in transmission rate and latency. The end-to-end InfiniBand network, by contrast, offers a high-performance computing solution that delivers transmission rates of up to 400 Gbps with microsecond-level latency, which is why InfiniBand has become the preferred network technology for large-scale model training.

Data Redundancy and Error Correction Mechanisms

One key advantage of the end-to-end InfiniBand network is its support for data redundancy and error correction mechanisms, ensuring reliable data transmission. This becomes especially critical in large-scale model training where the sheer volume of data being processed makes transmission errors or data loss detrimental to the training process. By leveraging InfiniBand's robust features, interruptions or failures caused by data transmission issues can be minimized or eliminated.


Local Subnet Configuration and Maintenance

In an InfiniBand fabric, each node is equipped with a host channel adapter (HCA) that connects the host to the fabric and maintains its link. Switches, with multiple ports, forward data packets between ports, enabling efficient data transmission within the subnet.

The Subnet Manager (SM) plays a crucial role in configuring and maintaining the local subnet, aided by Subnet Manager Packets (SMPs) and the Subnet Manager Agent (SMA) on each InfiniBand device. The SM discovers and initializes the network, assigns unique local identifiers (LIDs) to all devices, determines the Maximum Transmission Unit (MTU), and generates switch routing tables based on the selected routing algorithm. It also periodically scans the subnet to detect topology changes and adjusts the network configuration accordingly.

Credit-Based Flow Control

Compared with other network communication protocols, InfiniBand offers higher bandwidth, lower latency, and greater scalability. In addition, InfiniBand employs credit-based flow control: the sender never transmits more data than the credits available for the receive buffer at the other end of the link. Because the receive buffer is never overrun, packets are not dropped in the first place, which eliminates the need for loss-recovery and window-based flow control mechanisms like TCP's and allows InfiniBand networks to achieve extremely high data transfer rates with minimal latency and CPU usage.
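The mechanism is easy to picture with a toy model: the receiver advertises credits equal to its free buffer slots, and the sender simply stops when credits run out instead of transmitting and dropping. The Python below is a purely illustrative sketch of that idea, not InfiniBand code.

```python
from collections import deque

class Receiver:
    """Toy link-level receiver: free buffer space is advertised as credits."""
    def __init__(self, buffer_slots=4):
        self.buffer = deque()
        self.credits = buffer_slots          # advertised to the sender

    def accept(self, packet):
        assert self.credits > 0              # the sender never overruns us
        self.buffer.append(packet)
        self.credits -= 1

    def drain(self, n=1):
        """Consume packets and return the freed slots as new credits."""
        freed = 0
        while self.buffer and freed < n:
            self.buffer.popleft()
            freed += 1
        self.credits += freed
        return freed

class Sender:
    def __init__(self, receiver):
        self.rx = receiver

    def send(self, packets):
        sent = 0
        for p in packets:
            if self.rx.credits == 0:         # no credit -> wait, never drop
                break
            self.rx.accept(p)
            sent += 1
        return sent

rx = Receiver(buffer_slots=4)
tx = Sender(rx)
print(tx.send(range(10)))   # 4: transmission pauses when credits hit zero
rx.drain(2)                 # receiver frees two slots, returning two credits
print(tx.send(range(10)))   # 2: sending resumes only as credits come back
```

Because the buffer can never overflow, the link itself stays lossless, and congestion shows up as back-pressure on the sender rather than as retransmissions.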

Remote Direct Memory Access (RDMA) Technology

InfiniBand utilizes Remote Direct Memory Access (RDMA) technology, which enables direct data transfer between applications over the network without involving the operating system. This zero-copy transfer approach significantly reduces CPU resource consumption on both ends, allowing applications to read messages directly from memory. The reduced CPU overhead boosts the network's ability to transfer data rapidly and enables applications to receive data more efficiently.
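The flow is roughly: the target registers (pins) a memory region and shares its key, after which the initiator's NIC writes directly into that memory with no kernel involvement or extra copies on either side. The classes below are a toy Python stand-in for that one-sided write, not the real verbs (libibverbs) API.

```python
# Toy illustration of the RDMA idea: the target registers a memory region
# up front, and the initiator writes straight into it, with no kernel
# sockets, no intermediate copies, and no work on the target's CPU in the
# data path. Pure Python stand-in, not real RDMA code.

class RegisteredMemoryRegion:
    def __init__(self, size):
        self.buf = bytearray(size)           # would be pinned memory in real RDMA
        self.rkey = id(self)                 # stand-in for the remote key

class RdmaNic:
    """Models the initiator-side NIC performing a one-sided RDMA write."""
    def post_write(self, local_data: bytes, remote_mr, remote_offset=0):
        # The NIC moves bytes directly into the remote registered region;
        # the remote application simply finds the data in its own memory.
        view = memoryview(remote_mr.buf)
        view[remote_offset:remote_offset + len(local_data)] = local_data

# Target side: register a region once and hand its "key" to the peer.
mr = RegisteredMemoryRegion(64)

# Initiator side: post a one-sided write; no recv() call on the target.
nic = RdmaNic()
nic.post_write(b"gradient shard", mr)

print(bytes(mr.buf[:14]))   # target reads the message directly from memory
```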

Overall, the end-to-end InfiniBand network presents significant advantages for large model training, including high bandwidth, low latency, data redundancy, and error correction mechanisms. By leveraging InfiniBand's capabilities, researchers and practitioners can overcome performance limitations, enhance system management, and accelerate the training of large-scale language models.

FS Offers Comprehensive End-to-End InfiniBand Networking Solutions

FS provides a comprehensive end-to-end networking solution leveraging advanced components such as NVIDIA Quantum-2 switches and ConnectX InfiniBand smart cards, along with flexible 400Gb/s InfiniBand technology. With our deep understanding of high-speed networking trends and extensive experience in implementing HPC and AI projects, FS aims to deliver unparalleled performance while reducing costs and complexity in High-Performance Computing (HPC), AI, and hyper-scale cloud infrastructures.

FS's end-to-end InfiniBand networking solutions empower organizations to leverage the full potential of high-performance computing, AI, and hyperscale cloud infrastructures. By delivering superior performance, reducing costs, and simplifying network management, FS enables customers to stay at the forefront of innovation and achieve their business objectives efficiently.
