Discovery and innovation have always started with great minds dreaming big, and this is very much the case today. As data analytics, high performance computing (HPC) and artificial intelligence (AI) continue to converge and evolve, they are fueling the next industrial revolution and the next quantum leap in human progress. And with the help of increasingly powerful technology, we can all dream even bigger.
Lightning-fast networks are a key enabler of this ongoing push to take things to a higher level. Whether it’s a quest to find new galaxies, discover new forms of energy or design vehicles that drive themselves, we need low-latency, high-bandwidth interconnects that can keep pace with today’s ever-growing data processing and storage demands. Anything less can impede system performance and slow the path to discovery and innovation.
The quest for faster networking performance drove the rise of InfiniBand technology for server-side connectivity in HPC shops, hyperscale data centers and AI environments. InfiniBand’s remote direct memory access (RDMA) and kernel-bypass design removes networking bottlenecks in system architectures, delivering a better balance of processor performance, memory bandwidth and I/O throughput. And in a parallel trend, Dell Technologies and other networking providers are delivering solutions that put more processing power in smart network interface controllers (NICs), switches and routers to automate network functions and accelerate system performance.
And this brings us to the news of the day — the announcement of the 7th generation of the NVIDIA Mellanox InfiniBand architecture. This high-speed, extremely low-latency, high-density InfiniBand interconnect delivers the scalability and feature-rich technology needed for today’s supercomputers, AI applications and hyperscale cloud data centers.
NVIDIA reports that this new generation of Mellanox InfiniBand sets world records for high performance networking, delivering 2X higher bandwidth per port (400Gb/s), 3X higher switch silicon port density, 5X higher switch system capacity and 32X higher AI acceleration power.
The Mellanox InfiniBand architecture continues to enhance and extend NVIDIA In-Network Computing technologies, including pre-configured and programmable compute engines such as NVIDIA Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, MPI tag matching, MPI All-to-All, and programmable cores. NVIDIA says it adds up to the best cost per node and the best return on investment.
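To give a feel for the idea behind in-network aggregation: rather than every node shipping its data to one root process for reduction, partial results are combined level by level in a tree — in SHARP’s case, inside the switches themselves — so far less traffic crosses the fabric. The Python sketch below illustrates only the tree-reduction concept; the function and parameter names are illustrative, and this is in no way the actual SHARP protocol or API.

```python
# Conceptual sketch of hierarchical aggregation (the idea behind SHARP).
# Illustrative only: names like tree_reduce and fanout are invented here.
from functools import reduce


def tree_reduce(values, fanout=2, op=lambda a, b: a + b):
    """Reduce per-node values level by level, `fanout` at a time,
    mimicking how an aggregation tree combines partial results
    in-network instead of at a single root."""
    level = list(values)
    while len(level) > 1:
        # Each group of `fanout` children is reduced into one parent value.
        level = [
            reduce(op, level[i:i + fanout])
            for i in range(0, len(level), fanout)
        ]
    return level[0]


# Eight "nodes" each contribute a partial result; the tree combines them
# in three levels (8 -> 4 -> 2 -> 1) rather than one node receiving all eight.
print(tree_reduce([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

The payoff in a real fabric is that each link carries only already-aggregated data, which is why offloading reductions to the switch hierarchy cuts both latency and traffic for collective operations such as MPI_Allreduce.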
We look forward to incorporating these latest InfiniBand capabilities into our HPC/AI systems. Advances like these enable our customers to detect cancer in MRI images using AI models, make billions of AI-driven content recommendations without human involvement, and unlock the secrets of dark matter and other mysteries of our universe.
This is all in keeping with the spirit of the Dell Technologies commitment to developing technologies that power human progress and transform lives. And that’s just what is happening today, as we at Dell Technologies work with our partners to bring the latest innovations to data-driven organizations.