Powering the Future of AI

The continued evolution of artificial intelligence poses new challenges in power management. Learn how data centers can prepare for faster infrastructure, such as 224 Gbps-PAM4 technology, and more power-hungry GPUs. 


New breakthroughs in AI performance are setting off a race to deliver the most powerful data centers of the future. As AI applications continue to grow in complexity and demand exponentially more computation, power resources may govern which data centers can upgrade to the next tier of processing and maintain premium status.

In recent years, AI has emerged as a game changer in industries ranging from healthcare and finance to transportation. Machine learning algorithms and deep neural networks have become powerful tools for data analysis, pattern recognition and decision-making. However, these AI applications require massive amounts of computing power and energy to operate effectively.

Training large AI models can consume a significant amount of power, and specialized hardware such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) is often employed to perform the complex matrix calculations required for deep learning much faster than traditional CPUs. These specialized processors are designed to handle large volumes of data and accelerate the parallel arithmetic at the heart of machine learning.
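
To see why this matters, consider how much arithmetic even a single neural network layer involves. The sketch below is a minimal illustration using NumPy on a CPU; the matrix sizes are arbitrary stand-ins for a real model's dimensions:

```python
import time

import numpy as np

# Deep learning workloads are dominated by dense matrix multiplies.
# One forward pass through a single fully connected layer with batch
# size B, N inputs and M outputs costs roughly 2 * B * N * M FLOPs.
B, N, M = 256, 4096, 4096
flops = 2 * B * N * M

x = np.random.rand(B, N).astype(np.float32)
w = np.random.rand(N, M).astype(np.float32)

start = time.perf_counter()
y = x @ w  # the operation GPUs and TPUs are built to accelerate
elapsed = time.perf_counter() - start

print(f"{flops / 1e9:.1f} GFLOPs in {elapsed * 1e3:.1f} ms "
      f"(~{flops / elapsed / 1e12:.2f} TFLOP/s on this CPU)")
```

A modern accelerator can run this one operation orders of magnitude faster than a general-purpose CPU, which is exactly why training clusters are built around them.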

When Large Models Scale Up

Recent examples have shown that a single training run can consume hundreds of megawatt-hours of energy. For example, OpenAI's Large Language Model (LLM) GPT-3 has 175 billion parameters and was trained on 570 GB of internet-sourced data, a run that reportedly consumed around 355 megawatt-hours of electricity.
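
That scale can be sanity-checked with a back-of-the-envelope calculation. A widely used rule of thumb puts training compute at roughly 6 floating-point operations per parameter per training token; the token count, accelerator throughput and power draw below are illustrative assumptions, not figures OpenAI has published:

```python
# Back-of-the-envelope GPT-3 training-energy estimate. All inputs are
# assumptions for illustration, not figures published by OpenAI.
params = 175e9            # GPT-3 parameter count (reported)
tokens = 300e9            # training tokens (commonly cited estimate)
flops = 6 * params * tokens  # ~6 FLOPs per parameter per token

gpu_flops_per_s = 100e12  # assumed sustained 100 TFLOP/s per accelerator
gpu_watts = 400           # assumed draw per accelerator, incl. overhead

gpu_seconds = flops / gpu_flops_per_s
energy_mwh = gpu_seconds * gpu_watts / 3.6e9  # joules -> megawatt-hours

print(f"~{flops:.2e} FLOPs, ~{gpu_seconds / 3.15e7:.0f} GPU-years, "
      f"~{energy_mwh:.0f} MWh under these assumptions")
```

Under these assumptions the estimate lands near the reported figure, though real-world totals depend heavily on hardware efficiency and utilization.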

OpenAI’s next version, GPT-4, is substantially more capable than GPT-3. OpenAI has not publicly disclosed the model's size, though early speculation placed its parameter count in the trillions, and it stands as the most powerful AI engine yet seen, at least for the time being.

OpenAI utilized Microsoft Azure data centers for the GPT project. Companies will continue to build and maintain data centers in their own facilities to support breakout popularity or rapid expansion of their user bases. But they will likely need to rely on cloud-based services as well, such as Microsoft Azure, Amazon Web Services and Google Cloud Platform, to provide the additional computing power needed to meet sudden spikes in demand.

These large-scale cloud platforms have led the development of hyperscale data center best practices and boast the most advanced performance in terms of compute speed and bandwidth.

In this highly competitive landscape, the most promising AI projects will vie for the most reliable data centers, the ones that can sustain the heaviest computational loads. Looking ahead to the next phase of AI growth and data center upgrades, there may not be enough premium data center capacity for everybody.

The bottleneck may be largely imposed by the limitation of electric power. A Molex survey of more than 800 design engineers and their managers shows that 40% of respondents list power management as the top challenge when implementing power systems in a data center. Power distribution comes in a distant second at 20%.

Doubling Data Capabilities

State-of-the-art data centers today deliver per-lane data rates of up to 112 Gbps. Many data centers are still in the process of upgrading hardware and connectors to reach this level of speed and performance.

The push toward 224 Gbps-PAM4 data centers is already underway to meet the growing demand for AI processing. However, the infrastructure components for 224G are only in the early stages of reaching the market, which means it will be several years before fully 224G facilities become commonplace.
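
The numbers behind these labels are worth unpacking: PAM4 signaling carries two bits per symbol, so a 224 Gbps PAM4 lane runs at half that symbol rate. A quick sketch of the arithmetic (a simple illustration, not a channel design tool):

```python
# PAM4 encodes 2 bits per symbol (4 amplitude levels), so the symbol
# rate each channel must carry is half the headline data rate.
def pam4_symbol_rate_gbd(data_rate_gbps: float) -> float:
    bits_per_symbol = 2
    return data_rate_gbps / bits_per_symbol

for rate in (112, 224):
    baud = pam4_symbol_rate_gbd(rate)
    nyquist_ghz = baud / 2  # fundamental frequency of the signal
    print(f"{rate} Gbps PAM4 -> {baud:.0f} GBd, Nyquist ~{nyquist_ghz:.0f} GHz")
```

Doubling the lane rate still doubles the frequencies that connectors, cables and board materials must carry cleanly, which is why 224G hardware takes time to mature.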

Doubling the data rate capability of the world’s data centers in aggregate will be a mammoth undertaking because it requires a significant boost in electrical power generation. Given that data centers already consume an estimated two percent of the world's electricity, tomorrow’s upgrade may be the equivalent of adding a few new major cities to the world map, each needing power plants of its own.
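
The city analogy can be made concrete with rough numbers. Every figure below is an order-of-magnitude assumption, the naive premise included, so treat the output as a scale check rather than a forecast:

```python
# Order-of-magnitude scale check for the "new cities" analogy.
# Every figure here is an assumption, including the naive premise
# that doubling data rates doubles electricity demand.
global_electricity_twh = 25_000  # rough annual global electricity use
dc_share = 0.02                  # data centers' estimated share (~2%)
dc_twh = global_electricity_twh * dc_share

extra_twh = dc_twh               # naive: demand doubles, so +100%
city_twh = 50                    # assumed annual use of one large city

print(f"Extra demand ~{extra_twh:.0f} TWh/yr, "
      f"roughly {extra_twh / city_twh:.0f} large-city equivalents")
```

Depending on the assumptions, that lands anywhere from a handful to a dozen city-sized blocks of new demand, which is why power generation, not silicon, may set the pace.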

In fact, the power requirements for a typical GPU module have grown from 450W per module in 2018 to 1,000W in 2022, or from 3,600W to 8,000W for a fully populated eight-module Open Compute Project (OCP) Accelerator Module (OAM) chassis, driven by the demand for ever more computing power. This power increase translates to more heat generation than ever before and a need for heat sinks and components that can handle the higher temperatures. Molex Mirror Mezz Connectors are built for both the increased power requirements and the associated heat management: the same connector can be used with 450W and 1,000W GPU modules and performs superbly under both air and liquid cooling.
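
The chassis figures follow directly from the module counts, and they translate into cooling requirements that can be estimated with a standard air-cooling heat balance. The temperature rise and conversion factor below are illustrative assumptions:

```python
# The chassis figures follow from eight OAM modules per box, and the
# heat balance P = rho * cp * flow * dT gives the airflow needed to
# remove that heat with air cooling. dT is an assumed allowable rise.
RHO, CP = 1.2, 1005   # air density (kg/m^3), heat capacity (J/(kg*K))
DT = 15               # assumed air temperature rise through the box (K)
M3S_TO_CFM = 2118.88  # cubic meters per second -> cubic feet per minute

modules_per_chassis = 8
for watts_per_module in (450, 1000):
    chassis_w = modules_per_chassis * watts_per_module
    flow_cfm = chassis_w / (RHO * CP * DT) * M3S_TO_CFM
    print(f"{watts_per_module} W/module -> {chassis_w} W/chassis, "
          f"~{flow_cfm:.0f} CFM at dT = {DT} K")
```

Roughly doubling the airflow through the same enclosure is often impractical, which is why higher module power pushes designers toward liquid cooling.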

Adapting to What’s Next

The collision between exponentially increasing power requirements for an AI-enabled data center and the limitations of current facilities demands creative solutions. Some current hyperscale data centers might not be able to make the next-generation leap, simply because of unfortunate geography. Certain localities may not elect to mete out another slice of their limited power generation to cloud-server companies. Or an aging grid, even one in transition to renewable resources, could experience intermittent outages, which would prove problematic for data services.

The bottleneck might also be inside the facility as it transitions from Technology A to Technology B and maxes out its internal power distribution architecture. In addition to investing in hardware, data centers must also focus on new ways of optimizing their power consumption as the industry builds out AI-driven data centers. This could include the use of advanced cooling systems, energy-efficient hardware and innovative power management strategies. An AI-driven management tool might even be a future solution for data centers as they wrestle with optimizing their current resources.
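
What might such power-aware management look like in practice? The following is a deliberately simplified, hypothetical sketch, with invented job names and power figures, of one basic strategy: admitting workloads by priority while respecting a facility-level power cap:

```python
# Hypothetical sketch of power-aware workload admission: greedily
# admit jobs by priority while staying under a facility power cap.
# Job names and power figures are invented for illustration.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    watts: int      # estimated power draw while running
    priority: int   # higher runs first


def schedule(jobs: list[Job], power_cap_w: int) -> list[Job]:
    admitted, budget = [], power_cap_w
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        if job.watts <= budget:
            admitted.append(job)
            budget -= job.watts
    return admitted


jobs = [Job("llm-training", 8000, 3), Job("inference", 1500, 5),
        Job("batch-etl", 2500, 1)]
print([j.name for j in schedule(jobs, power_cap_w=10_000)])
# -> ['inference', 'llm-training']; batch-etl waits for headroom
```

Real systems would layer in forecasting, thermal data and workload migration, but the core idea of scheduling against a power budget is the same.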

As AI continues to reshape the world of computing, data centers must adapt to meet the growing demands of these applications. The competition for the most powerful computational resources will continue to intensify as the AI landscape expands and more companies seek to deploy AI applications in their operations. Data center operators must invest in the latest technologies and strategies to stay ahead of the curve and remain competitive in this dynamic landscape. That’s why Molex is investing in the technologies of tomorrow, including 224 Gbps-PAM4 capabilities and a full host of solutions for data center power management.
