The physical limits of traditional board-level interconnects now present the primary challenge to increasing data center performance. Every interconnection point, from ball grid arrays (BGAs) and surface-mount technology (SMT) joints to advanced microelectronic connections like copper pillars, is a potential point of signal loss.
While advanced materials can help, they are not a complete solution; high-frequency signals still lose significant power over even short distances on a substrate. As a result, the primary goal for system architects has evolved. The focus is now on optimizing the entire electrical channel by eliminating performance-degrading transitions such as vias and long PCB traces.
This reality has prompted a shift toward interconnects that are either adjacent to (near-ASIC) or integrated with (on-ASIC) the main processor. System architects now need to evaluate this new class of solutions, including near-ASIC designs and on-ASIC technologies like co-packaged copper (CPC) and co-packaged optics (CPO). Because mating connectors directly to the substrate is a novel application, architects must approach this with a new level of expertise. The critical challenge has shifted from choosing a technology to mastering an entirely new design philosophy, starting with a deeper understanding of the performance wall itself.
The Performance Wall of Modern System Design
Higher data rates create several interconnected problems that make conventional board-level design unsustainable. At its core, the issue is a signal integrity crisis. Cumulative signal loss erodes high-frequency transmissions at every transition point along the entire interconnect path, from the chip and its substrate to the motherboard.
Signal loss directly translates into a power challenge, as stronger drive signals are required to overcome the loss, which in turn increases heat and overall system power consumption. Design engineers are often forced to add costly components such as retimers to boost the signal, further increasing system complexity and power consumption. This concept, commonly known as the “I/O power wall,” is a tipping point where moving data consumes as much energy as processing the data itself. This issue has been a persistent challenge for decades, but the move to 224G data rates, with Nyquist frequencies exceeding 50 GHz, has pushed conventional PCB materials to their absolute physical limits.
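As a quick sanity check on the figures above, the link between data rate, modulation, and Nyquist frequency can be sketched in a few lines. The 224 Gb/s rate and PAM4 modulation (2 bits per symbol) follow the text; the loss-per-inch figure and trace length are illustrative assumptions, not vendor specifications.

```python
# Rough channel-loss sketch for a 224G PAM4 link.
# Assumptions (illustrative only):
#   - PAM4 carries 2 bits per symbol, so 224 Gb/s -> 112 GBd.
#   - Nyquist frequency is half the symbol rate.
#   - Trace loss is modeled as a flat dB-per-inch figure at Nyquist.

def nyquist_ghz(data_rate_gbps: float, bits_per_symbol: int = 2) -> float:
    """Nyquist frequency in GHz for a given data rate and modulation."""
    symbol_rate_gbd = data_rate_gbps / bits_per_symbol
    return symbol_rate_gbd / 2

def trace_loss_db(length_in: float, loss_db_per_in: float) -> float:
    """Total insertion loss (dB, positive convention) for a trace."""
    return length_in * loss_db_per_in

if __name__ == "__main__":
    fn = nyquist_ghz(224)            # 56 GHz, matching the text
    # Assume ~2 dB/inch at 56 GHz on a low-loss laminate (illustrative).
    loss = trace_loss_db(8, 2.0)     # a hypothetical 8-inch motherboard route
    print(f"Nyquist: {fn} GHz, 8-inch trace loss: {loss} dB")
```

Even under these optimistic assumptions, a modest board route consumes a large share of any realistic channel budget, which is why transmission above 50 GHz Nyquist pushes conventional PCB materials to their limits.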
A density crisis further compounds the signal integrity challenge. Doubling the number of GPUs in a system, even while staying at the same data rate, makes traditional routing increasingly difficult. Leading-edge modern switch and GPU designs can require more than 1,024 differential pairs in a tightly constrained area around the chip. In some extreme cases, signal loss is so severe that architects cannot even use the substrate's outer edges for routing. The density challenge is not limited to routing alone; it also affects connector pitch, escape routing, power density and thermal management. This combination of signal degradation, power demands and density constraints highlights the limitations of traditional approaches when planning and designing for 224G data rates and beyond.
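To give the density figures some scale, the escape-routing arithmetic can be sketched as follows. The 1,024 differential-pair count comes from the text; the signal-to-ground ratio, escape perimeter, and edge pitch per pair are illustrative assumptions.

```python
# Escape-routing arithmetic for a high-pair-count package.
# Only the 1,024-pair figure comes from the text; everything else
# is an illustrative assumption.

diff_pairs = 1024
signals = diff_pairs * 2          # two conductors per differential pair
grounds = signals                 # assume a 1:1 signal-to-ground ratio
total_pins = signals + grounds    # high-speed pins alone, before power

# Assume ~200 mm of package perimeter is usable for escape routing
# and each pair needs ~0.5 mm of edge per routing layer.
perimeter_mm = 200
edge_mm_per_pair = 0.5
pairs_per_layer = int(perimeter_mm / edge_mm_per_pair)
layers_needed = -(-diff_pairs // pairs_per_layer)   # ceiling division

print(f"{total_pins} high-speed pins, >= {layers_needed} escape layers")
```

Even this simplified model shows thousands of high-speed pins competing for edge real estate, which is why routing density alone can push designs off the motherboard and onto the substrate.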
Defining the New Landscape: Near-ASIC versus On-ASIC
Choosing between a near-ASIC and an on-ASIC solution prompts a clear set of architectural decisions. A common starting point for this decision is a rule of thumb: the transition from the PCB to the substrate in a near-ASIC design introduces approximately 3 dB of signal loss. Performance targets for a 224G on-ASIC solution are stringent, often requiring less than 1.0 dB of insertion loss and return loss below -17 dB up to 56 GHz to maintain signal integrity.
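The 3 dB rule of thumb above can be turned into a simple budget comparison. Only the ~3 dB transition figure comes from the text; the end-to-end budget and per-segment losses below are hypothetical numbers chosen for illustration.

```python
# Loss-budget sketch for the near-ASIC vs. on-ASIC decision.
# The ~3 dB PCB-to-substrate transition comes from the text; the
# end-to-end budget and segment losses are illustrative assumptions.

TRANSITION_DB = 3.0           # PCB-to-substrate transition (rule of thumb)
END_TO_END_BUDGET_DB = 28.0   # hypothetical total channel budget

def remaining_margin(segment_losses_db, include_transition):
    """Budget left after summing segment losses, optionally adding
    the board-to-substrate transition penalty."""
    total = sum(segment_losses_db)
    if include_transition:
        total += TRANSITION_DB
    return END_TO_END_BUDGET_DB - total

# Example: a 10 dB board trace plus a 6 dB connector/cable segment.
near_asic_margin = remaining_margin([10.0, 6.0], include_transition=True)
on_asic_margin = remaining_margin([10.0, 6.0], include_transition=False)
print(f"Near-ASIC margin: {near_asic_margin} dB, "
      f"on-ASIC margin: {on_asic_margin} dB")
```

Whatever the actual numbers, the structure of the decision is the same: if the remaining near-ASIC margin is comfortably positive, the simpler solution may suffice; if the 3 dB transition consumes the margin, the on-ASIC path becomes necessary.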
The Near-ASIC Approach
Near-ASIC solutions offer a better-performing alternative to a full PCB trace solution. By residing on the main motherboard, they typically rely on familiar SMT soldering, offering a balance of improved performance and known manufacturing processes. Implementing these solutions on a standard PCB often requires a substantial layer count—exceeding 30 layers—to manage the routing density, which can drive up board costs.
This approach is often chosen when time-to-market is a critical factor, providing a proven solution that gets a system released on time. Its feasibility, however, depends on the system's overall loss budget and whether that budget can tolerate the additional signal loss from the board-to-substrate transition. For architects focused on schedule, the design complexity and perceived risk of an on-ASIC approach can make the near-ASIC solution the more attractive choice.
The On-ASIC Approach
On-ASIC solutions are integrated directly onto the advanced substrate, enabling thinner traces and higher routing density. While near-ASIC designs increase motherboard costs through higher layer counts, the on-ASIC approach shifts the expense to the substrate itself. Often the second-most expensive component in the system after the processor die itself, the high-value substrate requires a design that prioritizes protection. The solution can benefit from reworkable, compression-based interconnects that perform reliably without the challenges associated with soldering.
Looking further ahead, the industry is even exploring replacing organic substrates with glass or silicon to handle future performance demands. These materials provide superior dimensional stability and flatness, helping to mitigate issues like substrate warp while allowing for even finer circuit features.
Comparing Co-Packaged Copper and Co-Packaged Optics
CPC extends trusted, lower-cost copper technology to short-reach applications where proven reliability is a critical factor; copper's reliability and implementation methods are well understood. Its primary drawback, however, is that it requires significant and costly channel re-engineering with each new speed generation, a time-consuming process that can delay system development.
CPO offers a different technological path by largely eliminating the challenges of signal reach and integrity, and technologies like wavelength division multiplexing (WDM) extend this advantage by carrying multiple data streams over a single fiber. These benefits, however, come with higher initial costs and new manufacturing complexities, as many new elements must be introduced into the system. By solving the core signal integrity problem, CPO creates a new innovation cycle: it supports advanced system designs, such as disaggregating components into independently upgradeable hardware, a significant advantage for hyperscale data centers.
Ultimately, the choice between CPC and CPO is typically a cost-benefit analysis. While production-volume CPO systems are not yet widespread, some hyperscalers may favor the long-term flexibility of CPO for their rapid refresh cycles. In contrast, the telecommunications sector, which requires systems with a much longer service life and multiple in-service speed upgrades, often prioritizes the proven reliability and lower initial investment of CPC.
A Holistic Approach: The Shift Toward Chip-Level Connectivity
The migration of high-speed interconnects toward the chip is an industry imperative, a direct consequence of the physical limits of high-speed data transmission. Success at today's speeds requires deep, early-stage collaboration. Molex actively co-engineers solutions with companies across the entire ecosystem, from chip developers to end-users. Proven engineering from established near-ASIC solutions directly informs an effective on-ASIC strategy, which prioritizes robust, reworkable designs that protect the high-value substrate.
Future success depends on choosing a collaborator with expertise across the full ecosystem, from near-ASIC solutions to on-ASIC technologies like CPC and CPO, and with the ability to work hand-in-hand across the entire value chain. This close collaboration has enabled Molex to create near-ASIC solutions that support faster data processing, as well as CPC and CPO technologies that improve energy efficiency and scalability for next-generation networks. Looking ahead, Molex is preparing for the industry's next major bottleneck of density, a challenge where optics may provide a distinct long-term advantage.
By working with design engineers, Molex continues to develop the high-speed solutions required to overcome today’s and tomorrow’s performance bottlenecks. Explore Molex 224G High-Speed Data Center solutions.