
TE Perspectives

Data Center Connectivity in the Age of Digital Innovation

Author: David Helster, Engineering Fellow, Data and Devices

Digital innovation relies on the ability to move more data, faster. In the 1980s, cable technology capable of carrying ten megabits of data per second became widely available and the foundation for modern networks – and the internet itself – was born.


That technology has had a surprisingly long life: it remains widely compatible with the network cards in computers today, though the cables common in today’s residential networks can comfortably carry 1 gigabit per second (Gbps), 100 times the capacity of the earliest models. In the meantime, however, the amount of data we need to transmit, store, analyze, and process has grown enormously. That’s because every advance in data transmission speed has spurred new, innovative ways to use the added capacity. For example, artificial intelligence and machine learning have only reached today’s level of maturity because of our ability to move and process massive amounts of data quickly.


While the pace of digital innovation has not slowed, overcoming each new network speed threshold is becoming more difficult. Today, high-speed data communications are reaching a point where the physical properties of the cables and connections that carry them are making it more challenging than ever to produce economically viable solutions – and it’s in the data center that the need for that speed is most critical.

Copper is Still King

Data centers house the information that we tap into daily to send each other messages, order retail goods online and guide ourselves through rush-hour traffic. Although high-speed global networks can move this data far and wide, the ultimate choke point is between servers in large data centers, where the computationally intense work of training AI models and analyzing enormous datasets occurs.


At its fastest, a single connection in today’s data centers can move data efficiently and reliably between servers, switches and other computers at about 100 gigabits per second (Gbps). That’s roughly 10,000 times the amount of throughput made possible in the early network cables of the 1980s. Interestingly, the technology within those cables is still broadly similar.
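To put that jump in concrete terms, here is a minimal back-of-envelope sketch in Python. It assumes idealized line rates with no protocol or encoding overhead, and the 1-terabyte dataset is purely illustrative.

```python
# Back-of-envelope comparison: a 1980s-era 10 Mbps link vs. a modern 100 Gbps
# data center link. Idealized line rates only -- no protocol overhead,
# encoding, or congestion is modeled.

DATASET_BITS = 1e12 * 8  # a hypothetical 1-terabyte dataset, in bits

link_speeds_bps = {
    "10 Mbps (1980s cable)": 10e6,
    "100 Gbps (modern data center link)": 100e9,
}

for name, bps in link_speeds_bps.items():
    seconds = DATASET_BITS / bps
    print(f"{name}: {seconds:,.0f} s (~{seconds / 3600:,.1f} h)")

# The ratio of the two line rates is the "roughly 10,000 times" figure above.
print("Speed ratio:", int(100e9 / 10e6))
```

At the idealized 1980s rate, that transfer would take more than nine days; at 100 Gbps it takes about 80 seconds.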

Despite advances in fiber optics, passive copper wire remains the preferred mode of transport for moving data at high speed over short distances. Copper is significantly cheaper to manufacture and deploy than fiber optics. In addition, it can deliver performance advantages over optical cabling because optical signals still need to be translated to and from electrical impulses at either end of a connection.


As a result, much of TE’s work has revolved around finding ways to extend the capabilities of the humble copper wire. The next critical milestone needed to support the expanding use of real-time data analysis and artificial intelligence applications: 200 Gbps.


Reaching the 200 Gbps Threshold

One of the biggest challenges in getting to the 200 Gbps threshold is that the higher frequencies necessary to transmit data at those speeds create more opportunities for signal loss. Cables and connectors that can handle that frequency while limiting signal loss exist, but they tend to be big and expensive. TE is working on reproducing the precision of these high-quality, high-cost connections reliably at a lower cost, so data centers can feasibly adopt them on a large scale.
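The physics behind that difficulty can be sketched with a common first-order channel model: insertion loss in a copper channel grows roughly with the square root of frequency (skin effect in the conductor) plus a term that grows linearly with frequency (dielectric loss). The short Python sketch below uses that model with purely illustrative coefficients and Nyquist frequencies – they are assumptions for this example, not measurements or specifications of any TE product.

```python
import math

# First-order copper channel loss model (in dB) at frequency f (GHz):
#   loss(f) = a * sqrt(f) + b * f
# The sqrt term approximates skin-effect (conductor) loss and the linear term
# approximates dielectric loss. Coefficients are illustrative placeholders.
A_SKIN = 0.8        # dB per sqrt(GHz), hypothetical cable-plus-connector path
B_DIELECTRIC = 0.3  # dB per GHz, hypothetical

def insertion_loss_db(freq_ghz: float) -> float:
    return A_SKIN * math.sqrt(freq_ghz) + B_DIELECTRIC * freq_ghz

# Doubling the per-lane data rate roughly doubles the Nyquist frequency the
# channel must carry (illustrative PAM4-style frequencies).
for label, nyquist_ghz in [("~100 Gbps lane", 26.6), ("~200 Gbps lane", 53.1)]:
    loss = insertion_loss_db(nyquist_ghz)
    print(f"{label}: ~{loss:.1f} dB of channel loss at {nyquist_ghz} GHz")
```

In this toy model, loss nearly doubles along with the data rate, which is why each new speed threshold demands tighter manufacturing precision or costlier materials to stay within the link budget.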

The broad outlines of this problem aren’t new. The compromise between performance, manufacturing and reliability has been at the crux of every advance in cable speed so far. Unfortunately, finding a workable compromise has gotten more difficult with every speed increase. Even as we progress toward efficient, economical 200 Gbps connections, connection technologies that achieve faster speeds are already on the horizon.


Future Speeds Could Require More than Just New Cables

Eventually, we’ll reach a point of diminishing returns when trying to extract additional speed from copper wire alone. However, that won’t necessarily mean moving to a different cable technology altogether. Instead, advancements in data center equipment will likely be part of the solution for achieving next-generation data speeds. New architectures could enable more efficient connections while providing additional opportunities for improving latency. For example, the physical interconnections among servers and switches could be changed to allow cables to connect directly to chip components.


Big technological leaps like that are expensive to achieve and difficult for any one component manufacturer to drive. Instead, developing new ways to connect multiple components in a network will require close collaboration among various players in the industry. TE is very active in conversations about next-generation network architectures. We also work closely with our customers to ensure the solutions we develop will fit the entire ecosystem.

Changing architectures also offers opportunities to gain efficiencies beyond faster signal interconnections. For example, heat is a significant concern for data center operators since it can degrade performance and reliability. Optical cable connections, in particular, are notorious for producing heat and require careful thermal design.


To help address this problem, TE’s thermal engineers weigh in during the product development process to help ensure our products transmit signals cost-effectively and reliably. For example, our engineers have developed innovative products to replace gap pads, a common solution for a thermal connection between two surfaces. TE’s Thermal Bridge improves heat transfer from high-power optical modules.


Integrating these heat-dissipating technologies into our products improves the entire data center ecosystem, making it possible to pack more devices into a switch.
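The basic thermal arithmetic is simple: the temperature rise across an interface is the dissipated power multiplied by the interface’s thermal resistance. The sketch below illustrates that relationship with purely hypothetical numbers – the module power and resistance values are assumptions for this example, not figures for any gap pad or for TE’s Thermal Bridge.

```python
# Temperature rise across a module-to-heatsink thermal interface:
#   delta_T = P * R_theta
# where P is dissipated power (W) and R_theta is the interface's thermal
# resistance (degC/W). All values below are illustrative assumptions.

MODULE_POWER_W = 20.0  # hypothetical high-power pluggable optical module

interface_resistance = {
    "higher-resistance interface (e.g., a gap pad, illustrative)": 1.0,
    "lower-resistance interface (illustrative)": 0.4,
}

for name, r_theta in interface_resistance.items():
    delta_t = MODULE_POWER_W * r_theta
    print(f"{name}: {delta_t:.1f} degC rise across the interface")

# A lower-resistance interface leaves more thermal headroom for the rest of
# the cooling path, which is part of what allows more modules per switch.
```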

Beyond the Horizon

The speed at which we can move large volumes of data also creates new possibilities for data center architecture – including a move toward disaggregated computing, which essentially makes computers less physical. Instead of connecting a set of computers that each have their own processor, memory and storage, disaggregated architectures break each part of a computer out into its own piece. A giant shared pool of memory could then serve many powerful data processors.


Pooled storage has long been critical for cloud computing and distributed data centers. Now, faster interconnects and lower-latency signaling are opening up possibilities to virtualize other parts of computers, enabling more efficient use of resources.

In the nearer term, the need to train increasingly complex AI models to support more sophisticated applications in medicine, retail and autonomous driving will push us to collect, store and process more data more quickly. We’ve only just begun to see how impactful these technologies can be.


Big AI applications will become more scalable and broadly available with next-generation high-speed interconnects. But it would be foolish to assume digital innovation – or the speeds we need to attain it – will slow down anytime soon. At TE, even as we near the 200 Gbps milestone, we’re already thinking about what it will take to reach the next one, and the one after that.

About the Author

David Helster, Engineering Fellow, Data and Devices


David Helster is a Senior Fellow in TE Connectivity’s Data and Devices business unit. He has been designing high-speed systems and interconnects for his entire 31-year career. He currently leads the System Architecture and Signal Integrity technology groups. In this role, he aligns future customer and industry technical needs with TE’s strategic product development. David received his BSEE from Drexel University.