microQSFP Plugs and Cages

Now there is a cooler way to meet I/O demand.

Increasing Data Center Performance with Smaller, Faster, and Cooler Connectivity

Rising demand for higher bandwidth is driving data center equipment makers to build faster, denser switches and servers, but this means squeezing more components into the same unit, which generates heat. Now, a new generation of input/output (I/O) connectors is enabling faster, denser, and cooler data center equipment.

Our customers build products that consume significant energy powering the connectors used to transmit data between switches, servers, and other data center equipment. Data center owners often operate large sites, and the need for higher aggregate bandwidth keeps growing with the increasing use of streaming video, Big Data, and other bandwidth-intensive applications. This directly impacts data center equipment makers. For example, a typical data center switch has a one rack unit (1RU) form factor, and manufacturers continually strive to double, triple, or quadruple the data throughput of these devices. That means the connectors and other components inside those devices must be smaller and faster: smaller to increase the density of connectivity on the faceplate of the device, and faster to increase the throughput per connector.

The fundamental challenge is that cramming faster connectors together generates more heat, so there is a tradeoff between smaller, faster, and cooler in the I/O components used in data center equipment. In high-speed I/O, an optical transceiver plugs into a cage and converts optical signals to electrical ones, and that conversion generates heat. Typically the heat originates in an engine inside the module and is conducted to the transceiver's outer wall, from the wall to the cage, from the cage to a heat sink attached to that cage, and from there into the airflow inside the box.
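
To make that conventional thermal path concrete, here is a minimal sketch that models each hop as a thermal resistance in series. All resistance values, the module power, and the air temperature are illustrative assumptions, not TE specifications.

```python
# Illustrative model of the conventional I/O thermal path described above:
# heat flows from the transceiver's internal engine to the module wall, to
# the cage, to the cage-mounted heat sink, and finally into the airflow.
# Each hop is a thermal resistance in degrees C per watt. All values are
# assumptions for illustration, not TE specifications.

def module_temperature(power_w, ambient_c, resistances):
    """Steady-state module temperature for a series thermal path."""
    return ambient_c + power_w * sum(resistances)

CONVENTIONAL_PATH = [
    2.0,  # engine to module wall (conduction inside the transceiver)
    2.5,  # module wall to cage (contact interface)
    1.8,  # cage to heat sink (interface under the cage-mounted sink)
    6.0,  # heat sink to air (convection into chassis airflow)
]

# A hypothetical 3.5 W optical module in 35 degrees C chassis air:
print(f"{module_temperature(3.5, 35.0, CONVENTIONAL_PATH):.0f} C")  # ~78 C
```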

The problem is that increasing density increases heat. If all you do is add more I/O connectors onto a faceplate, you quickly reach thermal overload: each module dumps its heat onto a neighbor that is trying to shed heat of its own, and the accumulated heat can no longer be adequately dissipated. Going from 3.2 Tbps to 6.4 Tbps of throughput in a 1RU switch doubles the number of I/O modules from 32 to 64, and behind those modules sit many other components that also generate heat. The equipment manufacturer has no choice but to punch holes in the faceplate, using up very precious real estate, to get more air to cool the unit and the components within.
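
The faceplate arithmetic can be worked through directly. In the sketch below, the 100 Gbps per-port figure matches QSFP28-class modules; the per-module power draw is a hypothetical value used only to show how aggregate heat scales with density.

```python
# Working through the faceplate arithmetic above. 100 Gbps per port is a
# QSFP28-class speed; the per-module power figure is an assumption used
# only to illustrate how aggregate I/O heat scales with density.

GBPS_PER_MODULE = 100
WATTS_PER_MODULE = 3.5  # hypothetical dissipation per module

for switch_tbps in (3.2, 6.4):
    modules = int(switch_tbps * 1000 / GBPS_PER_MODULE)
    heat_w = modules * WATTS_PER_MODULE
    print(f"{switch_tbps} Tbps: {modules} modules, ~{heat_w:.0f} W of I/O heat")

# 3.2 Tbps -> 32 modules (~112 W); 6.4 Tbps -> 64 modules (~224 W)
```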

The microQSFP standard offers a new solution for smaller and faster connectivity. TE engineers recognized that microQSFP provided the technology baseline for faster and smaller I/O connections. While QSFP28 can support 3.2 Tbps switches, its module is too large to deliver the density required by a new generation of 6.4 Tbps switches.

Going to a smaller connector with microQSFP and increasing the bandwidth was a big step in the right direction. However, the third issue, thermal performance, presented the greatest challenge. Essentially, microQSFP places twice as many connections into the same rack unit. Doubling the number of optics roughly doubles the heat, and we needed to do something fundamentally different to dissipate that heat for microQSFP.

microQSFP Interconnect Family from TE

Initially, TE engineers considered heat pipes, fans, and thermo-electric coolers for their early design prototypes, but these all proved too costly and complex. They ultimately approached the thermal management problem by taking the basic pieces of the solution and recombining them. Specifically, by integrating the heat sink into the transceiver plug rather than the cage, they were able to significantly improve thermal capacity over current interfaces. By completely rethinking the way things were done in the past, TE engineers arrived at a solution that addressed all three needs: smaller, faster, and cooler.

Integrating the heat sink into the I/O module allowed TE engineers to shorten the thermal path significantly while also turning each plug-and-cage location into a source of added airflow for the unit. Typically the cage temperature of a module should not exceed 70 degrees C. By designing microQSFP with an integrated heat sink, TE engineers achieved an improvement of about 15 degrees C in module temperature, bringing it well within the 70-degree envelope.
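
As a rough illustration of why moving the heat sink into the plug helps, the sketch below reuses the series-resistance model from earlier and drops the two cage-related contact interfaces from the path. The resistance values are assumptions chosen so the toy model lands near the roughly 15 degrees C improvement cited above; they are not measured TE data.

```python
# Rough comparison of the two thermal paths using the illustrative
# series-resistance model from earlier. Integrating the heat sink into
# the plug removes two contact interfaces from the path. Values are
# assumptions tuned to land near the ~15 degrees C figure cited above.

POWER_W, AMBIENT_C = 3.5, 35.0  # hypothetical module power and air temp

cage_mounted = [2.0, 2.5, 1.8, 6.0]  # engine->wall->cage->sink->air
integrated = [2.0, 6.0]              # engine->integrated sink->air

def temperature(path):
    return AMBIENT_C + POWER_W * sum(path)

print(f"Cage-mounted heat sink: {temperature(cage_mounted):.0f} C")  # ~78 C
print(f"Integrated heat sink:   {temperature(integrated):.0f} C")   # ~63 C
delta = temperature(cage_mounted) - temperature(integrated)
print(f"Improvement: ~{delta:.0f} C")  # ~15 C
```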

To build an industry-wide ecosystem for this revolutionary solution, TE spearheaded a multi-source agreement (MSA) among several manufacturers to standardize on its design.

Now, data center equipment manufacturers have a smaller, faster, and cooler way to meet their increasing I/O connectivity demand.

Lucas Benson, Global Product Manager, High-Speed I/O Products