Next-Gen Datacenter Connectivity and Thermal Management

Contributed Article | January 23, 2018

As speeds and requirements rise, the QSFP-DD MSA has emerged as a solution to the technical challenges of achieving a double-density interface.

By Joe Dambach

Next-Gen Datacenter Connectivity

Chips, switches, connectors, cables, and optical module technologies are at the core of datacenter networks, which must support ever-faster processing as well as higher bandwidth and density. A range of pluggable I/O solutions supports the fastest data rates across the spectrum of distances found in datacenters and telecommunications, offering low latency and insertion loss, strong signal integrity, electromagnetic interference protection, and effective thermal management.

Bandwidth requirements for wireless devices have been a catalyst for higher-density designs in server farms. High-speed, high-density pluggable I/O solutions provide a highly scalable upgrade path. Supporting next-generation 100Gb/s Ethernet and 100Gb/s InfiniBand EDR applications, the zQSFP+ interconnect supports serial data rates of up to 25 Gb/s per lane, making it a popular choice in datacenter high-performance computing (HPC), switch, router, and storage applications.
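As a quick illustration of that lane math, here is a minimal sketch in Python; the variable names are mine, not the spec's:

```python
# Illustrative arithmetic: how a four-lane zQSFP+ port reaches
# 100 Gb/s aggregate at 25 Gb/s per serial lane.
LANES_PER_PORT = 4   # QSFP/zQSFP+ four-lane electrical interface
GBPS_PER_LANE = 25   # NRZ serial rate per lane

aggregate_gbps = LANES_PER_PORT * GBPS_PER_LANE
print(f"zQSFP+ aggregate: {aggregate_gbps} Gb/s")
# -> 100 Gb/s, matching 100Gb/s Ethernet and InfiniBand EDR line rates
```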

QSFP-DD pluggable modules

As data usage rises, network switch technologies keep data flowing smoothly. High-density connectors can relieve pressure on core switches. There are already silicon chips on the market that support 256 differential lanes. The missing link is a connector form factor that provides adequate density to support this lane count in a 1RU box while managing thermal and signal integrity. The QSFP-DD MSA set out to address the technical challenges of achieving a double-density interface.
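To see why density is the missing link, consider a rough faceplate calculation. This is an illustrative sketch only; the 36-port, 1RU figure is taken from the MSA numbers cited in the next paragraph:

```python
# Rough faceplate math (illustrative; assumes a 1RU front panel holds
# about 36 QSFP-sized ports, per the MSA figure cited below).
SWITCH_LANES = 256      # differential lanes on current switch silicon
FACEPLATE_PORTS = 36    # assumed 1RU port capacity

for name, lanes_per_port in (("QSFP (4-lane)", 4), ("QSFP-DD (8-lane)", 8)):
    ports_needed = SWITCH_LANES // lanes_per_port
    verdict = "fits" if ports_needed <= FACEPLATE_PORTS else "exceeds the faceplate"
    print(f"{name}: {ports_needed} ports needed -> {verdict}")
# QSFP (4-lane): 64 ports needed -> exceeds the faceplate
# QSFP-DD (8-lane): 32 ports needed -> fits
```

Breaking out 256 lanes through four-lane ports would take 64 cages, far more than a 1RU faceplate can hold; an eight-lane interface brings the port count within reach.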

The QSFP-DD specification defines a module, a stacked integrated cage/connector system, and a surface-mount cage/connector system that expands the standard QSFP four-lane interface by adding a row of contacts, enabling an eight-lane electrical interface in which each lane can operate at up to 25 Gb/s NRZ or 50 Gb/s with PAM4 modulation. This allows a single QSFP-DD port to carry an aggregate of 200 Gb/s or 400 Gb/s. A single switch slot can support up to 36 QSFP-DD modules, providing up to 14.4 Tb/s of aggregate capacity.
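Those aggregate figures check out with simple arithmetic; the following sketch (my own, not MSA reference code) reproduces them:

```python
# Checking the MSA's aggregate numbers with simple arithmetic.
LANES_PER_PORT = 8
PORTS_PER_SLOT = 36

nrz_gbps = LANES_PER_PORT * 25    # 25 Gb/s NRZ per lane  -> 200 Gb/s/port
pam4_gbps = LANES_PER_PORT * 50   # 50 Gb/s PAM4 per lane -> 400 Gb/s/port
slot_tbps = PORTS_PER_SLOT * pam4_gbps / 1000

print(f"Per port: {nrz_gbps} Gb/s (NRZ) or {pam4_gbps} Gb/s (PAM4)")
print(f"Per slot: {slot_tbps} Tb/s")   # -> 14.4 Tb/s
```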

As transceiver speeds increase, so does the energy needed to drive signals, and with it the heat that must be dissipated. Thermal cooling is critical to connector module and overall network power consumption, energy efficiency, performance, and longevity. Advances in heatsink technologies enable highly efficient, reliable, and resilient thermal management strategies that support both higher-density copper and optical connectivity.

The QSFP-DD module drives eight lanes in a space only slightly deeper than the QSFP. That means heat from eight lasers in a single module, effectively doubling the thermal load. Managing thermal performance at these higher heat densities will be critical moving forward. Advanced thermal management techniques used in the design of the module and cage enable the QSFP-DD to support power levels of at least 7W, with a target range up to 10W. Additional work is underway to develop solutions for cooling QSFP-DD modules operating at 12W or higher.
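Combining those module power levels with the MSA's 36-port slot density gives a rough sense of the per-slot heat budget. The numbers below are an illustrative back-of-the-envelope estimate, not MSA guidance:

```python
# Illustrative per-slot heat budget: the MSA's 36-port slot density
# combined with the module power levels quoted above.
PORTS_PER_SLOT = 36

for module_watts in (7, 10, 12):
    slot_watts = PORTS_PER_SLOT * module_watts
    print(f"{module_watts} W per module -> {slot_watts} W per 1RU slot")
# 7 W -> 252 W, 10 W -> 360 W, 12 W -> 432 W
```

At 12W per module, a fully populated slot has to shed more than 400W at the faceplate, which is why module and cage design dominate the QSFP-DD thermal story.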

MSA partners are already in the process of tooling QSFP-DD products, including modules and cages available for thermal testing in customer environments. The new optical transceiver stands to bring exceptional value by enabling manufacturers to produce more competitive network server, switch, and storage solutions to support rising data traffic volumes.

Joe Dambach worked in the connector industry for more than 45 years, starting at Molex at age 19 and retiring from the company in 2020. Before his retirement, he managed Molex's high-speed I/O development and focused on all things QSFP-DD and SFP-DD.
