For many years, engineers have bemoaned the eventual limitations of copper printed circuit board conductors in ever-higher-speed circuits: Surely the laws of physics would create barriers to the progress of traditional copper circuitry and usher in the age of optical interconnects. Copper interconnects have presented serious signal integrity challenges, especially as system data rates pushed 10+Gb/s. Advanced signal-conditioning technologies, including forward error correction (FEC), have enabled copper to perform well past its anticipated limits but with increased cost, space, and power consumption. As data rates continued to increase and the cost of optical components declined, growth of optical backplanes was all but ensured.
A funny thing happened on the road to the future: Copper links in a different form may yet delay the broad adoption of fiber in printed circuit board applications.
Today, PCB design engineers are faced with a cost/performance dilemma. Systems now in design are pushing 25Gb/s with an eye to 50+Gb/s. Indeed, PCB channels running at 56Gb/s were demonstrated at DesignCon 2016. The ability to achieve these rates requires a combination of costly upgrades that may include:
- High-performance PCB laminate material with better electrical, moisture absorption, and copper flatness characteristics
- Additional layers to provide adequate isolation
- Precise circuit layout requiring many simulations to ensure signal integrity
- Costly high-end proprietary chipsets
- Use of additional signal conditioning devices
Thicker boards add weight and cost and result in deeper plated through-holes, a major source of signal distortion. Backdrilling these holes becomes essential. Limited by the geometry of dense PCB lines and spaces, the maximum practical length of a channel continues to decrease as data rates go up. Increasing interest in PAM4 signaling enables the continued use of lower symbol rates to achieve higher throughput but results in reduced channel operating margin, which adds pressure to design cleaner circuits.
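The PAM4 trade-off described above is simple arithmetic: encoding two bits per symbol halves the symbol rate (and thus the Nyquist frequency the channel must support) for the same throughput, but squeezing four levels into the same voltage swing shrinks the eye to one third of its NRZ height, costing roughly 9.5 dB of SNR. A minimal sketch of that accounting (the 56Gb/s figure is taken from the DesignCon demonstration mentioned earlier; everything else is standard modulation math):

```python
import math

def symbol_rate_gbd(throughput_gbps, levels):
    """Symbol rate (GBd) needed for a given throughput with PAM-N signaling.

    Each PAM-N symbol carries log2(N) bits.
    """
    return throughput_gbps / math.log2(levels)

# A 56Gb/s link: NRZ (PAM2) needs a 56GBd symbol rate,
# while PAM4 needs only 28GBd, halving the Nyquist frequency.
nrz_baud = symbol_rate_gbd(56, 2)   # 56.0 GBd
pam4_baud = symbol_rate_gbd(56, 4)  # 28.0 GBd

# The cost: at equal signal swing, the PAM4 eye opening is 1/3 of
# the NRZ eye, a 20*log10(3) ~= 9.5 dB SNR penalty -- the "reduced
# channel operating margin" that pushes designers toward cleaner circuits.
snr_penalty_db = 20 * math.log10(3)
```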
Seeking a way to deliver bandwidth more effectively, some have investigated the removal of high-speed circuits from the PCB entirely and replacing them with discrete or ribbon twin-axial copper cables.
Shielded twinax offers significant advantages including:
- Greater isolation between high-speed signals
- Tighter impedance control
- Better signal skew control
- Larger effective conductor area (typically 24 to 30 AWG) to minimize attenuation
These factors can result in significantly improved signal fidelity and increased channel reach. High-speed cable-to-PCB connectors are located immediately adjacent to signal sources such as a processor, conveying the signal over twinax rather than through PCB traces buried in a daughtercard or backplane.
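The "increased channel reach" claim comes down to a loss budget: for a fixed end-to-end insertion-loss allowance, reach scales inversely with per-inch loss, and twinax loses far less per inch than a PCB trace at high frequencies. A back-of-the-envelope sketch, with the budget and per-inch loss figures being illustrative assumptions (not values from the article), shows why moving a channel from laminate to cable can multiply its reach several times over:

```python
def reach_inches(loss_budget_db, loss_db_per_inch):
    """Passive reach achievable within a given insertion-loss budget."""
    return loss_budget_db / loss_db_per_inch

# Illustrative assumptions at a ~14 GHz Nyquist frequency:
BUDGET_DB = 20.0        # assumed end-to-end channel loss budget
PCB_LOSS_PER_IN = 1.0   # dB/inch, assumed mid-grade laminate stripline
TWINAX_LOSS_PER_IN = 0.15  # dB/inch, assumed 30 AWG shielded twinax

pcb_reach = reach_inches(BUDGET_DB, PCB_LOSS_PER_IN)        # 20 inches
twinax_reach = reach_inches(BUDGET_DB, TWINAX_LOSS_PER_IN)  # ~133 inches
```

Under these assumed numbers the cable channel reaches roughly six to seven times farther than the trace, which is the economic case for placing the cable-to-PCB connector right beside the signal source.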
Connector manufacturers have begun to introduce interconnect systems that can replace copper traces on a PCB with twin-axial cable.
Samtec was one of the first to address this emerging packaging strategy with its Firefly Micro Flyover product family. Low-profile right-angle headers mate with plugs terminated with ribbon cable that “flies” over the PCB to another location on the board or to an I/O port. This system has since been expanded to include passive and active copper as well as optical configurations.
Additional connector manufacturers are beginning to position their high-speed PCB-to-cable connectors to address these applications. The new Nano-Pitch I/O connector from Molex and the Sliver interface from TE Connectivity are well suited to lifting high-speed signals out of the PCB.
The Intel Omni-Path architecture is a new high-performance server and data center fabric designed to support high-end servers and supercomputers. It runs high-speed signals through shielded twinax cable from point to point on a PCB as well as to a backplane or I/O port.
Advanced computers and servers often require physically large backplanes. That poses a problem: the signal path from a device in daughtercard slot 1, across the backplane, to a device in a daughtercard 30 slots away could be a meter or more. Traditional backplane architecture could be a costly way to bridge that distance. Orthogonal midplane architecture is one possible solution but introduces access and cooling issues.
Cable backplanes solve this problem by replacing copper traces on the backplane with twinax cables plugged into the back of the backplane. High-speed signals pass through the backplane via posted daughtercard connectors, leaving only low-speed and power circuits on the backplane.
Based on their flagship high-speed backplane connectors, Amphenol FCI, Molex, and TE Connectivity have demonstrated cable backplanes using discrete or ribbon twinax cable to provide connection between daughtercards.
Cable management can be a problem when dealing with hundreds of discrete twin-axial cables. Amphenol TCS has demonstrated a modular cable tray approach that simply plugs into the rear of the backplane. Amphenol FCI, while still operating as FCI Electronics, showed a cable backplane terminated with ribbon twinax that helps organize the many signals.
TE Connectivity has envisioned orthogonal cable midplane architecture based on modules that are allowed to float, minimizing daughtercard alignment issues.
At this point, cable backplanes are highly customized solutions that address application-specific challenges and are the focus of much design and development work.
High-speed I/O interfaces could benefit from direct twin-axial connection. Samtec recently showed a QSFP connector directly terminated to twinax cable, bypassing the traditional termination to a PCB.
It may be too early to determine whether this “flyover” concept represents a short-term transition toward adoption of fiber optic interfaces or simply becomes another option that system designers can apply to a broader range of problems.