Fiber Optic Cable
Fiber has traditionally been the go-to solution for long-distance, high-speed communication links. Fiber optic cable offers huge advantages of bandwidth, low attenuation, and reduced size and weight, as well as the ability to transmit hundreds of data channels over a single glass fiber. A classic example of this capability is the upcoming TE SubCom trans-Atlantic subsea cable, which will consist of eight fiber pairs featuring a design capacity of 160 terabits per second operating over 4,101 miles (6,600 kilometers) of fiber from Virginia Beach, Va., to Bilbao, Spain.
The incredible growth of the Internet is propelling exabyte levels of traffic through data centers. That, in turn, has required huge infrastructure investments to keep pace with demand to process and store information generated by “Big Data.” The cloud has become a primary resource in huge central data centers that may house over 100,000 individual servers. In order to function as a flexible, parallel computation engine, servers and switches must communicate via high-speed, low-power, low-latency links. High-performance copper and fiber interconnects are the glue that enables this system to function. Copper cable assemblies were the early and preferred media for shorter links, but as data rates rose and the size of data centers increased, fiber found a home inside the building. A progression of pluggable input/output (I/O) interconnects evolved to address the multiple objectives of reach, signal integrity, power dissipation, cost, and I/O panel density.
The physical size of a server chassis designed to be installed in an industry-standard 19” rack is defined by a series of IEC standards. Over the years, demand for cramming more processing power into smaller spaces reduced the height profile of servers from 4U (4 x 1.75 inches) down to 1U (which is actually only 1.719 inches tall). That doesn’t leave much space on the faceplate for the installation of I/O connectors, sparking the race to develop high-speed, high-density I/O connectors.
Small Form Factor Connectors
Small form factor connectors, notably the SFP interface, satisfied developers’ need for a flexible solution that could be populated with copper or fiber depending on the length of a specific link. A common PCB-mounted shielded cage assembly is designed to accept both copper and fiber adapters, including a direct-attach copper option. Managed by a multi-source agreement (MSA), the SFP specification has been upgraded over the years from 1 Gb/s to SFP+, capable of 16 Gb/s per channel in the same module envelope. SFP28 is the latest iteration, with data rates to 28 Gb/s per channel.
QSFP (quad SFP) embeds four 10 Gb/s channel links in a single connector shell. The most recent QSFP28 upgrade supports 100 Gb Ethernet and InfiniBand EDR via 4 x 25 Gb/s channels. System designers can choose direct-attach copper cables, active optical cables, or fiber optic I/O options, which can be swapped in the field as requirements change.
Additional pluggable interfaces continue to be proposed or are entering the market. The QSFP double-density (QSFP-DD) MSA is defining an eight-lane by 25 Gb/s interface to address 200 Gb applications.
CXP connectors deliver 10 Gb/s over 12 fiber or copper lanes for an aggregate of 120 Gb/s. CDFP 2.0 is designed to support data rates of 25 Gb/s over each of 16 lanes of optical fiber for an aggregate of 400 Gb/s. CFP2 and CFP4 have been specifically optimized for longer-distance links.
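The pattern across these form factors is simple arithmetic: aggregate bandwidth is the lane count times the per-lane rate. A minimal sketch, using only the lane counts and rates cited above:

```python
# Aggregate bandwidth of pluggable I/O interfaces mentioned in the text,
# expressed as (lanes, per-lane rate in Gb/s). Aggregate = lanes x rate.
interfaces = {
    "SFP28":    (1, 28),   # single 28 Gb/s channel
    "QSFP28":   (4, 25),   # 4 x 25 Gb/s -> 100 GbE / InfiniBand EDR
    "QSFP-DD":  (8, 25),   # 8 x 25 Gb/s -> 200 Gb applications
    "CXP":      (12, 10),  # 12 x 10 Gb/s -> 120 Gb/s
    "CDFP 2.0": (16, 25),  # 16 x 25 Gb/s -> 400 Gb/s
}

for name, (lanes, rate) in interfaces.items():
    print(f"{name:>9}: {lanes:2d} x {rate} Gb/s = {lanes * rate} Gb/s aggregate")
```

Running this prints each interface's aggregate, e.g. `QSFP28: 4 x 25 Gb/s = 100 Gb/s aggregate`.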
The race to higher speed and density continues.
TE Connectivity’s new Micro QSFP also addresses the thermal issues associated with squeezing more electronics into smaller spaces. Packaged in an envelope slightly larger than the standard SFP connector, the Micro QSFP can be configured with one, two, or four channels to support up to 100 Gb/s applications. Up to 72 ports can be mounted on a 1U faceplate, delivering a total of 7.2 Terabits per second.
Amphenol has joined a new MSA and announced the RCx 25 Gb/s-per-lane passive I/O connector system, which specifically focuses on low cost. The RCx interface is based exclusively on direct-attach copper cables at relatively short lengths of up to three meters without forward error correction. RCx is optimized to accommodate more than 128 25 Gb/s lanes on a 1U faceplate.
At this point, existing pluggable interfaces are capable of satisfying most applications for the next several generations of data center equipment, but applications that will require greater than 400 Gb performance may be a turning point for the choice of I/O interface. As a short-term solution, system architects will use PAM4 signaling, which encodes two bits per symbol and therefore halves the symbol rate required to carry a given data rate. This will allow the use of existing and emerging pluggable I/O interconnect technology well into the future.
The cost of the interface also remains a factor. With a roster of more than 10 pluggable contenders currently available, the market has become fractured, which tends to limit the ability to reduce prices based on volume.
At some point, the multiple contenders among pluggable I/O connectors simply run out of room to mount a sufficient number of connectors on the faceplate of a 1U chassis. Mid-board embedded optical transceivers may be the answer.
Embedded Optical Transceivers
Embedded optical transceivers are designed to be mounted adjacent to a host ASIC. High-speed signals are converted to optical signals and injected into optical fibers that “fly over” the PCB directly to the I/O panel. Reducing the length of copper traces on the board improves signal integrity and potentially makes the board less expensive to manufacture. Optical links are terminated in an MPO or MPX high-density connector mounted on the faceplate, greatly increasing I/O signal density.
Embedded optical transceivers will not replace pluggable transceivers overnight, but as Mike Davis, Market Manager, Optical Products at Amphenol FCI, explains, embedded transceivers enable system architectures that solve problems pluggable I/O cannot address, such as extreme port counts in limited spaces. The following chart compares on-board transceivers with current pluggable alternatives.
Select applications in high-performance computing, storage, and networking may be candidates for replacement of pluggable I/O with embedded transceiver technology. As per-lane signaling evolves from 25 Gb/s NRZ to 50 Gb/s PAM4 and beyond, pluggables may not be able to provide the speed and density required, making embedded transceivers the best-performing and most cost-effective solution.
This is a challenging period for connector manufacturers participating in the mid-board optical transceiver arena. Avago pioneered embedded optical modules with its SNAP12 product and has since evolved to higher-performance optical interconnects, introducing “flyover” optical cables for its MicroPOD optical modules. Finisar also introduced a board-mount optical assembly (BOA). The concept caught on with new market entrants, such as Samtec, with its FireFly Micro Flyover mid-board transmitter and receiver modules. A few years later, TE Connectivity, Amphenol FCI, and Molex moved into the market with their own transceiver modules. Lack of an industry standard resulted in proprietary designs.
The last year has brought extensive change to this market segment. Avago sold their optical module product lines to Foxconn Interconnect Technology (FIT), which is continuing to support Mini and MicroPOD optical modules. Molex has temporarily withdrawn their QuatroScale Mid-Board transceiver until industry standards are defined for on-board optics. TE Connectivity discontinued its Coolbit Optical engine, along with much of its fiber optic connector lines, to focus on its rugged expanded beam optical interfaces. The Amphenol FCI Leap transceiver (12 channel x 25 Gb/s) and the Samtec Optical Flyover (12 channel x 14 Gb/s) are currently leading the industry in actively pursuing embedded optical module applications.
Until system designers reach the point where pluggable I/O simply cannot deliver the density and bandwidth required by new applications, embedded optics will remain a niche solution. When that day comes, an industry standard defining electrical, mechanical, footprint, and thermal parameters would allow optical and footprint compatibility among competing suppliers. Sharon Hall, Director, North America Marketing at Oclaro, says the creation of a standard will allow customers to choose between multiple compatible sources and ultimately drive costs down. That challenge is being taken up by the recently formed Consortium for On-Board Optics (COBO), which is targeting early 2017 for completion of an MSA standard. Amphenol FCI, HUBER+SUHNER, Molex, Rosenberger, Samtec, Sumitomo, TE Connectivity, US Conec, and Yamaichi are associate members in this consortium.
Another issue yet to be resolved by the industry is the optimal number of duplex channels with which embedded transceivers should be configured. Transceivers currently on the market range from one to 12 duplex channels.
Industry forecasters have pegged aggregate I/O bandwidth demand to double every three years, with 50 Gb/s PAM4 channels becoming common by 2018. Cloud traffic will continue to increase as streaming video becomes pervasive and more devices are linked to the Internet of Things. Data centers will drive the adoption of embedded I/O transceivers and have identified the ultimate target of tapping a 1U chassis with up to 12.6 Tb/s of I/O capacity. The current perceived limit of about 400 Gb for pluggable interfaces will likely be pushed to 800 Gb/s, enabling their continued use, but the density and thermal management advantages of embedded optical transceivers make them an important enabler in the roadmap to high-speed I/O.
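The "doubling every three years" forecast is ordinary compound growth and is easy to project. A minimal sketch, where the baseline figure is an assumed placeholder rather than a number from the article:

```python
def projected_demand(baseline_tbps: float, years: float,
                     doubling_period: float = 3.0) -> float:
    """Project aggregate I/O bandwidth demand that doubles every
    `doubling_period` years from an assumed baseline."""
    return baseline_tbps * 2 ** (years / doubling_period)

# From an assumed 1 Tb/s baseline, demand after 3, 6, and 9 years:
for y in (3, 6, 9):
    print(y, "years:", projected_demand(1.0, y), "Tb/s")  # 2.0, 4.0, 8.0
```

Under this rule of thumb, demand grows roughly 8x per decade, which is consistent with the article's expectation that today's 400 Gb pluggable ceiling will be pushed toward 800 Gb/s while embedded optics absorb the longer-term growth.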