The State of Data Center Networking Technologies

By Contributed Article | April 11, 2016

As networks are required to move more data faster than ever before, Bishop & Associates’ Lisa Huff looks at the current state of data center networking and makes her forecast for the future.


Ethernet dwarfs other networking technologies in the data center market, with 10G, 25G, 40G, and 100G data rates currently in use and 400G on the horizon. Fibre Channel, still used for storage area networking (SAN) within the data center, currently offers 16G and 32G data rates, and 64G will arrive soon. The data center market for Fibre Channel is about a tenth the size of Ethernet's.

Fibre Channel over Ethernet (FCoE) is now being implemented in 40G connections, with 120G planned, but the technology has not been widely adopted inside the data center. Most data center networking engineers see it as far too expensive for the benefits it offers over standard 10G or 40G Ethernet.

InfiniBand also remains a niche technology, used mainly for high-performance computing (HPC) in very specialized data centers, with current data rates of 40G (QDR), 56G (FDR), and 100G (EDR).

Optical module form factors have progressed from the SFP+, now renamed SFP10, for single-lane 10G serial applications, up through the CDFP, which uses 16 lanes of 25G to achieve 400G. In between are the SFP28, based on the SFP10 but slightly redesigned to support up to 28Gb/s; the QSFP family (QSFP10, QSFP14, and QSFP28), which supports 40G, 56G, and 100G applications; the CFP family (CFP, CFP2, and CFP4), all supporting 40G and 100G; the CXP, which can support 100G Ethernet but is primarily used for InfiniBand; and the CPAK, a proprietary 100G solution from Cisco.

In addition, there is a new form factor being considered for 400G – CFP8 – that will incorporate 4x100G technology. The CFP8 will be roughly the same size as the CFP2 and use a 16x25G electrical I/O connector. It was proposed to the CFP MSA organization in July of 2015.
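To make the lane arithmetic behind these form factors explicit, here is a minimal sketch in Python; the lane counts and per-lane rates shown are nominal illustrations drawn from the rates discussed above, not exact MSA specifications, and each module's aggregate rate is simply lanes times lane rate:

```python
# Illustrative lane counts and per-lane signaling rates (Gb/s) for common
# optical-module form factors. Values are nominal examples, not MSA specs.
FORM_FACTORS = {
    "SFP10":  (1, 10),   # single 10G serial lane
    "SFP28":  (1, 25),   # single lane at up to ~28 Gb/s
    "QSFP10": (4, 10),   # 4 x 10G -> 40G
    "QSFP28": (4, 25),   # 4 x 25G -> 100G
    "CDFP":   (16, 25),  # 16 x 25G -> 400G
    "CFP8":   (16, 25),  # proposed 400G module with a 16x25G electrical I/O
}

def aggregate_rate(name: str) -> int:
    """Return the nominal aggregate data rate in Gb/s for a form factor."""
    lanes, lane_rate = FORM_FACTORS[name]
    return lanes * lane_rate

if __name__ == "__main__":
    for name in FORM_FACTORS:
        print(f"{name}: {aggregate_rate(name)}G")
```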

Copper solutions in the data center consist of the RJ45 for 10G, 25G, and 40G, and direct-attach copper (DAC) cable assemblies that mate to SFP10 ports for 10G, QSFP10 ports for 40G, and QSFP28 ports for 100G.

The following table summarizes the 10G-and-above variants being planned or being used in internal data center networking.

[Table: 10G-and-above data rate variants used in data center networking]

We expect the RJ45 copper connector will be phased out after 40G, and that copper I/O connections will be few, if any, at 400G.

While the IEEE governs the physical-layer specifications for Ethernet, the form factors used in these networks are specified by various groups. Optical-module and twinax-cable form factors are usually developed by the SFF Committee and/or a multi-source agreement (MSA) group, while copper twisted-pair (TP) products are defined by the Telecommunications Industry Association (TIA) and/or the International Organization for Standardization (ISO).

Many of the same products are being used across networking technologies. For instance, the SFP14 and SFP16 are the same product, just tested and binned to slightly different requirements.

Network ports used inside and between data centers are expected to grow more than 26% over the next five years. Companies focused on data center applications would be wise to follow developments in both the IEEE and the groups summarized above.
