At this year’s show, the supercomputing world reached toward higher speeds and greater capabilities.
The term “supercomputers” brings to mind huge, room-sized machines that spend their time contemplating the mysteries of the universe. Some of them do, but this year’s International Conference for High Performance Computing, Networking, Storage, and Analysis, held in Denver, demonstrated that high-performance computing (HPC) has become far more inclusive, encompassing a broad range of equipment and applications.
SC17 has become the premier supercomputing conference, with attendance of nearly 13,000 professionals from 71 countries, 428 technical sessions, and 334 exhibits. Exhibitors featured a wide array of rack servers, power and memory modules, cooling systems, cable assemblies, and test equipment, as well as advanced software. In addition to legacy suppliers such as IBM, HP, Cray, Dell, and Intel, dozens of smaller equipment and component manufacturers displayed their products. Users such as NASA, Huawei, Google, and the Department of Energy demonstrated how they are using their HPC resources to solve complex problems and introduce new services. In addition, 31 universities exhibited the results of their research and development work. A keynote address on the critical role supercomputers will play in the deployment of the Square Kilometre Array telescope attracted an audience of nearly 1,000 participants. SC17 featured the world’s fastest computing network on the floor of the exhibit hall, with 3.3 Tb/s of bandwidth.
At the top end, the high-stakes international race for the most powerful computer is currently led by China, whose Sunway TaihuLight machine operates at 93.01 petaflops (one petaflop is one thousand trillion, or 10^15, floating-point operations per second). Coming in at number five, behind machines in Switzerland and Japan, the fastest computer in the United States is Titan, a five-year-old Cray XK7 that runs at 17.6 petaflops.
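To put the figures above in perspective, a few lines of arithmetic show the gap between the top machine and the fastest US entry. This is an illustrative sketch using only the numbers quoted in this article:

```python
# Illustrative arithmetic only, using the figures quoted above.
PETA = 10**15  # one petaflop = 10^15 floating-point operations per second

taihulight = 93.01 * PETA  # Sunway TaihuLight (China), Top500 #1
titan = 17.6 * PETA        # Titan, Cray XK7 (US), Top500 #5

# TaihuLight performs roughly 5.3x more operations per second than Titan.
print(f"TaihuLight/Titan ratio: {taihulight / titan:.1f}x")
```

The leader thus outruns the fastest US machine by more than a factor of five, which helps explain the competitive urgency in the exascale programs discussed later.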
Obsolescence happens quickly in the world of supercomputers. Processing speeds like this are hard to imagine, but that capability is essential when exploring such arcane subjects as particle physics, molecular evolution, medical diagnostics, and the simulation of global climate conditions. Some of these machines feature more than 10 million cores and consume over 15 MW of power. Big data analytics, cloud computing, artificial intelligence, deep learning, and the continuing evolution of hyperscale architecture were popular topics addressed in multiple technical presentations.
High-performance computing now includes servers that target data centers and even telecom installations. In some cases, a rack server features an integrated switch, extensive solid-state memory, and network interface adapters. InfiniBand and Ethernet networking technologies, as well as the PCIe expansion bus standard, were evident at SC17. New entrants included Gen-Z, an open-system interconnect designed to support high-speed point-to-point, daisy-chain, mesh, and switch-based topologies.
The exhibit floor provided a great opportunity to see how supercomputers have evolved from a narrow niche market to a broad mix of advanced hardware and software. Both large and small manufacturers of rack servers continue to package more computing power into a 1U or 2U form factor. Most of these modules are based on a motherboard design, with power and I/O provided by cable assemblies mating to the back or front panel. The typical architecture of a rack server consists of multiple stacked and parallel PCBs linked by right-angle coplanar connectors. Some systems use a midplane to separate the computing section from the I/O section. Relatively few use traditional blindmate backplane architecture.
I/O connectors include a mix of RJ-45, D-subminiature, InfiniBand, SFP, and QSFP interfaces, as well as several types of discrete optical interfaces. Machines contain rows of DDR4 memory sockets as well as multiple hard drives. Many rack servers include a built-in power supply that requires only a simple IEC power cord.
A consequence of packing many high-current devices into a confined area is the buildup of heat. Maintaining acceptable junction temperatures in high-performance processors and accelerators is a major challenge, and designing more efficient HPC equipment is a central theme in the industry, as current machines can draw thousands of watts. Exhibitors promoted cooling technologies that ranged from forced air to immersion in Fluorinert liquid.
Several suppliers extolled the advantages of closed-loop heat pipes connected to an on-board condenser to transfer heat from one part of the motherboard to another. Others pipe chilled water directly to a jacket surrounding each heat-generating device; the water is recirculated through an external chiller. Dripless connectors allow a server to be removed from the rack without spilling coolant.
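A first-order calculation shows why liquid cooling is displacing forced air in these racks. The standard steady-state estimate is Tj = Ta + P × θja, where θja is the junction-to-ambient thermal resistance. The numeric values below are illustrative assumptions, not figures from the show floor:

```python
# First-order junction-temperature estimate: Tj = Ta + P * theta_ja.
# All numeric values are illustrative assumptions, not show-floor figures.

def junction_temp(ambient_c: float, power_w: float, theta_ja_c_per_w: float) -> float:
    """Steady-state junction temperature (deg C) for a single device."""
    return ambient_c + power_w * theta_ja_c_per_w

# A hypothetical 300 W accelerator with forced-air cooling (theta_ja ~ 0.25 C/W)
air = junction_temp(ambient_c=35.0, power_w=300.0, theta_ja_c_per_w=0.25)

# The same device with direct liquid cooling (theta_ja ~ 0.10 C/W)
liquid = junction_temp(ambient_c=35.0, power_w=300.0, theta_ja_c_per_w=0.10)

print(f"forced air: {air:.0f} C, liquid: {liquid:.0f} C")
```

With these assumed values, forced air leaves the junction near 110 °C while liquid cooling holds it around 65 °C; as device power climbs, only the lower thermal resistance of a liquid path keeps junctions within their ratings.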
Recognizing that this conference is primarily focused on equipment and software, relatively few connector manufacturers chose to fund a booth. The exceptions included Molex, I-PEX, and Samtec. Amphenol ICC and TE Connectivity representatives showed products in the Ethernet Alliance booth, which featured the Ethernet roadmap to 400+ Gb/s. 3M, Mellanox Technologies, and Siemon were among the many high-bandwidth cable assembly manufacturers that demonstrated their latest copper and fiber interconnects.
The Molex booth featured their high-performance interfaces, including the Impel™ backplane connector family, as well as their QSFP-DD pluggable connectors.
I-PEX displayed their extensive line of miniature and microminiature board-to-board and coaxial connectors, with contact centerlines down to 0.35mm pitch.
Samtec featured their high-speed, high-density interfaces as well as their internal IC packaging capabilities. Their mid-board and panel optical interfaces, including the FireFly™ Micro Flyover™ system, were also a prominent part of the booth.
A high-end cable midplane system using the Strada Whisper backplane connector from TE Connectivity was on display at the Ethernet Alliance booth.
One of the greatest values of attending an SC conference is the opportunity to gain perspective from HPC industry leaders. Greg Walz, Advanced Technical Marketing Manager at Molex, said that 100+ Gb/s twinax channels will be achieved using PAM4 signaling, and that he is not giving up on the possibility of pushing copper well above these data rates. One of his current quests is to identify a low-cost technology to cool components when forced air is no longer up to the job.
A discussion with John D’Ambrosia, chairman of the Ethernet Alliance, revealed his concern about the unknown bandwidth demands that connected cars will create, the as-yet-unknown applications that could be deployed in this space, and how this demand will drive the entire ecosystem. It is imperative, he said, that the infrastructure and connected-car communities begin discussing these requirements now.
Advances in both hardware and software continue to bring the industry closer to its next major goal: exascale computing, or one quintillion (10^18) floating-point operations per second.
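Using the Top500 figures quoted earlier, the remaining distance to exascale can be estimated directly. This is illustration only, based on the numbers in this article:

```python
# Rough gap to exascale, using this article's figures (illustration only).
EXA = 10**18   # one exaflop = 10^18 floating-point operations per second
PETA = 10**15  # one petaflop

taihulight_flops = 93.01 * PETA  # current Top500 leader
speedup_needed = EXA / taihulight_flops

# An exascale machine must be roughly 10.8x faster than today's leader.
print(f"Exascale requires about {speedup_needed:.1f}x the current leader")
```

An order-of-magnitude jump over the fastest machine in the world illustrates why exascale remains a multi-year engineering goal rather than an incremental upgrade.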
SC18 will be held in Dallas, November 11-16, 2018.