Rugged FPGA I/O team likely to draft XMC/FMC
High-bandwidth I/O applications and FPGAs have made logical teammates for some time now. FPGAs connect directly to I/O devices, ensuring low latency, and their high-speed front-end DSP capability is another plus. When sizing up any mezzanine format for FPGA I/O, designers ask:
· Does the elbow room exist for needed functionality?
· Can it be cooled?
· Is bandwidth ample?
· How’s connectivity?
· Is latency low enough?
Where do the limits of PMC and XMC playing abilities lie?
A veteran player, the PMC card owes part of its success to its ability to incorporate speed and environmental specification improvements as they occur. The scouting report on the XMC module: it replaces PMC’s parallel PCI/PCI-X bus with a high-speed serial interface and a connector to match, with PCI Express the most common protocol it carries.
As applications push performance barriers, system designers increasingly look to FPGAs as a practical way of achieving the throughput they require, which is now often beyond the capabilities of PMC or XMC host interfaces. PCI Express or PCI-X latency is typically on the order of 1-2 µs, with bandwidths of just a few GB/s. XMC is sufficient for some of the more demanding applications, but not all. PCI, PCI-X, PCI Express, and Serial RapidIO evolved to address the needs of conventional computer systems built around CPUs. For some FPGA applications these interfaces can actually dilute the advantages of using FPGAs, which excel with parallel streaming dataflows.
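The scale of these limits can be sketched with some back-of-envelope arithmetic. The figures below are rounded illustrations (a PCIe Gen1 x8 link, typical of early XMCs, with its 8b/10b encoding overhead), not the specification of any particular card:

```python
# Illustrative host-interface arithmetic; rates are rounded approximations.

def pcie_bandwidth_gbps(lanes, gtps_per_lane=2.5, encoding=8 / 10):
    """Usable bandwidth in GB/s for a PCI Express link.
    Gen1 signals at 2.5 GT/s per lane with 8b/10b encoding."""
    return lanes * gtps_per_lane * encoding / 8  # bits -> bytes

# An x8 PCIe Gen1 link: 8 * 2.5 GT/s * 0.8 / 8 = 2.0 GB/s
print(f"x8 PCIe Gen1: {pcie_bandwidth_gbps(8):.1f} GB/s")

# At ~1-2 us of transaction latency and 2 GB/s, kilobytes of data are
# "in flight" before the first word arrives -- a poor fit for the
# short, deterministic parallel transfers FPGAs handle best.
for lat_us in (1.0, 2.0):
    in_flight_kb = 2.0e9 * lat_us * 1e-6 / 1024
    print(f"{lat_us} us latency -> ~{in_flight_kb:.0f} KB in flight")
```

The point of the sketch is the mismatch in kind, not just in number: a serialized, transaction-based bus imposes latency and protocol overhead that a directly wired parallel dataflow avoids entirely.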
Generous with I/O connections (Figure 1), FPGAs work well for high-bandwidth, high-resolution I/O. Designers find a large number of I/O connections particularly helpful in cases where high-speed memories must buffer the input or output data streams. These applications usually demand the largest FPGA packages with the highest available number of I/O pins. Typically measuring 35 mm x 35 mm or larger, these hefty FPGA devices can violate the PMC/XMC specification’s “no go” area across the middle of the module, where no components are allowed. This restricted area forms part of the card’s primary thermal interface (for conduction-cooled cards) and a mechanical fixing area to marry up with stiffening bars on the host. The result is that using large FPGA packages on PMC/XMC cards can encroach on the real estate where the designer would ideally want I/O devices placed.
FPGA Mezzanine Cards (FMC)
More recently the FMC mezzanine module has arrived. It is similar in height and width to a PMC, but around half the length. FMCs connect I/O directly to the FPGA device on the host board. This approach optimizes the interface between the I/O and the FPGA and also shrinks real estate, cost, latency, and power, while boosting bandwidth.
The FMC specification’s large number of differential connections, up to 80 differential pairs, makes it possible to support one or more high-speed parallel interfaces between the FPGA and I/O devices. It also defines a number of serial connections (up to 10 pairs) suitable for Multi-Gigabit Transceivers. Because the host detects which adjustable supply voltage (VADJ) the FMC requires, the FMC has a simplified power requirement that frees up valuable real estate for more I/O.
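To put those pin counts in perspective, a hedged sketch of the aggregate bandwidth the parallel user I/O can carry follows. The 600 Mb/s per-pair rate is a conservative assumption for illustration; achievable rates depend on the FPGA family and PCB routing:

```python
# Hedged sketch of FMC parallel-I/O bandwidth; per-pair rate is an
# assumption, not a figure from the FMC specification.

def lvds_bandwidth_gbytes(pairs, mbps_per_pair):
    """Aggregate bandwidth in GB/s across differential pairs."""
    return pairs * mbps_per_pair * 1e6 / 8 / 1e9

# 80 pairs at a conservative 600 Mb/s each:
print(f"{lvds_bandwidth_gbytes(80, 600):.1f} GB/s parallel")  # 6.0 GB/s
```

Even at this conservative rate, the parallel connections alone comfortably exceed what a typical PMC/XMC host bus can move, before counting the Multi-Gigabit Transceiver pairs.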
Though it measures only around half the PWB area of a PMC/XMC (Figure 2), an FMC can often provide greater I/O functionality, most notably for rugged applications. Consider a pair of actual designs using the same I/O devices for a rugged application: One uses an XMC format card, the other an FMC format card. Because the design requires a large FPGA, and because of the XMC “no go” restriction, the useful space in which to fit the I/O devices can end up being around a quarter of the overall real estate of the XMC.
In comparison, the smaller FMC offers a far greater area of I/O device real estate (Figure 3). In this example, the FMC was able to support two ADCs for two 3 GSps channels, while the XMC offered a single channel. An XMC using a smaller FPGA or less demanding memory may not be affected to such an extent, provided it still has a sufficient number of I/O connections to the devices.
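The raw data rate behind that two-channel example is easy to size. Assuming 8-bit samples for illustration (the actual converters may use a different resolution):

```python
# Rough data-rate sizing for a dual 3 GSps ADC front end.
# 8-bit samples are an assumption for illustration.
gsps = 3.0       # samples per second, in billions, per channel
bits = 8         # assumed sample width
channels = 2

gbytes_per_s = gsps * bits / 8 * channels
print(f"{gbytes_per_s:.1f} GB/s raw")  # 6.0 GB/s before any decimation
```

A raw stream of this size is exactly the case where routing samples straight into the FPGA fabric pays off, since it would saturate a typical mezzanine host bus before any processing took place.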
Cooling rugged FPGA-based XMC cards, on which power dissipation can frequently exceed 20 W or even reach 30 W, can be a significant challenge. Typical rugged air-cooled specifications for such cards define upper air-inlet temperatures of 70 °C and conduction-cooled cold walls of 85 °C. Making a mezzanine work within this environment is already difficult at much lower power levels. And host boards often have two XMC sites, compounding the cooling challenge.
When plugged onto a 3U host card, such as 3U VPX, the XMC’s size and orientation cause it to cover the majority of the host. This means that any hot devices on the host will be located beneath the XMC, positioning that seriously affects cooling. Making matters more difficult, the XMC mezzanine’s devices face the host rather than the outside, which places the heat-generating devices opposite those on the host and compounds the cooling problem. To cool the XMC, air must squeeze between the host and mezzanine through a gap that can have a very small cross-section, limiting the volume of air available to cool the assembly. Conduction cooling is less difficult, but all the heat-generating devices are nevertheless in one plane and may cause hot spots. In 6U designs the situation is not much better. While some of the 6U host’s real estate is not covered by the mezzanine card(s), the thermal paths to either the cooling air inlet or the cold wall interface are longer.
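The airflow arithmetic illustrates why that narrow gap matters. A minimal sketch, using approximate air properties and an assumed allowable air temperature rise:

```python
# Back-of-envelope airflow needed to carry away mezzanine heat.
# Constants are approximations for warm air; treat as a sketch only.
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)
RHO_AIR = 1.0     # density, kg/m^3, roughly, at elevated temperature

def cfm_required(power_w, delta_t_k):
    """Airflow in CFM to absorb power_w with a delta_t_k air temp rise."""
    mass_flow = power_w / (CP_AIR * delta_t_k)   # kg/s
    m3_per_s = mass_flow / RHO_AIR
    return m3_per_s * 2118.88                    # m^3/s -> CFM

# 30 W mezzanine, 10 K allowable air temperature rise:
print(f"{cfm_required(30, 10):.1f} CFM")  # ~6.3 CFM
```

Several CFM is modest in free air, but forcing it through the few millimeters between a mezzanine and its host, at a 70 °C inlet, is a different proposition, which is why the uncovered real estate of an FMC host is so valuable thermally.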
FMC-based designs make cooling simpler. With an FMC the mezzanine covers less of the host board. Appropriate FMC host design allows for suitable heatsinks to be implemented in the areas not restricted by mezzanine placement. The FMC includes I/O devices, but the FPGA is not on the FMC, making cooling easier, too (Figure 4). The FMC specification limits the power dissipation of a single-width module to 10 W.
Traditionally, conduction-cooled systems have limited front-panel space and handle all I/O through backplane connections. In addition, the shock and vibration requirements for such systems are often too rigorous for front-panel cable solutions to handle. FMCs, on the other hand, have had front-panel I/O from day one. To reconcile FMCs and conduction-cooled systems, Curtiss-Wright uses right-angle connectors and a specially designed strain relief bracket to secure the connectors and minimize vibration damage to delicate connectors. This approach has been implemented on the 3U FPE320 board and is designed into or planned for all future FMC-based products.
Interoperability may also need to be addressed in some FMC designs. The FPGA on the host defines the software and HDL, which means there is no real concept of a standard software driver. While FMCs from different vendors will fit together electrically and mechanically, differences in the host environments could lead to incompatibility.
And FMCs by definition require the use of an FPGA, which clearly limits their universal appeal compared to that of PMC or XMC. Relatively few FMCs and FMC host carriers exist, while PMC and XMC have built up an extensive portfolio of proven cards.
The choice of which mezzanine format, PMC, XMC, or FMC, is best for rugged embedded computing solutions will ultimately come down to design issues such as application details, perception of risk, development timeline, and personal preference. The baseline for choosing which mezzanine is most suitable for a given application is how it would do in a match with a monolithic board, that is, a single PWB with all functionality onboard. A monolithic card is usually the best technical option, free as it is of the restrictions that segmenting the design imposes, such as the limited number of connector I/O pins to the mezzanine.
The FMC characteristics described here indicate that FMCs promise to do for FPGA-based solutions what PMC and XMC did for embedded CPU-based systems. FMCs do not compete with PMCs or XMCs but rather complement them, particularly for high-bandwidth, low-latency applications. It is likely the open standards of XMC/FMC will address rugged FPGA I/O needs, with the monolithic approach serving as the safety net. Which one works best will depend on application specifics.
Robert Hoyecki is Vice President of Advanced Multi-Computing at Curtiss-Wright Controls Embedded Computing. Rob has 15 years of experience in embedded computing with a focus on signal processing products. He has held numerous leadership positions such as application engineering manager and product marketing manager. Rob earned a Bachelor of Science degree in Electrical Engineering Technology from Rochester Institute of Technology.
Rob can be reached at firstname.lastname@example.org.