Signal processing in FPGAs goes mainstream

Q&A with Jeffrey Milrod, President and CEO, BittWare

Jeff gives us further insight in this exclusive interview as to how signal processing using FPGAs has changed, and how customers are benefiting from the improvements.

PL: What’s different in 2008, from a couple years ago, in how people design a signal processing system with FPGAs?

JM: A few years ago, FPGAs were used mostly, but not exclusively, for communications, interface, and glue-type logic. Only the brave, lunatic, or desperate few implemented processing algorithms in the FPGA itself. Even then, these were mostly very simple, straightforward, and repetitive types of algorithms.

Today, things are very different and the pendulum has swung the other way. It now seems that everyone wants to use FPGAs as a signal processing resource, but clearly not everyone understands the implications of that. Implementation is still quite challenging and not for the faint of heart. While FPGA vendors and third-party tool suppliers have made great strides, there is still a gap between the high-level algorithm implementation and the hardware.

PL: I see FPGAs, DSPs, and even GPPs in your diagrams. Where are the boundaries in partitioning a system?

JM: There really are no strict boundaries other than those imposed by hard performance requirements. However, partitioning can have a huge impact on development time and effort.

Traditional DSPs are still much easier for implementing complex algorithms and algorithms that are likely to be frequently modified, since development can be done in higher-level languages like C. This also facilitates code reuse - many users already have a great deal of code working on DSPs, with no compelling reason to port it to FPGAs.

If code reuse is not an issue, and the algorithm is fairly well-defined, standard, or straightforward, FPGAs are very attractive since they can offer compelling performance advantages - both in terms of size and speed. Often, signal processing algorithms force the use of FPGAs because they can't be reasonably implemented in DSPs.

FPGAs provide some future-proofing - as they become bigger, faster, lower power, and easier to use, it’s likely that they will dominate signal processing. FPGA development investments today will be the code reuse of tomorrow.

As for GPPs, we only use them to do command and control processing. This can greatly ease the processing and development burden of the target DSPs and FPGAs, so that these valuable resources only do what they’re best at.

PL: What does your FPGA design toolset look like, and why are those pieces important?

JM: The generic block template in the toolset consists of a function implemented with a standard data interface for sourcing and sinking, along with a standard memory-mapped control interface. Many of the specific blocks provide board-level physical interfacing, but some provide data switching and routing functions, and others provide resource arbitration, DMA engines, control blocks, and utility functions.
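The block template described above - a processing function with a streaming source/sink data interface plus a memory-mapped control interface - can be roughly modeled in software. The sketch below is purely illustrative; the class, register offsets, and bit assignments are hypothetical and are not actual ATLANTiS identifiers.

```python
# Software model of a generic streaming block: data flows in through a
# sink, out through a source, and behavior is controlled via registers
# on a memory-mapped bus. All names and offsets here are illustrative.

class StreamBlock:
    """Sinks samples, applies a function, and sources the results."""

    REG_CONTROL = 0x00   # bit 0 = enable (hypothetical layout)
    REG_STATUS = 0x04    # count of samples processed

    def __init__(self, func):
        self.func = func
        self.registers = {self.REG_CONTROL: 0, self.REG_STATUS: 0}

    def write_reg(self, offset, value):
        """Memory-mapped control interface: register write."""
        self.registers[offset] = value

    def read_reg(self, offset):
        """Memory-mapped control interface: register read."""
        return self.registers[offset]

    def process(self, samples):
        """Sink a burst of samples and source the processed results."""
        if not (self.registers[self.REG_CONTROL] & 0x1):
            return []                      # block is disabled
        out = [self.func(s) for s in samples]
        self.registers[self.REG_STATUS] += len(out)
        return out


# Example: a block that scales each sample, enabled via the control plane.
blk = StreamBlock(lambda s: 2 * s)
blk.write_reg(StreamBlock.REG_CONTROL, 0x1)   # set the enable bit
result = blk.process([1, 2, 3])               # -> [2, 4, 6]
```

The key point the model captures is the separation of planes: data moves through a uniform streaming interface, while configuration and status live on an orthogonal register bus - which is what lets such blocks be composed into a framework.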

These pieces are not overly exciting in and of themselves; the real value lies in the fact that they are constructed to be building blocks in a framework - BittWare’s ATLANTiS - allowing users to more quickly and easily get their special algorithms and applications up and running in the FPGA on a real board.

PL: What makes the ATLANTiS framework special?

JM: In ATLANTiS, we're using standard interconnects and an orthogonal control plane, and we've done all the low-level physical interfaces and data movement structures that are standard in microprocessors or DSPs. (Figure 1.)

Figure 1

Rather than starting with a blank slate, as it were, and having to deal with all the low-level interfacing, data movement, and control, our ATLANTiS modular framework enables users to focus on adding their unique value to the FPGA - much like one would focus on writing algorithmic code and setting up memory structures in a processor without worrying about creating a data transport layer, DRAM controller, or arbitrating for resources.

PL: How are people succeeding at shrinking their systems with FPGAs? Please give us an example.

JM: We have seen many examples of this in military, instrumentation, and communication applications where an FPGA is used to implement an algorithm that is particularly difficult to do in standard DSPs or GPPs.

One specific example of this is a high-end cytometry (cell-sorting) instrument from iCyt. Their flow cytometer uses a laser to detect and sort the cells, with optical sensors sampled at over 100 MHz. Using DSPs to detect the cells required several DSPs to perform a weighted threshold on every sample from every sensor. We consulted with them and helped implement the cell detection algorithm in a pre-processing FPGA, requiring DSPs only for the cell analysis - cutting the number of DSP boards in half.
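The weighted-threshold pre-processing described above can be sketched in a few lines: each sensor channel is weighted, the weighted sum is compared against a threshold, and only the sample instants flagged as candidate cell events are passed downstream for analysis. The function, weights, and data below are hypothetical - a minimal illustration of the idea, not iCyt's actual algorithm.

```python
# Sketch of weighted-threshold event detection: the pre-processing stage
# (the FPGA's job in the example above) reduces a high-rate sample stream
# to the small set of candidate events the DSPs must actually analyze.

def detect_events(samples, weights, threshold):
    """Return (index, sample) pairs whose weighted sum crosses threshold.

    samples: per-instant tuples of sensor-channel readings.
    weights: one weight per sensor channel.
    """
    events = []
    for i, channels in enumerate(samples):
        score = sum(w * c for w, c in zip(weights, channels))
        if score >= threshold:
            events.append((i, channels))
    return events


# Three sensor channels; only the second sample instant crosses threshold
# (0.9*1.0 + 1.1*2.0 + 0.8*1.0 = 3.9 >= 3.0).
samples = [(0.1, 0.2, 0.1), (0.9, 1.1, 0.8), (0.2, 0.1, 0.3)]
weights = (1.0, 2.0, 1.0)
events = detect_events(samples, weights, threshold=3.0)
# events -> [(1, (0.9, 1.1, 0.8))]
```

Because every channel must be checked on every sample at over 100 MHz, this inner loop maps naturally onto FPGA parallelism, while the much lower-rate event stream it emits is a comfortable workload for DSPs.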

PL: What does the next wave look like, and where can competitive advantage be gained?

JM: The next generation of FPGAs that we’ve seen through our partnership with Altera is simply amazing. The speed, feature set, and capabilities are astounding, and the power is lower than I’d hoped. The challenge is in harnessing all that tremendous signal processing potential in a practical, timely, and cost-effective way.

Competitive advantage will be gained by those who facilitate code reuse and can reduce the effort required to get real signals in and processed in the FPGA. There is a great deal of work being done to improve the algorithmic implementation process, but often the harder part is integrating the algorithms into the real world of signal processing.

Jeffrey Milrod is BittWare’s president and CEO. He holds a bachelor’s degree in physics from the University of Maryland and an MSEE from Johns Hopkins University.