Stretching the boundaries for FPGAs and people

Q&A with Jeff Kodosky, Co-Founder and Technology Fellow, National Instruments

Jeff K, as he’s known in the LabVIEW community, has a unique vision for how people should program and who should program. He shares some exclusive thoughts with us on where he sees NI’s technology today and in the future.

PL: We’ve heard from Dr. Truchard about graphical system design. How else has the FPGA capability of NI CompactRIO and LabVIEW changed the way people think about designs?

JK: The combination of an intuitive graphical programming language and a heterogeneous hardware platform (MPU + FPGA) has really changed not only how people think about design but also who is thinking about design.

Hardware and software design engineers see this tightly integrated, embedded platform as a solution to their time-to-market pressures. Regardless of their deployment platforms, these experts use NI LabVIEW software and CompactRIO hardware to prototype and iterate on their designs and algorithms in a fraction of the time imaginable in the past.

But even more exciting to me is how this marriage of graphical programming and the latest programmable logic technology is changing the "who" of the design world. Now domain experts - the physicists, robotics engineers, mechatronics and mechanical engineers, and scientists - can actually implement their innovative ideas in hardware themselves, instead of hiring an embedded design expert or giving up on seeing their concepts come to fruition.

PL: Where are the boundaries today, and where would you like them to be soon, in choosing to implement on an FPGA or a DSP?

JK: The characteristics of an application typically suggest which alternative is more appropriate. If the application is computation-bound and involves a great deal of complicated mathematical computation, a DSP may be the best implementation target. An FPGA works better if the application is highly parallel or pipelined, or logic-bound with many bit-level operations - for example, when implementing digital protocols. An FPGA also works better if the application uses custom I/O requiring high-speed timing and triggering logic or special "glue" logic. Finally, an FPGA may be the best choice in some safety-critical situations because the safety-critical portion can run in parallel without interference from anything else; there is no operating system, device driver, critical section, or interrupt to delay or interfere with the execution of the safety-critical actions.
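
To make the bit-level case concrete, the sketch below is a minimal illustration in C - our own hypothetical model, not NI code, with invented names such as serializer_t and clock_tick. Each call to clock_tick() models one clock cycle of a simple UART-style serializer, the kind of logic-bound, per-cycle protocol work that maps naturally onto an FPGA (in LabVIEW FPGA it would live in a single-cycle loop).

    /* Hypothetical sketch: per-clock-cycle, bit-level logic of the
     * sort that favors an FPGA over a DSP. One call to clock_tick()
     * models one clock cycle of a simple serializer that shifts a
     * byte out LSB-first, framed by a start bit and a stop bit. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint16_t shift;     /* start bit + 8 data bits + stop bit */
        int      bits_left;
        int      tx;        /* current line level, idle high */
    } serializer_t;

    static void load_byte(serializer_t *s, uint8_t byte) {
        /* frame: stop bit (1) above the data, start bit (0) below */
        s->shift = (uint16_t)((1u << 9) | ((uint16_t)byte << 1));
        s->bits_left = 10;
    }

    static void clock_tick(serializer_t *s) {
        if (s->bits_left > 0) {
            s->tx = s->shift & 1u;  /* drive the line from the LSB */
            s->shift >>= 1;
            s->bits_left--;
        } else {
            s->tx = 1;              /* idle high between frames */
        }
    }

    int main(void) {
        serializer_t s = { .tx = 1 };
        load_byte(&s, 0x5A);
        for (int cycle = 0; cycle < 12; cycle++) {
            clock_tick(&s);
            printf("cycle %2d: tx=%d\n", cycle, s.tx);
        }
        return 0;
    }

The point is the shape of the logic: a handful of registers updated on every clock edge, with no operating system or interrupt in the path.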

In the long term, I would like to see a convergence between FPGA and DSP targets where highly optimized ALU components are generously sprinkled throughout the FPGA interconnect and logic fabric. Such a hybrid target would be ideal for complex embedded applications that combine lots of parallel computation with high-speed timing and triggering.

PL: At NIWeek 2007, OEM board versions of CompactRIO were shown - give us an example of how that is impacting fielded applications.

JK: The ability to move down what we call the RIO deployment curve is very exciting. Engineers and scientists can use powerful platforms like PCs, PXI, and plug-in FPGA devices for rapid prototyping. Then they can easily move to smaller, rugged systems like CompactRIO, and even integrated systems combining a real-time processor and reconfigurable FPGA within the same chassis.

Sanarus, a medical device start-up company, has developed plans for a potentially revolutionary product that could change the way doctors treat benign tumors. With this device, based on LabVIEW and CompactRIO, doctors can eliminate tumors by freezing and killing them in an outpatient procedure, a dramatic change from in-patient surgery or the "wait and see" approach used previously.

The Visica2 Treatment System is an instrument for use in a doctor’s office or clinic. The procedure, performed under local anesthesia with a real-time, ultrasound-guided probe, is virtually painless. The treatment, which lasts 10 to 20 minutes, freezes and destroys targeted tissue through an incision so small that it does not require stitches.

Because this device will be used in offices and clinics around the country, the machine has to be cost-effective. The lower-cost hardware options from NI, still based on the unique reconfigurable I/O (RIO) architecture, enable Sanarus to meet its volume, cost, and technology challenges.

PL: Also at NIWeek 2007, a vision was outlined to get LabVIEW on a chip. Where is the vision going, and what is it going to take to get there?

JK: It has always been my goal to ensure LabVIEW can reach the lowest levels of hardware and the smallest and lowest-cost programmable targets. This goal is captured nicely by the phrase "LabVIEW on a chip."

In one sense, LabVIEW FPGA already achieves this goal. If an ASIC were made from the FPGA design, a LabVIEW application would truly be on a chip, but we haven’t yet seen an application with high enough volume to justify making an ASIC.

A related interesting topic is whether LabVIEW on a processor could be accelerated by having some code in the run-time engine actually programmed into an FPGA sitting alongside the processor, or maybe directly in the firmware of the processor itself. This is something we continue to brainstorm, and we are beginning to talk about it informally with some processor vendors. We are also developing relationships with several universities to explore these and related ideas.

Another way to think about LabVIEW on a chip is to envision a processor chip that runs LabVIEW "natively," where software modules (virtual instruments) would be loaded and run analogously to the way a typical operating system loads and runs applications. This could be done with a typical von Neumann architecture processor today, but it is interesting to envision what the ideal underlying hardware might be to fully realize the potential of the LabVIEW parallel language.
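
A rough sense of what running LabVIEW "natively" would mean comes from the dataflow firing rule at the heart of the language: a node executes as soon as all of its inputs have arrived, so independent nodes are free to run in parallel. The C sketch below is our own minimal model of that rule, not NI’s run-time engine, scheduling a tiny diagram that computes (a + b) * (a - b).

    /* Minimal model of dataflow firing: a node runs once all of its
     * input values are ready. The "diagram" here is three nodes
     * computing (a + b) * (a - b) from inputs a and b. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        int in[2];            /* indices of the source value slots */
        int out;              /* index of the destination slot */
        int (*op)(int, int);
        bool fired;
    } node_t;

    static int add(int x, int y) { return x + y; }
    static int sub(int x, int y) { return x - y; }
    static int mul(int x, int y) { return x * y; }

    int main(void) {
        /* value slots: 0=a, 1=b, 2=a+b, 3=a-b, 4=result */
        int  val[5]   = { 7, 3, 0, 0, 0 };
        bool ready[5] = { true, true, false, false, false };

        node_t nodes[] = {
            { "add", {0, 1}, 2, add, false },
            { "sub", {0, 1}, 3, sub, false },
            { "mul", {2, 3}, 4, mul, false },
        };
        int n = sizeof nodes / sizeof nodes[0];

        bool progress = true;
        while (progress) {              /* sweep until nothing fires */
            progress = false;
            for (int i = 0; i < n; i++) {
                node_t *nd = &nodes[i];
                if (!nd->fired && ready[nd->in[0]] && ready[nd->in[1]]) {
                    val[nd->out] = nd->op(val[nd->in[0]], val[nd->in[1]]);
                    ready[nd->out] = true;
                    nd->fired = true;
                    progress = true;
                    printf("fired %s -> %d\n", nd->name, val[nd->out]);
                }
            }
        }
        return 0;   /* val[4] == (7+3)*(7-3) == 40 */
    }

Because the add and sub nodes have no data dependence on each other, ideal hardware could fire them simultaneously; the sequential sweep here is just the simplest software stand-in.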

In my view, the ideal hardware would be a "super" FPGA containing lots of optimized embedded components sprinkled throughout the interconnect and logic fabric, such as integer and floating-point MACs and ALUs, FIFOs, blocks of memory, and so on. This super FPGA would consist of a large number of separately reconfigurable regions where the configuration memory itself would be multi-buffered so that the contents of a region could be changed in a single clock cycle. I think LabVIEW would be an excellent tool to program such an architecture, and, who knows, maybe something like this will eventually be developed.
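
The single-cycle reconfiguration idea can be modeled in miniature. The C sketch below is purely speculative, following the multi-buffered configuration memory described above: new configuration bits stream into a shadow bank while the region keeps running, and a single swap makes them active.

    /* Speculative sketch of double-buffered configuration memory for
     * one reconfigurable region. Writes go to the shadow bank while
     * the fabric runs from the active bank; swap() models switching
     * configurations in a single clock cycle. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define CONFIG_WORDS 4   /* toy size; real bitstreams are far larger */

    typedef struct {
        uint32_t bank[2][CONFIG_WORDS];
        int active;          /* which bank the fabric currently uses */
    } region_t;

    /* Stream a new configuration into the inactive (shadow) bank. */
    static void write_shadow(region_t *r, const uint32_t *cfg) {
        memcpy(r->bank[1 - r->active], cfg, sizeof r->bank[0]);
    }

    /* The "one clock cycle" step: flip which bank drives the fabric. */
    static void swap(region_t *r) {
        r->active = 1 - r->active;
    }

    int main(void) {
        region_t r = { .active = 0 };
        uint32_t cfg_a[CONFIG_WORDS] = { 0x1, 0x2, 0x3, 0x4 };
        uint32_t cfg_b[CONFIG_WORDS] = { 0xA, 0xB, 0xC, 0xD };

        write_shadow(&r, cfg_a); swap(&r);   /* region now runs cfg_a */
        write_shadow(&r, cfg_b);             /* load cfg_b in background */
        swap(&r);                            /* instant switch to cfg_b */

        printf("active word 0: 0x%X\n", r.bank[r.active][0]);  /* 0xA */
        return 0;
    }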

PL: What should designers be focused on doing differently now to get an advantage in their next FPGA design?

JK: To get the most out of their next multicore or FPGA applications, designers should expose more of the parallelism by drawing more computations in parallel and pipelining computations in a loop. This is already fairly easy and safe to do in LabVIEW, and it will be increasingly important in achieving the highest performance on an FPGA when using newer, higher-speed I/O modules.
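
As an illustration of that advice, here is a small C model - our own example, not LabVIEW code - of a two-stage pipelined loop. The variable reg plays the role of a LabVIEW shift register between pipeline stages, so stage 1 of sample i runs alongside stage 2 of sample i-1; on an FPGA both stages would execute in the same clock cycle.

    /* Software model of pipelining a computation inside a loop.
     * Each iteration models one clock cycle: stage 1 processes the
     * current sample while stage 2 finishes the previous one, with
     * reg acting as the pipeline register between them. */
    #include <stdio.h>

    #define N 8

    static int stage1(int x) { return x * x; }      /* e.g., square */
    static int stage2(int x) { return x + 100; }    /* e.g., offset */

    int main(void) {
        int input[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        int reg = 0;        /* pipeline register between the stages */
        int output[N];

        /* One extra iteration flushes the last value out of the pipe. */
        for (int i = 0; i <= N; i++) {
            int s1 = (i < N) ? stage1(input[i]) : 0;  /* stage 1, sample i   */
            if (i > 0) output[i - 1] = stage2(reg);   /* stage 2, sample i-1 */
            reg = s1;                                 /* clock the register  */
        }

        for (int i = 0; i < N; i++)
            printf("out[%d] = %d\n", i, output[i]);
        return 0;
    }

The payoff is throughput: once the pipe fills, one result emerges per iteration (per clock cycle, in hardware) even though each sample passes through two stages.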

Jeff Kodosky cofounded National Instruments with Dr. James Truchard and William Nowlin in 1976. Known as the "father of LabVIEW," he invented the graphical programming language that defines the software. Kodosky was named an NI business and technology fellow in 2000. He received his bachelor’s degree in physics from Rensselaer Polytechnic Institute in Troy, NY.

National Instruments
512-683-0100
www.ni.com