FPGAs drive a wedge into embedded market fragmentation

An exclusive interview with Nallatech's Craig Anderson, CEO

Editor's Note: Nallatech is one of those quiet technology companies whose “still waters run deep.” While the company has morphed from a seemingly technology-focused business into a more product-focused enterprise, there's a lot of quiet intellectual property under the hood of its reconfigurable FPGA platforms.

Although technologists started the company, Nallatech's CEO Craig Anderson is on a no-nonsense path toward deployed military and High-Performance Embedded Computing/High-Performance Computing (HPEC/HPC) systems.

– Chris Ciufo, Editor

DSP: I visited Nallatech in Scotland three years ago, and at that time, the company was closely tied to the Scottish government, Xilinx, and academia. Are these relationships still pertinent?

ANDERSON: Our strategic focus has remained consistent for the last several years – designing, developing, and supplying FPGA-based high-performance computing solutions for the U.S. defense and aerospace sector, and now also for the financial services sector. All these markets benefit from collaborative relationships.

To answer your question, our business relationships in these three areas remain extremely important to us, and we have continued to develop and strengthen these links over the last few years. We continue to receive significant support from the Scottish government and its economic development arm, Scottish Enterprise, in the form of business development, training, and R&D grant funding. We are one of Xilinx's top-tier partners and will be first to market later this year with application accelerator modules based on Xilinx's latest Virtex-5 devices. We are also doing more work with universities and research labs than ever before and benefit greatly from the IP that is jointly created.

Over the last few years, we have built on this strong foundation by broadening our partnerships and alliances. We are part of a key industry alliance, FPGA High Performance Computing Alliance (FHPCA), whose founders include Nallatech, Xilinx, iSLI [Institute for System Level Integration], and Scottish Enterprise. We now have strategic relationships with Intel and National Semiconductor and will continue to add to this list where partners can help us bring compelling technology benefits to our customers.

DSP: What about reconfigurable computing – is it or will it become mainstream?

ANDERSON: I see reconfigurable computing as the ability to rearrange a “bucket full” of computing components into a specific computing architecture ideally suited to a specific algorithmic problem. In effect, you are continually “tuning” the computing architecture as the required computing tasks are presented to the computer. The benefits of FPGAs are that they can morph to just about any function with any type of connectivity between processing elements to match the desired algorithm layout exactly. This lends itself to an optimum data flow and resulting power-saving attributes.
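To make that “tuning” idea concrete, here is a minimal sketch, in Python purely for illustration (FPGA fabrics are of course described in HDLs such as VHDL, not Python), of an algorithm expressed as chained streaming stages the way a dataflow architecture lays it out, rather than as sequential passes over memory:

    # Illustrative sketch: an algorithm expressed as chained streaming
    # stages, mirroring how an FPGA dataflow pipeline is laid out.
    # Each stage consumes one sample and passes its result downstream,
    # so no intermediate buffers sit between the processing elements.

    def scale(samples, gain):
        for s in samples:
            yield s * gain                    # stage 1: multiply

    def clip(samples, limit):
        for s in samples:
            yield max(-limit, min(limit, s))  # stage 2: saturate

    def accumulate(samples):
        total = 0.0
        for s in samples:
            total += s                        # stage 3: running sum
        return total

    raw = range(1000)                         # stand-in for a sensor stream
    print(accumulate(clip(scale(raw, 0.5), 100.0)))

On a conventional processor each stage would be a separate pass over memory; in an FPGA the stages become adjacent logic with direct wiring between them, which is the source of the data-flow and power advantages described above.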

So will this become mainstream? Well, I believe that it has to. The main reason is that every school of thought in the computing industry is now aligned with the concept that the world is going parallel and we are going to be inexorably drawn to “many core” type solutions. As core counts rise into the hundreds and thousands on silicon, we will effectively be building distributed compute architectures on them. A key question will be whether these cores are lots of little von Neumann-like processors similar to multicore, or lots of clumps of morphable logic like FPGA CLBs [Configurable Logic Blocks] today.

The challenge for both of these technologies is the programming model. One thing that's for sure is today's programming model doesn't work for either.

For FPGAs to make successful reconfigurable computers, a key requirement will also be their ability to dynamically reconfigure as a program executes. This has seemed on the cusp of commercial viability for more than a decade, but has never quite made it out of the labs. Perhaps the reality is that dynamic partial reconfiguration has never made commercial sense in the past. However, as devices have gotten larger, there are many more cases where it makes sense.

For example, there is no need to have a separate device with a PCIe interface onboard; you can roll this into one FPGA. At the same time, it makes sense to keep the PCIe interface live whilst changing the functionality of the design behind it. I believe that in 2008/2009 we will see this becoming commercially viable, with the FPGA vendors taking partial reconfiguration seriously and treating it as an area of differentiation.
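As a sketch of that task-switching flow, consider the following Python pseudo-driver. Every name in it is hypothetical, invented for illustration rather than taken from any vendor API; real designs use device-specific configuration ports such as Xilinx's ICAP. The point is the sequence: the static region holding the PCIe endpoint stays live while only the reconfigurable partition is rewritten.

    # Hypothetical sketch of dynamic partial reconfiguration.
    # All names below are illustrative, not a real vendor API.

    class FpgaDevice:
        """Stand-in for a driver handle to an FPGA card."""
        def quiesce_partition(self, name):
            print("halting traffic into", name)
        def load_partial(self, bitstream):
            print("streaming", bitstream, "into the configuration port")
        def release_partition(self, name):
            print("resuming", name, "with its new logic")

    def task_switch(device, partial_bitstream):
        # The static region (PCIe endpoint, memory controllers) keeps
        # running; only the reconfigurable partition is rewritten.
        device.quiesce_partition("user_logic")
        device.load_partial(partial_bitstream)
        device.release_partition("user_logic")

    fpga = FpgaDevice()
    task_switch(fpga, "fft_core.bit")   # swap in an FFT accelerator
    task_switch(fpga, "aes_core.bit")   # then an AES core, same live PCIe link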

If FPGA vendors want to play in the future world of reconfigurable computing, they need to address how users can easily task switch functionality in and out of the FPGAs. The current fragmentation of the computing industry presents a huge market opportunity, and these vendors either need to grab it with both hands or let other technologies take center stage in tomorrow's computing architectures.

DSP: Let's talk specifically about the military for a moment. What are some of the key defense programs you're most interested in, and why?

ANDERSON: For confidentiality purposes, I would rather focus on the types of programs and applications we are participating in within the defense market. Throughout the defense market, we are seeing a requirement to take existing systems that have been field-deployed for many years and provide a technology refresh that enhances their processing capabilities while addressing SWAP [Size, Weight, and Power] considerations. This allows additional capabilities and functionality to be deployed within the same airframe, ground vehicle, or vessel. With the advent and success of UAV platforms, for example, the SWAP requirement has been receiving an even higher level of visibility and consideration.

In terms of data collection and monitoring, there seems to be an insatiable appetite for data acquisition, such as SIGINT [Signals Intelligence], ELINT [Electronic Intelligence], COMINT [Communications Intelligence], and so on, and the ability to process this data as close to the sensor as possible prior to transmission for additional processing, storage, or display purposes.

Lastly, video processing has been an area of increasing focus for our customer base. Real-time UAV guidance, comparison of digital mapping data, and facial recognition capabilities are all examples of defense applications in which we are actively engaged.

DSP: What standards are you following for your deployable systems?

ANDERSON: The technical requirements for these systems are as varied as the applications they address, but the ability to develop and deploy on the same architecture is a tremendous benefit that reduces time to market, a key capability required to compete in today's market. Our forthcoming deployable product offerings are based on industry standards such as VITA 41, VITA 42, VITA 46, and PCI-104, and we are leading the industry effort to ratify VITA 57, an interface standard optimized for interfacing to FPGAs. Our new COTS products, to be introduced in fall 2007, will address these requirements.

DSP: What do you think about the differences between FPGA chips, hardware implementations, vendors, and the software used to program them?

ANDERSON: Wow, now that's one that is very close to my heart and by far the biggest nut to crack. FPGA vendors, hardware implementations, and the software needed to program them MUST all be aligned to create and attract a flourishing market. Put simply, we need to build interface standardization between each of the layers, which is something we have been championing for many years. We support organizations like OpenFPGA, CHREC [National Science Foundation Center for High-Performance Reconfigurable Computing], and FHPCA that are trying to coordinate standardization from subtly differing perspectives. Nallatech is one of only two companies that have all the pieces of the pie, apart from the FPGA chips themselves.

To address this problem, we've embraced an open architecture strategy. Here, we allow our customers and other vendors (that may be viewed as competitive) to interface to our tools and hardware at any level. We see this as particularly important from the compiler level and above, which is why we have had long-standing partnerships with companies like Mitrion and Impulse Accelerated Technologies. All our tool flows support VHDL as their output format, and we always provide the architectural details of our hardware so engineers can make our products “sing” with their own hand-coded VHDL if they so wish.

Standardization will come in the end, even if it's via the de facto route. However, a large investment in standardization efforts such as OpenFPGA is probably the best approach. Perhaps Xilinx and Altera should make this investment together and metaphorically “join hands” on this one, because they would be jointly fighting for a massively larger market than the one they compete in today.

DSP: What are the top three embedded technologies you're seeing in the market today?

ANDERSON: In the Signals Intelligence [SIGINT] space, the ability to sample analog signals of interest and convert them to the digital domain is in great demand. We're seeing huge interest in the latest signal acquisition technologies, with a particular focus on high-resolution 16-bit A/Ds sampling at hundreds of MSPS, and on very high-speed converters – over 2 GSPS – with 8- to 10-bit resolution.
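The raw data rates behind those converter figures are worth a quick back-of-the-envelope check; the short Python calculation below takes 500 MSPS as a representative point in the “hundreds of MSPS” range quoted above:

    # Raw output rate of a single A/D channel, in GB/s.
    def adc_rate_gbytes(bits, msps):
        return bits * msps * 1e6 / 8 / 1e9

    print(adc_rate_gbytes(16, 500))    # 16-bit at 500 MSPS -> 1.0 GB/s
    print(adc_rate_gbytes(10, 2000))   # 10-bit at 2 GSPS   -> 2.5 GB/s

Per-channel rates on this order quickly outstrip practical transmission bandwidth, which is why the processing close to the sensor mentioned earlier matters so much.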

High-speed interconnect fabric is still being debated, but PCI Express and RapidIO are emerging as the most popular standards and are driving growth with increased features and capabilities that address many system design challenges, especially in SIGINT and radar applications.

Finally, the latest generations of board standards are beginning to get traction in the market. Acceptance of VXS, VPX, and XMC is growing, along with blade solutions, such as IBM's BladeCenter blades (in applications where ruggedization isn't an issue). In fact, Nallatech is already shipping BladeCenter solutions to customers in the military sector.

DSP: What are your customers' biggest challenges? Are they solvable, and if so, how?

ANDERSON: Time to market always seems to be at the top of this list. The shortage of available, qualified design resources to apply to a job in a timely manner has become a major concern in recent years as well. Lastly, guarding their systems against obsolescence in the face of long product life cycles is a challenge all defense contractors face.

From an FPGA perspective, the biggest challenges remain in the programmability and maintainability aspects of FPGA technology, and this is even more so the case for multicore. Tool flows such as Nallatech's have matured considerably over the last few years to address these challenges. As a result, FPGA computing is continually becoming accessible to a wider market.

DSP: Nallatech seems to be taking a new direction in its collaboration with Intel on FPGA socket filler products. How does this align with its embedded products, if at all?

ANDERSON: Interestingly, the worlds of HPEC and HPC are merging, specifically with the typically real-time, deterministic characteristics of the HPEC world becoming more and more prevalent in the HPC world. The new buzz phrase is triple play, standing for voice, video, and data. Voice and video have the obvious connotations of real time and determinism, but more importantly, data also must exhibit these characteristics if we are to achieve optimal, efficient parallel computing implementations.

While the appeal of a Xeon processor socket filler is very attractive from an algorithm acceleration perspective with FPGAs, the added bonus of a “back door” into the better determinism and lower latency of the FSB [Front Side Bus] interface, compared with a standard, nondeterministic, OS-controlled interface such as PCIe, is extremely appealing to the data-centric computing community. This is where Nallatech sees the greatest value in making FSB access available.

Therefore, Nallatech has focused its FSB products on supporting high-bandwidth, back-door I/O that provides access to our overall data-centric product portfolio. This approach has garnered a significant amount of interest from our existing customer base, as well as from a new range of customers looking for that extra level of performance and integration.

DSP: Nallatech has been offering FPGA system design tools for a number of years. Is the failure of these tools to break into the mainstream an indicator of customer indifference to such tools?

ANDERSON: We have seen significant growth in demand and acceptance of productivity enhancement tools for FPGA systems. This is driven by the need to accelerate system development times and reduce costs. Early adopters of these design tools are now beginning to reap the benefits they can provide. In a recent example, a major defense subcontractor trimmed an anticipated 18-month IR&D program down to six months using our software tools. They are now advocating the use of this tool flow internally in their organization across multiple facilities. This type of evidence is dramatically changing market perception of such tools and pushing them toward mainstream acceptance.

DSP: Accelerators seem to be back in vogue after they came and went in the '80s. Are accelerators just another passing fad, or are they here to stay this time?

ANDERSON: DARPA [the U.S. Defense Advanced Research Projects Agency] described the ideal processor as a polymorphic processor in the late '90s. They depicted the different data types required for an ideal processor and showed that FPGAs, DSPs, and microprocessors could coexist to deliver a heterogeneous equivalent to an ideal polymorphic processor. Since then, many flavors of reconfigurable processor that fill different sections of DARPA's polymorphic processing graph have been presented to the community.

However, the reality is that the FPGA has moved from high-performance bit-level processing to overtake the DSP at stream processing. Likewise, the processor is moving toward stream processing as a result of its multicore implementation.

So it would appear that certain esoteric accelerators will indeed be squeezed out by either the FPGA or multicore “crush.” However, the question still remains as to whether the FPGA will eventually be squeezed out as an efficient accelerator. I truly believe that the FPGA will always have its place as an accelerator and I/O manipulator. On the other hand, should we also be questioning the viability of von Neumann architectures in an increasingly parallel computing world? In a bizarre twist of fate, the FPGA or a derivative thereof could end up being the mainstream parallel processor of tomorrow, making the accelerator tomorrow's processor rather than the processor evolving to kill the accelerator, as history has so far demonstrated.

Craig Anderson is CEO of Nallatech in Glasgow, Scotland. Before joining Nallatech, Craig turned an underperforming business that lacked investor support into the fastest-growing health care IT company in Europe. Previously, Craig held group financial responsibility at Kwik-Fit, Europe's largest automotive repair specialist with an annual turnover of $1.4 billion. Prior to that, Craig was with Arthur Andersen for nine years, where he spent considerable time advising high-growth companies. He is a qualified Chartered Accountant.

Nallatech, Ltd.
1-877-44-NALLA
c.anderson@nallatech.com
www.nallatech.com