DSP and FPGA suppliers vie for growing embedded vision market

DSP and FPGA vendors are giving embedded vision designers a boost in developing next-gen vision systems, thanks to advancements in image processors.

Embedded vision systems for markets ranging from industrial automation to transportation, surveillance, and defense are set to improve accuracy and safety in critical situations. Vision systems able to interact with the world around them have traditionally been dismissed as science fiction, but advancements in embedded image processors have DSP and FPGA vendors developing platforms with new capabilities for a wide range of applications, giving vision system designers the tools to create systems that can truly “see and understand.”

Image processing – the use of computers to execute sophisticated algorithms on still images or video – has been a specialized branch of Digital Signal Processing (DSP) for many years, with published scholarly work from the IEEE spanning more than two decades. Now, with advances in processor and sensor technologies, the industry is on the verge of seeing image processing becoming pervasive in people’s daily lives: in homes and automobiles, mobile phone applications, and in shopping centers and workplaces.

To help foster the development of products and solutions for embedded image processing applications, 15 companies formed the Embedded Vision Alliance (EVA) in May 2011. The alliance was initiated by Berkeley Design Technology, Inc. (BDTI) – well known for its DSP benchmarking methodologies – FPGA vendor Xilinx, and market analysis firm IMS Research. The alliance has now grown to 20 members, ranging from suppliers of general-purpose processors, DSPs, and Graphics Processing Units (GPUs), to software and design tool vendors, and developers of Intellectual Property (IP) for image processing applications.

During the DESIGN West show in San Jose in March, EVA members came together to discuss the opportunities and challenges of embedded image processing, and to demonstrate their products for the press and industry analysts. Brian Dipert, Senior Analyst at BDTI and Editor-In-Chief of the EVA website, says that the alliance sees embedded vision as being distinct from traditional multimedia and image processing applications; unlike graphics or video processors, embedded vision systems process images in order to “see and understand.” These embedded vision systems have the potential to address many important application areas, and DSP and FPGA vendors are creating technologies to turn those possibilities into reality.

From factory automation to embedded vision systems that can save lives

One of the long-established markets for image processing has been in manufacturing automation and inspection, often referred to as machine vision. Cameras and sensors can be used to monitor the orientation of components to ensure proper assembly or to detect defects. Vision systems are also commonplace for video surveillance, which in the past relied on human monitoring of live or recorded images. With more sophisticated analytics capability in embedded vision systems, objects and people are not only detected and tracked automatically, but they can also be identified.
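As a simple illustration of the most basic building block behind such analytics, consider frame differencing, which flags pixels that changed between two video frames. This is a hedged sketch in Python, not code from any vendor mentioned here; real surveillance systems layer background modeling, tracking, and classification on top of primitives like this.

```python
# Minimal sketch of motion detection by frame differencing, the simplest
# building block of automated video analytics. Frames are 2D lists of
# grayscale pixel values; real systems work on hardware pixel streams.

def motion_mask(prev_frame, curr_frame, level):
    """Mark pixels whose absolute change between frames exceeds `level`."""
    return [
        [1 if abs(c - p) > level else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

# One pixel brightens sharply between frames; only it is flagged.
prev = [[0, 0, 0], [0, 0, 0]]
curr = [[0, 50, 0], [0, 0, 0]]
print(motion_mask(prev, curr, 10))  # [[0, 1, 0], [0, 0, 0]]
```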

The U.S. government’s Defense Advanced Research Projects Agency (DARPA) is working on a program called “Mind’s Eye,” which they hope will take automated visual intelligence even further – to using computers to verbalize descriptions of the scenes that are being monitored. In his keynote presentation at the EVA meeting, James Donlon, DARPA Program Manager for Mind’s Eye, showed working examples of how smart cameras in the battlefield could be used to report on activity, allowing soldiers on scouting patrols to stay out of harm’s way. While far from perfect at this early stage of development, experiments with Mind’s Eye technology have been able to automatically generate simple text messages of observed events, such as “the person lifted something” (Figure 1).

Figure 1: The DARPA Mind’s Eye program uses smart cameras to analyze activity and generate text messages describing what they observe.

While the U.S. Department of Transportation is considering a proposal to mandate installation of backup cameras, one can imagine how the commercial application of Mind’s Eye technology could enhance vision systems embedded in automobiles for safety and other applications.

According to the Centers for Disease Control and Prevention (CDC), drowning is the second leading cause of death, after vehicle accidents, for children under the age of 14. Embedded vision systems could prevent many of these deaths, and one such system has been developed in France by MG International: Poseidon. This vision system uses advanced video analysis to recognize texture, volume, and movement within a swimming pool. By surveying the pool in real time, Poseidon can issue alerts within seconds of an accident and provide the exact location of a swimmer in danger.

Implementing embedded vision systems – DSPs or FPGAs?

While GPUs continue to get more powerful for gaming and multimedia applications, designers will need to employ specialized hardware accelerators for advanced embedded vision applications requiring object recognition and tracking. DSP and FPGA vendors are addressing this need with the introduction of several new products, in the form of either DSP-based system platforms or FPGA-based processing platforms.

DSP-based system platforms

At the EVA meeting, Analog Devices, Inc. (ADI) demonstrated their new series of 1 GHz, dual-core Blackfin DSPs. To optimize performance specifically for embedded vision applications, ADI has integrated a video subsystem into the ADSP-BF608 and ADSP-BF609 model processors, which offloads image processing tasks from the general-purpose DSP cores.

The video subsystem in the new Blackfin DSPs is capable of accelerating up to five concurrent image processing algorithms, says Colin Duggan, Director of Marketing in the Processor Technology Group at ADI. The Pipelined Vision Processor (PVP) in the BF608/09 video subsystem functions as an accelerator for performing video analytics, and contains twelve configurable processing blocks for use in object detection, tracking, and recognition. The functions include four 5 x 5, 16-bit convolution blocks, a 16-bit Cartesian-to-polar coordinate conversion block, a pixel edge classifier, and a 32-bit threshold block, among other functions that support commonly used image processing algorithms.
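To make the PVP’s function blocks concrete, here is a hedged software sketch of two of the operations described above: a 5 x 5 convolution followed by a threshold stage. This is illustrative Python only, not ADI code; the PVP performs these operations on pixel streams in dedicated silicon rather than in software loops.

```python
# Software sketch of two operations the Blackfin PVP accelerates in
# hardware: a 5x5 convolution and a binary threshold. Images are 2D
# lists of grayscale values; border pixels are left at zero (no padding).

def convolve5x5(image, kernel):
    """Apply a 5x5 kernel to a 2D grayscale image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            acc = 0
            for ky in range(5):
                for kx in range(5):
                    acc += kernel[ky][kx] * image[y + ky - 2][x + kx - 2]
            out[y][x] = acc
    return out

def threshold(image, level):
    """Binary threshold: 1 where the pixel value >= level, else 0."""
    return [[1 if p >= level else 0 for p in row] for row in image]

# A uniform image convolved with an all-ones kernel sums 25 neighbors.
flat = [[10] * 8 for _ in range(8)]
box = [[1] * 5 for _ in range(5)]
smoothed = convolve5x5(flat, box)
print(smoothed[4][4])                   # 10 * 25 = 250
print(threshold(smoothed, 200)[4][4])   # 1
```

In the PVP, blocks like these are chained in a pipeline, so an edge-detection kernel can feed directly into a threshold and classifier stage without touching the DSP cores.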

The Blackfin video subsystem also includes a Pixel Compositor (PIXC), which designers can use to overlay and blend images and perform color space conversion for output to LCD displays and video encoders. A Video Interconnect (VID) block provides a connectivity matrix to tie together the PIXC and PVP, with a set of three Parallel Peripheral Interfaces (PPIs) for use with image sensors, displays, analog-digital or digital-analog converters, and other peripherals.
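The overlay-and-blend function of a pixel compositor amounts to per-channel alpha blending. The following is a generic sketch of that arithmetic in Python, for illustration only; it is not ADI code, and the PIXC implements this per pixel in hardware.

```python
# Generic sketch of the overlay/blend operation a pixel compositor
# performs: alpha-blend a foreground RGB pixel over a background pixel.

def blend(fg, bg, alpha):
    """Alpha-blend foreground over background, per channel; alpha in [0, 1]."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# Blending pure red over pure blue at 50% opacity yields purple.
print(blend((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
```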

FPGA-based processing platforms

In 2011, Xilinx and Altera each announced development of a new type of System-on-Chip (SoC) processing platform, integrating ARM Cortex-A9 processors on the same chip with programmable FPGA fabrics. At DESIGN West, Xilinx showed production silicon for the Zynq-7000 in an embedded vision application.

The processor system in Zynq consists of dual ARM Cortex-A9 cores, along with ARM’s NEON general-purpose Single Instruction Multiple Data (SIMD) engine. Designers can use NEON as an accelerator for multimedia and signal processing algorithms in image processing applications. For embedded vision, Xilinx used the programmable logic in the Zynq FPGA for hardware acceleration of High-Definition (HD) image processing algorithms. In the DESIGN West demonstration, objects could be tracked in a live 1080p video stream at 60 fps, implementing a real-time closed-loop control system.
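The details of the Xilinx demonstration were not published, but the core of such a closed-loop tracker can be sketched as thresholding a frame to isolate a target and computing its centroid to drive the control loop. This Python version is a hypothetical illustration of the algorithm class, with the per-pixel work being what the FPGA fabric would accelerate at 1080p60 rates.

```python
# Illustrative sketch (not Xilinx code) of a tracking step: threshold a
# grayscale frame to isolate a bright object, then return its centroid,
# which a closed-loop controller could use to steer toward the target.

def track_bright_object(frame, level):
    """Return the (x, y) centroid of pixels brighter than `level`, or None."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if pixel > level:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return (xs / n, ys / n)

# A 4x4 frame with a bright 2x2 patch in the lower-right corner.
frame = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(track_bright_object(frame, 5))  # (2.5, 2.5)
```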

Software developers now have several new choices of Operating Systems (OSs) to run on Zynq’s ARM processor subsystem, enabling the device to be used in place of conventional embedded system processors. Xilinx has announced that Wind River is supporting the platform with their VxWorks Real-Time Operating System (RTOS) and with Wind River Linux. Designers can also employ the Microsoft Windows Embedded Compact 7 OS on Zynq, with a reference Board Support Package (BSP) from Adeneo Embedded. Along with a number of other commercial operating systems, Xilinx supports Google’s Android 2.3 (Gingerbread) OS on Zynq. The source files are available for download from the Xilinx Git repository, providing support for the display controller and OpenGL ES 1.1 graphics accelerator that are implemented in the Zynq-7000 programmable logic.

The better choice is conditional

The emerging embedded vision market is following the same path as other high-performance DSP applications, such as wireless infrastructure, where FPGAs and DSPs both complement and compete with each other. The addition of specialized accelerators in ASSPs, such as the ADI Blackfin devices, will eliminate the need to pair a general-purpose DSP with an FPGA in many embedded vision applications, letting designers concentrate on developing the software. On the other hand, with FPGA SoCs now available, developers can customize their hardware as needed, while employing familiar software platforms and eliminating separate, general-purpose processors. As usual, cost, power, and performance all need to be weighed to select the optimal solution.

Bringing sci-fi vision systems closer to reality

Embedded vision conferences frequently conjure up memories of the scene from the movie Minority Report, where iris-scanning devices identify Tom Cruise’s character as he walks through a shopping mall for delivery of personally targeted advertisements on holographic digital signs. This type of biometric application is no longer science fiction, and digital signage is but one of many projected growth areas for embedded vision systems. EVA member IMS Research forecasts a wide array of emerging applications in intelligent transportation, medical patient monitoring and diagnostics, consumer electronics, and security. Fortunately, designers will have no shortage of options for hardware platforms on which to implement such systems, with SoC, GPU, DSP, and FPGA manufacturers all competing for their piece of the developing embedded vision market.

For more information, contact Mike at mdemler@opensystemsmedia.com