Embedded Vision Alliance gestures towards visual computing

New alliance sees Microsoft Kinect, smart cameras, algorithms and processors as the future of input devices.

Has the newly formed Embedded Vision Alliance (EVA) come at just the right time to catch the next great technology inflection point? Chris recently caught up with EVA creator and Berkeley Design Technology, Inc. (BDTi) founder Jeff Bier to learn more about embedded vision.

It’s funny how new ideas for technology associations are born. In the 1990s, the design firm Concept Development decided to get some vendors together as a way to showcase its services, and the traveling Real Time Computing (RTC) Show was born. Microprocessor Report analyst Markus Levy thought there needed to be a better industry-standard benchmark than SPECint and formed the Embedded Microprocessor Benchmark Consortium (EEMBC). More recently, DSP analyst and Berkeley Design Technology, Inc. (BDTi) founder Jeff Bier was inspired by Microsoft’s Kinect vision sensor for the Xbox 360. Jeff brought together more than a dozen companies (Figure 1) to form the Embedded Vision Alliance (EVA), which aims to inspire and empower engineers to design systems that see and understand.

Figure 1: Founding members of the Embedded Vision Alliance. Since inception, more members have been added, including Intel.


When Jeff first called to brief me about the EVA, I dismissed it as a group of people working on machine vision, the camera-plus-computer stuff used on tomato soup and automobile assembly lines. In fact, EVA defines computer vision as the automated analysis of images and video to extract valuable information, and embedded vision (the Alliance’s focus) as the use of computer vision in embedded systems. Examples include Microsoft’s Kinect sensor, which has sold more than 10 million units in five months, making it the fastest-ramping consumer product after the smartphone. Other embedded examples include Volvo’s active driver safety and lane departure system, smart surveillance cameras that not only capture and transmit images but also interpret and act upon the information they’re gathering, plus dozens of other vision sensors hooked to embedded devices.
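
To make the definition concrete, here is a minimal sketch of what “automated analysis of images and video to extract valuable information” can look like in practice: capture frames, extract something of interest (faces, in this illustration), and act on the result. The OpenCV pipeline, camera index, and notify() hook below are illustrative assumptions, not anything EVA or its members prescribe.

# Minimal embedded-vision loop (illustrative sketch): sense, analyze, act.
# Assumes the opencv-python package and a camera at index 0.
import cv2

def notify(event):
    # Placeholder for the "act" stage (log, send an alert, drive an actuator)
    print("event:", event)

# OpenCV ships this stock face detector with its Haar cascade data files
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        notify("%d person(s) in view" % len(faces))

cam.release()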

Admittedly, outside of these three examples (Kinect, automotive, and smart surveillance), the market for embedded vision isn’t well defined. But that doesn’t worry Bier, who sees the market and technology potential for embedded vision as “larger than the market occupied by today’s smart phones.” Wow, that’s pretty big – tens of millions of units per year spread across both Western markets and emerging markets like China and India. What inspired Bier into action?

He cites a couple of things. Examples abound of researchers who have mocked up systems based on the Kinect’s camera-plus-depth sensor to provide human gesture feedback. There’s the consumer system that uses Kinect to replace the myriad remote controls in one’s TV room. All the user has to do is walk in: the system recognizes not only a human presence but which member of the household it is, and if it’s after 5 p.m., it can turn on the TV to CNN with the volume muted, letting Mr. Smith fix himself a drink while sorting the day’s mail. That’s smart. (Maybe someday it will have the drink ready.) According to Bier, some MIT researchers have also created algorithms that rely on a Kinect to discern a person’s heart rate merely by looking at his or her face. With facial recognition software added, that same information can be saved automatically to an individual’s health file. Think of the ramifications in the e-health arena, particularly for home health care and shut-ins.
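
The living-room scenario is, at its core, recognition output feeding a simple rule engine. Here is a hedged sketch of that decision logic, assuming a hypothetical vision front end that calls on_person_recognized() with a name, and a hypothetical tv control object; neither is a real product’s API.

from datetime import datetime

# Illustrative household preferences; a real system would configure or learn these
PREFERENCES = {"Mr. Smith": {"channel": "CNN", "muted": True}}

def on_person_recognized(name, tv):
    # Called by the (hypothetical) vision front end once it identifies who walked in
    prefs = PREFERENCES.get(name)
    if prefs is None:
        return
    if datetime.now().hour >= 17:      # after 5 p.m.
        tv.power_on()                  # hypothetical TV-control calls
        tv.tune(prefs["channel"])
        tv.set_mute(prefs["muted"])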

Bier has other examples of why he believes embedded vision will come to dominate sensor platforms. Suppose a child neglects to buckle her seat belt, or a squirmy toddler climbs out of the booster seat in the back of the van: visual sensors wired to embedded processors notify the parent. Or in the backyard: a sensor alerts mom that Timmy is close to the unsupervised swimming pool (or, heaven forbid, actually in it). When you think of sensors that see the world and recognize human features and activity, and consider all the ubiquitous embedded computers that can act on this visual information, you get a sense of the potential market size for embedded vision.

As for the Alliance itself, the goal is to bring together sensor, processor, algorithm, and other fundamental technologies to make embedded vision a reality. Already, www.embedded-vision.com boasts such companies as Analog Devices, Avnet, CEVA, Freescale, Intel, MathWorks, TI, Xilinx, and many others – 17 in total, and growing. BDTi’s partnership with these companies is designed to bring together more than 30 years of research in algorithms, fundamentals, semiconductors, and sensors – and to create meaningful products that can really benefit people’s lives.

In Step 1, Bier says, the website provides mostly educational information and serves as a clearinghouse bringing together various stakeholders in industry and academia. Step 2 of the EVA will follow with newsletters, webinars, virtual conferences, and the inevitable technology standards.

If you’ve ever met Jeff Bier of BDTi, you know he’s a serious guy. It’s hard to get him to crack a smile, much less tell a joke. DSP benchmarks and services form the basis of his deadly serious professional life. But as we chatted about the Embedded Vision Alliance and how embedded vision technology might fundamentally change the world, Jeff grew more animated. His excitement about this market opportunity was tangible. More important, he really does see visual computing as the next great inflection point in technology, with huge market potential. Seeing Jeff this stoked about anything is reason enough to pay attention. My gesture to you: check in on the EVA from time to time.

 
