Portrait of a power miser: Open-architecture DSP core teams with a number-crunching accelerator for audio apps

David argues that an embedded open-architecture audio processor can bring active power consumption below the 1 mW mark.

Audio transceivers now call for more sophisticated processing, and demand for audio processors has never been higher. This stems partly from the popularity of convenient communication devices, such as mobile phones and laptop computers using VoIP, and partly from the demand for communication free of echo and noise in all environments. Such communication quality is only possible thanks to increasingly complex DSP algorithms that can improve audio performance under increasingly diverse and challenging conditions.

At the same time, design engineers have less freedom when it comes to the amount of available system power and board space. As a result, the need to implement more complex digital audio processing solutions that consume less power and require less physical space is driving the development of audio processing techniques in general and embedded DSP-based solutions in particular.

Demanding more from less

When considering power consumption, both hardware and software play a role. As algorithms become more complex, the hardware that implements them must deliver greater performance. With general-purpose processors, more complex algorithms increase the total power required – a trend that runs counter to the market's requirements. While modern RISC cores implement activity-based power-saving techniques, it remains difficult for a general-purpose processor to reduce its active power consumption to the levels achievable with dedicated hardware. Because of this, designers of new and emerging audio applications are looking for approaches that address the trade-offs between performance, functionality, size, and power consumption.

Audio processing alternatives

In the development of processors for applications such as hearing aids that demand ultra-low power consumption, it has been shown that active power consumption for an embedded open-architecture audio processor can actually be reduced to below 1 mW [1]. ON Semiconductor’s BelaSigna audio processors, for example, implement configurable hardware accelerators capable of performing common signal processing routines over 10 times more efficiently than a software-based solution. By making good use of an accelerator’s number-crunching capabilities alongside an open-programmable DSP core, systems engineers can achieve the same results from a lower operating voltage and clock speed. Since dynamic power consumption scales with the square of the operating voltage, this load-balancing capability delivers substantial power savings. A processor architecture dedicated to audio can, therefore, be realized to minimize power consumption while the device is fully operational, complementing ordinary power-saving sleep modes. Developers can also implement additional custom power management in software.
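The quadratic dependence on supply voltage is why offloading work to an accelerator pays off so strongly. A rough illustration, using the standard CMOS dynamic-power model with hypothetical figures (not data for any specific device):

```python
# Illustrative CMOS dynamic-power model: P = alpha * C * V^2 * f.
# All component values below are assumptions for the sake of example.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Baseline: DSP core alone at 1.8 V and 20 MHz (assumed figures).
p_core = dynamic_power(0.1, 100e-12, 1.8, 20e6)

# With an accelerator absorbing the heavy routines, the core might run
# at 0.9 V and 10 MHz for the same audio throughput.
p_offloaded = dynamic_power(0.1, 100e-12, 0.9, 10e6)

# Halving the clock gives 2x; halving the voltage gives a further 4x,
# for roughly 0.65 mW versus 0.08 mW in this toy example.
print(p_core, p_offloaded)
```

The point of the sketch is the ratio, not the absolute numbers: voltage reduction contributes quadratically, which is why an architecture that lets the core run slower and at lower voltage beats one that merely clock-gates.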

Within the scope of audio processing, architectures benefit from being further optimized to meet the different needs of broadcast and portable audio markets. The increased number of channels, diversity of standards, and improved fidelity required from a broadcast audio DSP, for instance, would typically result in a sub-optimal architecture for portable audio applications.

While general-purpose DSPs may be used for a wide variety of applications, a processor optimized for portable audio processing is able to achieve significantly lower power consumption while meeting all of the market’s requirements for performance and features.

Performance enhancements

When considering audio clarity and call quality, the time it takes for audio to pass through a processor, known as group delay, is one of the key performance indicators governing user experience. It is inevitable that, however optimized the solution, applying audio processing algorithms will introduce a group delay. As long as the group delay is maintained below 10 ms it is usually tolerable, allowing two-way communication such as telephone conversations to proceed naturally. If too much delay is introduced, it will cause constant but unintentional pauses in conversation, hampering the user experience.

When operating in the time domain, group delay as low as 1 ms can often be easily achieved. However, many powerful audio processing technologies use the Fast Fourier Transform (FFT) or filterbanks so that the signal can be manipulated in the frequency domain to deliver better results. By nature, time-frequency transforms require more time to collect data, which increases delay. With carefully designed hardware accelerators, it is possible to minimize group delay. These hardware accelerators are designed to take advantage of the best algorithms known to reduce group delay, such as using a Weighted Overlap-Add (WOLA) filterbank rather than an FFT. Clearly, there is a performance relationship between the software and hardware dimensions of the solution. Designers must consider an architecture that has the commensurate algorithm support to meet their specific requirements.

The subjective nature of evaluating a given audio processing solution is complicated by the lack of standardized or widely adopted measurement methodologies. As a result design teams need to develop performance metrics that reflect their design constraints and requirements and decide which solutions deliver the best results for their application.

To optimize the solution, engineers need the flexibility to select the algorithms that will deliver the best results and ultimately, the greatest user experience under all operational conditions. Tuning algorithms for a particular application can complicate this design process, introducing delays and increasing cost and risk for developers. When several algorithms must be combined, easy interoperability is also a necessity. Both challenges can be overcome if the processor vendor is able to provide and comprehensively support robust and interoperable audio processing algorithms to satisfy a range of low-power audio communication applications.

Algorithms should meet a comprehensive and openly available software development standard that defines all applicable APIs and interfaces. One example of such an approach is the BelaSigna processor family from ON Semiconductor (Figure 1). Interoperability of all standard algorithms should take place without modification, allowing engineering teams simply to integrate their custom algorithms. Through vendor-selected algorithm combinations, engineers can develop applications to meet specific requirements, such as mobile phone uplink/downlink noise management with equalization, echo cancellation, and adaptive volume controls. For speakerphones, a solution that integrates echo cancellation, 4-mic beamforming, and noise reduction may be more appropriate.
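The interoperability idea can be sketched as a common block-processing interface that every algorithm implements, so stages chain without modification. This is a hypothetical illustration in Python; the class and method names are invented for the example and do not reflect any actual BelaSigna API:

```python
# Hypothetical interoperable processing chain: every stage conforms to
# one interface, so vendor and custom algorithms compose freely.

class AudioBlockProcessor:
    """Common interface each algorithm stage implements."""
    def process(self, block):
        raise NotImplementedError

class Gain(AudioBlockProcessor):
    """Stand-in for an adaptive volume stage."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, block):
        return [self.gain * x for x in block]

class Limiter(AudioBlockProcessor):
    """Stand-in for an output limiter; clips samples to [-1, 1]."""
    def process(self, block):
        return [max(-1.0, min(1.0, x)) for x in block]

def run_chain(stages, block):
    """Pass one audio block through each stage in order."""
    for stage in stages:
        block = stage.process(block)
    return block

out = run_chain([Gain(4.0), Limiter()], [0.1, 0.3, -0.5])
print(out)  # [0.4, 1.0, -1.0]
```

Because each stage only sees the shared interface, swapping a vendor echo canceller for a custom one is a one-line change to the chain, which is the integration property the text describes.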

Figure 1: The BelaSigna processor family from ON Semiconductor


Enabling smaller solutions

Smaller end products impose significant restrictions on the space available for power cells, and these same design restrictions also constrain the size of the audio processing solution.

The total board space required will include all ancillary components; if the algorithms used aren’t stored on-chip, this may include external memory devices. A well-designed, highly integrated solution has a distinct advantage in reducing component count. Processors using advanced package technologies such as Wafer-Level Chip Scale Packaging (WLCSP) enable designers to implement audio processing with minimal impact on the size of the PCB and the end product. It is also worth noting that low-power design principles can help to simplify power supply design, allowing the use of smaller components with lower ratings to achieve further size reductions.

Through optimized architectures, sophisticated algorithm development and availability, and an advanced interoperability ethos, it is possible to meet both advanced audio performance and ultra-low power consumption requirements within a minimal PCB footprint.

David Coode is Audio Group Manager, ON Semiconductor. David began his embedded audio career on the research team at IVL Technologies and has since held various technology roles at ON Semiconductor in applied digital signal processing, marketing, sales, product development, and design management. With more than 10 years of acoustics, low-power miniature design, and digital audio experience, David currently manages ON Semiconductor’s audio solutions group.

In addition, David is an experienced speaker and has presented at many international industry events, including WiCon World Expo 2004 in Amsterdam and ICSPAT 2000. David earned his Bachelor of Applied Science in Computer Engineering from the University of Waterloo in Canada and is fluent in both French and English.


[1] E. Chau, H. Sheikhzadeh, R. Brennan, and T. Schneider, “A Subband Beamformer on an Ultra Low-Power Miniature DSP Platform,” Proc. IEEE ICASSP, 2002. http://www.onsemi.com/site/pdf/ICASSP2002_Beamforme.pdf
