[Guide] Low-latency, real-time acoustic processing is a key requirement in many embedded applications, including speech preprocessing, speech recognition, and active noise cancellation (ANC). As real-time performance requirements in these areas steadily rise, developers need a strategic response. Because many large-scale SoCs offer considerable raw performance, there is a temptation to load every additional task onto them; but latency, and its determinism, are critical factors, and without careful consideration it is easy to create serious real-time system problems. This article discusses the issues designers should weigh when choosing between SoCs and dedicated audio DSPs, to avoid unpleasant surprises in real-time acoustic systems.
Low-latency acoustic systems have a wide range of applications. In the automotive field alone, for example, low latency is essential for personal audio zones, road noise cancellation, and in-car communication systems.
With the electrification of automobiles, road noise cancellation has become more important: with no internal combustion engine masking other sounds, the noise from tire-road contact becomes more pronounced and annoying. Reducing this noise not only makes driving more comfortable but also reduces driver fatigue.

Compared with deployment on a dedicated audio DSP, deploying a low-latency acoustic system on an SoC raises many challenges. These include latency, scalability, upgradeability, algorithm considerations, hardware acceleration, and customer support. Let's examine them one by one.
In real-time acoustic processing systems, latency is critical. If the processor cannot keep up with the system's real-time data-movement and computation requirements, the result is unacceptable audio dropouts.
Generally speaking, an SoC is equipped with only a small amount of on-chip SRAM, so most local memory accesses must go through the cache. This makes code and data access times nondeterministic and increases processing latency; for a real-time application such as ANC, this alone can be unacceptable. In practice, the SoC also runs a heavily loaded, multitasking, non-real-time operating system, which amplifies the system's nondeterministic behavior and makes it difficult to support relatively complex acoustic processing in a multitasking environment.
Figure 1 shows a concrete example of an SoC running a real-time audio processing load. CPU load spikes when higher-priority SoC tasks run, for example during SoC-centric activities such as media rendering, browsing, or launching applications. Whenever a spike pushes CPU load past 100%, the SoC is no longer operating in real time and audio dropouts occur.
Figure 1. Instantaneous CPU load of a typical SoC running a heavy audio processing load alongside other tasks.1
The audio DSP, by contrast, is architected for low latency across the entire signal processing path: from sampled audio input, through processing (for example, voice plus noise suppression), to the speaker output. Its L1 instruction and data SRAM is single-cycle memory closest to the processor core, large enough to support multiple processing algorithms without moving intermediate data to off-chip memory. On-chip L2 memory (further from the core, but still much faster to access than off-chip DRAM) provides additional working storage when L1 SRAM capacity is insufficient. Finally, an audio DSP usually runs a real-time operating system (RTOS), ensuring that incoming data is processed and moved to its destination before the next input block arrives, so that data buffers do not overflow during real-time operation.
The latency from system power-on (usually characterized by the time to the start-up chime) is also an important metric, especially in automotive systems, which must play a prompt tone within a fixed window after start-up. SoCs typically have a very long boot sequence that includes bringing up the operating system for the entire device, so meeting this requirement is difficult or impossible. An independent audio DSP, on the other hand, runs its own RTOS, is unaffected by unrelated system priorities, and can be optimized for fast boot to meet the start-up chime requirement.
While latency is a problem for SoCs in applications such as noise control, scalability is another drawback for SoCs tasked with acoustic processing. An SoC that controls a large system with many different subsystems (such as an automotive multimedia head unit and instrument cluster) cannot easily scale from low-end to high-end audio requirements, because the scalability needs of the individual subsystems inevitably conflict and must be traded off against overall SoC utilization. For example, if a front-end SoC connects to a remote radio module and must fit multiple car models, the radio module may need to grow from a few channels to many, and each added channel aggravates the real-time problems described earlier. Every additional feature under the SoC's control changes its real-time behavior and the availability of key shared architectural resources, including memory bandwidth, processor core cycles, and system bus fabric arbitration slots.
Beyond the issues tied to the other subsystems on a multitasking SoC, the acoustic system itself has scalability demands. These include scaling from low end to high end (for example, increasing the number of microphone and speaker channels in an ANC application) and scaling the audio experience, from basic audio decoding and stereo playback up to 3D virtualization and other advanced features. Although these requirements lack the hard real-time constraints of an ANC system, they bear directly on the choice of the system's audio processor.
Using a single audio DSP as a co-processor to the SoC is an excellent answer to the audio scalability problem, enabling a modular system design and a cost-optimized solution. The SoC can offload the large system's real-time acoustic processing requirements to the low-latency audio DSP. In addition, audio DSPs offer code-compatible and pin-compatible options spanning several price/performance/memory tiers, giving system designers maximum flexibility to choose the audio performance appropriate to a given product level.
Figure 2. The ADSP-2156x DSP, a highly scalable audio processor.
As cars increasingly adopt over-the-air (OTA) updates, the ability to deliver critical patches or new features in the field is becoming more and more important. On an SoC, the growing interdependence among subsystems can make this a serious problem. First, multiple processing and data-movement threads compete for SoC resources; adding new features intensifies the contention for processor MIPS and memory, especially during activity peaks. From an audio standpoint, new features in other SoC control domains can affect real-time acoustic performance unpredictably. One consequence is that every new function must be cross-tested across all operating planes, producing a huge number of permutations among the operating modes of the competing subsystems; the software validation effort for each upgrade package therefore grows exponentially.
Put another way, any improvement in SoC audio performance depends on the SoC MIPS left over after the feature roadmaps of the other subsystems under its control have been served.
Algorithm development and performance
When it comes to developing real-time acoustic algorithms, audio DSPs are clearly purpose-built for the task. A significant difference from SoCs is that a standalone audio DSP can offer a graphical development environment, allowing engineers without DSP coding experience to integrate high-quality acoustic processing into their designs. Such tools cut development cost by shortening development time without sacrificing quality or performance.
For example, ADI’s SigmaStudio® graphical audio development environment provides a rich set of signal processing algorithms, integrated into an intuitive graphical user interface (GUI), for building complex audio signal flows. It also supports graphical configuration of A2B audio transport, which is very helpful in accelerating the development of real-time acoustic systems.
Dedicated audio hardware features
In addition to a processor core architecture designed for efficient parallel floating-point computation and data access, audio DSPs usually include dedicated multichannel accelerators for common algorithms such as the fast Fourier transform (FFT), finite and infinite impulse response (FIR and IIR) filtering, and asynchronous sample rate conversion (ASRC). These allow real-time filtering, resampling, and frequency-domain conversion to run outside the core CPU, raising the core's effective performance. Moreover, because the accelerators have optimized architectures and provide data-flow management, they support a flexible, user-friendly programming model.
As the number of audio channels, filter streams, and sample rates grows, a highly configurable pin interface is needed, with on-the-fly sample rate conversion, precision clocking, and synchronized high-speed serial ports, to route data efficiently without adding latency or external interface logic. The digital audio interconnect (DAI) of ADI’s SHARC® processor family demonstrates this capability, as shown in Figure 4.
Figure 3. ADI’s SigmaStudio graphical development environment.
Figure 4. Digital audio interconnect (DAI) block diagram.
One point that is often overlooked when developing with embedded processors is the customer support available for the device.
Although SoC vendors promote running acoustic algorithms on their built-in DSP cores, this brings real burdens in practice. Vendor support is usually complicated, because SoC application development teams generally lack acoustics expertise, so it is often difficult for them to support customers who want to develop their own acoustic algorithms on the SoC's on-chip DSP. Instead, the vendor typically supplies standard algorithms, charges a considerable NRE fee, and then ports the acoustic algorithm to one or more of the SoC's cores. Even then, success is not guaranteed, especially when the vendor cannot provide mature, low-latency framework software. Finally, third-party ecosystems for SoC-based acoustic processing tend to be fragile, because this area is not the SoC's focus.
A dedicated audio DSP, by contrast, clearly offers a stronger ecosystem for developing complex acoustic systems, from optimized algorithm libraries and device drivers to real-time operating systems and easy-to-use development tools. In addition, audio-centric reference platforms that help accelerate time to market, such as ADI's SHARC Audio Module platform shown in Figure 5, are rare for SoCs but common in the standalone audio DSP world.
Figure 5. SHARC audio module (SAM) development platform.
In short, designing a real-time acoustic system demands careful, strategic planning of system resources; it cannot be managed simply by setting aside processing headroom on a multitasking SoC. A standalone audio DSP optimized for low-latency processing, on the other hand, promises greater robustness, shorter development time, and the scalability to meet future system requirements and performance levels.
1 Paul Beckmann. “Multicore SoC Processors: Performance, Analysis, and Optimization.” 2017 AES International Conference on Automotive Audio, August 2017.