Sometimes More Is More And Less…

Adding intelligence to systems lowers power usage while making the impractical practical.


Anyone who has been reading this blog has already figured out that, as an ex-system designer, I am a fan of intelligent IP subsystems. In a couple of previous posts I talked about how they make design easier by distributing the overall complexity.

The other day, however, I found myself trying to describe to a non-semiconductor person why this move is good and what benefits it delivers to end users rather than to system designers. It took me a while to come up with a good example, but I believe the one I used shows what becomes possible as subsystems gain intelligence: even though we are adding logic to the design, we can significantly reduce overall power while adding capabilities previously deemed impractical.

It's an example I thought would be good to share here.

OK Google…
Voice control is becoming a major driver in consumer electronics, with Google Glass prototypes popping up everywhere (or so it seems in downtown Mountain View), the launch of the new Nexus 5 last week, and this week's launch of the Xbox One (the PS4 had voice control disabled at launch, so I will not consider it).

What these examples have in common is a reliance on “key phrase” recognition to trigger voice control behavior: “OK Google” for the Glass and Nexus, and “Xbox” for the Xbox One. This is very different from previous voice control schemes such as Apple’s Siri, which relies on pressing a physical button to trigger recognition.

Using a monolithic system design to do key phrase recognition means running a large part of the system at all times, listening to the audio and then deciding what to do. Even if you can run the processor in a lower-power state, it is still very demanding on the battery. And while that works for a stationary device such as the Xbox One, it is not practical for portable electronics, as it quickly drains the battery. This is why Apple requires a button press to activate Siri on its devices.

A more practical solution is to add tiered intelligence to the system, with each tier having just enough computing power to do its job independently of the rest of the system, waking up the other tiers only as needed.

[Figure: tiered subsystem implementation of “OK Google” key phrase recognition]

The picture above shows how key phrase recognition could be implemented in a tiered subsystem approach.

The first subsystem consists of a very limited voice detection capability that understands only the key phrase. Ideally, this uses a small embedded processor with some dedicated DSP instructions to improve computational efficiency. It runs at all times, but the rest of the system can sleep without impacting the ability to recognize voice commands.

Once the key phrase is detected, the second subsystem is quickly woken up and performs the more advanced voice recognition (again using a specialized CPU/DSP embedded processor). This recognition is done while the rest of the system sleeps, and only once the context of the voice command is known are the necessary system components woken up.

The overall result is that the system can be voice-commanded at all times, with very little impact on overall power consumption. You get lower power and more capability, a combination that would previously have required an unacceptable tradeoff.
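To make the flow concrete, here is a minimal sketch in C of how such a tiered wake-up might look in firmware. It is only illustrative: every identifier in it (read_mic_frame, kws_detect_frame, wake_subsystem, asr_decode_command, dispatch_command) is a hypothetical placeholder, not an API from any of the products mentioned.

/* Minimal sketch of the tiered wake-up flow described above.
   All functions declared extern are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum subsystem { SUBSYS_FULL_ASR, SUBSYS_APP_PROCESSOR };

extern void read_mic_frame(int16_t *pcm, size_t samples);         /* DMA from the microphone   */
extern bool kws_detect_frame(const int16_t *pcm, size_t samples); /* tier 1: key phrase only   */
extern void wake_subsystem(enum subsystem s);                     /* power up a sleeping block */
extern int  asr_decode_command(char *out, size_t out_len);        /* tier 2: full recognition  */
extern void dispatch_command(const char *cmd);

void always_on_listen_loop(void)
{
    int16_t frame[160];                      /* 10 ms of 16 kHz audio */

    for (;;) {
        read_mic_frame(frame, 160);

        /* Tier 1: a tiny always-on detector that only understands the
           key phrase; the rest of the SoC stays asleep. */
        if (!kws_detect_frame(frame, 160))
            continue;

        /* Tier 2: wake the specialized CPU/DSP and run the more
           advanced recognition while everything else still sleeps. */
        wake_subsystem(SUBSYS_FULL_ASR);

        char command[64];
        if (asr_decode_command(command, sizeof command) == 0) {
            /* Only now, once the command's context is known, wake the
               system components that are actually needed. */
            wake_subsystem(SUBSYS_APP_PROCESSOR);
            dispatch_command(command);
        }
    }
}

The point is structural rather than literal: the outer loop is the only code that runs continuously, and everything below the tier 1 check executes only on the rare occasions the key phrase is actually heard.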

This is by no means the only example I could use. Apple did something similar from a system design perspective with the new iPhone 5s, but it used a two-chip solution. To improve battery life while allowing continuous motion and fitness tracking, it moved that intelligence into the M7 coprocessor, which now handles the ongoing sensor tracking and wakes the A7 only when needed. If I were to upgrade, I could hopefully go for a run or bike ride without completely killing my battery.
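For comparison, here is a rough sketch of that offload pattern, again with purely hypothetical names (read_accel_sample, count_step, wake_application_processor): the coprocessor sits in a low-power loop and only rarely wakes the big core.

/* Rough sketch of a sensor-hub offload loop; every extern function
   and the batch size are illustrative placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BATCH_SIZE 512   /* samples accumulated before waking the big core */

extern void read_accel_sample(int16_t xyz[3]);          /* cheap, low-rate sensor read */
extern bool count_step(const int16_t xyz[3]);           /* tiny step detector          */
extern void wake_application_processor(uint32_t steps); /* expensive, so done rarely   */

void sensor_hub_loop(void)
{
    uint32_t steps = 0;
    size_t samples = 0;

    for (;;) {
        int16_t xyz[3];
        read_accel_sample(xyz);      /* the coprocessor never leaves this loop */

        if (count_step(xyz))
            steps++;

        /* The power-hungry application processor is woken only once a
           whole batch of samples has been processed. */
        if (++samples == BATCH_SIZE) {
            wake_application_processor(steps);
            samples = 0;
        }
    }
}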


