Understanding the demands that application-specific processors put on SoCs can determine success or failure.
Application-specific processing is a very broad category. It includes processors tuned for a specific application domain, such as vision processing, software-defined radio for high-end wireless, or voice triggering in IoT devices. The category also includes narrowly focused processors optimized for a specific application within a specific SoC. For application areas such as automotive, where power and performance tradeoffs are critical, this type of processor is ideal.
There is quite a bit of activity in both of these areas. “In the broadly domain-specific area, there are some very hot areas like vision processing, as well as sensor processing for IoT,” said Chris Rowen, a Cadence fellow. “Needless to say, IoT is very hot, and while there is an uptick in design activity, there’s a huge uptick in column inches associated with IoT. There is general customer awareness of what’s going on in that space, and sensor processing is very much at the heart of it.”
He said there are some unique applications and algorithms associated with sensor processing, along with more domain-specific processors, and in some cases application-specific processors that are really targeting that area.
Where vision is concerned, it is almost unbounded in its computational demands. “People always want higher resolution, higher frame rates, more levels of analysis in order to extract and identify features in the video stream,” Rowen said. “The functionality of the systems is very much limited by how much processing can be fit onto that chip within a given power budget. The appetite is enormous.”
The difference between an application/domain-specific processor and a general-purpose CPU is particularly dramatic here, because in some domain-specific cases it is reasonable to expect a factor of 10 to 20 improvement in performance and energy relative to a general-purpose processor.
“As we focus on interactive devices, which include both mobile devices and IoT, and all of the current emphasis on automotive applications, we have to think about different categories of sensors,” Rowen noted. “When people talk about sensor processing, the first thing they think of is motion processing because that’s what you get in your Fitbit. That represents one level of performance, because motion sensors and motion processing have a characteristic data rate where you might have a dozen sensors. Those sensors are operating at 100 to 200 samples per second, and you have 1,000 to 10,000 operations per sample per second. That represents a certain performance envelope. The next layer up is sound processing (voice, audio processing, room acoustics and audio environmental processing), which typically runs at a sample rate a couple of orders of magnitude higher; the computational levels are correspondingly higher, and therefore the emphasis on efficiency of processing is correspondingly higher. Ultimately you get to the big daddy of sensors, which is image sensors, and they are another three or four orders of magnitude higher in sample rate above audio sensors/microphones. There you have huge computational demands associated with it, and even more opportunity for application-specific processing, because now you really move the needle. The absolute impact of improving the efficiency by 10X or 20X is very, very noticeable. It can make the difference in whether a product is viable at all, not just a question of how long the battery life is.”
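To put those tiers in perspective, here is a rough back-of-envelope sketch in Python. The motion figures come straight from Rowen’s numbers; the audio and image sample rates (a 16 kHz voice stream, 1080p video at 30 frames per second) and the per-sample operation counts for those tiers are illustrative assumptions, included only to show how the performance envelope climbs by orders of magnitude.

```python
# Back-of-envelope compute rates for the three sensor tiers Rowen describes.
# The motion-sensing figures come from the quote; the audio and image rates
# below are illustrative assumptions (16 kHz voice, 1080p30 video), not
# figures from the article.

def ops_per_second(sample_rate_hz, ops_per_sample, num_streams=1):
    """Total sustained operations per second for a sensor subsystem."""
    return sample_rate_hz * ops_per_sample * num_streams

# Motion: ~a dozen sensors at 100-200 samples/s, upper end of Rowen's
# 1,000-10,000 operations per sample.
motion = ops_per_second(sample_rate_hz=200, ops_per_sample=10_000, num_streams=12)

# Audio (assumed): one 16 kHz voice stream with similar per-sample work.
audio = ops_per_second(sample_rate_hz=16_000, ops_per_sample=10_000)

# Image (assumed): 1920x1080 at 30 frames/s, ~62 Mpixels/s of samples.
image = ops_per_second(sample_rate_hz=1920 * 1080 * 30, ops_per_sample=1_000)

for name, ops in [("motion", motion), ("audio", audio), ("image", image)]:
    print(f"{name:>6}: ~{ops:.1e} ops/s")
```

Even with conservative assumptions, the image tier lands several orders of magnitude above motion sensing, which is exactly where a 10X to 20X efficiency gain from an application-specific core moves the needle most.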
It typically is the more sophisticated customer, or the one with a narrower end-product target, who has a firm grip on the final usage of a chip because they know they are targeting a very specific customer, or it is the system OEM itself. “Those are the ones who are more likely to build more narrowly application-specific cores. If you’re a generic semiconductor guy expecting to play the field and sell a platform to a wide range of customers, you’re more likely to go with a domain-specific processor because, by definition, it’s useful across a range of applications in the target market,” he added.
Use models
From the design services perspective, Sundar Raman, director of engineering at Synapse Design, sees more focus shifting to multicore designs as engineering teams seek to reduce area, power and memory footprints while improving performance.
He noted there is also a desire to implement specific algorithms for audio, video, security, automotive, biometrics and other IoT applications in application-specific processors/controllers. “IoT, because of potential market volume, is driving every processor IP company to go back and develop the smallest core for a dedicated market.”
That seems to be a common perception in the industry. Prasad Subramaniam, vice president of design technology at eSilicon, agreed that the IoT will drive application-specific processor products and has seen some interest in this.
But what is it about today’s application use models that makes application-specific processors more attractive now?
Drew Wingard, CTO of Sonics, said that at least part of the IoT buzz involves the idea of devices that are either hooked to our bodies or part of our environment and appear to always be on. “We saw that on a Motorola phone that was announced about a year and a half ago where if you uttered the phrase, ‘OK, Google,’ suddenly the phone was alive and you could give it voice commands. There was a microphone running all the time and a specialized version of a signal processor that was sitting there looking for exactly those two words and nothing else. If it found them, its job was to wake up the main brain so it could process whatever came next. You see that in tons of things around these different kinds of sensors. It’s a very tall order to be on all the time, on a small battery. Those are the kinds of things that people have always said these application specific processors would be very good at.”
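The control flow Wingard describes is simple to sketch. The following Python snippet is only an illustration of that division of labor, not any vendor’s implementation; the simulated frame stream and the keyword check stand in for a real microphone front end and keyword-spotting model.

```python
# Hypothetical always-on voice-trigger loop, sketching the partition Wingard
# describes: a small, low-power core continuously scans incoming audio frames
# for one specific phrase and wakes the main processor only on a match.
# The frame source and the keyword check below are stand-ins, not any
# vendor's detection algorithm.

from typing import Iterable

def wake_main_processor() -> None:
    # In real hardware this would assert a wake interrupt to the app processor.
    print("wake event: handing off to the application processor")

def trigger_loop(frames: Iterable[str], keyword: str = "ok google") -> None:
    """Scan a (simulated, pre-transcribed) frame stream for the trigger phrase."""
    for frame in frames:
        if keyword in frame.lower():      # stand-in for keyword spotting
            wake_main_processor()
            break                         # the big core takes over from here

# Simulated microphone stream: mostly noise, then the trigger phrase.
trigger_loop(["...", "background noise", "OK Google, what's the weather?"])
```

The loop itself is cheap enough to run continuously on a tiny, efficient core; everything after the wake event belongs to the main processor, which stays powered down the rest of the time.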
He also pointed out that there is a lot of experimentation right now in applying application-specific processors to very, very targeted use models. “Maybe that’s the most important part. If you understand this sub-part of your application very well, then the most efficient way to implement it probably is in dedicated hardware, but that’s pretty expensive. So the question ends up being, how far from optimal do you end up if you use an application-specific processor, given that in many cases it can be a couple of orders of magnitude more efficient than a general-purpose processor?”
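A heavily simplified calculation shows why that gap matters. The workload size and the energy-per-operation figure below are assumptions chosen purely for illustration; only the “couple of orders of magnitude” ratio comes from Wingard’s comment.

```python
# Illustrative only: power draw for a fixed video-analysis workload on a
# general-purpose core versus an application-specific processor.
# Assumptions (not from the article): ~6e10 ops/s of work (1080p30 analysis
# at ~1,000 ops/pixel) and 100 pJ per operation on a general-purpose core.
WORKLOAD_OPS_PER_SEC = 1920 * 1080 * 30 * 1_000   # assumed workload
GP_ENERGY_PER_OP_J   = 100e-12                    # assumed: 100 pJ/op
ASP_EFFICIENCY_GAIN  = 100                        # "a couple orders of magnitude"

gp_watts  = WORKLOAD_OPS_PER_SEC * GP_ENERGY_PER_OP_J
asp_watts = gp_watts / ASP_EFFICIENCY_GAIN

print(f"general-purpose core:  ~{gp_watts:.1f} W")        # ~6.2 W
print(f"application-specific:  ~{asp_watts * 1000:.0f} mW")  # ~62 mW
```

With those assumed numbers, the same workload drops from several watts to tens of milliwatts, echoing Rowen’s point that efficiency gains of this size decide whether a product is viable at all.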
Interestingly, another area suddenly garnering venture capital investment is applications in the general area of machine learning, Wingard continued. “Whether that be for image processing or for other things, there appears to be some pretty interesting work going on there. While a lot of work has focused on using GPUs, if you look at the problem space you find that the kinds of processing elements, and the kinds of control of those processing elements, that show up in GPUs are not ideal.”
All of these issues raise questions about what demands those kinds of processors, along with the application they are trying to serve, place on the rest of the chip and the rest of the system. Getting those answers right can make the difference between success and failure of the end product.