Cost vs. Value

With low-power requirements complicating mixed-signal design, engineering teams are following the money in an attempt to streamline the architectural process.


By Ann Steffora Mutschler
The increasing amount of mixed-signal content being included in SoCs for automotive, networking and all manner of mobile devices is reinvigorating the mixed-signal industry. While this is great news for companies playing in anything related to mixed-signal technology, it also means increasing complexity for the engineering teams pulling all the pieces together.

“People have been designing mixed-signal for a long time, and the composition of mixed-signal is changing drastically,” explained Mladen Nizic, engineering director at Cadence Design Systems. “Traditionally, mixed-signal was viewed as big digital with some analog in there, but now we see that mixed-signal has really expanded in complexity. So we have designs today that are about an equal mix of analog and digital content. With designs that in the past were predominantly digital, engineers didn’t need to worry about analog impacts and effects. Now they have to, at least to some extent. Digital designers doing verification need to have some representation for these analog or mixed-signal parts, or they might not completely verify their designs.”

Given that consumers are the driving force behind semiconductor demand today, performance and cost pressures are intense. In fact, cost is now viewed as a primary design variable, according to Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP at Synopsys.

Packaging, a significant portion of system cost, plays a determining role in the architecture of SoCs because the choice of package dictates a number of technological aspects of the system.

“Our customers are selling package-tested parts, so packaging becomes an important part of the cost equation. Customers would like to use the cheapest possible package that they can get away with, and the design challenge is that the cheapest package has the worst performance in terms of parasitics. You’ve got really bad parasitic inductance, parasitic capacitance, lead frames are very badly put together, and there’s a lot of leakage through the substrates. These things are typically in some kind of cheap BGA or wirebond implementation,” he explained.
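The inductance penalty of a cheap lead frame translates into noise in a very direct way: ground bounce scales as V = L x di/dt. A minimal sketch follows; the lead inductances, switched current, and edge rate are illustrative assumptions, not figures from Synopsys or any vendor.

```python
# Back-of-the-envelope ground-bounce estimate: V = L * di/dt.
# All numeric values below are illustrative assumptions.

def ground_bounce(l_package_nh: float, delta_i_ma: float, t_edge_ns: float) -> float:
    """Return ground-bounce voltage (V) for a given lead inductance."""
    l = l_package_nh * 1e-9                            # inductance in H
    di_dt = (delta_i_ma * 1e-3) / (t_edge_ns * 1e-9)   # current slew in A/s
    return l * di_dt

# Assumed values: ~4 nH for a cheap wirebond lead, ~0.5 nH for a
# flip-chip bump path; 50 mA switched in a 1 ns edge.
for name, l_nh in [("wirebond", 4.0), ("flip-chip", 0.5)]:
    print(f"{name:9s}: ~{ground_bounce(l_nh, 50.0, 1.0) * 1000:.0f} mV of bounce")
```

Even with these modest assumed numbers, the wirebond path bounces roughly an order of magnitude harder than the flip-chip path, which is the parasitic penalty of the cheapest package.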

To illustrate, Nandra shared a recent situation with a customer that wanted very high-performance capabilities on the die but was going to put it into a really cheap package, because the chip was going into a low-end smartphone meant to sell for under $200. “The discussion was around how to get a very-high-speed memory to connect to the chip being developed without compromising signal integrity. In that particular package configuration, there would be a limitation on speed. At a certain speed they’re going to get skew and reflections on the line, which is going to impact the performance of the chip. Then the customer asked if they could save cost by compromising on the board, maybe using a two-layer board instead of a four-layer board. That’s certainly possible, but, again, you have to degrade the performance because the two-layer board doesn’t have that many degrees of freedom in terms of performance. So cost is absolutely a critical part of the equation, not only when you’re designing IP but also when you’re looking at it from an SoC perspective.”
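Two standard transmission-line rules of thumb capture why the cheap package and board cap the interface speed. Here is a hedged sketch; the impedances, edge rates, and FR-4 propagation velocity are assumed for illustration and are not from the customer case Nandra describes.

```python
# Why a cheap package/board limits memory-interface speed:
# (1) an impedance mismatch reflects part of every edge, and
# (2) faster edges make even short traces "electrically long."
# All numeric values are illustrative assumptions.

def reflection_coeff(z_load: float, z0: float) -> float:
    """Reflection coefficient at the load end of a transmission line."""
    return (z_load - z0) / (z_load + z0)

def critical_length_mm(t_rise_ns: float, v_prop_mm_per_ns: float = 150.0) -> float:
    """Unterminated trace length above which reflections matter,
    using the rule of thumb: one-way delay > t_rise / 2.
    ~150 mm/ns is an assumed FR-4 propagation velocity."""
    return v_prop_mm_per_ns * t_rise_ns / 2

# A 50-ohm driver into a poorly controlled 75-ohm trace on a cheap
# two-layer board reflects 20% of each incident edge:
print(f"gamma = {reflection_coeff(75.0, 50.0):+.2f}")

# Faster signaling (sharper edges) shrinks the allowable trace length:
for tr in (1.0, 0.3, 0.1):
    print(f"t_rise {tr} ns -> critical length ~{critical_length_mm(tr):.0f} mm")
```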

He believes a lot of engineers don’t quite understand these tradeoffs. While it is certainly possible to get the performance with a very expensive technology at 28nm with all the process options, all the different masks that allow all the different voltages, a nice package and an expensive board or connector, the reality is that many SoCs must be designed and manufactured in the cheapest possible environment.

“This could be the biggest mixed-signal challenge, because every six months or so engineering teams look at ways of getting the cost down on their product. But they want the performance, too, so the ingenuity from the engineering side really needs to apply to that: ‘How can I get the most out of very little in terms of the package, the board material and such,’” Nandra continued.

Complicating mixed-signal designs is the persistent drive for lower power, said Pat Hunter, product marketing engineer responsible for developing strategies for point-of-load power solutions at Texas Instruments. “Integration and power consumption [are trends,] but the biggest trend I really see is battery life, because we all know, as consumers, that the biggest complaint we all have about our cell phones is battery life. At TI we do a lot in the area of charging the battery, but the more important part of it is accurately gauging the capacity of the battery.”

Low-power challenges in mixed-signal come from a couple of directions, noted Cadence’s Nizic. “One is that we have brought more digital into analog. Before, we didn’t worry much about how much that digital was consuming, because it was a really small part. If, instead of a few hundred or a couple thousand standard cells, I now have a hundred thousand or a few hundred thousand alongside my analog, that becomes a significant part of my overall power budget. Second, I want to use this digital to better optimize the power of my analog, shutting it down when needed. It’s all interacting. Now I have to apply low-power techniques on my digital part, but at the same time that complicates my interfaces with analog. That’s another dimension when I try to verify the power modes and the functionality of the entire design.”
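Nizic’s first point scales almost linearly with cell count. A minimal sketch using the standard dynamic-power relation P ≈ N·α·C·V²·f makes the shift concrete; the per-cell capacitance, activity factor, supply, and clock below are assumed, illustrative values, not Cadence data.

```python
# Rough dynamic-power scaling for the digital logic embedded in an
# analog block: P ~ N * alpha * C * V^2 * f.
# All parameter values are illustrative assumptions.

def dynamic_power_mw(n_cells: int, c_cell_ff: float = 1.0,
                     alpha: float = 0.1, vdd: float = 1.0,
                     f_mhz: float = 100.0) -> float:
    """Aggregate switching power (mW) for n_cells standard cells."""
    c_total = n_cells * c_cell_ff * 1e-15        # switched capacitance (F)
    return alpha * c_total * vdd**2 * (f_mhz * 1e6) * 1e3

# A couple thousand cells vs. a few hundred thousand:
for n in (2_000, 300_000):
    print(f"{n:>7} cells -> ~{dynamic_power_mw(n):.2f} mW")
```

Under these assumptions the jump from a couple thousand cells to a few hundred thousand moves the digital logic from noise in the budget to milliwatts that must be accounted for alongside the analog.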

Like Nandra, TI’s Hunter has seen customer demand for cost reduction, as well as the accompanying struggle to make the right architectural tradeoffs.

Speaking to designing devices for longer battery life, Hunter said, “If you look at battery fuel gauges, the biggest architectural tradeoff is that you are adding cost to your system, because the microcontroller will have an analog-to-digital converter on board and designers can do their own gauging. But it’s very inaccurate, because batteries have internal impedance, and if you don’t keep up with internal impedance you’ll think that there’s less energy in the battery than there really is. I’ve got customers that were doing laser wrinkle removers, and they had their own A-to-D gauging the battery. They were doing a cost reduction. But the biggest complaint from their end customers, the consumers, was that they could never trust the battery reading. Here’s a case where they were going to do a cost reduction, but they added my part because they needed that extra accuracy. The challenge is trying to justify the extra cost. The way you do that is with consumers. If you’ve got two smartphones side by side, they pretty much all do the same thing nowadays, but if you’ve heard this one’s got twice the battery life, you as a consumer will buy that. Nobody cares that my solution is in there. They just care what my solution does for the product.”
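The inaccuracy Hunter describes falls out of Ohm’s law: under load, the terminal voltage the microcontroller’s ADC samples sags by I·R_internal, so a gauge that maps that voltage straight to state of charge reads low. A minimal sketch follows; the cell impedance, load current, and linear voltage-to-charge map are illustrative assumptions, not TI parameters.

```python
# Why a bare ADC reading understates remaining battery capacity:
# under load, terminal voltage sags by I * R_internal, so a naive
# voltage-to-state-of-charge lookup reads low.
# All cell parameters below are illustrative assumptions.

R_INTERNAL = 0.15   # ohms, assumed single-cell internal impedance
I_LOAD = 1.2        # amps, assumed device load current

def soc_from_voltage(v_cell: float, v_empty: float = 3.3,
                     v_full: float = 4.2) -> float:
    """Crude linear open-circuit-voltage -> state-of-charge map (%)."""
    return max(0.0, min(100.0, (v_cell - v_empty) / (v_full - v_empty) * 100))

v_terminal = 3.75                                   # what the ADC actually sees
v_open_circuit = v_terminal + I_LOAD * R_INTERNAL   # IR-compensated estimate

print(f"naive gauge:    {soc_from_voltage(v_terminal):.0f}% remaining")
print(f"IR-compensated: {soc_from_voltage(v_open_circuit):.0f}% remaining")
```

With these assumed numbers the naive gauge reports 50% while the impedance-compensated estimate reports 70%, exactly the “less energy than there really is” error Hunter points to.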

When it comes down to it, cost defines everything. “It’s choice of process technology, choice of the IP that you’re going to use, speeds that you’re targeting, packaging, risks you’re willing to take. In the end, if you were to devote significant resources your quality would be great, but you have to make that tradeoff now between ‘my cell phone probably is going to be on the market for six to nine months before someone is going to expect an update or a new cell phone, so do I need to now run through all the qualification standards that require five years of operation?’” Nandra said.


