The push toward lower power and higher performance is creating some unexpected tradeoffs.
By Ed Sperling
The amount of third-party and re-used IP content in an SoC is on the rise, but once a decision to buy vs. make has been made, it doesn’t always stay that way.
In fact, chipmakers are swinging the pendulum back and forth across a variety of chips, building IP themselves, standardizing on another vendor’s IP, then sometimes rolling it back the other way. The reasons are usually performance and power, but sometimes it’s based on form factor, internal knowledge and expertise, whether commercial IP can be easily integrated, third-party vendor support—and whether any of those factors can command a pricing premium.
This isn’t always as clear-cut as it looks on the surface, either. It may be more expensive to use flash memory than DRAM, for example, but if the flash can be allocated so that certain functions use only the bits they actually need, then flash ultimately could be the less expensive option.
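A back-of-the-envelope illustration of that point (all prices, bank sizes, and function names below are hypothetical placeholders, not market figures): if DRAM can only be allocated in coarse banks while flash allocation can follow actual per-function usage, the technology that costs more per bit can still win on total cost.

```python
# Hypothetical cost comparison: DRAM vs. flash when functions can be
# mapped to only the bits they actually need. All prices and sizes are
# illustrative assumptions, not market data.

DRAM_COST_PER_MBIT = 0.010   # assumed $/Mbit
FLASH_COST_PER_MBIT = 0.015  # assumed $/Mbit (pricier per bit here)

# Mbit actually used by each function (hypothetical workload).
functions = {"boot": 8, "logging": 4, "lookup_tables": 20}

# With DRAM, suppose each function must be granted a full 64 Mbit bank.
dram_cost = len(functions) * 64 * DRAM_COST_PER_MBIT

# With flash, suppose allocation can follow actual usage per function.
flash_cost = sum(functions.values()) * FLASH_COST_PER_MBIT

print(f"DRAM (bank-granular): ${dram_cost:.2f}")   # $1.92
print(f"Flash (bit-granular): ${flash_cost:.2f}")  # $0.48
```

The per-bit price favors DRAM, but the allocation granularity dominates once most of each bank would sit unused.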
The term du jour in the memory market is “tiering,” which is basically mixing and matching memory as needed for capacity, bandwidth, latency and power. That includes high-bandwidth memory, stacked memory, DRAM, flash and possibly STT-MRAM, according to Bob Brennan, senior vice president of the system architecture lab for Samsung Electronics’ memory business.
“You can go wide (Wide I/O) and still have access to DRAM with the same latency,” said Brennan, referring to problems in the data center. “We’re facing a massive capacity problem with CPU scaling. You can scale the number of cores, but you still need about 2 gigabytes per core. The solution is to add a second tier of memory. That will require changes in the application and the tools. Everything has to come together, but the result will be a win for everyone in the industry. The architecture has not changed in about 20 years. We all have to do it together.”
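One way to picture tiering as Brennan describes it is as a placement policy: put each buffer in the cheapest tier that still meets its latency and bandwidth requirements. The sketch below is a minimal illustration; the tier names are memory classes mentioned above, but every number (latency, bandwidth, cost) is an assumed placeholder, not a vendor spec.

```python
# Minimal sketch of a memory-tiering policy: place each allocation in
# the cheapest tier that still meets its requirements. All tier
# parameters are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ns: float
    bandwidth_gbps: float
    cost_per_gb: float

TIERS = [
    Tier("flash", 25_000, 2,   0.10),
    Tier("dram",  80,     25,  3.00),
    Tier("hbm",   60,     256, 12.00),
]

def place(latency_req_ns: float, bandwidth_req_gbps: float) -> Tier:
    """Return the cheapest tier satisfying both requirements."""
    for tier in sorted(TIERS, key=lambda t: t.cost_per_gb):
        if tier.latency_ns <= latency_req_ns and tier.bandwidth_gbps >= bandwidth_req_gbps:
            return tier
    raise ValueError("no tier meets the requirements")

print(place(latency_req_ns=1_000_000, bandwidth_req_gbps=1).name)  # -> flash
print(place(latency_req_ns=100, bandwidth_req_gbps=100).name)      # -> hbm
```

Cold data lands in the cheap, slow tier and hot data pays for the fast one, which is the capacity-vs.-cost bargain tiering is meant to strike.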
Making modifications
That kind of approach is more about customizing with existing pieces than creating new ones, although it can be equally effective. In some cases, as with memory, it’s using some of the same pieces differently—stacking them, changing the bus architecture and the connectivity fabric, and only using as much memory as needed for a particular function.
One of the problems with customization is figuring out what parts of a design to customize, and just how far that customization should be pushed. IBM has a whole separate division that builds customized hardware and software for customers, but those typically are high-priced machines where cost is not a factor. For high-volume mobile chips, there is some pricing resilience built in, as well, which allows vendors of those chips to actually bend the process rules at foundries. But for the vast majority of chips, it’s typically an ROI exercise and custom work is kept to a minimum.
Still, there are big gains in performance and power for those with the time and money to figure out where the bottlenecks are—and potentially huge market differentiation. In a complex SoC, as with memory, sometimes it’s the interaction of components rather than the components themselves that can really bog down performance or increase power. And sometimes it’s the volume of information about those components that can affect how much engineering time is devoted to a project.
“For an IP-based design, the integration team frequently is working with a block-level representation,” said Ming Ting, senior manager and product specialist for RedHawk at Apache Design. “Each block can be represented as a black box or in full detail, but just adding more data doesn’t necessarily help because it can bog down the integration and simulation. You need to build a methodology so you can do a thorough analysis of every single block and not find any surprises. The only way to do that is to decide what information you need versus what you want to throw out.”
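A rough sketch of the kind of triage Ting describes: keep full detail only for blocks whose contribution to the analysis warrants it, and abstract the rest so simulation stays tractable. The fields and thresholds here are hypothetical; a real flow would derive them from the power budget and the analysis being run.

```python
# Hypothetical triage: keep a detailed model only for blocks that matter
# to the analysis, and a black-box abstraction for the rest.
# Fields and thresholds are illustrative assumptions.

def choose_representation(block: dict, power_budget_mw: float) -> str:
    """Pick 'detailed' or 'black_box' for one IP block."""
    significant_power = block["peak_power_mw"] > 0.05 * power_budget_mw
    noisy_interface = block["switching_io_count"] > 100
    return "detailed" if (significant_power or noisy_interface) else "black_box"

blocks = [
    {"name": "cpu_cluster", "peak_power_mw": 900, "switching_io_count": 400},
    {"name": "uart",        "peak_power_mw": 2,   "switching_io_count": 8},
]
for b in blocks:
    print(b["name"], "->", choose_representation(b, power_budget_mw=5000))
```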
One of the biggest problems in all of this is that the number of unknowns, or X’s, is skyrocketing. “In multiprocessing and multicore designs, the physical memory is limited, so even if you take advantage of multiple CPUs the memory is still shared,” Ting said. “That’s one challenge. A second challenge is to partition everything into smaller blocks. You divide and conquer for block-level analysis, but you have to sign off on the entire power delivery network. That means you need a global solution. Unless you have a very good methodology, you’re not going to be able to solve all the unknowns.”
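The divide-and-conquer point can be made concrete with a toy power-delivery-network check: every block can pass its local IR-drop budget in isolation, yet the shared rail can still fail once simultaneous peaks are summed, which is why global sign-off is unavoidable. All currents, resistances, and budgets below are assumed for illustration.

```python
# Toy PDN sign-off: per-block IR-drop checks, then a global check on the
# shared rail. All electrical values are assumptions for illustration.

VDD = 0.9                   # volts, assumed supply
GLOBAL_BUDGET = 0.05 * VDD  # assumed 5% IR-drop budget on the shared rail
RAIL_RESISTANCE = 0.005     # ohms, assumed lumped PDN resistance

peak_current_a = {"cpu": 3.2, "gpu": 5.1, "modem": 1.0}  # amps per block

# Block-level (divide and conquer): each block looks fine on its own.
for name, amps in peak_current_a.items():
    drop = amps * RAIL_RESISTANCE
    print(f"{name:6s} local drop {drop * 1000:5.1f} mV "
          f"({'ok' if drop <= GLOBAL_BUDGET else 'over budget'})")

# Global sign-off: simultaneous peaks share the same rail, and the sum
# can blow the budget even though every block passed individually.
total_drop = sum(peak_current_a.values()) * RAIL_RESISTANCE
print(f"global drop {total_drop * 1000:5.1f} mV ->",
      "PASS" if total_drop <= GLOBAL_BUDGET else "FAIL")
```

Here each block stays well under the 45 mV budget on its own, but the combined 46.5 mV drop fails, the surprise a purely block-level methodology would miss.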
Part of the analysis also has to include the package, which is no longer just an inexpensive part of the design. “As of today, we’ve seen a lot of design failures,” he said. “The packaging team and the chip team have to work together to find a solution that’s still cost-effective. So you need both chip and package information, and you have to look at them together.”
This doesn’t always produce the expected results. NXP has gone from custom IP to standard IP and back to a mix of both for its microcontrollers, despite the industry’s trend toward more standard IP.
“The rule of thumb is that things driven by standards you buy, and things that are custom or unique you make,” said Robert Cosaro, NXP’s MCU architecture manager. “That’s not always the case, though. We used to buy USB IP, but now we make it because we can do better with smaller parts.”
In the microcontroller market, power consumption is a differentiator, Cosaro noted. But just because the components are supposed to be low power doesn’t mean that’s easy to prove. NXP has optimized its PLLs and flash and lowered the operating voltage, yet one of its customers was reporting power numbers that were multiples of what NXP advertised. After weeks of analysis, it turned out the customer had done the integration work incorrectly, but the amount of work necessary to prove that was non-trivial.
Knowing when to stop
One other advantage of customization is that it can provide market differentiation.
“The big question with customization is how much is enough,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “At one level, the answer is straightforward because the consumer tells you. The market resists being homogenized. That differentiation used to be done in hardware, but with a dramatic increase in SoC starts, it is happening more in software.”
Software differentiation isn’t cheap, though. It takes a lot of engineers working long hours to achieve. “So how do you contain the cost?” Gianfagna said. “If there is no breakthrough in writing software, companies will do the bare minimum to beat a competitor. That means a minimum amount of modification and test, and making sure it’s backward compatible. But what if someone figured out a way to reduce the cost of software development? That would change the entire industry.”