High-Stakes Domination

Winning a market depends on system performance, and the only way to improve it is to boost DRAM access and eliminate choke points.


By Frank Ferro
As an I IP provider, I have been watching with great interest the patent battles taking place between the major players in the wireless market. The now seemingly daily announcements of new lawsuits in the mobile consumer market translate to several things—wireless devices have become true consumer items, the dollars are high and new companies have reshaped the market landscape.

When I was selling wireless baseband chipsets, the first (and $64,000) question asked by the companies (which shall remain nameless) trying to establish themselves in the wireless market was, “Will you indemnify us?” Clearly they all knew the day would come when the incumbent players were challenged to the point of asserting their patents. And assert them they should; they have years of investment in these technologies, and their intellectual property has to be protected. Asserting basic patents is undoubtedly a way to get paid, but what companies really want is total market dominance.

Today, the answer to “dominating” (from a technology perspective) often comes down to system performance, which gives the user the best experience, and improving performance typically means adding more processor cores. The challenge, however, has always been how to utilize all those processors effectively to gain the maximum performance increase. If we look at the real system performance problem, it usually boils down to DRAM access. Regardless of how many processors the system has, memory is the choke point. Most of the system traffic has to access DRAM, and any inefficiency in the memory subsystem affects overall system performance (i.e., the number of applications that can run effectively), power and cost.

To solve these challenges, several areas of IP investment in the memory subsystem are emerging today, including embedded DRAM, Wide I/O and through-silicon via (TSV) technology. To complement these silicon improvements, there is also an effort to improve quality-of-service (QoS) algorithms, non-blocking network flow control, memory scheduling and interleaved memory access technology (IMT). These are the key components that will be necessary to take full advantage of today’s and tomorrow’s new memory technologies.

Let’s first consider the effect of using embedded DRAM. This long-promised technology has clear system performance and power advantages. Not having to access external DRAM saves significant power (all those pins gone, and no PHY) and increases performance for the same reasons. When better flow-control algorithms are layered on top of these gains, the net system improvement can exceed that of adding a second processor core! The use of Wide I/O DRAM will have a similar effect by opening up the DRAM bottleneck. Having fast access to multiple banks of DRAM, with efficient load-balancing technology such as IMT, will provide the step function in performance needed to drive true product innovation.

As products commoditize, it becomes increasingly difficult for companies to differentiate. The user interface and underlying chipsets can only be so “unique” from the consumer perspective (hence the emergence of most patent battles). Although the product paradigm is established for now, innovation in the memory subsystem will be a key element of overall technology innovation, moving today’s products forward and enabling entirely new product paradigms.

With the stakes so high in the mobile market, I expect the fight for market share to continue by all means available. How it plays out will be interesting to watch. But one thing is clear: the companies that take full advantage of these shifts in the underlying SoC IP will be the ones that stay on top, while the rest suffer the fate of some incumbent players, namely a slow, painful death.

–Frank Ferro is director of marketing at Sonics.