Wide I/O’s Impact On Memory

Opinions differ about the projected success of Wide I/O, but one way or another the logic-to-memory bandwidth problem is getting solved.

By Ann Steffora Mutschler
Driven by the need to reduce power but increase bandwidth in smart phones and other mobile devices, system architects are grappling with new technologies to take system performance to the next level. Wide I/O, as well as some DDR technologies, are vying for center stage in tomorrow’s leading-edge mobile designs.

“The big technological advancement that allows a protocol standard like Wide I/O to have a chance is the belief that we will be able to manufacture in high volume these chips with through-silicon vias (TSVs),” noted Drew Wingard, CTO of Sonics. “Like many new technologies, but this one in particular, there are a lot of challenges not on the design side but on the actual technology development side—and then the quality improvement necessary to make all of these new steps work in high-volume manufacturing. Wide I/O is part and parcel with the emergence of the smartphone as a really important category of communication device, consumer electronic device or computing device.”

What’s particularly interesting about Wide I/O is that it will be the test case for high-volume manufacturing of TSVs and the business and integration challenges of getting DRAM from one company and logic from another. In addition, the memory bandwidth bottleneck on the way to DRAM has been a huge problem in computer architecture for the past several decades. Wide I/O dramatically improves this bandwidth for a relatively low cost.

“In computing systems, to get this kind of bandwidth we’ve been needing to use multiple DRAMs in parallel,” said Wingard. “We don’t need that much memory itself. We don’t need that much memory capacity in a smart phone form factor. In fact, many smart phones should be able to get away with only one or two Wide I/O DRAMs for their entire main memory system. From a cost perspective it’s really attractive. You don’t have to buy all these extra DRAMs and find some place to put them inside your box or case in order to get the bandwidth that you need.”

Marc Greenberg, director of marketing for Cadence’s SoC realization group, agrees that most of the early adopters of Wide I/O in the mobile space are using the technology as a direct replacement for the existing technology today, which is LPDDR2.

“But I think very quickly we’ll start seeing it in other places where it does become an L4 cache or something like that. It’s still kind of early to predict how that exactly might work out. A lot of these items are still very much in a formative phase in the industry,” he observed.

Greenberg noted that the manufacturing and supply chain issues are real and today the technology to use TSVs is fairly expensive. “It has reliability and test issues that are different from the technologies that are commonly used today. Those are issues that we’re assuming will get worked out over time. It’s going to be a process of time to improve both the capability of the TSV process as well as the cost of it. But it certainly will come down over time.”

From a design perspective, Sonics’ Wingard said Wide I/O is easy. “That’s true at the level of the actual protocol between the logic chip and the DRAM, which is far simpler than what we’re dealing with in regular DDR. There’s no PHY to speak of, there are no funny clock circuits—it uses full swing CMOS signaling—and it’s really nice from that perspective. There are a couple little wrinkles around test but it’s pretty easy. Where it gets challenging is, in order to deliver this much bandwidth, they’ve gone very wide. That’s why it’s called Wide I/O.”

However, load balancing is a real problem with Wide I/O because most smart phone designs have a single channel of external memory, he explained. “Some of the very current generation, the ones that are just shipping or just starting to ship, have gone to two-channel memory systems and already have this problem of how to spread the data across the two memories. When it’s time to access it, you’re going to be accessing these just about as often from Channel 0 as from Channel 1 because the two channels are not connected to each other in any way except by the interconnect within the SoC. Based upon an address, your data is either here or there and you’ve got to get to the right channel. That’s why this load balancing becomes such a big deal.”
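The address-to-channel mapping Wingard describes can be sketched as a simple interleave on the address (a hypothetical illustration; real memory controllers pick the interleave granularity to trade locality against even load spreading):

```python
# Hypothetical sketch of address interleaving across DRAM channels.
# Addresses are striped across channels at a fixed granularity, so
# consecutive blocks of data alternate between Channel 0 and Channel 1.

INTERLEAVE_BYTES = 4096   # stripe size (illustrative; here one 4KB page)
NUM_CHANNELS = 2

def channel_for_address(addr: int) -> int:
    """Return which channel holds the data at this physical address."""
    return (addr // INTERLEAVE_BYTES) % NUM_CHANNELS

def channel_offset(addr: int) -> int:
    """Return the address within the selected channel's own memory."""
    stripe = addr // (INTERLEAVE_BYTES * NUM_CHANNELS)
    return stripe * INTERLEAVE_BYTES + addr % INTERLEAVE_BYTES

# A linear scan alternates channels every stripe, spreading traffic evenly:
hits = [channel_for_address(a) for a in range(0, 4 * INTERLEAVE_BYTES, INTERLEAVE_BYTES)]
# hits == [0, 1, 0, 1]
```

This is why, as Wingard notes, accesses land about as often on Channel 0 as on Channel 1: the stripe decides where the data lives, and the SoC interconnect must route each request to the right controller.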

Wide I/O DRAM with its four channels, each with an accompanying memory controller. The fabric shows where interleaving would take place.

Wingard pointed out that there are other ways of trying to do this that involve using lots and lots of buffering, which means lots and lots of extra area on the SoC as well as longer latencies for the CPUs.

What about LPDDR?
Getting highly complex, feature-rich smartphones to work with very low-power DRAMs is at the top of development teams’ agendas, with the thrust of the requirement coming from something that can be small and consume very little energy as it interfaces to a high-speed memory.

Navraj Nandra, senior director of analog/mixed signal IP at Synopsys, said that about 18 months ago when buzz began building around Wide I/O, Synopsys concluded that Wide I/O is really a throwback approach.

“Everyone’s been marching along this serial path and what Wide I/O does is goes back to a low frequency single data rate so it’s not even DDR anymore,” he said. “It’s SDR. It’s just a huge whopping bus. The way it was suggested, if you could figure out some packaging technology, like TSV, and if you could make that manufacturable in high volume, then Wide I/O would succeed. That was the positioning about 18 months ago of this new technology. The bottom line was that it required some kind of breakthrough in packaging technology.”
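Nandra’s point about trading clock rate for width shows up in a rough peak-bandwidth calculation. The numbers below are illustrative, based on published figures: four 128-bit SDR channels at 200 MHz for Wide I/O, versus a 32-bit LPDDR3 interface at an 800 MHz clock with double data rate:

```python
# Rough peak-bandwidth comparison: a wide, slow SDR bus vs. a narrow,
# fast DDR bus. Numbers are illustrative, from published figures.

def peak_bandwidth_gbs(bus_bits, clock_mhz, transfers_per_clock):
    """Peak bandwidth in GB/s (10^9 bytes per second)."""
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# Wide I/O: 4 channels x 128 bits, single data rate at 200 MHz
wide_io = peak_bandwidth_gbs(4 * 128, 200, 1)   # 12.8 GB/s
# LPDDR3: 32 bits, double data rate at an 800 MHz clock
lpddr3 = peak_bandwidth_gbs(32, 800, 2)         # 6.4 GB/s
```

The wide bus reaches its bandwidth at a quarter of the LPDDR3 clock, which is what allows full-swing CMOS signaling with no PHY.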

In the meantime, LPDDR3 started to be talked about. LPDDR3 is projected to appear around the same timeframe as Wide I/O—sometime next year.

The table below compares key attributes of the technologies.

Source: Synopsys

Nandra expects Wide I/O to eventually become reality when people figure out packaging technology. “I just think it’s going to take a lot longer because there’s the initial part of getting it all to work. Then you have to get it into production and make it cheap in volume production. I think LPDDR is going to have an extended life and that LPDDR3 is going to delay the adoption of Wide I/O. LPDDR3 satisfies the bandwidth demand and it doesn’t have a lot of the technology challenges that require TSVs and all that, so it’s a good-to-go technology.”

The spec is in the process of being finalized in the JEDEC standards organization, with the basic specification expected in December. LPDDR3 also boasts some of the industry’s heaviest hitters behind it: ST, TI, Renesas, Samsung and Panasonic.

But there are counterpoints to this argument from the Wide I/O proponents, as well.

“With Wide I/O you get rid of the PHY and you use less power,” said Kurt Shuler, director of marketing at Arteris. “You’ve got 4 x 128 individual pins talking to RAM. The only problem is cost, so it will be relegated to stacks on a chip that is high-margin and high-volume. But if there are 512 pins situated on a die and there is no nice interconnect or switch you’re going to have huge routing congestion.”

On the shoulders of the system architect
With the move into high speeds, the system architect has to have a good understanding of all the system tradeoffs and how they impact the available bandwidth, Synopsys’ Nandra said. “You can run at these super high speeds, but if you’ve got a lot of noise in your system through clock skews or poor package understanding you’re going to eat into that available bandwidth. In theory, you might have 12.8 GB/s with Wide I/O, but if you’ve done a really poor job with load balancing and a lousy package that’s going to come way down. So theoretically, you had a very nice bandwidth but with all the system issues that may come up, if you haven’t thought them through, that bandwidth goes down.”
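The load-balancing effect Nandra describes can be quantified with a back-of-the-envelope model (a hypothetical sketch: assume the busiest channel saturates first and caps the throughput of the whole memory system):

```python
# Back-of-the-envelope model: if traffic is unevenly spread, the busiest
# channel saturates first and limits the memory system's effective throughput.

def effective_bandwidth(peak_gbs, channel_shares):
    """Effective bandwidth when the most-loaded channel is the bottleneck.

    channel_shares: fraction of total traffic hitting each channel (sums to 1).
    """
    per_channel_peak = peak_gbs / len(channel_shares)
    # Total traffic can grow only until the busiest channel hits its peak:
    return per_channel_peak / max(channel_shares)

# Perfectly balanced traffic across four channels keeps the full 12.8 GB/s:
balanced = effective_bandwidth(12.8, [0.25, 0.25, 0.25, 0.25])  # 12.8
# A skewed mapping where one channel takes half the traffic halves it:
skewed = effective_bandwidth(12.8, [0.5, 0.2, 0.2, 0.1])        # 6.4
```

Package noise and clock skew erode the per-channel peak itself, compounding the loss from poor interleaving.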