Experts at the table, part 1: The importance of memory, especially if users stay at 28nm, plus issues such as security and place and route.
Memories have become a hot topic, so Semiconductor Engineering sat down with experts during DAC to discuss some of the issues. Taking part in the conversation were Herbert Gebhart, vice president of interface and system solutions in the Memory and Interfaces Division of Rambus; Bernard Murphy, chief technology officer of Atrenta; Patrick Soheili, vice president and general manager for IP Solutions and vice president for business development at eSilicon; and John Koeter, vice president of marketing for the solutions group at Synopsys. What follows are excerpts of that conversation.
SE: If memory consumes 50% of a chip's surface area and power, why do we spend so little time and attention on it?
Gebhart: Memory design has become so specialized that only a few companies do it, because it requires such close interaction with the silicon process. The rest of the ecosystem and industry depends on those few companies for their solutions.
Murphy: From a logic design perspective there is not a lot you can do. You hope the memory designer has done the work to manage power, security and everything else. What you have to do is select the right memory.
Soheili: We get access to the process nodes early on, and we keep working on and focusing on the memories.
Koeter: Since the bit cells come from the foundry, you may ask, 'How much optimization can you do?' The short answer is a lot. While we all use the same bit cell, you can wrap a lot around it: sleep modes, shutdown, dual voltage rail supplies for the periphery and core cells, and much more. As we move to finFET, it opens a whole new space. It allows us to run at much lower voltage, so we can run memories at 0.5V, well below nominal Vcc. This also opens up new challenges, because the SPICE models from the fabs don't accommodate these ultra-low voltages.
Murphy: What the design team cares about is how many knobs they can use to adjust the memory.
Soheili: Another way to look at it is to compare how much money is spent on enhancing the logic with what IP providers and foundries spend on the memories. It may not be a 50-50 split, but it is considerable. Memories have the advantage of a lot of repetition, and that may reduce the total time.
SE: More people are accepting that we have reached the end of Moore's Law and the economics of scaling. If more people stick with 28nm, what impact will this have on the memory IP industry?
Koeter: I wouldn't classify it as good or bad. It is just a reality. We will see a lot of new innovation coming in at 28nm. We have just heard about Samsung and ST jointly putting together 28nm FD-SOI, and TSMC announced HPC, which is a more compact version of its high-k/metal-gate process. From a process standpoint we will continue to see a lot of optimization. Two major markets are rushing to finFETs: high-end application processors and the cloud computing guys. For application processors it is a mix of power and performance, and for cloud computing it is all about performance.
Soheili: It democratizes the market, so more people can get access to the technology. This will build momentum in the marketplace and allow more people to do more chips, and that is good for the industry.
Murphy: Just because we stay at 28nm does not mean that the design of memories stops. There are a huge number of things you can do, in areas such as power and security. That requires more creative design, and you can continue to do it at 28nm.
Gebhart: People stayed at 180nm for quite a long time, and there were opportunities for trailing-edge IP. It means you have to differentiate in other ways. So, to first order, the slowdown is not a detriment.
SE: What does security have to do with memory?
Murphy: This is an underserved topic. Memory is a soft spot for security. Consider cache memory. You can use side-channel attacks that rely on differential power analysis, or timing analysis in which you time an algorithm many times and use statistical analysis to extract information such as encryption keys. This is particularly relevant to virtual machines, where two processes are running on a single machine and sharing a cache. You can create a victim-and-attacker configuration in which the attacker floods the cache and waits to see what gets overwritten by the victim process. Then, using differential power analysis, you can extract a lot of information. So you have to think about partitioning caches to stop this from happening.
Soheili: And the way you do that is by dividing and conquering.
Murphy: Yes. You can either split it or randomize it. The beauty of a virtual machine is that all resources are shared but you can’t do that anymore.
Soheili: Cloud storage has the same issue.
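The victim-and-attacker cache flooding Murphy describes is commonly known as prime-and-probe. The toy Python model below is only a sketch of the idea; the cache model, names and addresses are invented for illustration, and a real attack would infer victim activity from access latency or power measurements on shared hardware rather than from a simulated hit flag.

```python
# Toy prime-and-probe sketch (illustrative only; names and model are invented).
import random

NUM_SETS = 16  # small direct-mapped cache for the example

class ToyCache:
    def __init__(self, num_sets=NUM_SETS):
        self.owner = [None] * num_sets        # which process last filled each set

    def access(self, addr, who):
        s = addr % len(self.owner)            # set index derived from the address
        hit = (self.owner[s] == who)
        self.owner[s] = who                   # the access fills/replaces the line
        return hit

def prime(cache, addrs):
    # Attacker floods the cache with its own lines.
    for a in addrs:
        cache.access(a, "attacker")

def probe(cache, addrs):
    # Re-access the same lines; a miss means the victim evicted that set.
    return sorted(a % NUM_SETS for a in addrs if not cache.access(a, "attacker"))

cache = ToyCache()
attacker_addrs = list(range(NUM_SETS))

prime(cache, attacker_addrs)

# Victim activity whose cache footprint depends on secret data,
# e.g. key-dependent table lookups in a shared cache.
secret_sets = random.sample(range(NUM_SETS), 4)
for s in secret_sets:
    cache.access(s, "victim")

print("victim touched sets:", probe(cache, attacker_addrs))
print("actual secret sets: ", sorted(secret_sets))
```

Partitioning or randomizing the cache, as the panelists suggest, breaks the assumption that attacker and victim lines land in the same sets.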
SE: Do we have the right tools to help users select the best memories?
Murphy: The first thing is to decide whose memory you are going to use. You don’t change this decision often. Then the IP provider will have an infrastructure to generate instances of memories.
Gebhart: But they still have to work out when they need a dual port or a multiport memory. This is an architectural problem and they may need help deciding which memory types and sizes to use.
Koeter: We do have platform architecture tools that can be used for exactly this kind of system exploration. They take traces from a processor, combine them with an interconnect model and a model of the memory interface, and let you sweep the configuration space, including bus width, interrupts and the number of masters, and from this determine latency, throughput and other properties of the system. Many decisions are application-specific. GPUs, for example, often use two-port memories, while in a processor it is likely to be a register file.
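The kind of sweep Koeter describes can be illustrated with a back-of-the-envelope script that walks the configuration space and ranks candidates. The parameter values and the analytic latency model below are invented for the example; a real platform architecture tool would replay processor traces against cycle-accurate interconnect and memory-controller models.

```python
# Illustrative configuration sweep (all numbers and the model are invented).
from itertools import product

bus_widths_bits = [64, 128, 256]
num_masters     = [2, 4, 8]
mem_ports       = [1, 2]

def estimate(bus_bits, masters, ports, clock_mhz=800, bytes_per_req=64):
    # Rough analytic model: throughput scales with bus width and port count,
    # latency grows as more masters contend for the interconnect.
    peak_gb_s  = (bus_bits / 8) * clock_mhz * ports / 1000
    contention = 1 + 0.15 * (masters - 1)
    beats      = bytes_per_req * 8 / bus_bits
    latency_ns = beats * (1000 / clock_mhz) * contention
    return peak_gb_s, latency_ns

for bus, m, p in product(bus_widths_bits, num_masters, mem_ports):
    gb_s, lat = estimate(bus, m, p)
    print(f"bus={bus:3d}b masters={m} ports={p} -> {gb_s:6.1f} GB/s, {lat:5.1f} ns")
```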
Soheili: On the chip design side of things, we don't get involved in that selection. When we are handed a place-and-route job and are trying to close timing, we often run into problems because of the preselected memory types, which have speed and power issues. We often need some flexibility to change them.
Koeter: At a bare minimum it has a dramatic effect on the floorplan. This is why we allow different aspect ratios, and users will often run the tool many times to get the optimum configuration. Sometimes floorplans can limit the sizes of memories.
Soheili: Consider a 10 x 10 chip at 65nm with fixed complexity and 50% memory. The physical implementation of it was about 2,000 hours of engineering, or about one man-year. The same-complexity chip at 28nm takes about 6X the number of hours to close. When you add variations or take away flexibility, there are fewer degrees of freedom and that number blows up further.
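By Soheili's own numbers, six times 2,000 hours works out to roughly 12,000 engineering hours, or about six man-years, just to close the physical implementation of a same-complexity chip at 28nm, before any loss of flexibility is factored in.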
Gebhart: Are you suggesting optimizing specific instances to help with place and route or optimize timing?
Soheili: Yes. Use the compiler for 90% of the instances and use an optimized instance for the ones in the critical path.