Electronics Butterfly Effect

The electronics industry is about to go through the biggest change it has seen since the introduction of the transistor. Every decision from the past 50 years could be changed.


Everyone has heard of the butterfly effect, where a small change in a non-linear system can result in a large difference in the outcome. For the past 40 years, the electronics industry has approximated a linear system, fed primarily by Moore’s Law. The incremental changes available at each new process node have led us to make incremental changes and improvements in many aspects of a design, its architecture and the design process.

With the future of Moore’s Law now in question, or at least slowing, many new technologies have been in the research pipeline and some of them are getting very close to the limelight. This creates an interesting situation. Some of them are not incremental improvements over what came before. They are fundamentally different. And they are about to turn the electronics industry into a non-linear system, which could cause a rethinking of many decisions that have remained unquestioned for decades.

Architectural innovation has been attempted before, but Moore’s Law actually stifled it. Kevin Cameron, principal at Cameron EDA, recalls events from his early career. “Way back I worked for Inmos, and they were working on parallel processing. While you could do something that worked, you were always up against Intel. Intel was always on the next node, so they were always faster. This killed many new architectures. If Intel can no longer scale its stuff, does that provide an opportunity for others to come in and change the architecture?”

Intel has said it can scale to at least 5nm, despite some recent glitches with its finFET process. And any changes due to a shift in direction away from shrinking features will not happen overnight because there is so much inertia in the system, even if the number of companies following Intel’s lead continues to shrink. Design companies will attempt to use new technologies as if they were incremental improvements for as long as they can. But eventually, someone will break with convention and the butterfly effect will begin.

The result will be a lot of uncertainty and possibly even some instability for a while. However, it will probably result in the most rapid change of technology that has ever been seen. There will be many losers, but just as many winners. What is certain is that no part of the supply chain will emerge unscathed.

This is the first in a series of articles that will examine the impact of some of the emerging technologies. Semiconductor Engineering wants to hear from you as we explore what may happen by 2025. How will the industry be different, and what impact will these technologies have on the semiconductor industry, the IP industry, EDA, design methodologies, architectures and software? All of these areas are under pressure, and we can either sit back and watch things evolve or attempt to look at what may be possible, perhaps even inspiring some people to try approaches different from those that have been in use for decades. [Drop me an email if you have ideas for what the industry may look like in 10 years.]

Let’s refresh our memory
Consider memory. The relationship between memory and processing was defined in the 1940s. In fact, the entire computing paradigm is based upon it. The architecture called for a single contiguous memory space that would hold both instructions and data and would be accessed through one, or in some cases two, bus-based communications structures.

While the efficiency of this arrangement has been challenged in the past, it has remained in place largely because of the inertia built into the system and the ways in which both processors and memories are designed. The investment in the software base is so large that we have migrated into an era where the hardware is made to do everything possible to extend the existing paradigm without upsetting the software.

The result of this hardware dependence on software is evident in the migration to multiple processors. The vast majority of the memory space still remains as one large contiguous memory space that is coherent between all of the processors. “The problem with symmetric multi-processing (SMP) is that you are always pulling data to the processor and everything gets stuck in the data bus coming from memory,” says Cameron. “So you try and add cache coherence and that creates even more overhead.”
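A back-of-envelope way to see Cameron’s point is to treat the shared memory bus as a fixed pool of bandwidth that every core, plus the coherence traffic, has to share. The sketch below does exactly that; the bandwidth, per-core demand and coherence-overhead figures are illustrative assumptions, not measurements of any real system.

```python
# Sketch: why adding cores to a shared memory bus gives diminishing returns.
# All figures are illustrative assumptions, not measured values.

BUS_BANDWIDTH_GBS = 25.6      # assumed peak bandwidth of one DDR channel, in GB/s
DEMAND_PER_CORE_GBS = 8.0     # assumed traffic one busy core would like to generate, in GB/s
COHERENCE_OVERHEAD = 0.15     # assumed fraction of bus capacity lost to snoop/coherence traffic

def per_core_bandwidth(num_cores: int) -> float:
    """Bandwidth each core actually receives once the shared bus saturates."""
    usable = BUS_BANDWIDTH_GBS * (1.0 - COHERENCE_OVERHEAD)
    return min(DEMAND_PER_CORE_GBS, usable / num_cores)

for cores in (1, 2, 4, 8, 16):
    got = per_core_bandwidth(cores)
    print(f"{cores:2d} cores: {got:5.2f} GB/s per core "
          f"({100 * got / DEMAND_PER_CORE_GBS:.0f}% of demand)")
```

With these assumptions the first couple of cores get all the bandwidth they ask for, and everything beyond that simply divides up what is left.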

Where do all of the problems start? “In software there are dependencies,” says Drew Wingard, CTO at Sonics. “I cannot do this until I have done that, and that required an access to memory. But if that memory is not sufficiently close to the processing element, I have to wait for it before the next step can happen. That is the reality.”

This highlights the problem with the architecture: there is a speed disparity between processing and memory that has been growing wider and now sits at about 1,000 to 1. The power disparity between them is also growing. Part of the problem with both is the time and energy consumed in the transfer of data off-chip.
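To put rough numbers on that disparity, the short sketch below compares an assumed CPU cycle time against an assumed off-chip DRAM access time, and an assumed on-chip versus off-chip energy per bit moved. The constants are order-of-magnitude illustrations rather than figures from the article; with these particular values the gap works out to a few hundred cycles per access and roughly two orders of magnitude in energy.

```python
# Back-of-envelope sketch of the processor/memory speed and energy gap.
# All constants are assumed, order-of-magnitude values for illustration only.

CPU_CYCLE_NS = 0.3            # assumed cycle time of a ~3GHz core
DRAM_ACCESS_NS = 100.0        # assumed latency of a full off-chip DRAM access
ON_CHIP_PJ_PER_BIT = 0.1      # assumed energy to move a bit a few mm on-chip
OFF_CHIP_PJ_PER_BIT = 10.0    # assumed energy to drive a bit across a DDR interface

speed_gap = DRAM_ACCESS_NS / CPU_CYCLE_NS
energy_gap = OFF_CHIP_PJ_PER_BIT / ON_CHIP_PJ_PER_BIT

print(f"Cycles spent waiting for one DRAM access: ~{speed_gap:.0f}")
print(f"Energy penalty per bit for going off-chip: ~{energy_gap:.0f}x")

# Energy to stream a 1MB buffer from off-chip DRAM versus on-chip memory
bits = 1_000_000 * 8
print(f"1MB off-chip: {bits * OFF_CHIP_PJ_PER_BIT / 1e6:.0f} uJ, "
      f"on-chip: {bits * ON_CHIP_PJ_PER_BIT / 1e6:.1f} uJ")
```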

“The whole point of Rambus is to help the DDR interface, where memory is connected to the logic using an analog structure,” says Zvi Or-Bach, president and CEO of MonolithIC 3D. “This enables it to send information very quickly and convert it back to logic. The DDR interface allows the memory and logic to interface by manipulating the data in a way that allows it to go faster, but the assumption is based on the fact that those two entities are separate. If you put them together, all of that disappears. The connection channel is prohibitive—8 bits, 16 bits, 128 bits—it is inconceivable that this would get bigger than 256 because the mechanics become almost impossible. When memory is put on top of logic, you could have hundreds of thousands or even a million wires connecting memory and logic. It is not just an enhancement. It changes the whole paradigm and will change the architectures.”
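Or-Bach’s point about connection counts can be made concrete with a toy bandwidth comparison: an external memory bus is limited to a relatively small number of pins driven very fast, while memory stacked directly on logic could, in principle, trade signalling speed for sheer wire count. The wire counts and per-wire data rates below are hypothetical and chosen only to show how the trade-off scales.

```python
# Toy comparison: narrow, fast off-chip bus vs. wide, slower vertical connections.
# All figures are hypothetical and only meant to show how the trade-off scales.

def bandwidth_gbs(num_wires: int, bits_per_wire_per_sec: float) -> float:
    """Aggregate bandwidth in GB/s for a given wire count and per-wire signalling rate."""
    return num_wires * bits_per_wire_per_sec / 8 / 1e9

# A 128-bit external DDR-style interface signalling at an assumed 3.2Gb/s per pin
external = bandwidth_gbs(128, 3.2e9)

# An assumed 100,000 vertical connections between stacked memory and logic,
# each toggling at a modest assumed 200Mb/s
vertical = bandwidth_gbs(100_000, 200e6)

print(f"128-wire external bus: {external:8.1f} GB/s")
print(f"100k vertical wires  : {vertical:8.1f} GB/s ({vertical / external:.0f}x)")
```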

Building chips vertically has been steadily gaining steam over the past year. “Die stacking with conventional inter-chip interposers and bonds, and through-silicon via connections, is becoming fully mainstream,” says Chris Rowen, fellow at Cadence. “It will enable tight, cost-effective integration of disparate semiconductor technologies for memory, logic, MEMS and RF, lower energy for communications, and new form factors for products.”

It will also have a major impact on memories. “The 3D and memory trends are closely linked,” says Dave Eggleston, principal of Intuitive Cognition Consulting. “Packaging technologies do not change the economics. Fab integration in 3D changes the economics. Packaging is only a half-step.”

Or-Bach explains why 3D memory is more than just die stacking. “3D memory can be architected so that with the processing of one layer, you can achieve multiple layers. So when Samsung and Toshiba went from 24 layers to 32 and then 48 layers, the incremental cost of adding more layers was almost insignificant. They are processing all of the layers together, and that is the key for 3D memory. These types of gains are not achievable by stacking them.”
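A toy cost model illustrates why processing all of the layers together matters. Assume a fixed cost for the shared patterning and peripheral steps, a small incremental cost per additional monolithic layer, and compare the resulting cost per bit against stacking separately processed dies. Every number below is invented purely to show the shape of the curve.

```python
# Toy cost-per-bit model: monolithic 3D memory vs. stacking finished dies.
# Every figure here is invented purely to illustrate the scaling argument.

BASE_WAFER_COST = 1000.0      # assumed cost of the shared patterning/peripheral steps
COST_PER_MONO_LAYER = 30.0    # assumed incremental cost of one extra monolithic layer
COST_PER_STACKED_DIE = 800.0  # assumed cost of one fully processed die plus bonding
BITS_PER_LAYER = 1.0          # normalised capacity per layer

def monolithic_cost_per_bit(layers: int) -> float:
    return (BASE_WAFER_COST + layers * COST_PER_MONO_LAYER) / (layers * BITS_PER_LAYER)

def stacked_cost_per_bit(layers: int) -> float:
    return (layers * COST_PER_STACKED_DIE) / (layers * BITS_PER_LAYER)

for layers in (24, 32, 48):
    print(f"{layers} layers: monolithic {monolithic_cost_per_bit(layers):6.1f} per bit, "
          f"stacked {stacked_cost_per_bit(layers):6.1f} per bit")
```

With these assumptions the monolithic cost per bit keeps falling as layers are added, while the cost per bit of stacked dies stays flat.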

This does not mean that integration using TSVs does not have value. “TSVs reduce the I/O power considerably, but they do not reduce the cost,” says Eggleston. “It gives you more density and it has its place.”

Pranav Ashar, chief technology officer at Real Intent, goes further: “3D manufacturing is making rapid progress in allowing logic, interconnect, memories and analog to be layered in technologies best suited to them. For example, you could have a 40nm GaAs opto-electronic interconnect layer sandwiched between logic and memory layers for inter-processor communication and to maintain coherence. Further, this will allow chips like memories, in which varying circuit types are currently shoehorned into one die, to be disintegrated into heterogeneous layers for better performance.”

New memories
The ability to integrate memory into the package will not, on its own, create the non-linear changes. There is a second important part of this change: SRAM and DRAM stopped scaling a long time ago, and that has intensified research into new kinds of memory.

“With the next generation memories (spin torque, ReRAM, phase change, cross point) there is commonality and the first of these is the material science,” says Eggleston. “These all require new materials in the fab, and this means that they have long lead times. These are decade-long projects of trying out different atoms and different combinations of atoms. They are also using new physics. It is not charge storage. Each of them is uniquely different – spin in magnetic materials, heat transition in an amorphous material, ionic movement. You have to model more than electronics. You have to model the ionics as well.”

Whenever 3D stacking is mentioned, there are usually concerns about power and heat build-up in the sandwiched layers. “Memory, in general, is not power hungry because it is non-volatile and does not require power for standby,” explains Or-Bach. “It only consumes power when you read and write. Wires are relatively short which reduces capacitance so you can get more bits for the same amount of power. You can also connect more bits for the same cost.”

For all the excitement surrounding these new memories, it is possible that they will not cause change to happen. “I am a recovering memory designer,” admits Wingard. “The fundamental issue that has plagued computer systems for the last 30 years is that you can never get enough memory close enough to the processor to keep up with the operations inside the processor. The communications delay to get to the memory cells never scales in the way you want it, and so you may be able to reduce it somewhat but memory hierarchy does not go away.”

Virgile Javerliac, CEO of eVaderis, agrees. “Access time is not only a function of the memory technology and interface (bus width, distance on-chip and off-chip), it’s also a function of the memory size. It means that even if you have a super-fast non-volatile technology, you’ll always pay the price of the size (even with 3D), and you will use intermediate small caches of maybe the same technology or some hybrid between SRAM and the new non-volatile memory. The huge difference is that you will decrease the latency a lot.”
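Javerliac’s point that access time grows with memory size can be sketched with a simple wire-delay model: if array area grows roughly linearly with capacity, then word-line and bit-line lengths, and the delay they add on top of the cell’s intrinsic access time, grow roughly with the square root of capacity. The constants below are assumptions, but the shape of the result is why small caches close to the processor remain useful even with a fast non-volatile cell.

```python
import math

# Sketch: access latency as a function of memory capacity, assuming the wire and
# periphery delay grows with the square root of capacity. Constants are assumptions.

CELL_ACCESS_NS = 2.0        # assumed intrinsic access time of the (non-volatile) cell
WIRE_NS_PER_SQRT_KB = 0.05  # assumed wire/periphery delay per sqrt(KB) of array

def access_time_ns(capacity_kb: float) -> float:
    return CELL_ACCESS_NS + WIRE_NS_PER_SQRT_KB * math.sqrt(capacity_kb)

for capacity_kb in (32, 256, 4_096, 1_048_576):   # 32KB ... 1GB
    print(f"{capacity_kb:>9} KB: ~{access_time_ns(capacity_kb):5.1f} ns")
```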

Despite all of the technical difficulties, progress is being made. “You need something faster, low power and non-volatile that can be integrated with logic,” says Eggleston. “That is where spin torque will be the winner. They are talking about how to get this down to single-digit nanoseconds. Spin torque cells are smaller than SRAM. ST MRAM is complicated and uses something like 40 materials put in a stack before it is etched. Because of this, there are innovations needed in deposition and etch, but Everspin, working with GlobalFoundries, appears to be making a lot of progress. It has 40nm devices that can be made up to 256Mb, and they are working on 28nm that can go to 1Gb of embedded memory. That is a winner.”

The race is far from over. “The industry requires a combination of high capacity (for the storage side) and high endurance (for L2/L3 side),” points out Javerliac. “Today, there is no such NVM technology. The only possibility is a mix of two disruptive technologies: ReRAM or PC RAM (higher capacity but lower endurance) for the storage and STT MRAM (lower capacity but higher endurance), so it means that we need an L2 (STT MRAM) cache and storage (ReRAM or PCRAM) very close to the CPU because of the delta of performance between the technologies. This is getting closer to the unified memory, but not yet unified.”
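One way to read Javerliac’s proposed mix is as a classic average-memory-access-time calculation: a small, high-endurance STT MRAM cache sitting in front of a large, slower ReRAM or PCRAM store. The hit rates and latencies in the sketch below are hypothetical, but they show how a modest cache can hide most of the delta in performance between the two technologies.

```python
# Sketch: average access time for a hypothetical STT MRAM cache backed by a
# ReRAM/PCRAM store. Hit rates and latencies are invented for illustration.

STT_MRAM_CACHE_NS = 5.0      # assumed access time of a small STT MRAM L2/L3
RERAM_STORE_NS = 300.0       # assumed access time of the large ReRAM/PCRAM store

def average_access_ns(hit_rate: float) -> float:
    """Classic AMAT: hit time plus miss rate times miss penalty."""
    return STT_MRAM_CACHE_NS + (1.0 - hit_rate) * RERAM_STORE_NS

for hit_rate in (0.80, 0.90, 0.95, 0.99):
    print(f"hit rate {hit_rate:.2f}: ~{average_access_ns(hit_rate):5.1f} ns average access")
```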

There are additional manufacturing benefits to be had as well. “Another big benefit for embedded memories, and necessary for the , is that all candidate technologies once they have decoupled themselves from a transistor (RRAM has done it, SpinTorque has not done it yet) become back-end-of-line memories,” explains Eggleston. “You have isolated the memory array from the front-end-aligned logic, and once you have eliminated the transistor you can build decode and sense amps underneath the memory array. That makes it ideal because you are not giving up as much space as you would with SRAM. That is the target – to displace SRAM in SoCs.”

The path forward
There are many possible impacts that these memory advancements may have, some of which will be covered in more detail in future articles in this series. “We’ll see new device types first in memory,” predicts Rowen. “Technologies like MRAM and memristors will bridge the cost, performance, leakage and endurance characteristics of flash, SRAM and DRAM, but ultimately we may see new logic transistor and wire types emerge.”

These could foster big shifts in computing. “Memory advancements enable us to create structures more like the mind where memory and logic are tied very closely together,” says Or-Bach. We are already beginning to see such advancements in the adoption of neural networks for vision processing.

How big could some of the changes be? “Semiconductor technology as we know it today will be gone in 15 years,” says Mike Gianfagna, vice president of marketing at eSilicon. “2.5D and 3D will have a good run, but none of these technologies can deliver the throughput required to fuel the innovation in a 15-year time span. Those innovations will require real-time natural language processing, image processing that mimics human cognition, and the ability to access massive databases of information with human-level response times. We’re talking about fundamental shifts in programming paradigms and compute architectures.”

Most see some kind of change as being inevitable. “Whatever the best memory technology is, systems change around the memory technology,” says Eggleston. “System guys hate to hear that and this is why I don’t talk about replacement, but by moving non-volatility close to the CPU, the compute architecture will change.”



1 comment

Kev says:

Intel’s recently announced XPoint memory technology might be another game-changer. However, looking at their online media, they’re still thinking about moving data in and out of their legacy-architecture processors in much the same way. Die-stacked XPoint with Stratix-10 (quad-ARM) FPGAs would be a more interesting compute platform, but the software tools are trailing a bit far behind the silicon at the moment.
