The Internet Of Cores

Plug and play compatibility is back on the table, but will it work? It depends who you ask.

Ever since the birth of the third-party market, there has been a desire for plug-and-play compatibility between cores. Part of the value proposition of reuse is that a block has been used before, and has been verified and validated by having been implemented in silicon. By re-using the core, many of these tasks no longer land on the SoC developer, thereby improving time-to-market and reducing risk.

Plug-and-play seems like a simple goal, but it’s also an elusive one. The standards body, the Virtual Socket Interface Alliance (VSIA), was formed in 1996 to address this problem. While many good things came out of its efforts, it closed its doors in 2007 before meeting its initial goals.

The closest thing to an industry standard was perhaps the Open Core Protocol (OCP). Originally created by Sonics, control of the standard was transferred to an independent organization (OCP-IP) in 2001. According to OCP-IP documents, “OCP2.0 is a point-to-point, master-slave interface between two communicating entities. The master sends command requests, and the slave responds to them. All signaling is synchronous with reference to a single interface clock, and all signals except for the clock are unidirectional, point-to-point, resulting in a very simple interface design, and very simple timing analysis.” Version 3, released in 2004, added capabilities for system-level cache coherency and power management. There have been no further updates since then, and in 2013 the assets were transferred to Accellera.
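
To make that description concrete, here is a minimal behavioral sketch in Python of that style of interface. It is not the real OCP signal set (OCP defines specific request and response signals that are not reproduced here); the class and method names are invented purely to illustrate the point-to-point, master-slave, single-clock model described above.

```python
# Minimal behavioral sketch of an OCP-style point-to-point, master-slave
# interface. Illustrative only; the real OCP defines a specific signal set
# (command, address, response, etc.) that is not modeled here.

class Slave:
    """Responds to command requests; in OCP terms, drives the response side."""
    def __init__(self):
        self.mem = {}

    def respond(self, cmd, addr, data=None):
        if cmd == "WRITE":
            self.mem[addr] = data
            return ("OK", None)
        if cmd == "READ":
            return ("OK", self.mem.get(addr, 0))
        return ("ERR", None)


class Master:
    """Sends command requests; every signal it drives flows toward the slave."""
    def __init__(self, slave):
        self.slave = slave

    def request(self, cmd, addr, data=None):
        # All signaling is synchronous to a single interface clock, so a
        # request and its response can be modeled as one clocked exchange.
        return self.slave.respond(cmd, addr, data)


if __name__ == "__main__":
    m = Master(Slave())
    m.request("WRITE", 0x40, 0xCAFE)
    print(m.request("READ", 0x40))   # ('OK', 51966)
```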

Other industries have suffered from a lack of standards. For many years, you could only buy a phone from the phone company, which argued that allowing anyone to connect devices to its network would cause havoc and bring the system down. That argument proved unfounded. When the Bell system was pried open to allow third-party devices, innovation exploded and phone technology improved significantly. Moreover, a plethora of other devices were developed that could connect to the telephone network.

By comparison, the Internet proliferated almost explosively because it managed to establish standards very early on. While some of those standards may seem like odd choices today, they have lasted over time. Replacing them will be difficult and expensive.

“I am still frustrated that the industry collapsed down to Ethernet on top of TCP/IP,” muses Drew Wingard, chief technology officer at Sonics. “It was an oversimplification of the seven-layer OSI model that has eliminated the ability to innovate at the lower layers of the network stack. So much of networking, from every domain you can imagine, from WiFi to storage networking to phone networks, has made everything look like Ethernet. This blows my mind. This focus on simplified interoperability rather than optimality is a frustrating abstraction failure, but an economic reality.”

With this in mind, Semiconductor Engineering asked the industry what it would take to create plug-and-play for IP, and to extend those notions to 3D pre-implemented IP blocks so that building chips would be more like building PCBs. You could select pre-implemented cores, such as processors, memories, sensors or analog blocks, and integrate them vertically into your design. This should remove a lot of back-end risk, because the back end has already been done and proven for each of the cores, leaving only the integration network and custom circuitry to go through back-end implementation.

“Plug and play is something that cannot really happen,” claims Johannes Stahl, director of product marketing for virtual prototyping at Synopsys. “You can plug certain things together, using standard interfaces, but if you don’t know how they should be plugged together and how they can be configured, plug and play doesn’t give you any value. The problem is much more complex than being able to plug things together.”

Others see limitations in this approach. “We still have a tough interconnect and interconnect verification challenge,” points out a Cadence fellow. “This is where every chip is different. There may be 27 processors, 20 I/O ports, several types of memory – and it is not just about hooking it up correctly.”

Interconnect is clearly a central part of this methodology. “The migration of system complexity to the interconnect is a natural offshoot of the increasing reliance on third-party and re-used IP in today’s SoC designs,” says the chief technology officer of Carbon Design Systems. “When design teams look for ways to differentiate their offerings, the interconnect has become the natural target for this work.”

On-chip, the challenge is that the interface between a sub-system and the network has to be far more accommodating of latency requirements and of how demanding the sub-systems are. Sub-systems do not live by themselves; they depend on services provided by other sub-systems across the network. This means chips end up with more bare-metal interfaces.
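
One way to picture that dependency is as a latency-budget check across the on-chip network. The sketch below is hypothetical: the sub-system names, hop latencies and budgets are invented for illustration, and it only shows the kind of bookkeeping an integrator ends up doing when one sub-system depends on resources sitting on the far side of the interconnect.

```python
# Hypothetical latency-budget check for sub-systems that depend on resources
# reached across an on-chip interconnect. All numbers are illustrative only.

# Cycles added by each interconnect segment a request must traverse.
HOP_LATENCY = {"cpu->noc": 2, "noc->dram_ctrl": 4, "noc->sram": 1}

# Worst-case request latency each requester can tolerate (cycles).
LATENCY_BUDGET = {"cpu_fetch": 12, "dma_stream": 6}

# Which hops each requester's critical path actually crosses.
PATHS = {
    "cpu_fetch": ["cpu->noc", "noc->dram_ctrl"],
    "dma_stream": ["cpu->noc", "noc->sram"],
}

# Cycles spent inside the target once the request arrives.
ENDPOINT_COST = {"noc->dram_ctrl": 8, "noc->sram": 2}

for requester, hops in PATHS.items():
    total = sum(HOP_LATENCY[h] for h in hops) + ENDPOINT_COST[hops[-1]]
    ok = total <= LATENCY_BUDGET[requester]
    print(f"{requester}: {total} cycles, budget {LATENCY_BUDGET[requester]} "
          f"-> {'OK' if ok else 'VIOLATION'}")
```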

Bernard Murphy, chief technology officer for Atrenta, is not a believer in the concept, at least for the near term. “Interconnect crossing between die in 2.5D/3D is still a research project in my view. It’s possible in principle, but you have to ask, ‘What is the economic motivation for splitting digital logic between multiple die?’ Most of the motivation for multi-die today seems to be process-driven.”

But there are some believers. “Economics is a powerful force,” says Steve Schulz, president and CEO of Si2. “It drives solutions to problems. The IP blocks of tomorrow will be die that are integrated together for some markets.”

That market, which will bring about all kinds of change to the entire ecosystem, includes the IoT edge devices. “The economics of the coming wave of IoT and the 28 billion devices that get added to the network in the next five or six years is absolutely huge,” says Schulz.

There are certain pieces that need to be put in place first, including a strong backer. “The fact that there is not a heterogeneous die stack interface standard means the future ecosystem cannot exist,” points out John Ellis, Open3D director at Si2. “If a leader creates a de facto standard and says we are going to have a heterogeneous bus for stacking die, then the industry has to come together and approve it. Then you find out if there is real energy for it. Once it exists, the market will decide.”

Wingard sees three challenges with 3D system assemblies. First, you need an electro-mechanical physical standard for the stack in three dimensions. “You need to know where the pillars and TSVs are located. Where is the optimum place to put those for a DRAM, is it likely that this is optimum for flash, and can you imagine that the SoC will be happy with that? Each of those die creates obstructions in the floorplan that prevent some level of optimization. It is not unsolvable, but it is challenging. And it is not clear who will be able to make it happen.”
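
The floorplan-obstruction point can be illustrated with a simple keep-out check. Everything in the sketch below is hypothetical, coordinates included; it only shows why pillar and TSV sites fixed by a stacking standard become hard obstructions that each die’s macro placement has to work around.

```python
# Hypothetical check that TSV/pillar sites fixed by a 3D stacking standard do
# not collide with a die's own macro placements. Coordinates are invented
# purely to illustrate why fixed pillar sites act as floorplan obstructions.

def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rects are (x0, y0, x1, y1) in um."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

# Pillar/TSV keep-out regions dictated by the (hypothetical) stack standard.
tsv_keepouts = [(100, 100, 120, 120), (500, 100, 520, 120)]

# Macros this particular die would like to place.
macros = {
    "sram_bank0": (90, 90, 300, 250),
    "phy_serdes": (600, 80, 800, 200),
}

for name, rect in macros.items():
    blocked = [k for k in tsv_keepouts if overlaps(rect, k)]
    print(f"{name}: {'blocked by ' + str(blocked) if blocked else 'placement OK'}")
```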

Second, you have to have a standard for a 3D network. You need the concept of a sufficiently generic set of protocols that can run in the vertical direction and be flexible enough to do anything you want. Wingard uses MIPI as an example. “Everything MIPI did was oriented around the reality of bond wires existing between cores. Nothing they have done would be optimum in the vertical direction. The benefit we get from an energy and power perspective is that we don’t need complex PHYs. The total inductance and capacitance of the connections is so much lower that we can be closer to the power optimum by not having all of these PHYs.”

Third, how many such standards do we need? Wingard examines the PC, where you find PCI, USB and Thunderbolt, in addition to display, camera and baseband interfaces. “Is there one that can rule them all? Perhaps the most interesting test case isn’t a 3D project at all, but the Google ARA phone, where they are trying to use the MIPI MPHY as a single interface between all of the sub-system components that you might want to plug into a phone. Imagine trying to add a third dimension. The implication is that there is very little memory sharing that can go on between those sub-systems. This means they will require local memory in discrete form as part of the sub-system.”

Problems aside, Schulz sees few alternatives. “One of the fundamental characteristics of the IoT is sensors, and these are not well suited to being manufactured at 10nm or 14nm. Neither are the types of memory they will use. If they can be done on separate dies, from different foundries, at the most appropriate nodes and processes, you will be able to do more cost shopping. And then it comes down to the cost of integrating the cheaper dies. That is where the economic clout will solve the problem.”
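
The trade-off Schulz describes can be framed as simple arithmetic: cheaper individual dies versus the added cost of integrating them. The numbers in the sketch below are made-up placeholders, not real cost data; only the structure of the comparison is meant to carry over.

```python
# Back-of-the-envelope comparison of a monolithic advanced-node SoC versus the
# same functions split across dies built at their most appropriate nodes.
# Every number here is a placeholder, invented purely for illustration.

monolithic_cost = 14.00          # one large die, everything forced onto 14nm

chiplet_costs = {
    "logic_14nm":   6.00,        # only the logic that benefits from 14nm
    "sensor_180nm": 1.20,        # sensor die at a mature, cheap node
    "memory_die":   2.50,        # memory from the most cost-effective supplier
}
integration_cost = 3.00          # interposer/stacking, assembly, test, yield loss

multi_die_cost = sum(chiplet_costs.values()) + integration_cost

print(f"monolithic: ${monolithic_cost:.2f}")
print(f"multi-die:  ${multi_die_cost:.2f}")
print("multi-die wins" if multi_die_cost < monolithic_cost else "monolithic wins")
```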

Many people in the industry do not yet see the need for 3D logic stacking, but it may well prove to be the best solution for the least amount of money, risk and power. It is also possible that many of the controlling interests in the industry would see no immediate reason to back such an effort.


