Design Reuse Vs. Abstraction

IP reuse has reduced the urgency for a higher level of abstraction in complex system-level design, but that still could change.


Chip designers have been constantly searching for a hardware description language abstraction level higher than RTL for a few decades. But not everyone is moving in that direction, and there appear to be enough options available through design reuse to forestall that shift for many chipmakers.

Pushing to new levels of abstraction is a frequent topic of discussion in the design world, particularly at advanced nodes where complexity can quickly become unmanageable. In the late 1990s and into the 2000s, some believed high-level synthesis might take over. While HLS has found a niche, broad application of this technology still hasn’t happened. And now, because of widespread design reuse of pre-defined elements and components, it appears even less likely that another abstraction level will be added to foster universal top-down design—at least for now. In fact, the focus of many discussions is less about another abstraction level and more about what can and should be re-used within a design that increasingly is viewed in the context of a broader system of systems.

“Due to complexity, everything is exacerbated,” said Frank Schirrmeister, senior group director for product management in Cadence’s System & Verification Group. “System design is only getting more complex. How do you get the systems, and systems of systems, to be integrated? There’s really no way to describe the full system from the top down and then refine and implement everything. It’s really more of a meet-in-the-middle approach, where you develop some of the new components yourself. But most of it is assembly and reuse, with validation to make sure the components work with each other.”

As systems developers create complex systems for everything from airplanes to automobiles, the discussion increasingly involves the whole airplane or car, leaving them to ask what is actually needed to build those.

“This is where the notion of multi-fidelity specifications comes in,” Schirrmeister said. “Sometimes there will be part of the system that is completely virtual, like a virtual node of an ECU in a car, and other elements in the car. You have to re-use silicon. You need to plug a board into it. For other components, you only have the RTL. Is there something to unify the requirements? These are all true system-level design questions.”

Many design teams take a divide-and-conquer approach today to manage complexity. “You basically try to define proper interfaces between the different components, and then abide by those interfaces and hope everything works well,” he said. “Sometimes you figure out that somebody specified one interface in non-metric units and the other in meters, and they don’t fit together. But hopefully those types of issues don’t happen too often.”
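That kind of unit mismatch is exactly the integration failure those interface contracts are meant to catch. As a rough illustration, and not any particular team’s methodology, the following C++ sketch (the Distance, Meters, and Feet names are invented for this example) shows how encoding the unit in an interface’s type turns a metric/imperial mix-up into a compile-time error instead of an integration surprise:

```cpp
// Illustrative sketch only: encode the unit of measure in the interface type
// so a mismatched component fails to compile rather than failing in the lab.
#include <iostream>

template <int MetersNum, int MetersDen>
struct Distance {
    double value;  // magnitude in the unit implied by the ratio below
    double to_meters() const {
        return value * MetersNum / static_cast<double>(MetersDen);
    }
};

using Meters = Distance<1, 1>;
using Feet   = Distance<3048, 10000>;  // 1 ft = 0.3048 m

// The subsystem's interface states its expected unit in the signature.
void set_altitude(Meters target) {
    std::cout << "Target altitude: " << target.to_meters() << " m\n";
}

int main() {
    set_altitude(Meters{1000.0});                    // OK
    // set_altitude(Feet{3280.0});                   // would not compile
    set_altitude(Meters{Feet{3280.0}.to_meters()});  // explicit conversion
    return 0;
}
```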

One of the key approaches to making this work is system modeling, and there is more activity surrounding multi-fidelity models that can be put together in an emulator with a simulation of the rest of the environment. This can be useful when putting different elements of a design at different abstraction levels together, Schirrmeister said.

That helps bolster the case for IP reuse, rather than a new abstraction level, because the IP can be customized to meet system requirements.

“The availability of third-party IP, such as high-speed interfaces in the technology nodes needed by system-on-chip designers, is a key enabler of system design,” said Navraj Nandra, senior director of product marketing for interface IP at Synopsys. “What is important is that the interface IP meets the compliance requirements for the protocol standard. The latest versions of USB, PCI Express and DDR in, say, 7nm go through silicon testing and, if applicable, logo certification. This enables systems-on-chip to include very complex high-speed interfaces while meeting both cost and time-to-market requirements. Without the availability of third-party IP, many systems-on-chip could not be realized, would be late to market, or would not have the latest version of the interface.”

Not surprisingly, machine learning and ADAS are two of the current drivers of requirements for the next generation of interfaces. To support these machine learning, ADAS and IoT edge-based systems, IP providers such as Synopsys will make DDR5, HBM2E, LPDDR5, USB 3.2 and PCIe 5.0 available in the latest technology nodes.

“[Chipmakers] are expecting more from their IP suppliers,” Nandra noted. “As both hardware and software complexity increases, design teams require configurable, pre-verified IP subsystems that deliver complete, complex functions ready to integrate as-is, or which can be modified. A critical aspect to this is understanding the integration challenge. Signal integrity/power integrity (SI/PI) analysis for high-speed interfaces evaluates the amount of on-chip decoupling capacitance needed, the ratio of power to ground pins, and for high-speed memory interface PHY & SDRAM, the termination strategy, SoC package design, PCB stack-up and trace width/spacing, and read/write/address/command/control timing budgets.”


Fig. 1: Integration challenges that can be solved through SI/PI analysis. Source: Synopsys
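Some of the quantities Nandra lists can at least be bounded with first-order arithmetic before detailed SI/PI analysis. The snippet below is a hedged back-of-the-envelope sketch, not Synopsys’ methodology, using the common rule of thumb that the decoupling capacitance must supply the switching charge within the allowed supply droop (C ≥ I·Δt/ΔV); all numbers are illustrative placeholders:

```cpp
// First-order decap estimate: the capacitor must deliver Q = I * dt of charge
// while the supply droops by no more than dV, so C >= I * dt / dV.
#include <cstdio>

int main() {
    const double current_step_a  = 2.0;    // I: switching current step (A), illustrative
    const double step_time_s     = 1e-9;   // dt: duration of the current step (s)
    const double allowed_droop_v = 0.05;   // dV: e.g. 5% of a 1.0 V supply

    const double decap_farads = current_step_a * step_time_s / allowed_droop_v;

    std::printf("First-order on-chip decap estimate: %.1f nF\n",
                decap_farads * 1e9);  // prints 40.0 nF for these values
    return 0;
}
```

Detailed sign-off still depends on the package model, PCB stack-up and the other factors shown in Fig. 1; an estimate like this only frames the starting point.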

IP reuse also can help to grow and scale system designs much faster.

“There is definitely more space for integrators who re-use available IP, which allows the focus of development to be on new features and algorithms instead of common elements of the system,” said Zibi Zalewski, general manager of the hardware division at Aldec. “This puts complicated projects within the reach of smaller design teams, through the reuse and customization of available systems. Design engineers may re-use processor and GPU IP and develop only the functions needed. Such an approach specializes the engineering market. There are IP development companies, integration teams and new hardware developers. Of course, system-level design is still doing fine, but the complexity and cost of such projects have grown significantly, which makes them less accessible to smaller engineering teams.”

The age factor
Not everyone is taking the same path, however, and a key variable is the age of the design team.

“Basically, everything is done today at RTL,” said Wally Rhines, president and CEO of Mentor, a Siemens Business. “But as complexity increases, you have to move to a higher level of abstraction. We will go to high-level synthesis. We will go to developing the data paths. All of these AI algorithms will be implemented into designs using high-level design. There have been products in the market since 1993, but what has happened now is the whole infrastructure is there. You can do the design and the verification at the level of C/C++. Now the world has made that capability available. The question is who’s going to go to it. And it appears to be, just like the last time, the new college graduates. It’s the Googles and Facebooks and Amazons. These people have already adopted it. They’re writing the core functionality for data processing in C, and synthesizing and verifying it at a high level. They’re looking at alternatives. They’re analyzing power dissipation.”

So while design reuse is one way of tackling this problem, it’s certainly not the only way. Developing designs from the outset at a higher abstraction level also works. “They do tradeoffs of power, performance and area, all at the next level of abstraction,” said Rhines. “This will happen just as it did for RTL, but you’ll want to use the appropriate abstraction for what you’re doing. The control logic will stay where it is. But RTL designers are finding that by starting with C, they can do a large share of the design and tradeoff analysis early. So this will be the next abstraction wave.”

One of the original drivers of IP reuse was the need to more easily verify complex system designs.

“The challenge with verification was it was getting worse and worse in the mid-’90s, and we were trying to find ways to improve things,” said Simon Davidmann, CEO of Imperas. “One of the solutions to improve productivity was that instead of having to verify everything, what if you could buy blocks of IP, as it’s called now, building blocks of non-differentiated RTL, of standard components that could be pre-verified? If you think of where Arm came from, companies were building their own processors that got very complex once you had to try and run Linux on it. So the idea was that Arm and MIPS sold you a solution, which was a pre-verified large block of RTL to provide functionality that really shouldn’t be a core competence of most companies. They wanted to run the software on it and they wanted to design their accelerators and bespoke hardware around the edge.”

Also from the verification point of view, bringing in large blocks of RTL was a good thing because it improved productivity.

“A lot of people think they can design these IP blocks, but the benefit is that they’re proven to work as a standard if they’ve gone to silicon several times,” Davidmann said. “You’ve gotten rid of a lot of the verification of the block. All you need to do is integration testing. We have come across companies that had maybe 100 or 150 different peripherals on the device, as well as the main processors, along with many different blocks. The challenge was, and still is, how to handle that from a construction point of view, as well as verification. One of the issues at the electronic system level is when you’ve got a large SoC with all of these components, how do you connect them up? This still requires a smart architect. However, when it comes to blocks, it is a different game, because in a block you can specify an algorithm, and then you can almost implement that algorithm automatically. One of the successes where SystemC and behavioral synthesis [now called high-level synthesis] has worked is in implementing algorithms into code.”

Dave Pursley, product management director in the Digital & Signoff Group at Cadence, has a similar view of HLS. “High-level synthesis is clearly an implementation technology, so it belongs with digital implementation, but where it has the biggest impact on the flow is actually on the verification side. One of the things that makes it a really interesting part of the whole system design is that a lot of times you’re re-using 90% of the blocks, arranging them, and connecting them the way you need—and then perhaps designing or buying the other 10%. But if you have this higher level IP, even with 90% that you’re re-using, it’s sort of malleable. You can change it, you can make it run faster or slower or with lower power, and it’s all that same piece of IP. And while it complicates IP reuse, it complicates it in a way that allows you to create more optimal SoCs.”
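Pursley’s point about malleable IP is easiest to see in the kind of C++ an HLS flow consumes. The sketch below is illustrative only, not a specific vendor’s coding style: a single FIR filter description whose tap count is a template parameter, so the same source can be re-targeted for different throughput, area, or power points by the integrator and by tool directives rather than by rewriting RTL.

```cpp
// Illustrative, HLS-style C++: one parameterized filter description that can
// be re-used and re-tuned (tap count, pipelining, unrolling) without rewriting it.
#include <array>
#include <cstdint>
#include <cstdio>

template <int NumTaps>
class FirFilter {
public:
    explicit FirFilter(const std::array<int16_t, NumTaps>& coeffs)
        : coeffs_(coeffs), shift_reg_{} {}

    // One sample in, one sample out; the loops are written so an HLS tool
    // could pipeline or unroll them to hit a faster or smaller design point.
    int32_t step(int16_t sample) {
        for (int i = NumTaps - 1; i > 0; --i) {   // shift the delay line
            shift_reg_[i] = shift_reg_[i - 1];
        }
        shift_reg_[0] = sample;

        int32_t acc = 0;                          // multiply-accumulate
        for (int i = 0; i < NumTaps; ++i) {
            acc += static_cast<int32_t>(shift_reg_[i]) * coeffs_[i];
        }
        return acc;
    }

private:
    std::array<int16_t, NumTaps> coeffs_;
    std::array<int16_t, NumTaps> shift_reg_;
};

int main() {
    // The "reuse" knob: the same source yields a 4-tap or a 64-tap instance.
    FirFilter<4> fir({{1, 2, 2, 1}});
    const int16_t inputs[] = {100, 200, 300, 400};
    for (int16_t x : inputs) {
        std::printf("y = %d\n", static_cast<int>(fir.step(x)));
    }
    return 0;
}
```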

So while a single, top-down executable doesn’t exist just yet, there are self-professed holdouts like Schirrmeister. “In the good old days, I thought SysML would take over the world, and it still hasn’t. You can always debate the absolute correctness of the charts that Gary Smith used to show, and the ITRS used to put up, but the principle is true. If we hadn’t worked out the productivity improvements over time, designs would not be possible at the cost that they are today. We just have to bear in mind that the complexity of the pieces we are building these days is ridiculously big, and has been growing, and if we hadn’t achieved those productivity improvements, the cost would be completely out of bounds. At the end of the day, for systems of systems, divide and conquer, proper definition of the interfaces, and software programmability all contribute to the ability to assemble a system and tune it to its needs after the fact — and you don’t need a full system-level spec of everything that is executable and drives the implementation.”

Opinions across the industry vary on this point. “Divide and conquer didn’t kill top-down system design,” said Sergio Marchese, technical marketing manager at OneSpin Solutions. “In fact, it enables it. Having high-quality IP with well-defined, configurable behavior as building blocks is crucial to quickly putting together complex systems that will work reliably. As complexity goes from systems to systems of systems, it has yet to be seen whether the same paradigm works. No alternative is emerging at the moment.”

What is clear is that IP reuse has not derailed system design. But it has at least deferred the move to a higher level of abstraction for many chipmakers. “Systems design has been, and always is, about interconnecting large blocks, and it does not matter whether they’re processors, IP brought in, or IP you’ve designed yourself,” said Davidmann. “You still need ways of bringing these systems together, and that’s really what system design is. What IP has done is allow the designer to ask, ‘Rather than build this block, should I buy it? And if I make it, should I do it at the RT level, or at a higher level of abstraction with some ESL language like SystemVerilog for design or SystemC?’ IP hasn’t derailed it at all. It has, in fact, enabled us to build systems. At the same time, you could say that if we had a better high-level language, we could implement everything. We wouldn’t need to have a processor in there and accelerators and C software running. We could just describe things in a high-level language like Matlab, push a button, and boom, out would come the silicon.”

Finally, with all the new technologies like machine learning, data centers and automotive, new chips will need to be designed with all of those requirements in mind. “That means system-level design will keep growing, architecting systems based on IP reuse and dedicated module development,” said Aldec’s Zalewski.

—Ed Sperling contributed to this report.


