
Does System Design Still Need Abstraction?

If high-level synthesis and transaction-based development beyond RTL are so necessary, why haven’t they happened yet?


About 15 years ago, the assumption in the EDA industry was that a move to system-level design was inevitable. The transition from gate-level design to a new entry point at the register transfer level (RTL) seemed complete, with logic synthesis well adopted. The next step seemed obvious at the time: High-level synthesis (HLS) and transaction-based development beyond RTL—also taking software into account—would become the next entry point from which design and verification could be automated. Well, 15 years on, it still hasn’t happened. Why?

Back in 2003, at the 40th DAC, EDA legend Alberto Sangiovanni-Vincentelli presented a session titled “The Tides of EDA”. In it, he used a version of a chart that I had originally prepared for him around 1999 as part of the Felix project that eventually resulted in the product called VCC:


Source: Alberto Sangiovanni-Vincentelli, “The Tides of EDA”

It was like Agent Smith in the movie The Matrix telling Neo, as they faced an oncoming train, “You hear that, Mr. Anderson?… That is the sound of inevitability.” Layout-based design had been abstracted to transistors in the ‘70s, complexity and automation allowed clusters of transistors to be entered as gates in the ‘80s, and further automation moved the entry point in the ‘90s to RTL and logic synthesis, creating clusters of gates. Clearly, this would continue, and we would find the next level of abstraction from which design would be automated.


Source: Cadence

Some abstraction clearly has happened since then.

HLS has happened and is today a mainstream component of development flows. While it typically does not synthesize full systems on chips (SoCs) that include several processors, peripherals, and interconnects, it works very well at the block level and makes it easy to re-target blocks between different technology nodes. It sits at the top of the implementation flow.

For full SoCs, the assembly of top-level RTL from higher-level topology descriptions like IP-XACT is happening and is a pretty standard methodology today. Together with the re-use of IP blocks, HLS and the automation of interconnect fabrics, this is probably the next level of design entry for hardware design, and it is mostly automated.

The other abstraction used today is around software enablement. SystemC is also used for virtual platform assembly and abstracts the hardware enough to run real (unabstracted) software. As indicated in the second figure above, the SystemC abstraction for virtual platforms is still very different from the abstraction for HLS, so unfortunately we do not have a combined entry point for automation here.

That means today, in 2019, almost 15 years on, we are still not quite at a point where we can use automation to synthesize complete hardware/software systems from a higher-level description, the way RTL allowed us to get to gates for chip design. So how is hardware/software system design above RTL done, then? Why aren’t we there yet, at the end of the second decade after the industry embraced RTL?

One can argue that it has not been necessary, for three reasons: parallelism, simulation performance, and hardware-assisted development.

Two years after Alberto’s “Tides”, in 2005, Herb Sutter published his article “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software” in Dr. Dobb’s Journal. Its premise was simple: while the number of transistors per processor chip was still growing, single-thread performance, frequency, and typical power were flattening. The only way to get more performance was to put more processors on a chip. He was right, of course. As a result, a flurry of activity around parallelism followed: new programming models, enhancements to C++, education in parallel programming. Today, huge improvements in EDA tools have been made using parallelism, allowing us to maintain full accuracy instead of having to abstract to higher levels—our recent announcements in this domain show impressive results.

Simulation performance is the second reason. Considering RTL simulation alone (which also uses parallelism for long runs), intrinsic single-core performance has been growing about 10% to 20% per quarter, increasing simulation speed by about 1.4X to 2X per year. This means what took one hour to simulate in 2009 today may take between roughly two and 80 seconds with a combination of simulator and processor improvements.

Thirdly, hardware-assisted verification—emulation and prototyping—has fundamentally changed the way we develop these days. As I previously wrote in “Confessions Of An ESL-Aholic”, hardware-assisted development is often even an alternative to worrying about abstracting hardware to achieve higher performance. We see a lot of usage of virtual platforms together with emulation and prototyping, which allows an efficient balance of fidelity and performance for software developers.

So, bottom line, is abstraction still needed for system design?

Absolutely—because system complexity continues to grow!

However, we have been able to avoid crashing into a wall. With the improvements that come from parallelism, simulation performance, and hardware-assisted development, we have been able to keep improving development productivity without having to trade accuracy for performance.

Interesting times…


