A New Breed Of EDA Required

When the problem statement changes, it sometimes pays to use a completely different approach.

While doing research for one of my stories this month, a couple of people basically said that applying methodologies of the past to the designs of today can be problematic because there are fundamental differences in the architectures and workloads. While I completely agree, I don’t think these statements go far enough.

Designs of today generally have one of everything: one CPU, one accelerator, one memory sub-system, one USB interface, etc. Yes, there may be multiple accelerators, but each one is different. Each block is unique within the design, and in a way that means the whole chip is a custom chip, even though most of the industry is creating a design that has 90% in common with its competitors' designs and probably 95% in common with the last design it created.

The IP industry has been responding by creating ever bigger multi-function blocks or sub-systems, and this continues to work for one-of-everything designs. But you would never design a memory by considering it to be a one-off. The cells are designed once, the logic that surrounds them is designed in a generic sense, and then compilers or generators provide the exact configurations that you require.
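To make the analogy concrete, here is a minimal sketch of the generator idea in Python. Everything in it is hypothetical, including the parameter names and the emitted wrapper; it simply shows a pre-characterized cell being stamped out in whatever configuration is requested, which is all a memory compiler really does at this level of abstraction.

```python
# Minimal sketch of the memory-compiler idea: a pre-characterized bit cell is
# designed once, and a generator stamps out whatever configuration is asked for.
# All names and parameters here are hypothetical, not any vendor's actual flow.

from dataclasses import dataclass

@dataclass
class SramConfig:
    words: int          # number of addressable words
    bits: int           # word width
    column_mux: int     # column multiplexing factor (affects aspect ratio)

def generate_sram(cfg: SramConfig) -> str:
    """Emit a purely illustrative Verilog wrapper for the requested configuration."""
    addr_bits = (cfg.words - 1).bit_length()
    rows = cfg.words // cfg.column_mux
    cols = cfg.bits * cfg.column_mux
    return (
        f"// {cfg.words}x{cfg.bits} SRAM, array = {rows} rows x {cols} cols\n"
        f"module sram_{cfg.words}x{cfg.bits} (\n"
        f"  input  wire                  clk, we,\n"
        f"  input  wire [{addr_bits-1}:0] addr,\n"
        f"  input  wire [{cfg.bits-1}:0] din,\n"
        f"  output wire [{cfg.bits-1}:0] dout);\n"
        f"  // array of pre-characterized bit cells instantiated here\n"
        f"endmodule\n"
    )

print(generate_sram(SramConfig(words=4096, bits=32, column_mux=4)))
```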

IP-XACT came about from development within Mentor Graphics where design was done in a more modular fashion. Blocks were defined with “interfaces” and then they just had to be connected to each other. The system knew how to connect the interfaces, so long as they were compatible, and insert necessary logic if required. It was crafted for one-of-everything designs, and it did become a useful way of describing IP blocks.
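The underlying concept can be shown with a toy model. The Python below is not the IP-XACT schema, just the idea behind it: blocks declare typed interfaces, compatible ones are connected directly, and incompatible ones get glue logic inserted. The bus types, roles, and block names are assumptions chosen purely for illustration.

```python
# Sketch of the interface-based connection idea: blocks declare typed
# interfaces, and the tool connects compatible ones, inserting a protocol
# bridge where needed. This is a toy model of the concept, not IP-XACT itself.

from dataclasses import dataclass

@dataclass
class BusInterface:
    block: str
    name: str
    bus_type: str    # e.g. "AXI4", "APB" (illustrative)
    role: str        # "master" or "slave"

def connect(a: BusInterface, b: BusInterface) -> str:
    """Describe the connection, or the bridge that must be inserted."""
    if {a.role, b.role} != {"master", "slave"}:
        raise ValueError("need one master and one slave")
    if a.bus_type == b.bus_type:
        return f"direct: {a.block}.{a.name} <-> {b.block}.{b.name} ({a.bus_type})"
    # Incompatible protocols: the tool would insert a bridge here.
    return (f"bridge {a.bus_type}->{b.bus_type}: "
            f"{a.block}.{a.name} <-> {b.block}.{b.name}")

cpu = BusInterface("cpu0", "m_axi", "AXI4", "master")
dma = BusInterface("dma0", "s_apb", "APB",  "slave")
print(connect(cpu, dma))   # bridge AXI4->APB: cpu0.m_axi <-> dma0.s_apb
```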

But we are getting to the point where designs do not contain one of everything. Processors for graphics are arrays of computing blocks, and the same goes for machine learning and AI processors, audio and video processors, and I am sure other blocks whose internals I do not fully understand. Where are the compilers for these types of systems? Why does everyone have to design their own MAC block rather than having a few companies, or even the foundries, produce highly optimized cells that can then be replicated, connected, and interfaced to memory and I/O in a programmatic manner? Where are the tools that can analyze regular structures to find out which types of dataflows they would handle well, and which would produce bottlenecks?
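As a rough illustration of the kind of analysis such a tool might do, here is a simple roofline-style check in Python. The array size, clock, bandwidth, and data-movement figures are made-up numbers and the model is deliberately crude; the point is only that a regular structure plus a dataflow description is enough to flag whether compute or memory becomes the bottleneck.

```python
# Sketch of the kind of analysis the article asks for: given an array of
# replicated MAC cells and a candidate workload, estimate whether compute or
# memory bandwidth is the bottleneck. Numbers and model are illustrative only.

def analyze(macs: int, clock_ghz: float, mem_gb_s: float,
            ops: float, bytes_moved: float) -> str:
    """Compare compute-limited and bandwidth-limited time for one workload."""
    peak_ops_s = macs * clock_ghz * 1e9          # one MAC per cell per cycle
    t_compute = ops / peak_ops_s                 # seconds if never starved
    t_memory  = bytes_moved / (mem_gb_s * 1e9)   # seconds to move the data
    limit = "memory-bound" if t_memory > t_compute else "compute-bound"
    return f"{limit}: compute {t_compute*1e3:.2f} ms vs memory {t_memory*1e3:.2f} ms"

# Hypothetical 64x64 MAC array running a 1024^3 matrix multiply with 8-bit data
# and perfect on-chip reuse (so each operand matrix crosses the boundary once).
n = 1024
print(analyze(macs=64*64, clock_ghz=1.0, mem_gb_s=100.0,
              ops=n**3, bytes_moved=3 * n * n))
```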

The industry has been looking for the next level of abstraction for one-of-everything designs, but perhaps that is the wrong approach. It should be looking at a level of abstraction where dataflows are defined and appropriate structures are created that include massive replication. Along with that, far better verification methodologies can be created that exploit hierarchy in an effective manner.

Perhaps the closest example of this that exists today is the network-on-chip (NoC) interconnect, where the number of requesters and providers can be defined and the relationships between them established: which requesters need to talk to which providers, whether the connections are one-to-many or many-to-one, the throughput requirements, and so on. From that, the interconnect is fully generated.
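A toy version of that flow might look like the following Python sketch. The traffic table, capacities, and flat topology are assumptions for illustration, not any vendor's NoC compiler, but it shows how a declared set of requesters, providers, and throughput requirements is enough to derive the connectivity and flag over-subscribed endpoints before anything is built.

```python
# Sketch of the NoC-generation idea: requesters and providers are declared with
# required connectivity and throughput, and the tool derives the interconnect
# and flags over-subscribed endpoints. A toy model with hypothetical numbers.

from collections import defaultdict

# (requester, provider, required GB/s) -- hypothetical traffic specification
traffic = [
    ("cpu", "ddr",  10.0),
    ("gpu", "ddr",  25.0),
    ("dma", "ddr",   5.0),
    ("cpu", "sram",  4.0),
]
provider_capacity_gb_s = {"ddr": 32.0, "sram": 16.0}

# Derive the connectivity and aggregate demand per provider port.
demand = defaultdict(float)
for req, prov, gb_s in traffic:
    demand[prov] += gb_s
    print(f"connect {req} -> {prov} ({gb_s} GB/s)")

for prov, total in demand.items():
    cap = provider_capacity_gb_s[prov]
    status = "OK" if total <= cap else "OVER-SUBSCRIBED"
    print(f"{prov}: demand {total} GB/s vs capacity {cap} GB/s [{status}]")
```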

If that generation process is as close to correct-by-construction as possible, then there is only the need to verify the implementation at sign-off time. Everything before that can be done with abstract models in just the same way that a CPU delivered as IP from a trustworthy source is rarely verified at the gate level.

It is perhaps this change in the character of designs that will create the opportunity for more efficient tooling that centers itself on the attributes of the design. I would presume that this mentality would also flow into place and route, thermal analysis, and many other aspects of the back-end flow because each of the cells can be more thoroughly analyzed individually and then replicated.

Of course, there will always be that small part of every design that is fully custom digital logic and that can continue to use the existing tool flow. But it just seems as though that is now a fairly small part of the problem.



1 comment

Amit Garg says:

A very apt commentary on the current state of affairs and on the next generation of automation needed to keep pace with the systematic scaling of today's SoC designs. There is a definite need for innovation to enable dataflow-driven design automation.
