Looking Back At 2012

Balancing abstraction and detail: Where have we been and where do we go next?


By Frank Schirrmeister
This is my last post in this blog for this year, and it is time to look back at the year and try to see what’s next. How fast time flies became clear to me in a funny way this morning when listening to the “Tagesschau” while showering – the 8pm news I grew up with back in Germany – courtesy of modern video blog technology. Apparently the parking fines in German inner cities are to be increased by 5 Euros. My fellow countrymen had not raised them since the 1990s, and it has simply become cheaper to pay the parking fine than to pay the actual parking fee. Oops.

Abstraction versus Detail in EDA

Some things change, others don’t. In EDA, while enabling the latest semiconductor designs, we have constantly been balancing abstraction and detail, as the graph in this blog shows. The way to deal with more complexity is to use more abstraction – which the industry has done successfully in various ways to raise the design entry level for mainstream design from transistors to gates to RTL. On the implementation side, for designs at deep-submicron levels, process technology is forcing us to deal with more and more detail. An illustration of how little the fundamental drivers have changed is the fact that I took this graph straight from a presentation I gave 10 years ago, in 2002, at the Workshop on Electronic Design Processes in Monterey. The presentation was called “IP Authoring and Integration for HW/SW Co-Design and Reuse – Lessons Learned” (slides, paper), and I have to give credit for this illustration and the slide to Paul McLellan – now with SemiWiki – with whom I had worked at Cadence and who had shown a similar graph around 2000.

So where are we heading in chip design now in 2012? If one combines market estimates, we are dealing with much more complex designs, but a flat or only slightly growing number of them. Advanced systems on chips over the next couple of years will soon integrate more than 110 IP blocks, will involve more than 70% re-use, and more than 60% of the design effort will likely be in software. They will contain multiple processing cores, forcing software to be distributed across cores. Low-power issues are becoming so important that they have to be addressed at the architectural level, too. Application-specific issues will drive specific design flows, and designers have to take into account large analog/mixed-signal content. Overlaid on all these technical complexities, we are seeing changing team dynamics regarding which tasks teams perform – design vs. verification, for example – combined with rapidly changing industry dynamics in the constant shifts between aggregation and disaggregation, making it interesting to observe who actually designs chips – system or semiconductor houses.

Looking back, in the context of all that, at the year about to end, two key trends stand out from my perspective for verification:

First, more and more verification cycles need to be executed before confidence is high enough that a tapeout can happen – and it is not just the hardware that needs to be verified, but also the software executing on the hardware’s processors. This is one important driver for turning to hardware-assisted verification – acceleration, emulation and FPGA-based prototyping. It has been a hot area in the industry this year, with some re-alignment, new products and acquisitions.

Second, brute force by itself is not sufficient to deal with complexity; verification also needs to become smarter. This is what is driving the combination of the different verification engines. In 2012 we have seen more combinations of transaction-level models and virtual platforms with RTL in simulation, acceleration, emulation and FPGA-based prototyping, as well as smarter connections between RTL simulation and hardware-assisted verification. This trend is only likely to become stronger in 2013.

Looking at the system level, in light of balancing detail and abstraction, the elevator has stopped, so to speak, and is even moving down towards more detail. While the industry has always attempted to enable users to make architectural decisions as early as possible, the amount of detail on which to base those decisions has always been limited early in the design flow. The simple reason is that models are required for early decisions, and the effort to create the models has to be balanced against the gravity of the decisions they enable. Especially in light of complex interconnect with hundreds of parameters, model creation has become so complex that the actual architectural decisions have been pushed downstream to cycle-accurate models at the RTL level. More and more customers I meet find it too risky to make decisions based on abstracted models and instead generate the RTL, execute it and base their decisions on the results they get.

This situation creates some interesting perspectives for 2013 and beyond. For one, automation in the creation of the assembled RTL will become even more important, as will automation in keeping the associated software – updated drivers, etc. – in step. This also directly impacts the two verification trends I mentioned above: getting to more cycles faster to make decisions will still be important in 2013, and so will the need to combine the different engines in smarter ways.

Happy Holidays and a successful 2013!

