Considering the value of models in software development and architecture analysis.
During the first half of this year I again had more discussions with customers about models. Are models back? For what purpose? In short, it looks like models are well adopted and in use for software development. For performance and architecture analysis, however, as a recent presentation from Renesas at CDNLive Japan shows, users simply use RTL because that level of accuracy is required. In combination with emulation, they get enough speed to even bring software into the analysis.
Personally, I have quite a history as a strong believer in models. At one point I even thought that abstracting both hardware and software was a good idea, as I presented back in 2002 at EDPS in Monterey in a paper called, “IP Authoring and Integration for HW/SW Co-Design and Reuse – Lessons Learned” (full paper here). Well, that didn’t happen, mostly for lack of models. Back in 2008, quite consistently across different companies in the system-level market, I talked about the different modeling styles in “Bring in the Models!” More recently, in 2014, we had a roundtable called, “Are Models Holding Back New Methodologies?” …and models were again identified as a key issue.
So why is modeling still not perceived as a solved issue, even though we have been working on this for the better part of two decades (if I look at the Felix initiative as a trailblazer starting in 1996)?
Let’s just declare victory…
Models, using real abstraction, work for software development.
Models, using real abstraction, work for implementation – high-level synthesis from SystemC models is a reality – and also for integration, using topology models such as XML descriptions to drive SoC integration.
Models, using real abstraction, were never going to work for architecture analysis, because speed, accuracy and time of availability would never align. Users have reverted to accurate models either in RTL or automatically derived from RTL.
Very recently, Javier Orensanz, GM of ARM’s Development Solutions Group, gave a presentation on models – see the write-up here in “Speed or Accuracy? ARM Shares Insights on Virtual Prototyping” – and used a great table to crisply confirm my points from above:
For software development, Fast Models are the solution. We connect them to the Palladium emulation platform for early software development using hybrid emulation as well. For architecture analysis, ARM provides Cycle Models, formerly known as carbonized models from Carbon Design Systems. These models are automatically derived from RTL and can serve as an RTL replacement.
Avoid the other two quadrants. If a model is slow and inaccurate, then something went severely wrong. And Javier called models that try to be both fast and timing-accurate in software snake oil – the industry tried, and we were not successful. However, users treat RTL in an emulator as a model – with a rather big dongle like the Palladium platform attached – and that is what customers are using to get to the speed required to bring in software and to accelerate software development.
The most recent example is Renesas. They had announced a while back that they adopted Cadence Interconnect Workbench together with the Palladium Z1 platform. At CDNLive Japan, they provided an update on actual results. With the automation provided by Interconnect Workbench, Renesas reduced the testbench generation time from six days to less than a day. The overall effort was reduced from 22 days to 6 days. Renesas also confirmed speedup from 50 hours in simulation to 2 minutes on the Palladium Z1 platform, with 10 minutes of post-processing. The smart combination of simulation and emulation allowed an almost 4x effort reduction and 250x speedup.
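The arithmetic behind those figures is easy to check. A quick sketch (variable names are mine, figures are the ones Renesas reported):

```python
# Sanity-check the Renesas numbers cited above.

# Runtime: 50 hours of simulation vs. 2 minutes on the Palladium Z1
# platform plus 10 minutes of post-processing.
sim_minutes = 50 * 60          # 3000 minutes of simulation
emu_minutes = 2 + 10           # emulation run + post-processing
speedup = sim_minutes / emu_minutes
print(speedup)                 # 250.0 -> the quoted 250x speedup

# Effort: overall flow reduced from 22 days to 6 days.
effort_reduction = 22 / 6
print(round(effort_reduction, 2))  # 3.67 -> the quoted "almost 4x"
```

So the "250x" and "almost 4x" claims follow directly from the reported raw numbers.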
Models exist and work for software development, implementation, and assembly. For architecture analysis, users have found viable workarounds that deliver the right accuracy at the right speed. This does not solve the issue of early availability, but for automatically generated RTL like complex interconnect, and for platform-based design with derivatives, it solves the actual user problems.
Models are not dead. Long live the models!