ESL Requires New Approaches To Design And Verification

Lines blur between front and back end as problems that need to be solved span from end to end.


By Ann Steffora Mutschler
As more data gets front-loaded into SoC architectures today, understanding the verification challenges, as well as the communication between the front end and the back end, has never been more critical.

“All of this is getting more complicated,” said John Ford, director of marketing at ARM. “There was a time when an ARM processor core was all that was on a chip. Now there’s a lot more IP resident.”

That means the problem spans far more than just the IP. It now reaches from the architecture all the way through verification, tapeout and manufacturing, which explains why TSMC and ARM this week extended their joint development agreement to 20nm. Under the terms of the deal, TSMC will get access to everything from the Cortex-A9 to the Cortex-M0 cores, as well as the CoreLink fabric. That can be combined with TSMC’s memory and standard-cell technology so both companies can create processor-optimized packaging.

“Nothing has gotten easier, and in many ways it’s gotten more complex with the process,” said Ford. “The challenge now is to optimize for both performance and power and put something in the customer’s hands that works.”

ARM is not alone in its work with foundry giant TSMC. All three top EDA vendors—Mentor, Cadence, and Synopsys—made announcements detailing their contributions to TSMC’s Reference Flow 11 for ESL and integrated design and manufacturing closure.

“For the first time in history the TSMC reference flow has been extended to the system level,” said Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “So what happens is that the loop includes not only the semiconductor IP. It also includes technology parameters for how that IP is implemented. The flow with TSMC shows that you can annotate power and performance data from the technology all the way back into transaction-level models, which then basically run power characterizations derived from the technology. This is the first step toward physically driven system-level design, where you essentially have characterization, although it runs backwards.”
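Schirrmeister’s point about back-annotation can be pictured with a small sketch. The plain C++ below is only an illustration under assumed names and numbers (it is not real SystemC/TLM code, and the energy figures are invented, not characterization data); it shows the basic mechanism of folding technology-derived per-transaction energy and latency into a transaction-level simulation to get early power and performance estimates.

// Minimal sketch (plain C++, hypothetical names and numbers) of back-annotation:
// per-operation energy/latency values characterized from the target technology
// are fed into a transaction-level model, which accumulates them during
// simulation to produce early power/performance estimates.
#include <cstdint>
#include <cstdio>

// Hypothetical characterization data, e.g. extracted for a given process node.
struct TechAnnotation {
    double read_energy_pj;   // energy per read transaction, picojoules
    double write_energy_pj;  // energy per write transaction, picojoules
    double access_latency_ns;
};

class AnnotatedMemoryTLM {
public:
    explicit AnnotatedMemoryTLM(const TechAnnotation& t) : tech_(t) {}

    void read(uint32_t /*addr*/)  { total_energy_pj_ += tech_.read_energy_pj;  total_time_ns_ += tech_.access_latency_ns; }
    void write(uint32_t /*addr*/) { total_energy_pj_ += tech_.write_energy_pj; total_time_ns_ += tech_.access_latency_ns; }

    void report() const {
        std::printf("estimated energy: %.1f pJ, estimated time: %.1f ns\n",
                    total_energy_pj_, total_time_ns_);
    }

private:
    TechAnnotation tech_;
    double total_energy_pj_ = 0.0;
    double total_time_ns_   = 0.0;
};

int main() {
    // The same workload can be re-run with annotations from a different node
    // to compare architectures before any RTL exists.
    AnnotatedMemoryTLM mem({/*read*/ 5.2, /*write*/ 7.8, /*latency*/ 2.5});
    for (int i = 0; i < 1000; ++i) mem.write(i * 4);
    for (int i = 0; i < 1000; ++i) mem.read(i * 4);
    mem.report();
}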

What is driving all of these complex changes in design today? Clearly, in order to meet the ‘application thirst’ of consumers, designs have to be very flexible. That has pushed more of the differentiation into software, leaving the design chain to question where the value can be extracted.

“If you analyze in detail the design chain where companies like Apple make money, you will see that basically the dollar moves uphill, in the sense that they make money from the applications running on the device, so the software fueling the hardware and using the hardware becomes where the main value is. And you have this in all areas. It is the case for music and the iPod, where a lot of the money comes from the music itself. It’s the case for the wireless network versus the hardware, where the value is really in the wireless contracts,” Schirrmeister said. “From the semiconductor side it goes probably back to Mr. Moore again–you have much more technology capability than you really can fill with applications easily, so you need to commoditize some of it. You can’t really fill it fast enough, which leads to this whole notion of software becoming the differentiator.”

Architecture decisions moving forward
Another interesting dynamic is the shift toward architecture decisions being front-loaded from the application and then driven down into the implementation flow on the semiconductor side. That raises the question: Who is making which decisions?

Without a doubt, decisions are changing over time in terms of which parts of the software are developed by whom, he said. Looking back 8 or 10 years, when TI launched its OMAP platform, that was really the first time a large semiconductor vendor integrated that many players—from IP vendors to software vendors—and brought them all together. That really connected the front end and back end of the design chain.

“It has become much more common over the last 10 years that in all those domains you have to provide more of the software than you had to years ago. The responsibility for doing the software as a semiconductor vendor versus a system house has changed quite a bit, and that has an impact on the architecture. In the past it was purely the semiconductor guy with one or two architects going out and discussing with the system house, ‘This is what I think I will build in my next chip. Do you think that can be useful?’ Then, like in any good system development process, you decide on two to five early adopters who define the product with you. But that situation is sometimes now the reverse, where the application house basically says, ‘Here is how I want the architecture to look,’ so they are taking a more active role in actually defining the architecture. Our offerings in that domain, like the algorithm tools, the architecture definition tools and the virtual platform tools, help that interaction between the different customers, because I can tell my supplier what I need and which other requirements the chip has to meet in order for it to fit into my system,” he continued.

At the same time, new verification challenges are on the rise, according to Thomas Bollaert, marketing manager for Catapult C at Mentor Graphics Corp.

“When you really want to do verification, your expectation is that what you do, and what you test, is reliable for the rest of the design process. If it is correct, if it is golden at this stage, it should remain so for the rest of the process. Obviously one of the key questions is, as you do your simulations up front, how do you make sure that you actually achieve verification and not just some form of validation that it should be correct, and so on? That’s a very important starting point, because you may add a lot of information in your high-level models–software, IP, handwritten system-level models–but the reality is that you’ll only be able to achieve verification at this level if you are indeed able to carry all of that design detail downstream to, let’s say, an RTL representation in a correct-by-construction fashion,” he explained.

If you don’t have this deterministic process, everything you do up front will merely be simulation and validation. It cannot be verification, because you need that automated, correct-by-construction implementation process to take what you do at the system level down to the RTL. That’s requirement number one, he said.

That being said, there are obviously a number of things that can be abstracted in high-level models, and high-level synthesis tools can then be relied on for the correct-by-construction piece, so that you know what the tool does will guarantee correct results, Bollaert noted. “In essence, your high-level synthesis tool is actually a key component of your ESL verification strategy.”
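A minimal sketch of that reuse, using a hypothetical function and made-up test vectors, might look like the following: one untimed C++ function acts both as the model verified at the system level and as the input handed to high-level synthesis, so that if the synthesis step is correct-by-construction, the verification effort carries down to RTL rather than being repeated.

// Minimal sketch (hypothetical function and test vectors) of the reuse Bollaert
// describes: one untimed C++ function is both the model verified at the system
// level and the input to high-level synthesis. If the HLS step is
// correct-by-construction, the effort spent on this test carries down to RTL.
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Design intent, written once in untimed C++ (illustrative example).
int32_t saturating_add(int32_t a, int32_t b) {
    int64_t sum = static_cast<int64_t>(a) + b;
    if (sum > INT32_MAX) return INT32_MAX;
    if (sum < INT32_MIN) return INT32_MIN;
    return static_cast<int32_t>(sum);
}

int main() {
    // System-level test: fast to write and run, no clocks or pins involved.
    std::vector<std::pair<int32_t, int32_t>> stimuli = {
        {1, 2}, {INT32_MAX, 1}, {INT32_MIN, -1}, {-5, 5}};
    std::vector<int32_t> expected = {3, INT32_MAX, INT32_MIN, 0};

    for (size_t i = 0; i < stimuli.size(); ++i)
        assert(saturating_add(stimuli[i].first, stimuli[i].second) == expected[i]);

    // The same stimuli/expected vectors can later drive an RTL testbench
    // against the synthesized block, so nothing is re-verified from scratch.
    return 0;
}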

There are other things that come into play besides high-level synthesis, including software and IP. All of these are essential components of a complex system these days, so depending on what needs to be verified, they will or will not need to be taken into account. “Again,” he explained, “this is very similar to verification the way it is done today at the RT level. You have block-level testing, which only tests a subset of the design functionality, and you have system-level verification, still done on the RTL, that will verify the entire chip. So depending on the scope of what you want to verify, you might need more or less of the entire design description.”

Biggest challenge at system level is not technical
The biggest challenge right now is really not so much from a technology standpoint. It’s more for people to understand what they can do at the ESL level, Bollaert said. “ESL actually spans different abstraction levels: You can do purely untimed C++, you can do TLM in SystemC, you can do cycle-accurate, and the big question for people is to understand what each of these abstractions means and what they can do with them. You can only verify what is in your model, so if you don’t have timing in your model you shouldn’t expect to verify clock-cycle timing or overall performance, because it’s not in your source. For many this is sometimes a pretty big conceptual gap when moving from RTL activities. What we still see is a learning process around the coding style, the abstractions and what you can do with them. Moving from gates to RTL involved abstracting away gate-level timing–it took a while for designers to make the shift, but eventually they went there. What we see is that when you move from RTL to higher abstractions—at least above cycle accurate—it seems to be slightly more difficult.”
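The point that you can only verify what is in your model can be made concrete with a tiny, purely illustrative example (plain C++, with an assumed latency number): the untimed version can only answer whether the result is functionally correct, while only the version carrying a timing annotation can answer anything about performance.

// Minimal sketch (hypothetical, plain C++) of "you can only verify what is in
// the model." The untimed version answers "is the result right?"; only the
// loosely timed version, which carries a latency budget, can say anything
// about performance.
#include <cstdio>

// Untimed: a pure function of its inputs, no notion of cycles at all.
int filter_untimed(int sample) { return sample / 2; }

// Loosely timed: same function, plus an annotated cost in cycles (assumed number).
struct TimedResult { int value; unsigned cycles; };
TimedResult filter_timed(int sample) { return { sample / 2, /*assumed latency*/ 4 }; }

int main() {
    std::printf("functional result: %d\n", filter_untimed(10));      // checkable in both
    TimedResult r = filter_timed(10);
    std::printf("result %d after %u cycles\n", r.value, r.cycles);   // only checkable here
}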

One resource that can help shed light on this is the “High-Level Synthesis Blue Book,” written by Mentor’s Michael Fingeroff. The book is a 300-page compilation of coding styles, covering what you can do in system-level models and how to make them synthesizable.

“This is your Holy Grail, because if what you are doing at the system level is not synthesizable, then your verification is lost and you have to do it all over again,” said Bollaert. “So the biggest motivation for adopting ESL is ultimately to reduce your verification effort at RTL and below. People should not forget that what they do at the system level should be, first of all, verifiable. That means defining what verifiable means at this level. And then, of course, it should be synthesizable, so that you get RTL that is guaranteed to match.”
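As a rough illustration of the kind of coding-style discipline involved (general high-level synthesis guidance, not text from the Blue Book, and with hypothetical function names), compare a data-dependent, pointer-based loop with a version using static sizes and loop bounds that a synthesis tool can map to hardware of known size and latency.

// Minimal sketch of synthesizable versus hard-to-synthesize coding style
// (general HLS guidance; function names are hypothetical).
#include <cstddef>

// Hard for high-level synthesis: unbounded size, raw pointer, data-dependent loop.
int sum_dynamic(const int* data, std::size_t n) {
    int acc = 0;
    for (std::size_t i = 0; i < n; ++i) acc += data[i];  // trip count unknown to the tool
    return acc;
}

// Synthesizable style: static array size and loop bound (and, in practice,
// bit-accurate types such as ac_int or sc_int), so the tool can build hardware
// of known size and latency.
constexpr int kTaps = 8;
int sum_static(const int (&data)[kTaps]) {
    int acc = 0;
    for (int i = 0; i < kTaps; ++i) acc += data[i];      // fully known trip count
    return acc;
}

int main() {
    int samples[kTaps] = {1, 2, 3, 4, 5, 6, 7, 8};
    return sum_static(samples) - sum_dynamic(samples, kTaps);  // both compute 36; returns 0
}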


