Debate focuses on just how far new tools should go and whether engineers will stick to what they know best.
By Ann Steffora Mutschler
With the complexity explosion occurring in SoC design today, there is a relentless push to move design decisions to higher levels of abstraction. Resolving issues at the gate level is no longer feasible because there simply isn't enough time or engineering resources. Moreover, the resulting design may not even be competitive, because optimization at the gate level can leave a lot of power and/or performance on the table.
Performing optimizations earlier in the flow will almost always result in a better design, noted Mike Gianfagna, vice president of marketing at Atrenta. “The movement toward 3D IC stacks is making the whole problem even more urgent, since iterating at the detailed implementation level in a multi-chip design will make design schedules far too unpredictable.”
All of this requires system-level flows to be much more productive and efficient than ever. “A fundamental issue with these types of flows in the past was that they existed in isolation,” he said. “The user performed some exploratory work and then did it all over again during chip implementation. That ‘gap’ between system level and implementation level flows is now closing. The tool chain needs to be integrated and consistent if true efficiency is to be realized. Better modeling and standards, such as IP-XACT, are helping. The development of architectural level chip assembly tools is also making a big difference.”
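The IP-XACT standard Gianfagna mentions (IEEE 1685) describes a block's interfaces in vendor-neutral XML so that assembly tools can hook blocks up automatically. A minimal, hypothetical component description might look like the fragment below; the block name, bus-type values, and comments are illustrative only, not a schema-complete or vendor-validated description (it uses the 1685-2009 namespace):

```xml
<spirit:component xmlns:spirit="http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009">
  <!-- VLNV: the vendor/library/name/version tuple identifying the IP -->
  <spirit:vendor>example.com</spirit:vendor>
  <spirit:library>peripherals</spirit:library>
  <spirit:name>uart_lite</spirit:name>
  <spirit:version>1.0</spirit:version>
  <spirit:busInterfaces>
    <!-- One APB slave interface; an assembly tool reads this to wire the
         block into the fabric without hand-written hookup RTL -->
    <spirit:busInterface>
      <spirit:name>s_apb</spirit:name>
      <spirit:busType spirit:vendor="amba.com" spirit:library="AMBA3"
                      spirit:name="APB" spirit:version="r1p0"/>
      <spirit:slave/>
    </spirit:busInterface>
  </spirit:busInterfaces>
</spirit:component>
```

A chip-assembly tool consumes many such descriptions and generates the interconnect and hookup logic, which is exactly the gap between exploration and implementation that Gianfagna says is closing.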
Not everyone is convinced, however. eSilicon chief architect Sam Stewart pointed out that engineers who have been doing ASIC design for 15 to 20 years are sticking with what they know. “There is a lot of time spent just figuring out how to hook up all these blocks as well as understanding how each of these blocks works.”
“On all of the projects I’ve worked on, sometimes you don’t have a good handle on what a good solution is. What people do is they generally work in C or SystemC and try to capture something about how the system would work, so you could say that’s a system flow. It’s a system flow in the sense that what they are doing is capturing what an ASIC would do. You have to have something to translate what you are doing in SystemC into an implementation, which is instantiating things and connecting those instances,” Stewart explained.
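Stewart's distinction, between capturing behavior and instantiating-and-connecting an implementation, can be sketched in plain C++. Both views below compute the same result; the two-block design and its names are made up for illustration:

```cpp
#include <cassert>
#include <vector>

// Behavioral "system model": what the ASIC should do, captured as a plain
// function, the way a system team might first write it in C or SystemC.
int system_model(const std::vector<int>& in, int gain) {
    int acc = 0;
    for (int x : in) acc += x * gain;
    return acc;
}

// Structural view of the same behavior: explicit instances with explicit
// connections, which is the form an implementation flow actually works with.
struct Scaler { int gain; int out(int in) const { return in * gain; } };
struct Accum  { int sum = 0; void in(int x) { sum += x; } };

int structural_model(const std::vector<int>& in, int gain) {
    Scaler u_scale{gain};              // instance 1
    Accum  u_acc;                      // instance 2
    for (int x : in)
        u_acc.in(u_scale.out(x));      // the "wire" between the instances
    return u_acc.sum;
}
```

The translation step Stewart describes is exactly the move from the first form to the second, and verifying that the two stay equivalent is where much of the two years he mentions goes.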
He said one customer has spent the past two years on its ASIC. If asked what they did during that time, Stewart said that they would say they spent their two years implementing the system—conceiving it and then hooking up all the various pieces on the ASIC. As for the high-level system aspect of it, they would say that was the easy part.
“The difficult part is first specifying the ASIC. There is an ARM CPU, for example. What does it talk to, what is the width of the bus, etc. Then, this block over here, what does it talk to? Our customers would claim that system flow stuff is probably not as much of a concern as whether there is an easier way to design an ASIC,” he continued. Stewart also noted that it is not clear exactly what information needs to be, and should be, carried forward.
This information gap should ease as tool providers solidify flows and get the word out to customers directly, or at conferences and other events.
What designers need today in terms of strengthening the connection between system-level flows and implementation flows falls into three areas, according to Frank Schirrmeister, director of marketing for system-level solutions at Synopsys. The first is interfaces to the front end, such as UML. The second is the huge model ecosystem and the tools needed to enable it. The third comes in once the decisions have been made and the system-level partitioning has been determined, so that everything doesn’t have to be re-entered at implementation.
From left to right, the first thing needed is correspondence between IP blocks implemented in RTL and models of those blocks, Schirrmeister said. “We all hear these numbers like, ‘We are at 60% IP re-use on the hardware side going towards 70% over the next couple of years.’ That’s all good and proper if you integrate this at the RT level. But you want to have models in the front end that represent those blocks. That may be everything from a model for a processor core, like the ARM processor cores, which all come with system-level models or instruction-set simulators, to any type of peripheral.” For example, Synopsys has a USB 3.0 model that drivers can be written against, so the driver doesn’t have to be changed once the design moves to RTL.
Then, looking toward the middle of the diagram but still in the left-hand column of block implementation, for the new blocks that aren’t re-used as IP, he explained, “you need implementation from the high-level model to the implementation, and there’s a verification linkage there and an implementation linkage.” Among the areas where Synopsys plays here are high-level synthesis tools, whereby the high-level C model is the basis for generating the implementation. As part of that, testbenches are also generated to validate that what was implemented is correct. “It’s the same issue as RTL downward. You have RTL to implementation, RTL to gate, you have the synthesis flow and then you have functional verification by simulation and eventually equivalence checking. And the same will happen here, where equivalence checking goes together with high-level synthesis,” Schirrmeister explained.
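High-level synthesis starts from exactly this kind of untimed C/C++ function. As a hedged illustration (a made-up 4-tap FIR filter, not a Synopsys example), the model below is both the input an HLS tool would synthesize and the golden reference a generated testbench checks the RTL against:

```cpp
#include <cassert>

// Hypothetical high-level model of a 4-tap FIR filter: the kind of plain
// C/C++ function a high-level synthesis tool takes as input.
constexpr int kTaps = 4;

int fir(const int (&coeff)[kTaps], const int (&sample)[kTaps]) {
    int acc = 0;
    // An HLS tool would typically unroll this loop into parallel
    // multiply-accumulate units in the generated RTL; here it simply
    // defines the reference behavior.
    for (int i = 0; i < kTaps; ++i)
        acc += coeff[i] * sample[i];
    return acc;
}
```

The verification linkage Schirrmeister describes amounts to running the same stimulus through this function and through the generated RTL, and eventually proving their equivalence formally.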
Also still in the block implementation column is processor design. If a designer wants an application-specific processor, or a custom processor for offloading the most compute-intensive tasks, he or she takes a high-level model and generates the processor from there. Synopsys has a C-dialect language for describing instruction-set architectures, while other vendors use the nML processor description language.
In the right column, “Software Development,” Synopsys hasn’t abstracted as much in the areas that are close to hardware, he said. “Essentially the trick is what we do today with virtual prototyping. We execute the software on a model of the hardware. We are not executing a model of the software on a model of the hardware. That would be the next step upwards.” On the software side today the focus is more on software implementation, with linkages similar to those for high-level synthesis but for automated software implementation. “What we are following fairly closely are things like code generation from UML or MathWorks Simulink, which are equivalent to high-level synthesis on the software side,” Schirrmeister said.
Moving to the middle column, “Chip Integration/HW-SW Integration” includes, for Synopsys, the IP-XACT-compliant “core” tools (coreAssembler, coreBuilder, coreConsultant) and, for Mentor Graphics, Platform Express.
On top of that sit the system-level and prototyping tools, which hook things together at the transaction level.
Synopsys is providing high-level models of system-level fabrics, such as the Sonics fabric, ARM’s AMBA fabric and the Arteris fabric. These models allow the interconnect, as well as all of the blocks in the design, to be configured, so that all of the items from the left column of the diagram fit together on the hardware side. The software is mapped onto the fabric, the design team assesses whether it will all fit together, and once the system is configured appropriately, it can be linked down into a tool like ARM’s AMBA Designer, Mentor’s Platform Express, or Synopsys’ coreAssembler.
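What configuring a fabric model involves can be hinted at with a small sketch. The address-map-plus-decode structure below is a generic illustration, not the API of Sonics, Arteris, or any AMBA tool: at the transaction level, the configuration is essentially a list of slaves with address ranges, and the fabric's job is to route each transaction to the right one.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Toy transaction-level fabric configuration: named slaves with base
// addresses and sizes. Names and layout are illustrative only.
struct Slave { std::string name; uint32_t base; uint32_t size; };

struct Fabric {
    std::vector<Slave> address_map;
    // Return the slave owning an address, or nullptr if it is unmapped --
    // a hole here is exactly the kind of integration bug a system-level
    // model catches before RTL assembly.
    const Slave* decode(uint32_t addr) const {
        for (const auto& s : address_map)
            if (addr >= s.base && addr - s.base < s.size) return &s;
        return nullptr;
    }
};
```

Once a configuration like this checks out against the software's expectations, handing it down to an assembly tool such as coreAssembler is the linkage the article describes.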