Keeping Models In Sync

The addition of power models and software models is making it harder to keep SoC development in sync.

By Ed Sperling
Models and higher levels of abstraction have been hailed as the best choice for developing SoCs at advanced process nodes, but at 28nm and beyond even that approach is showing signs of stress. The number of models needed for a complex SoC has been growing at each new process node, which makes it much more difficult to keep them updated and in sync as the design progresses down the flow.

There are architectural models, transaction-level models, IP models, software models, and more recently power models that span the entire design from hardware to software and even to manufacturing. Problems can creep in at every step of the flow, from the time the models are created to the constant tweaks made by design teams, which often are in different locations. Those tweaks are necessary for a variety of reasons, ranging from optimizing performance to cutting, or in some cases boosting, power, as well as adding or subtracting functions. But all those changes make it easy to skip a synchronizing step, which is critical to keeping a design on track. And it’s becoming even more critical as chipmakers move to design for variability and computational scaling on the manufacturing side.

“Being able to build models that talk to each other is a tools issue and a methodology issue,” said Shabtay Matalon, ESL marketing manager in Mentor Graphics’ Design Creation Division. “The problem is when you split into teams and not all the models are alike. That means the abstraction of the models above RTL are not the same.”

That’s particularly troublesome when the models are created by different groups or even different companies, which is the case with software and IP. The TLM 2.0 standard is a first step toward a common interface that all other models can work through. The problem is that while there are hooks for the various power formats, there are no tools to automatically create those power models. Large chipmakers such as IBM, STMicroelectronics, Qualcomm, Broadcom and Intel are all developing their own power models at present.
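For a sense of what those hand-built models look like, the sketch below shows a bare-bones TLM 2.0-style memory target in SystemC with an ad hoc energy counter bolted on. The blocking transport interface is what the standard defines; the power bookkeeping (the energy_pj field and the per-access costs) is an assumption added purely for illustration, standing in for the models chipmakers currently have to write themselves.

```cpp
// A rough sketch, not production code: a TLM 2.0-style memory target with a
// hand-rolled power annotation. Assumes the Accellera SystemC/TLM libraries.
// TLM 2.0 standardizes the transport interface, not the power model, so the
// energy bookkeeping below is invented for illustration only.
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>
#include <iostream>

struct SimpleMem : sc_core::sc_module {
  tlm_utils::simple_target_socket<SimpleMem> socket;
  unsigned char storage[1024] = {};
  double energy_pj = 0.0;  // running energy estimate (illustrative only)

  SC_CTOR(SimpleMem) : socket("socket") {
    socket.register_b_transport(this, &SimpleMem::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    const sc_dt::uint64 addr = trans.get_address();
    const unsigned len = trans.get_data_length();
    if (addr + len > sizeof(storage)) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_read())
      std::memcpy(trans.get_data_ptr(), &storage[addr], len);
    else
      std::memcpy(&storage[addr], trans.get_data_ptr(), len);

    // Hypothetical per-access energy costs; a real power model would come
    // from characterization data and would also track island state.
    energy_pj += (trans.is_read() ? 5.0 : 8.0) * len;
    delay += sc_core::sc_time(10, sc_core::SC_NS);
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

struct TestInitiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<TestInitiator> socket;

  SC_CTOR(TestInitiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned char buf[4] = {1, 2, 3, 4};
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x10);
    trans.set_data_ptr(buf);
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    socket->b_transport(trans, delay);  // blocking transport call
  }
};

int sc_main(int, char**) {
  TestInitiator init("init");
  SimpleMem mem("mem");
  init.socket.bind(mem.socket);
  sc_core::sc_start();
  std::cout << "Estimated energy: " << mem.energy_pj << " pJ\n";
  return 0;
}
```

The transport call and sockets are standard TLM 2.0; everything to do with energy is the part the standard leaves open, which is why each company ends up with its own, mutually incompatible version.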

“Because there is nothing available we’ve had to do it ourselves,” said Bhavna Agrawal, manager of circuit design automation at IBM. “It’s not because we love to do it.”

Power models are particularly troubling because they have to span all of the pieces, all of the functions, and all of the various power tricks and tools that engineers use to make complex chips. Turning on and off a single power island is a fairly simple thing. Turning on and off 20 power islands, and putting some into reduced states of functionality, can produce unexpected results that require constant adjustments, and each adjustment affects everything else in the design and all the other models.
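To see why the interactions multiply, consider a toy sketch (hypothetical names, no real design data) of power-island state management. Even one simple dependency rule means a change to one island’s state can invalidate assumptions elsewhere, and every such rule has to be reflected consistently in every model that cares about power.

```cpp
// Illustrative only: a toy model of power-island state management. The names
// (Island, PowerState, can_enter) are hypothetical, not from any standard.
#include <iostream>
#include <string>
#include <vector>

enum class PowerState { Off, Retention, On };

struct Island {
  std::string name;
  PowerState state = PowerState::On;
  std::vector<int> depends_on;  // indices of islands that must be On
};

// An island may only power up if everything it depends on is already On --
// one of the cross-model invariants that has to be re-checked after every
// tweak to the design.
bool can_enter(const std::vector<Island>& islands, int idx, PowerState next) {
  if (next != PowerState::On) return true;  // powering down is always allowed here
  for (int dep : islands[idx].depends_on)
    if (islands[dep].state != PowerState::On) return false;
  return true;
}

int main() {
  std::vector<Island> soc = {
      {"always_on", PowerState::On, {}},
      {"cpu_cluster", PowerState::On, {0}},
      {"gpu", PowerState::Off, {0, 1}},  // GPU needs the CPU cluster awake
  };

  soc[1].state = PowerState::Retention;  // put the CPU cluster to sleep
  std::cout << std::boolalpha
            << "GPU may power on: " << can_enter(soc, 2, PowerState::On)
            << "\n";  // prints false: the dependency is no longer satisfied
  return 0;
}
```

With three islands the rule is easy to eyeball; with 20 islands, retention states and per-island adjustments, keeping every model’s view of these rules synchronized is the hard part.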

More software and IP
The complexity increases yet again when software and third-party IP are thrown into the mix. Software developers have their own methods and models, but they’re not typically compatible with the hardware models. Having teams work side by side helps force the issue, and having project managers bridge both worlds also helps. But those ideas are just being tested by major chipmakers. In the mainstream chip world they’re almost non-existent.

“Right now we only model the hardware and load the real software,” said Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “But we do have IP models, and keeping those models in sync with the rest of the design is a problem. Even being able to keep IP models in sync with the correct version of IP is not always trivial.”

It’s particularly troublesome because SoC makers increasingly are being called upon to develop not just the embedded software, but also to make sure the operating systems and application software function correctly on a device. Cadence’s whole push into EDA360 revolves around starting from the software down instead of the hardware up. But that also means creating models from the top down that don’t exist today, and potentially integrating them with the models that do exist.

“One of the challenges is that you may make changes in the TLM that are not reflected back to the overall spec,” said Schirrmeister. “You need to make sure everything stays in sync, whether you update that manually or automate it by connecting the TLM with the implementation model. About half of the users of TLMs rely on software to verify the surrounding hardware, so that becomes the link between the TLM and the implementation world. But not everyone is using TLMs, either.”

Opportunity knocks
Many of the large IP providers, such as ARM, Denali (soon to be part of Cadence) and Virage Logic (soon to be part of Synopsys), are keenly aware of the need for sharing information among various models. But Mentor’s Matalon said that to be effective, tools are needed to capture the user’s intent and translate it into models that are TLM 2.0-compatible.

“Most design today is based on re-use and integrating IP, which is why the leading IP vendors are moving in this direction,” he said. “But that also means the platform creators need to change. Some of them are invested in legacy models, and those cannot be re-used downstream because they are limited to one task. You need to use the same model for verification and power, and build models that are scalable and can be re-used at the back end of the process.”

In addition, design teams need to understand how all the models fit together. And at each new node, where complexity is making models much more attractive, they also need a way of propagating the changes made in those models to the rest of the design. And while that seems intuitive enough, it’s a lot harder than it sounds.


