Incremental Design Methodologies

Making changes in designs and still hitting narrow market windows with complex chips requires a different approach.


There are times when we become stuck in the past, or choose to believe something that is no longer true or actually never was true. As we get older, we are all guilty of that. History tends to rewrite itself, especially given that this industry is aging. One of these situations occurred recently, and comments from an industry luminary didn’t align with the thoughts and memories of other people. However, following the path led to some other interesting questions that were very much related to the original misconception.

It all started with a statement that the industry has a problem caused by the flattening of the design database and that this in turn leads to problems with incremental design. “Because a layout and the back end always start with a fresh sheet of paper,” said the source, “it is impossible to re-use much from design to design.” This is troubling when looking at some of the new kinds of designs, such as those needed for Internet of Things (IoT) edge devices. Here we can expect to see families of devices, each with a small variant such as a different sensor or a different type of local processing. There is a strong industry call for these devices to be designed and verified much faster than is typical today.


The need for speed
Performance is not the primary criterion for these devices. Time to market and cost are the two biggest drivers, closely followed by power. A few months ago, Simon Rance, product manager at ARM, said, “Many of the EDA tools today have too much overhead for IoT edge type of designs. Three months is not an aggressive enough goal [for the amount of time necessary to design and verify a new design]. It needs to be in the one- to two-month range. The back end must become push button.”

Later in that discussion he also said that small incremental changes should follow a different timescale. “We have to bring the timetable down, even to the order of a couple of days.” This requires a new way of thinking about the problem, including new tooling and methodologies. In short, we need a methodology based on small changes rather than new designs. To tackle this, some are looking at extending existing engineering change order (ECO) flows, while others prefer to see it as an incremental design philosophy. The EDA industry has invested a lot in developing ECO capabilities in its tools that allow late-stage modifications to be made in a design. The question becomes, “How big can an ECO be, and if the platform is designed with this intention in the first place, can an ECO be a change in an IP block, or the addition of a sensor?”

Problems in the front end
High-level synthesis (HLS) appears to be a leader in supporting these new methodologies. These tools enable quick results to be obtained and, if time remains, allow the architecture to be fine-tuned or additional optimizations to be added. Achronix recently announced incremental compilation, which it claims “dramatically increases productivity by allowing FPGA designers to compile portions of their design that have changed while leaving the remainder of their design intact.” Those incremental changes can then be passed through the tool flow. The company claims this provides up to a 58% speed-up compared to a non-incremental flow.
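For readers unfamiliar with the idea, the decision logic behind such a flow can be sketched in a few lines. The Python below is only an illustration, not Achronix’s implementation: it hashes each source file and recompiles only the modules whose contents have changed since the last run. The rtl/ directory, file names, and cache location are invented for the example.

```python
# Minimal sketch of an incremental compile decision (illustrative only).
# Modules whose source hash is unchanged keep their cached result;
# only the modules that changed are sent back through compilation.
import hashlib
import json
from pathlib import Path

CACHE = Path("build/.hashes.json")  # hypothetical cache of last-seen hashes

def source_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def modules_to_recompile(sources: list[Path]) -> list[Path]:
    old = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    return [p for p in sources if old.get(p.name) != source_hash(p)]

def update_cache(sources: list[Path]) -> None:
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(json.dumps({p.name: source_hash(p) for p in sources}))

if __name__ == "__main__":
    rtl_dir = Path("rtl")                                   # hypothetical source tree
    rtl = sorted(rtl_dir.glob("*.v")) if rtl_dir.is_dir() else []
    for module in modules_to_recompile(rtl):
        print(f"recompile {module.name}")                   # stand-in for the real compile step
    update_cache(rtl)
```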

Calypto announced similar capabilities back in November 2014. Bryan Bowyer, product marketing for high-level synthesis at Calypto, notes that “in the main flow we conceptualize the design as a lot of little pieces and focus where you need to focus. You may have decided how you want to structure the design but when you push it through synthesis, something may not be good enough. You didn’t meet timing or power. You want to be able to lock down the pieces that were good and feed that information back. So the tools can consume timing or power information out of RTL synthesis and back annotate it. You then use that to resynthesize other pieces.”

Figure: Rather than allowing the entire design to change because of a small change in the source, parts of the design are locked down. Source: Calypto.
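Bowyer’s description implies a simple control loop: freeze what already meets its budget and rework only the rest. The sketch below is a hypothetical illustration of that loop, not Calypto’s tooling; the block names, budgets, and resynthesize() stub are invented.

```python
# Illustrative lock-down loop: blocks whose back-annotated timing and power
# already meet budget are frozen, and only the failing blocks are re-run.
from dataclasses import dataclass

@dataclass
class BlockResult:
    name: str
    slack_ps: float   # worst timing slack reported by downstream synthesis
    power_mw: float   # back-annotated power estimate

def resynthesize(name: str) -> None:
    print(f"resynthesizing {name} with updated constraints")  # placeholder step

def iterate(results: list[BlockResult], power_budget_mw: float) -> None:
    locked, open_blocks = [], []
    for r in results:
        if r.slack_ps >= 0 and r.power_mw <= power_budget_mw:
            locked.append(r.name)        # good enough: freeze and reuse
        else:
            open_blocks.append(r.name)   # failed timing or power: rework
    print("locked:", locked)
    for name in open_blocks:
        resynthesize(name)

iterate([BlockResult("dma", 12.0, 1.8),
         BlockResult("codec", -35.0, 2.4),
         BlockResult("spi_if", 4.0, 0.3)], power_budget_mw=2.0)
```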

Dave Pursley, senior principal product manager for system-level design at Cadence, also points to some of the natural advantages of HLS in this context. He says, “HLS enables you to get to functional hardware quicker, even if you haven’t spent large amounts of time optimizing it. At any time you can stop development and use what you have. This is akin to Agile development methodologies.”

But HLS only accounts for a small amount of the logic in each chip. “There is already much reuse in the case of IP,” points out David Botticello Jr., senior staff support application engineer at Cadence. “It is probably more appropriate than any kind of incremental design.”

The concept of an ECO can be applied on a larger scale to make incremental design changes. “Functional changes can be made on a large scale if the changes are constrained to a block- or top-level hierarchy,” says Ruben Molina, product marketing director for timing signoff at Cadence. “Many designs are done this way where designers simply insert updated versions of the hierarchical blocks without affecting other parts of the design. The interfaces between the block level and the top level remain the same, which means reduced physical design work for unchanged areas of the chip. This is an ECO on the largest scale.”
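The precondition Molina describes, unchanged interfaces between block and top level, is easy to check mechanically. The following sketch is a hypothetical illustration of such a pre-swap check; the Port structure and port lists are invented for the example.

```python
# Sketch of a pre-swap sanity check: an updated block may replace the old one
# only if its interface (port names, directions, widths) is unchanged, so the
# top level and neighbouring blocks need no physical rework.
from typing import NamedTuple

class Port(NamedTuple):
    name: str
    direction: str   # "in" or "out"
    width: int

def same_interface(old: list[Port], new: list[Port]) -> bool:
    return sorted(old) == sorted(new)

# Hand-written placeholder port lists for an old and an updated block.
old_block = [Port("clk", "in", 1), Port("data", "in", 32), Port("irq", "out", 1)]
new_block = [Port("clk", "in", 1), Port("data", "in", 32), Port("irq", "out", 1)]

if same_interface(old_block, new_block):
    print("interface preserved: swap the block, keep the surrounding layout")
else:
    print("interface changed: larger ECO or re-layout of neighbours needed")
```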

But even when most of the design remains fixed, there are methodology problems in verification that need to be resolved. “The problem is one of re-verifying at the functional level,” notes Randy Smith, vice president of marketing for Sonics. “When you buy IP it has already been verified many times by the vendor and any further verification of what is inside the box is a re-do of the verification that has been done. You should only ever have to treat it as a black box.”

Even a small change in the design can have a large impact on functional verification. “An SoC is a sea of interfaces,” says Pranav Ashar, chief technology officer at Real Intent. “The verification problems are analyzing information flowing through these interfaces and its far-downstream impact.”

“In many SoC projects it is the lack of rigor around interfaces that causes a lot of the cost and a lot of the delay,” adds Drew Wingard, chief technology officer for Sonics. “When you create an API, you have to create the necessary test structures around them, you have to get notified if you make a change that is incompatible. These are the things that quickly cause the types of problems we saw in the big chip projects.”

There is considerable industry hope that the effort just starting within Accellera will help to change some of the verification priorities such that important end-to-end functionality is tested first before implementation corner cases are considered. It also may enable more focused verification when small changes are made. But preserving things at the top level is much easier to do logically than physically.

Problems in the back end
The reality is that at some point in the flow, you have to get rid of hierarchy. “Abstraction and a hierarchical flow are useful in as much as they help with complexity and workflow, but you cannot wish away the need for flat full-chip analysis capability,” says Real Intent’s Ashar.

Complexity is definitely one of the big drivers for change. “Today it gets more complicated because the area of effect can be pretty large,” points out Sonics’ Smith. “Things placed near each other can have an impact on each other. We are stuck with the problem of having to worry about adjacent things due to these proximity effects. If we compare chips to boards, the individual components on a board are better isolated electrically and thermally. While they do complicate things at the board level, it is much worse on a chip.”

Cadence’s Botticello looks back on the early days of physical verification. “Physical verification didn’t run hierarchically. It was actually done flat by all vendors. Only a few internal customer CAD groups did hierarchical verification. It was simple, and since the early days there was no need to preserve hierarchy in the final stage of verification. Hierarchy was usually only added to improve LVS debugging and for DRC performance.”

“There has always been the notion of doing ECOs, but those were not at the level of new designs,” says John Ferguson, technical marketing engineer at Mentor Graphics. “Since we are now doing more hierarchical routing, you at least have large reusable blocks that can be moved around or changed out, while keeping large sections of the design very similar. I would venture that most of the leading IC companies are doing some level of that today.”

Hope for an incremental flow
There are people in the industry who do hold out hope for true incremental design. “If the platform has been defined well, then adding a new block that has been designed to sit in the platform doesn’t change the basic design,” says Sonics’ Smith.

Mentor’s Ferguson agrees: “IoT would be the perfect application for incremental design. If you have one new block, or want to add some new capability, I can’t think that something like that would be too difficult to swap in, assuming the footprint is roughly the same size.”

Keeping the same footprint is not always easy, though. “With a Network on Chip (NoC), you are attempting to guarantee quality of service for each of the blocks communicating through the network,” Smith says. “When you add a block that doesn’t have high quality-of-service requirements, you can probably do this with little perturbation. If it has significant quality-of-service requirements, then it may have a more serious impact. Preserving the physical layout may be difficult.”
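Smith’s point can be reduced to a simple budget calculation. The toy example below is purely illustrative; the link capacity and bandwidth numbers are invented, and no real NoC allocates traffic this crudely.

```python
# Toy bandwidth-budget check: a new low-rate block fits within the existing
# NoC link's guaranteed capacity, while a new high-QoS requester exceeds it,
# forcing changes to the network and probably the floorplan.
LINK_CAPACITY_GBPS = 12.8                       # invented link capacity

existing_gbps = {"cpu": 4.0, "display": 3.2, "dma": 1.6}  # invented demands

def fits(new_block: str, demand_gbps: float) -> bool:
    total = sum(existing_gbps.values()) + demand_gbps
    print(f"{new_block}: {total:.1f} of {LINK_CAPACITY_GBPS} Gb/s guaranteed")
    return total <= LINK_CAPACITY_GBPS

fits("low-rate sensor", 0.2)     # fits: little perturbation expected
fits("camera pipeline", 6.0)     # exceeds capacity: NoC and layout likely change
```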

But in the end, there are still those interrelated physical effects. “Incremental design would have little impact on physical verification,” says Ferguson. “There is not a lot we could take advantage of because runtimes are dominated by things that require you to look at the entire chip, the full context. Trying to take shortcuts doesn’t help you. There are increasing amounts of analysis that have to be done flat.”

Ferguson adds that, when pressed for a solution, “you can attempt to do hierarchical design by making large blocks that are routed together, and then stitch them together at the top so you have less final flat work at the top. To do that you have to introduce more guard-banding and you have to do more abstracting. Each block needs to have a timing library and this is not as accurate as looking at the detailed parasitic path all the way through. Timing, power and other types of analysis will be less accurate, so you have to increase the guard-bands and, in effect, over-design.”
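A back-of-the-envelope calculation shows why the extra guard-banding translates into over-design. The numbers below are invented purely for illustration.

```python
# A path analysed flat sees its real delay; the same path analysed through an
# abstracted block timing library carries an extra pessimism margin, so the
# hierarchical flow reports less slack for the same silicon.
clock_period_ns = 2.0
real_path_delay_ns = 1.70          # detailed, flat parasitic analysis
abstraction_margin_ns = 0.15       # pessimism added by the block-level model

flat_slack = clock_period_ns - real_path_delay_ns
hier_slack = clock_period_ns - (real_path_delay_ns + abstraction_margin_ns)

print(f"flat analysis slack:         {flat_slack:.2f} ns")
print(f"hierarchical analysis slack: {hier_slack:.2f} ns")
# To recover the reported shortfall, the team over-designs: larger gates,
# extra pipeline margin, and therefore more power and area.
```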

The tradeoff is to maintain a physical hierarchy and confine the changes to one area, says KT Moore, group director of product marketing for the silicon signoff business unit at Cadence. “The result is generally suboptimal with respect to performance and area, but if time to market and enhanced features are the most important factors, and the design change is incremental, then it probably doesn’t matter that the design burns a bit more power and area. Product lifecycles in the IoT sector are so short that the benefits of being first to market are more important than a few milliwatts of power. Considering that some designs are I/O-limited versus core-limited, even area becomes a nonfactor.”


