Architect Specs Harder To Follow

Each new node adds uncertainties and problems, especially at 7nm. Interdisciplinary communication becomes essential.

Interpreting and implementing architects’ specifications is getting harder at each new process node, which is creating problems throughout the design flow, into manufacturing, and sometimes even post-production.

Rising complexity and difficulties in scaling have pushed much more of the burden onto architects, who now have to deal with everything from complex power schemes to new packaging approaches, as well as develop innovative ways to handle congestion and throughput to memory and to ensure signal integrity and prioritization. But following architects’ blueprints isn’t always as straightforward as it might appear.

“Perhaps the spec is very high level,” said Mark Carey, product marketing manager at Mentor Graphics. “Perhaps it comes from a customer that says they need a product that can do this, and it has to process this many frames in a second, or it has to have some kind of high-level requirements about how much power it uses, how fast it does something.”

Once the spec is handed off to design teams, prototyping tools can be used to start investigating, budgeting, analyzing and making decisions about the number of cores, how much RAM is required, what size of cache is needed, and where the functionality should live. For example, should a certain function be implemented within the hardware or software?
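
As a rough illustration of that kind of early budgeting, the sketch below enumerates a few candidate configurations against high-level targets. All of the numbers (frame rate per core, power per core, cache cost) are hypothetical placeholders for illustration only, not figures from any vendor quoted here.

```python
# Minimal sketch of early architectural budgeting against a high-level spec.
# All figures (frames/sec per core, power per core, cache power) are
# hypothetical placeholders for illustration only.

from itertools import product

SPEC = {"min_fps": 60, "max_power_mw": 750}   # assumed top-level requirements

CORE_FPS = 18                # assumed frames/sec contributed per core
CORE_POWER_MW = 140          # assumed active power per core, in mW
CACHE_POWER_MW_PER_MB = 40   # assumed power cost per MB of cache
CACHE_FPS_BONUS = 0.05       # assumed throughput uplift per MB of cache

def evaluate(cores, cache_mb):
    """Return (fps, power) estimates for one candidate configuration."""
    fps = cores * CORE_FPS * (1 + CACHE_FPS_BONUS * cache_mb)
    power = cores * CORE_POWER_MW + cache_mb * CACHE_POWER_MW_PER_MB
    return fps, power

candidates = []
for cores, cache_mb in product([2, 4, 6, 8], [1, 2, 4]):
    fps, power = evaluate(cores, cache_mb)
    if fps >= SPEC["min_fps"] and power <= SPEC["max_power_mw"]:
        candidates.append((power, cores, cache_mb, fps))

# Prefer the lowest-power configuration that still meets the spec.
for power, cores, cache_mb, fps in sorted(candidates):
    print(f"{cores} cores, {cache_mb} MB cache -> {fps:.0f} fps, {power} mW")
```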

The virtual prototype almost becomes the next level spec in itself because it helps drive the requirements for the lower level. “It becomes the requirements for the RTL designers,” Carey said. “They can almost use the abstract models as the spec itself. As well as using it as a spec, they could use the models as a reference to make sure that they are implementing things consistently with the requirements. So the virtual prototype isn’t just useful at the start of the process. It can be used throughout the process.”

But it also may be different from the original spec, and in the case of advanced nodes, that delta may be significant.

Learning as they go
The handoff and interpretation of a specification has been getting more difficult ever since device scaling became problematic. This helps explain why there are so few successful designs at 20nm, and why it is taking longer to roll out designs at each new node after that. There are lots of unknowns on the part of architects, as well as a lack of experience in dealing with new problems that can crop up at each new node.

At 7nm, for example, design teams and spec architects are learning the technology-related challenges as they go. “With any new technology, you always end up with many things that are much harder than legacy technology,” observed Vassilios Gerousis, distinguished engineer at Cadence. “Designers definitely face additional constraints or difficulties with 7nm. For example, RC delays become bigger (in a relative sense), and timing closure becomes harder. As such, power, performance and area (PPA) becomes harder to achieve. Designers are tasked with having to come up with an updated methodology and techniques to get back to a comfortable stage.”
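
A rough back-of-the-envelope view of why RC delay grows in a relative sense: wire resistance per unit length rises as the wire cross-section shrinks, while gate delay keeps improving, so the wire term takes up a growing share of the path delay. The scaling factor below is generic, not specific to any foundry’s 7nm process.

```latex
% Per-unit-length wire resistance for resistivity \rho, width W, thickness T:
%   r = \rho / (W T).
% If W and T each shrink by a scale factor s < 1, while capacitance per unit
% length stays roughly constant, the wire's RC delay grows roughly as 1/s^2
% even as gate delay keeps shrinking, so wires dominate more of the path.
\[
  r = \frac{\rho}{W\,T}, \qquad
  r' = \frac{\rho}{(sW)(sT)} = \frac{r}{s^{2}}, \qquad
  t_{\text{wire}} \approx \tfrac{1}{2}\, r'\, c\, L^{2}
\]
```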

So while the technology challenges are harder to overcome, designers also need to learn about newer technology-related issues they have not dealt with before, which can have a big impact on the design, Gerousis explained.

For example, self-aligned double patterning requires restricted design rules, such as 1D routing (no wrong-way wires) and a limited set of allowed wire widths on lower-level metal layers, which makes it more difficult to build clock trees and power distribution. There also are new classes of layers and rules that designers have to learn and get comfortable with, such as metal cut layers and their associated rules.

“New technologies like 7nm affect how designers do clocks and power distribution. They have to create new methodologies to address self-aligned double patterning and trim effects on clocks and power distribution,” Gerousis said.

Better strategies
Rather than a fixed blueprint, the initial spec is evolving into much more of a living document that can be tweaked by the architectural, design and verification teams. “The more interdisciplinary things get, the better off it is,” asserted Larry Lapides, vice president of sales at Imperas.

Consider the case of a semiconductor company that has kept a fair bit of separation between different design functions. “As a result of that — not getting enough feedback, not getting enough cross-communication — they’ve had to, relatively late in the process, de-feature their devices,” Lapides said. “In one case, the features they were adding on the processor side were not going to be able to be supported by the OS team. If the architecture team had talked to the OS team six months earlier and gotten management buy-in for what they were doing, which actually were some pretty cool features, then it is likely they could have gotten support from the OS team. The problem was it came to the OS team a little bit late, and those people are not used to thinking about software up front — at least in the semiconductor world they’re not thinking about that.”

In another case, an embedded systems company that is developing its own SoC is starting from the product it is delivering, which has a heavy software component to it, so a lot of the value is in the software as well as the hardware. “If you think about an SoC coming from a semiconductor vendor that has to deliver a software stack, the value is still in the silicon,” Lapides said. “But in this embedded systems company, because they are coming from the end product, there’s much more of an understanding of the value of software and how they are going to be able to partition the value between software and hardware, what they’re going to be able to do in terms of implementing the overall design, so that interdisciplinary communication is incredibly important.”

Without that kind of interdisciplinary communication between the architecture and design teams, designs may end up too complex to implement or verify. “You can get something where you had to do more re-spins than you thought you would,” he said. “That’s the risk on the semiconductor side, and that re-spin risk is an obvious one that people run into on a regular basis.”

The same thing happens on the software side, except that it isn’t just stepping off a cliff and having to do a re-spin that is going to cost millions of dollars. The fixes show up in multiple releases, each of which carries a cost. And while those costs are lower than a hardware re-spin, they do add up over time.

“Instead of stepping off a cliff, it’s rolling down a grassy hill,” Lapides said. “You end up going down just as far as with a re-spin. The numbers are pretty close to the same if you look at it over the timeframe of a project, except when you fall off a cliff you notice it. When you roll down a hill you’re thinking it isn’t so bad except it’s a long way down still. You build up momentum going down that grassy hill, and it’s an awful bump when you get to the bottom.”

Morphing effects
While problems will always crop up at new nodes, the bigger problem is that the delta between the initial spec and final product is getting larger at new nodes. That requires greater interdisciplinary communication, which isn’t always so easy. For one thing, not all of the IP is developed in-house anymore. For another, the process itself is constantly being tweaked to improve yield, which has an impact on whether the design will work according to specification.

“When an architect defines a product they have a certain vision in mind, and they define the system,” said Anush Mohandass, vice president of marketing at NetSpeed Systems. “The trouble is that as that vision gets handed across different teams, from architecture to RTL to physical design to layout to chip, it morphs. I have seen architects scratch their head and say, ‘This is not what I wanted.'”

Silos compound the problem. There may be an architecture team with 20 people, a design team with 35 people, and a layout team. Each one believes it has done its job and hands the work off to the next, but along the way each team makes slight tweaks. Together those add up to big changes, which frequently subvert the architect’s original vision.

“How do you ensure the architect’s vision stays true throughout?” asked Mohandass. “Typically, the architect says, ‘Here is what I’m doing in the layout, go implement this for me.’ The physical design person needs to have the ability to say, ‘You are asking me to do this in 16nm finFET or in 10nm, where the cost of wires is a lot more than the cost of gates. I don’t mind you putting a lot more logic, but give me fewer wires.’”

As such, a platform is needed that lets these two people have a conversation in a meaningful way throughout the design phase.

These challenges will only get more intense with the move to smaller geometries, and the metrics thought to be important will change. The classic example is gate count, he added. “Any person who has designed chips in 40nm and 28nm, they’ll want to know the gate count. But in 16 and 10nm, that’s not the question you should be asking. It’s about the wires because that’s the dominating factor.”
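
As a purely illustrative sketch of that shift in metrics, the snippet below shows how the ranking of two design options can flip once wires, rather than gates, dominate the cost. The weights and counts are made-up placeholders, not process data.

```python
# Illustrative only: how the relative cost of two design options flips when
# wires, rather than gates, dominate. The weights are made-up placeholders,
# not measurements from any process node.

# Each option: (equivalent gate count in thousands, long route count in thousands)
OPTIONS = {
    "more logic, fewer wires": (500, 20),
    "less logic, more wires": (300, 60),
}

# Assumed relative cost weights per unit of gates vs. wires at each node.
WEIGHTS = {
    "28nm-like (gate-dominated)": {"gate": 1.0, "wire": 2.0},
    "16nm-like (wire-dominated)": {"gate": 1.0, "wire": 12.0},
}

for node, w in WEIGHTS.items():
    print(node)
    for name, (gates_k, wires_k) in OPTIONS.items():
        cost = gates_k * w["gate"] + wires_k * w["wire"]
        print(f"  {name}: relative cost {cost:.0f}")
```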

When it comes down to it, if separate tools continue to only address each silo, these challenges will remain. What’s needed now are tools that look upstream and downstream so that the different phases of design work together well.


