The Controversial Spec

Defining what a good and complete specification is and how to create it is a controversial subject.


By Ann Steffora Mutschler

Design sophistication and complexity have made it increasingly difficult to fully specify the expected behavior of a block in an SoC, yet design and verification teams need exactly that. How do you write a “good” and “complete” specification of functionality?

It turns out that defining what a good and complete specification is, and how to create one, is a controversial subject. Today there are three different levels of specifications.

“The first one, intuitively, is the system specification, and that really describes the product, how the product is going to be used and that influences the design,” said Harry Foster, chief scientist for verification at Mentor Graphics. “The second level is the IP block specification, and that defines the various IPs and what they’re going to do, how they will interact with software, hardware and other blocks. The third level that you tend to see today includes custom interface and interconnect specifications. Quite often these are standards-based. For example, it might be one of the AMBA bus protocols—that’s an internal protocol between IP blocks. Or it might be an external one like PCI-Express or USB.”

But who writes that spec? “Typically, if you look at the system specification, you find that it tends to be written, unfortunately, by the final customer of the product,” Foster noted. “The reason I say ‘unfortunately’ is that while it will have performance requirements and all the aspects that the designers need, like power and performance, it tends to be focused mainly on the end user of the product. That’s where things tend to fall apart, because the other consumers of that specification are the design and verification teams. Certainly the design team takes that spec and tries to figure out how to implement it and write the microarchitecture specification, which is hardware/software-related. And the verification team is trying to figure out the verification objectives that the specification should define. There’s so much work that goes on by both the design and verification teams, and the problem is that the original spec wasn’t written with them as consumers. In other words, the industry as a whole could do a lot better if the architects writing the spec realized that the verification team and the design team are among its consumers.”

Reuse issues
Another issue that complicates matters is the fact that in most subsystems and SoCs today, a very small percentage of the hardware is actually new design. “If you look at a big mobile SoC platform, for example, from any of the leading vendors out there I would guess that in a lot of cases they’re only changing maybe 20% of the actual design from one chip to another,” said Cadence Fellow Mike Stellfox. “Part of the answer to the question is you don’t normally sit down from scratch and write a spec for the entire system. You have something existing, and it’s more about specifying the incremental components or changes from the previous generation.”

As such, there is a lack of good specs, he said. “I’m coming from a verification background, and in order to do verification well, you need to have a good spec. In general it varies from customer to customer, but there is a lack of really good specs out there that are well-written and complete. A lot of times it’s just because there’s not enough time to write a full spec, in addition to the fact that most designs aren’t really done from scratch.”

However, this lack of good specs is most acute at the system level, Stellfox asserted, “because if you go and write a spec for a given block, that’s not a huge challenge. It’s just a matter of carving out the time to say, ‘I’ve got these interfaces that need to send and receive data at these types of throughputs, and then I have these types of features inside of the design, configuration and modes of operation along with different types of processing engines or accelerators or what have you.’ What becomes a bigger challenge is specifying the system behavior in a way that’s predictable, because systems today have multiple cores, multiple masters/accelerators sitting on top of an interconnect fabric, and the big challenge there is to specify the behavior and the performance characteristics of a combination of many of these blocks that make up a subsystem or an SoC.”

Until now, many design and verification teams have relied on the architect putting together a spreadsheet that statically analyzes things such as latency and bandwidth/throughput, and those tend to be worst-case estimates. As a result, most of the time project teams overdesign, because they don’t really know, in a dynamic situation, whether they will have enough bandwidth if data is being pumped in from the Ethernet to memory, or if live video coming in from the camera needs a certain bandwidth to sustain the rates required for processing it inside the SoC. All of that complexity makes it difficult to specify up front.
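The static, spreadsheet-style analysis described above can be sketched in a few lines. The masters, bandwidth figures and DDR capacity below are entirely hypothetical; the point is only to show why summing per-master worst cases leads to pessimistic numbers and, in turn, overdesign:

```python
# Hypothetical static worst-case bandwidth budget, the kind of analysis
# an architect might keep in a spreadsheet. All figures are illustrative.

# Peak (worst-case) bandwidth demand per master, in MB/s.
masters = {
    "cpu_cluster": 3200,
    "gpu": 4800,
    "ethernet_dma": 1250,   # e.g. a 10 Gb/s line rate
    "camera_isp": 2100,     # live video that must sustain its rate
}

ddr_capacity = 12800  # assumed theoretical peak of the DDR interface, MB/s

# Static worst case: pretend every master peaks at the same time.
worst_case_demand = sum(masters.values())
utilization = worst_case_demand / ddr_capacity

print(f"worst-case demand: {worst_case_demand} MB/s "
      f"({utilization:.0%} of DDR peak)")
```

Because all masters rarely peak simultaneously, a sum like this overstates real demand, which is exactly why teams that rely on it tend to overprovision; dynamic analysis in simulation gives a truer picture.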

“Designers are looking for better ways to characterize their IPs, because if you look at most SoCs—especially ARM-based SoCs—most designs are an assemblage of a bunch of different IPs with some differentiated bit added in. And those IPs are highly configurable, especially the interconnects, which connect all the IPs. You have a huge amount of configuration where you can tune the interconnect for different performance scenarios. Then, the DDR controller is also highly configurable and can be tuned for specific SoC context to meet specific types of performance requirements. The real challenge is from the requirements that you have for a given application space that you’re trying to target, how can you characterize those IPs and then analyze the performance in a real design situation versus doing that on paper. If it’s on paper or on a spreadsheet, it’s just too complex. There are too many variables and too many unknowns in order to really, fully specify that type of information,” Stellfox added.

Ideally everybody would like to have the complete specification up front. “In an ideal world it would be possible, but with multi-million-instance designs, it’s probably not possible to have all of it integrated together,” said Mary Ann White, director of Galaxy Implementation Platform Marketing at Synopsys. “If you think about it, the way designs are handled, they are broken out into project teams most of the time. You can write RTL from an architectural perspective with what the goals could be and the constraints should be, but then sometimes the blocks may or may not meet all of those constraints.”

Language problems
There also are issues about how the spec is written in the first place. “Traditionally, the specification of the system has been done in high-level modeling languages like C/C++,” said Abhishek Ranjan, senior director of engineering at Calypto. “It is easier to specify the system and to simulate real usage scenarios. However, the real challenge is when you have to port such a system to a hardware specification (RTL). Manual conversion is tedious and error-prone. The pressure to bring out a chip means that functionality is the sole objective and performance/power are mostly a secondary concern, time permitting. Verifying the functionality of the hardware spec (RTL) vis-à-vis the system spec (C/C++) is extremely challenging, and no good formal technique exists. Fortunately, high-level synthesis has come a long way, and modern HLS tools do a fairly good job of generating optimized hardware (RTL) from a system-level description (C/C++). Some of these tools have tight integration with formal equivalence checkers, which can validate the correctness of the generated RTL with respect to the system-level description. These HLS tools also provide capabilities to trade off performance (throughput, latency, etc.) and power.”
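The equivalence-checking idea Ranjan describes, validating a refined implementation against the system-level reference, can be illustrated with a toy model. Real HLS equivalence checkers prove this formally over the generated RTL; the Python sketch below (function names and data are invented for illustration) only compares two models of the same FIR filter, one written as a plain spec-style convolution and one restructured the way an implementation might be:

```python
# Illustrative only: the spirit of checking a hardware-style refinement
# against its system-level reference, shown as two Python models.

def fir_reference(samples, coeffs):
    """System-level spec: straightforward FIR convolution."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

def fir_refined(samples, coeffs):
    """A 'refined' version, restructured with an explicit shift register
    and MAC loop (the shape an implementation might take). It must
    produce exactly what the reference produces."""
    taps = [0] * len(coeffs)
    out = []
    for s in samples:
        taps = [s] + taps[:-1]                                # shift register
        out.append(sum(c * t for c, t in zip(coeffs, taps)))  # MAC
    return out

samples = [1, 2, 3, 4, 5, 0, -1]
coeffs = [1, 0, -1]
assert fir_reference(samples, coeffs) == fir_refined(samples, coeffs)
print("refined model matches the system-level reference on all samples")
```

A formal checker would establish this match for all inputs rather than a sample set, but the contract is the same: the refinement may restructure the computation, not change its results.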

That has to be coupled with a deeper knowledge of the design, though. “What you don’t know can hurt you, in the sense that people build specs that are poorly defined in some areas and never complete, and you go through the process of finding out the implications of your omissions the hard way,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “But there is a different way to do it. Our view is that with some methodology and some new technology, you can discover the holes in your spec and the potentially bad consequences of not caring about the details.”

Dr. Yunshan Zhu, vice president of new technologies at Atrenta, explained further that the architect typically writes the architectural spec, then the RTL designer takes that spec and does the implementation. “Very often we see, for example, that the RTL designer decides how big the buffer should be inside of that IP. He is going to estimate the speed of packets coming in, the speed at which the packets can be processed, the speed going out, and then he needs a buffer. The RTL designer decides that. Usually he does this conservatively, but if he sets it too low, then the buffer will overflow and he’ll be dropping packets. So an implicit spec that does not show up in the architect’s explicit spec is, ‘the FIFO should never overflow’ or ‘the FIFO should never underflow.’ That’s buried in the RTL, but it is equally important, if not more so, in guaranteeing the functional correctness of the chip.”
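The implicit spec Zhu describes can be made explicit as assertions on a model. In real RTL these would be assertion properties checked in simulation or formally; the Python sketch below (class name, depth and traffic pattern are all hypothetical) just shows the never-overflow/never-underflow contract and the buffer-sizing arithmetic behind it:

```python
# Toy model of a bounded FIFO with the implicit spec made explicit as
# assertions. Depth and arrival/drain rates are illustrative only.

class BoundedFifo:
    def __init__(self, depth):
        self.depth = depth
        self.items = []

    def push(self, pkt):
        # Implicit spec: the FIFO must never overflow (drop packets).
        assert len(self.items) < self.depth, "FIFO overflow: packet dropped"
        self.items.append(pkt)

    def pop(self):
        # Implicit spec: the FIFO must never underflow (read when empty).
        assert self.items, "FIFO underflow: read from empty FIFO"
        return self.items.pop(0)

# Conservative sizing: if packets can arrive two per cycle but drain only
# one per cycle during a 7-cycle burst, occupancy peaks at 8 entries, so
# a depth of 8 just suffices. Any smaller and the overflow assert fires.
fifo = BoundedFifo(depth=8)
for cycle in range(7):
    fifo.push(("pkt", cycle, 0))
    fifo.push(("pkt", cycle, 1))
    fifo.pop()
```

Sizing the buffer is the designer's decision, but the assertions are the spec: they state the property the architect never wrote down, and they fail loudly the moment an estimate of the arrival or drain rate turns out to be wrong.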
