Methodologies for integration become a competitive tool as complexity and possible options skyrocket.
As chip complexity increases, so does the complexity of IP blocks being developed for those designs. That is making it much more difficult to re-use IP from one design to the next, or even to integrate new IP into an SoC.
What is changing is the perception that standard IP works the same in every design; that assumption no longer holds. Moreover, well-developed methodologies for reuse can give a chipmaker a competitive advantage. The final shape of a design depends on various factors, such as application demands and interfacing or power requirements, all of which increase the number of possible configurations.
There are numerous efforts underway to help minimize those challenges, including standard protocols like AMBA AXI or ways to tie them together with an on-chip interconnect.
“Following that makes the verification process easier because you don’t need to customize bus functional models for simulation or transactors for emulation for each project,” said Zibi Zalewski, general manager for Aldec’s hardware division. “You don’t need to learn it from scratch. Once the verification infrastructure is reusable, it is much easier to update it with new subsystems and/or IPs. The rule is simple—IP reuse allows for VIP reuse. But at the current size of projects, it becomes a very important rule to follow.”
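Zalewski's point can be made concrete with a small sketch (not from the article; all class and method names are hypothetical): when every IP block exposes the same standard register protocol, one bus-functional model written once can drive any of them, so the verification IP never needs per-project customization.

```python
# Hypothetical sketch: one reusable bus-functional model (BFM) serves any IP
# block that speaks the same standard protocol, so the verification IP is
# written once and reused across projects. Names are illustrative only.

class AxiLiteBFM:
    """Minimal bus-functional model for an AXI-Lite-style register interface."""
    def __init__(self, dut):
        self.dut = dut  # device under test exposes read_reg/write_reg

    def write(self, addr, data):
        self.dut.write_reg(addr, data)

    def read(self, addr):
        return self.dut.read_reg(addr)

class UartIP:
    """Stand-in for any IP block that follows the standard register protocol."""
    def __init__(self):
        self.regs = {}
    def write_reg(self, addr, data):
        self.regs[addr] = data
    def read_reg(self, addr):
        return self.regs.get(addr, 0)

class SpiIP(UartIP):
    """A different IP block -- but the same BFM drives it unchanged."""
    pass

# The same verification code exercises both blocks because the interface
# is standardized; nothing is customized per project.
def smoke_test(ip):
    bfm = AxiLiteBFM(ip)
    bfm.write(0x0, 0xABCD)
    return bfm.read(0x0) == 0xABCD

assert smoke_test(UartIP()) and smoke_test(SpiIP())
```

The design choice mirrors the quote: the BFM depends only on the protocol, never on the IP block behind it, which is what makes "IP reuse allows for VIP reuse" hold.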
Nevertheless, the ‘how’ of IP reuse is often a closely guarded secret. For large chip companies, it is a critical factor in being able to get huge chips out the door in a predictable fashion.
“It really is their secret sauce as to exactly how they package the IP they have, how they apply tools for assembling it, how they do analysis of what’s going on as they try to assemble, and what they put in place in order to help the debug of these enormous chips,” said Cadence Fellow Chris Rowen. “These are logically complex things of enormous difficulty, in part because you have so many devices and so many blocks. But also, there are so many different kinds of things that can go wrong.”
While the IP industry would like to think of IP blocks as LEGO blocks that snap together, Rowen said it’s not that simple. “When assembling a chip, you have to be right in so many ways because you are assembling very different kinds of things. So when we talk about IP, it’s all IP in the sense that it’s a design element which is being reused. But you’ve got specific function digital blocks, processors, analog interface blocks, interconnect generators, and essential elements of low-level firmware, which are almost like hardware IP blocks in terms of their importance and their fragility. And they’re tied to the very specific characteristics of everything else that’s going on in that chip.”
Among the industry’s leading semiconductor players, a great deal of attention is paid to methodology and to standardizing within the context of the preferred flow. This means that Qualcomm puts together chips differently than Broadcom or Intel.
“They all have architected a flow, and they have applied as much as they possibly can a standard as to what those deliverables are,” Rowen said. “The suppliers of IP have to fit into those as best they can. But because there is a significant amount of variation in how these different teams think about solving the problem, they have made these proprietary innovations. They have to adapt themselves, and to some extent, they have to find the largest common denominator among all of those things.”
IP vendors also have to design their IP with re-use in mind, a concept that has been talked about for at least two decades. Case in point: A book entitled “Reuse Methodology Manual for System-on-a-Chip Designs” was published in 1998. It is now in its third edition.
“Designers need to easily configure, implement and verify the IP in their target environment, and for their target application, and they need to do this without going back to the designer/IP provider,” said John Swanson, senior marketing manager for DesignWare IP at Synopsys. “This is especially critical today, as design teams are racing to the finish line while dealing with highly complex IP. They don’t need and cannot afford the time to go back to the factory for every little tweak to the IP. So it is not something ‘new’ needed, but rather a continued adoption and evolution of existing methodologies to bring in needed modifications to design flows. New and smaller process geometries bring more challenging requirements. We learn, and tools evolve.”
In addition, the introduction of a methodology into a company can be ‘new’ for the company, but the basics of reuse stay more or less the same, Swanson said, with design complexity perhaps the biggest change.
Specifically, technologies like Ethernet have grown more complex with the addition of Time Sensitive Networking (TSN) and new speeds like 25G, a pattern repeated across many IP titles, he said. More companies also are looking to re-use a market-targeted subsystem, meaning more complex IP interconnected to create a larger IP block. And to be more productive, companies want to enable more automation, which lends itself to a well-defined methodology.
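Swanson's earlier point about configuring IP "without going back to the designer/IP provider" can be sketched as design-time configurability with built-in validation. This is a minimal illustration, not any vendor's actual flow; the parameter names (speed_gbps, enable_tsn, fifo_depth) and the rules they enforce are invented for the example.

```python
# Hypothetical sketch of design-time IP configurability: the integrator tunes
# parameters and gets immediate validation, with no round trip to the IP
# provider. All parameter names and constraints are illustrative.

from dataclasses import dataclass

@dataclass
class EthernetIPConfig:
    speed_gbps: int = 10        # e.g. 1, 10, 25
    enable_tsn: bool = False    # Time Sensitive Networking option
    fifo_depth: int = 512

    def validate(self):
        """Return a list of configuration errors (empty means legal)."""
        errors = []
        if self.speed_gbps not in (1, 10, 25):
            errors.append(f"unsupported speed: {self.speed_gbps}G")
        if self.enable_tsn and self.speed_gbps < 10:
            errors.append("TSN option requires 10G or faster")
        if self.fifo_depth & (self.fifo_depth - 1):
            errors.append("fifo_depth must be a power of two")
        return errors

cfg = EthernetIPConfig(speed_gbps=25, enable_tsn=True)
assert cfg.validate() == []                              # a legal configuration
bad = EthernetIPConfig(speed_gbps=40)
assert "unsupported speed: 40G" in bad.validate()        # caught locally
```

The point is that the legality rules ship with the IP, so "every little tweak" is checked in the integrator's own environment rather than at the factory.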
There are also market specific requirements, Swanson said. “If you look at automotive, you have to provide many safety features in the design, and be able to achieve ASIL certification. Customers following an ISO 26262 process have requirements you need to address as an IP provider. So from a methodology view, you have extra rules and steps that need to be done, as well as a set of additional deliverables. This becomes part of your design flow, and requirements vary widely depending on the target markets along with the IP.”
The cost of reuse
Creating IP with the intention of re-using it in future designs brings with it many considerations. Rowen pointed out that when IP reuse was first discussed, everyone assumed it would be simple. “‘If I used it once someplace, then all I would have to do is tear it out of that context and throw it over the wall to somebody else, and it will be obvious how to use it.’ But it was so rarely true that you could re-use a piece of IP just because it had been successfully integrated into one chip. And it didn’t mean much of anything in terms of whether it was ready for a completely different set of engineers who had no idea about the internal functionality or the external interface or what the dependencies on bus bandwidth and process technology might be—and whether they could re-use it.”
Because the cost of integration is an inherent part of the cost of acquisition of a piece of IP, a great deal of work has had to go into developing a vocabulary, as well as systematic ways for people to be able to deliver IP blocks that are reasonably reusable.
“However, there are still cases when designers will say, ‘I know that’s available in the market, or in another division of my own company, but it’s too expensive for me to re-use it. I’ll just design it from scratch myself,’” Rowen said. “They’ve done a realistic assessment of how ready that block is for their reuse. While the fraction of design elements that come from reuse or licensing rather than from reinvention of the wheel is certainly going up, it is a sobering tale when you hear people talk about why they redesigned something: ‘My requirements are just different enough that the block is not flexible enough,’ or ‘It doesn’t come with the kind of model I expect, so my whole methodology for performance verification or power verification no longer works if I borrow this piece from another organization.’”
To be sure, the work required to make a piece of IP function in one particular context is hugely different from the work required to make it reusable by strangers in a wide range of other contexts. In some cases, only 10% of the cost of developing a piece of IP is the IP itself, while 90% goes toward building the infrastructure and designing it in such a way that it can be readily re-used.
Further, once an IP team — whether internal or third-party — has developed that capability, the investment pays off: even if it costs 10X more to develop a block for reuse, the block typically gets re-used far more than 10 times. So the real cost each user pays can be much lower than designing from scratch, even though a one-time-use design is so cheap compared to designing for reuse, Rowen added.
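The arithmetic behind Rowen's argument is worth spelling out. The numbers below are purely illustrative (the article gives only the 10X multiplier and "more than 10 times" reuse), but they show why the per-use cost drops below the one-off cost as soon as reuse passes the break-even point.

```python
# Back-of-the-envelope illustration of the reuse economics described above.
# All figures are normalized and purely illustrative.

one_off_cost = 1.0             # cost of a single-context design (normalized)
reusable_cost = 10.0           # ~10x more to build the reusable version
uses = 25                      # "re-used a lot more than 10 times"

per_use = reusable_cost / uses # 0.4 -- each consumer pays far less than 1.0
assert per_use < one_off_cost

# Break-even: reuse pays off once the block is used more than 10 times.
break_even = reusable_cost / one_off_cost
assert uses > break_even
```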
What goes wrong
That’s not to say IP reuse is without its share of problems, though.
“From an IP management standpoint, most of it is around a web-based catalog for IP—and that’s it,” said Ranjit Adhikary, vice president of marketing at ClioSoft. “A second problem is that most companies prefer using third-party IPs, the rationale being shorter design time and questions about support when using internal IP. The problem with third-party IP is that most big providers are not going to customize it for individual requirements. Small providers will do that, but there is a quality risk associated with this, as well as timeline pressure. If you have IP such as USB, for example, it takes some time for it to mature, and it must be on the specific process that has been chosen. But this also means a lot of internal IP is not being used. Why? The knowledge is trapped in different vertical silos, based on geographical boundaries, organizational structure and business units. Sometimes the right hand doesn’t know what the left hand is doing.”
In addition, there is no mandate for engineers to share IP development from a top level, said Adhikary. “Even if some IP is shared, when it starts getting integrated and problems pop up, what can the designer do? What happens if the design being created has third-party IPs and there is no awareness of that? You run the risk of liabilities. From a semiconductor company’s perspective, there is a lot of merit in trying to re-use as much as possible. The problem is they don’t know how to do it. You cannot have an IP reuse system and integration separately. You need to think a little out of the box and leverage the notion of crowdsourcing. What that means is that in any company, a lot of people work within silos but develop smart ideas, scripts, designs, IP and so on. How do you leverage that? There are several examples of crowdsourcing in the software industry, such as Amazon, Google and Wikipedia. It’s like buying a product on Amazon, where the knowledge base of reviews can be used. Similarly, in the semiconductor IP space, there has to be an ecosystem for design reuse, and it has to be something that can be adopted easily without much hand-holding.”
Adhikary said no ecosystem for IP reuse exists today. “There are fragmented pieces of information lying around. For example, let’s say you are designing a piece of IP. A lot of information is contained in things like data mining systems, meeting minutes, notebooks, emails. If an engineer wants to look at a piece of IP, where do they get all of the information if the data is dispersed all over? Extracting that information becomes a problem, and they really have no idea what is there at the end of the day. You may have a netlist, a GDSII, but that’s about it. You don’t know why a certain decision was made. If you want to modify it, you don’t have the history for that, so it becomes a bit of a problem. There are some bits and pieces, but they may be in catalogs, Excel spreadsheets or various discussion tools, and there is no concept of a unified knowledge base.”
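The knowledge base Adhikary describes can be sketched as a catalog entry that carries provenance along with the deliverables, so a later team can recover why a decision was made instead of redesigning. This is a minimal illustration of the concept, not any product; all record fields and the example entries are invented.

```python
# Hypothetical sketch of an IP knowledge base: each catalog entry keeps the
# design history (decisions plus their sources -- meeting minutes, reviews,
# email threads) alongside the deliverables. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class IPRecord:
    name: str
    process: str                                        # target process node
    deliverables: list = field(default_factory=list)    # e.g. RTL, GDSII
    history: list = field(default_factory=list)         # (decision, source)

    def log_decision(self, note, source):
        # The source points back to where the rationale lives.
        self.history.append((note, source))

catalog = {}

usb = IPRecord("usb2_phy", process="16nm", deliverables=["RTL", "GDSII"])
usb.log_decision("Chose async FIFO for clock crossing", "design review, 2017-03")
catalog[usb.name] = usb

# Later, another team can recover the rationale instead of redesigning:
assert any("async FIFO" in note for note, _ in catalog["usb2_phy"].history)
```

The contrast with "you may have a netlist, a GDSII, but that's about it" is the `history` field: the netlist tells you what was built, the history tells you why.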
Transferring knowledge
That knowledge is critical because in many respects, IP reuse is a forward-thinking concept.
Rajesh Ramanujam, product marketing manager at NetSpeed Systems, recalled that IP reuse 5 to 10 years ago consisted of using one particular IP block, which was a small fraction of the chip or an entire SoC. “Now it’s to the point that engineering teams want IP reuse, but it’s also required by the embedded software team to make sure they can re-use their Linux drivers, for instance, or their APIs. So it’s not only design. When you say design, people think RTL and about the chip, but we are to the point where reuse has gone beyond just the hardware. It has gone into reusing software and verification tools.”
As such, many companies are now approaching IP reuse from a very top-down perspective. “When you look at things from a bottom-up approach, it’s hard to re-use because you don’t know what the use cases will be at a high level for the next project, or the one following that,” Ramanujam said. “For a top-down approach, you have to have a methodology of how various IPs interact with each other, how there can be some kind of software database that contains all the programming models for a register map, for example, and that the programming model is consistent across all the derivative products.”
And that’s not easy because once the IP is changed from one product to another, the register map goes out the window. In order to maintain the register map, the IP must be managed intelligently.
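One way to manage the register map intelligently, as a sketch of the top-down idea Ramanujam describes, is to keep a single golden programming model per IP and diff every derivative against it, so driver reuse survives from product to product. The register names, offsets, and the checking function below are all invented for illustration.

```python
# Hypothetical sketch: one "golden" register map per IP, checked against each
# derivative product so the programming model (and thus the Linux drivers and
# APIs built on it) stays reusable. All names and offsets are illustrative.

GOLDEN_REGMAP = {
    "dma_ctrl": {"CTRL": 0x00, "STATUS": 0x04, "SRC_ADDR": 0x08},
}

def check_regmap(ip_name, derivative_map):
    """Return {register: (golden_offset, derivative_offset)} for any drift."""
    golden = GOLDEN_REGMAP[ip_name]
    return {reg: (off, derivative_map.get(reg))
            for reg, off in golden.items()
            if derivative_map.get(reg) != off}

# Derivative A kept the map intact; drivers carry over unchanged.
ok = {"CTRL": 0x00, "STATUS": 0x04, "SRC_ADDR": 0x08}
assert check_regmap("dma_ctrl", ok) == {}

# Derivative B moved two registers -- the check flags exactly what broke.
drifted = {"CTRL": 0x00, "STATUS": 0x08, "SRC_ADDR": 0x0C}
assert set(check_regmap("dma_ctrl", drifted)) == {"STATUS", "SRC_ADDR"}
```

This is the "software database of programming models" idea in miniature: consistency is something you check automatically, not something you hope survives each derivative.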
Dave Kelf, vice president of marketing at OneSpin, said that in some cases, management is happening in much the same way that software engineers manage software blocks using a repository with version control and multitasking. “With globalization, more engineers are getting at the IP within an organization, so the repository has to be available for more than one team working on an IP block. That’s a big issue.”
As for how IP reuse will evolve, time will tell. But with safety and security now being built in where they weren't before, and with IP blocks growing so large that the very definition of an IP block is in question, one can imagine a hierarchy of IP, he said.
Reader comments

Here is a link that exposes a lot of what an OOP compiler does when compiling OOP classes, which are analogous to HW IP blocks:
https://channel9.msdn.com/Blogs/Seth-Juarez/Anders-Hejlsberg-on-Modern-Compiler-Construction
HDL compilers are totally useless for interconnecting and debugging IP, so defining IP as classes would allow IP compilation and debug using SW development tools.
I posted this same link on Sperling’s CPU and FPGA article.
The HDL compilers need a lot of fixing because HDLs are too primitive.
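The commenter's idea of treating IP blocks as classes can be sketched as follows. This is an illustration of the analogy, not an established flow; the port model, the connection check, and all the names are invented for the example.

```python
# Sketch of the commenter's idea: model IP blocks as classes with typed ports,
# so "compiling" the interconnect can catch mismatches the way an OOP compiler
# catches type errors. All names are illustrative.

class Port:
    def __init__(self, name, width, direction):
        self.name, self.width, self.direction = name, width, direction

class IPBlock:
    def __init__(self, name, ports):
        self.name = name
        self.ports = {p.name: p for p in ports}

def connect(src_ip, src_port, dst_ip, dst_port):
    """Check a connection the way a compiler checks an assignment."""
    a, b = src_ip.ports[src_port], dst_ip.ports[dst_port]
    if a.direction != "out" or b.direction != "in":
        raise TypeError(f"direction mismatch: {a.name} -> {b.name}")
    if a.width != b.width:
        raise TypeError(f"width mismatch: {a.width} vs {b.width}")
    return (src_ip.name, src_port, dst_ip.name, dst_port)

cpu = IPBlock("cpu", [Port("data_out", 32, "out")])
mem = IPBlock("mem", [Port("data_in", 32, "in"), Port("narrow_in", 16, "in")])

assert connect(cpu, "data_out", mem, "data_in")   # widths match: accepted
try:
    connect(cpu, "data_out", mem, "narrow_in")    # 32 -> 16: rejected
except TypeError as e:
    assert "width mismatch" in str(e)
```

The payoff the commenter is after is the second connection: a bad hookup is rejected at "compile" time with a precise error, instead of surfacing later in simulation.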
Thanks for that link, Karl.
Here is a link to “10 ways to program your FPGA”
http://www.eetimes.com/document.asp?doc_id=1329857#msgs
The long list of comments is an insight into the real world.
Intro:
Despite the recent push toward high level synthesis (HLS), hardware description languages (HDLs) remain king in field programmable gate array (FPGA) development. Specifically, two FPGA design languages have been used by most developers: VHDL and Verilog. Both of these “standard” HDLs emerged in the 1980s, initially intended only to describe and simulate the behavior of the circuit, not implement it.
(Karl thinks HDLs are for building/synthesis, not design entry.)
However, if you can describe and simulate, it’s not long before you want to turn those descriptions into physical gates. (FPGAs are not gates; they use LUTs.)
For the last 20-plus years, most designs have been developed using one or the other of these languages, with some quite nasty and costly language wars fought along the way. Options other than these two languages exist for programming your FPGA. Let’s take a look at what other tools we can use.
Karl Stevens