Knowledge Center

Categorization of digital IP


In the early days of semiconductor IP, an IP block fell into one of three buckets:

1) Library IP. These are the fundamental building blocks that allow digital logic to be implemented on a chip: standard cells and all of the low-level pieces. They were characterized by the requirement to deliver many different views of the same object. Consider the simplest digital device, the inverter. It has the polygons that go into the fabrication flow to make it, but there are also tens of other deliverables covering the other ways it may be used by different tools in the design flow: an abstracted layout view, a timing model, a digital logic simulation view, an abstract representation used by logic synthesis, and many things that help with thermal modeling and so on. The design of this has always been a data management challenge because there are a large number of cells and a large number of views, so the creation of that set of deliverables has always lent itself to high levels of automation.
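The cells-times-views data management problem lends itself to a simple generator. The sketch below is a hypothetical illustration: the cell names and view extensions are assumptions for the example, not any particular library's naming scheme.

```python
# Hypothetical sketch of library-IP view generation: every cell needs
# every view, so deliverables grow as cells x views and the flow is
# naturally automated rather than hand-built.

CELLS = ["INV", "NAND2", "NOR2", "DFF"]  # assumed cell names
VIEWS = {
    "gds": "fabrication polygons",
    "lef": "abstracted layout view",
    "lib": "timing model",
    "v":   "logic simulation view",
    "syn": "synthesis abstraction",
}

def deliverables(cells, views):
    """Return the full cells-by-views deliverables matrix."""
    return {cell: {view: f"{cell}.{view}" for view in views} for cell in cells}

matrix = deliverables(CELLS, VIEWS)
print(len(matrix) * len(VIEWS))  # 4 cells x 5 views = 20 deliverables
print(matrix["INV"]["lib"])      # the inverter's timing model: INV.lib
```

Even at four cells and five views the matrix has twenty entries; a real library with hundreds of cells and dozens of views makes the case for automation obvious.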

2) Standard interfaces. In the earliest days of IP, some of the most popular IP blocks were the ones that allowed us to add the same interfaces found in the PC: the USB interface, or PCI, which has since become PCI Express. The basic business of IP revolved around selling these standard interfaces that lots of people needed and that typically weren't considered to provide differentiation. Interestingly, these blocks tended to have a fairly simple set of deliverables and well-defined interfaces, because on the external side they had to match what the physical world expects. On the on-chip side, they either did things that looked like those on the original PC motherboard or they used an on-chip bus or interconnect protocol such as OCP or AMBA. They were pretty simple because the task of the block was so well defined: it simply had to provide the interfaces as defined by the standards. They were valuable from a software perspective in that they looked like what had come before; the Intel chipset told you to turn things on by writing a value to a particular register, and the IP would mimic that behavior. They were also relatively fixed in that there was little choice for the user: the block did not need to be flexible. It might need a connection to power, it had to connect to the package pins, and it would also need a clock and reset signal. Pretty simple.

3) Star IP. This includes processors. In the earliest days these were distributed the same way you would distribute standard cells: as layout, with a version matched to each process technology. Over time that changed and they became soft designs described in RTL form. They were more complex from a deliverables perspective because to use a processor, you have to be able to program it. That means you need a toolchain that allows you to take a software program and compile it, assemble it and link it so that it can run on the processor. Still, the blocks, while they had a lot of function and a lot of internal design, tended to be relatively fixed. Perhaps you had a choice of instruction size or of the data cache to attach, but the rest of the user's choices were normally handled by a few signals at the top level of the design that you would tie to logic 1 or 0 to create the necessary behaviors.
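The tie-off style of configuration can be modelled in a few lines. The signal names below are invented for illustration, not taken from any real processor's interface; in RTL these would be input ports wired to constant logic levels.

```python
# Hypothetical model of fixed-function Star IP configured by top-level
# tie-off signals: each option is a pin the integrator wires to 0 or 1
# once, at instantiation, rather than a runtime choice.

class ProcessorCore:
    def __init__(self, big_endian: int, cache_enable: int):
        # In RTL these would be ports tied to constant 1 or 0.
        self.big_endian = big_endian      # tie to 1 for big-endian mode
        self.cache_enable = cache_enable  # tie to 0 to bypass the cache

    def describe(self):
        endian = "big" if self.big_endian else "little"
        cache = "enabled" if self.cache_enable else "bypassed"
        return f"{endian}-endian core, cache {cache}"

# The integrator "ties" the pins at the top level of the design.
core = ProcessorCore(big_endian=0, cache_enable=1)
print(core.describe())  # little-endian core, cache enabled
```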

What has changed in the past twenty years is that it has become less attractive to try to sell blocks of the second and third types as fixed function.

As Moore's Law has given us more transistors to work with, we are using them to build ever more capable chips. That means the chips are more complex, and the difficulty of reasoning about a design composed of thousands of tiny blocks is beyond what most of us can manage. So it is natural that we want to group some of the small pieces together, along with the things they would attach to, and think of them as bigger objects. Then you start to realize that you have lost something in that transition, because some things become hidden. Consider a block that needs a buffer for data. If the memory is bundled inside the block, the user may lose the ability to manage the size of that buffer, so the IP provider may want to let the end user decide how big it is going to be. Now the IP starts to become configurable: there is a set of choices the user makes at the time they instantiate an IP block to configure it and make it better suited to their specific usage.
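In RTL this kind of instantiation-time choice is typically a module parameter (a Verilog `parameter` override). The sketch below models the idea in Python with an assumed `BufferedIP` block whose FIFO depth the integrator picks when the block is instantiated.

```python
# Hypothetical sketch of configurable IP: the buffer depth is no longer
# hidden inside the block; the user chooses it at instantiation time,
# the software analogue of an RTL module parameter.

from collections import deque

class BufferedIP:
    def __init__(self, buffer_depth: int = 8):
        # Instantiation-time configuration choice made by the end user.
        self.buffer = deque(maxlen=buffer_depth)

    def push(self, word):
        """Accept a data word; return False if the buffer is full."""
        if len(self.buffer) == self.buffer.maxlen:
            return False
        self.buffer.append(word)
        return True

# Two users instantiate the same IP with different configurations.
small = BufferedIP(buffer_depth=2)   # area-constrained design
large = BufferedIP(buffer_depth=64)  # throughput-oriented design
print(small.push(1), small.push(2), small.push(3))  # True True False
```

The same source describes both instances; only the configuration choice differs, which is exactly what distinguishes configurable IP from the fixed-function blocks described earlier.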

We went from relatively exposed, single-function IP blocks to larger pieces that required these extra levels of customization or configurability.

Content provided by Drew Wingard, CTO of Sonics
