Changing The IP Supplier Paradigm

Experts at the table, part 3: As the industry migrates from small blocks to larger integrated blocks and subsystems, it is having an impact on IP companies.


Semiconductor Engineering sat down with Rich Wawrzyniak, senior market analyst for ASIC and SoC at Semico Research; John Koeter, vice president of marketing for the Solutions Group at Synopsys; Mike Gianfagna, vice president of marketing for eSilicon; Peter McGuinness, director of technology marketing at Imagination Technologies; and Michael Mostovoy, senior director for ASIC development at SanDisk Corporation. In part one of this roundtable, the panelists identified many of the elements that drive the IP industry today: selection, software, configuration, integration and, perhaps most importantly, the total value provided. In part two, the panelists discussed the importance of software and the continued difficulties associated with integration.


What follows are excerpts from that conversation. Certain members of the panel also wanted to point out that the comments belong to them and may not represent the views of the company they work for.

Semiconductor Engineering: Why do your customers require cycle-accurate models?

Koeter: It is my understanding that GPUs are particularly difficult to model in a loosely timed way.

McGuinness: Absolutely. There are a lot of things going on inside a GPU that are asynchronous. There are huge data dependencies, and this makes it very difficult to create a transaction-level model that is cycle accurate.

Gianfagna: Cycle accuracy is the gold standard.

Koeter: There are times when there is a real need, and it is also because there is a lot of legacy. But for software development, cycle accuracy is too slow.

SE: How important are network on chip (NoC) architectures going to be in the future and how does this play into the notion of increasing block size?

Wawrzyniak: If you are going to the trouble and expense of creating a sub-system, the goal is to reduce the amount of effort taken to create a system-level function. At the same time, you need to increase the performance. To do that, you need an interconnect within the sub-system. This is where the real heavy lifting happens, and you need the ability to move that kind of performance from sub-system to sub-system so that the whole architecture can take advantage of it. This is something that everyone feels they can create, and in general they can, but there are an increasing number of things they have to be concerned about, such as quality of service, cache coherency, etc.

Koeter: We use the industry-standard interfaces within our sub-systems, such as AMBA. It is an interesting question. We are not at the point of needing a NoC yet. Is it coming? Maybe.

Wawrzyniak: If you want a really high-performance design, I don't think you have a choice.

Mostovoy: You also need flexibility. At the architectural level, you need to be able to make choices that optimize for your system. Is the interconnection configurable? Does it allow different options, such as pipelining? What about arbitration schemes? The more you add to configurability, the more value you can get from it.

Wawrzyniak: One of the problems with this type of discussion is that people who read it can get the wrong impression. They think that the first time they use a sub-system they will see a reduction in their design costs. It is possible that they will see them go up, because they are in an exploration stage. Once you have gained the knowledge and expertise, then you will see savings.

SE: Ten years ago we were talking about many of the same issues, just at a different scale. What is likely to change in the next 12 to 18 months?

Gianfagna: We see a big change happening. It is perhaps at a different level of abstraction than most people around this table are talking about today. System-level is important, software interfaces are important, architecture is important, but one problem people face is the bewildering number of choices at the lower levels of abstraction. Which process technology, which flavor, how many metal stacks? All of these choices, combined with the number of different IP choices, make for a huge problem. We see an opportunity with big data concepts to distill this information down into forms in which people can make meaningful choices.

Koeter: In 18 months we will see a paradigm shift. Customers will be asking, even requiring, their IP vendors to provide more. Titles are not enough. It is software, it is prototyping, it is help with integration… If you can't provide that range of services, it will impact you. There will always be plenty of viable niches for start-ups, but for most IP, customers will expect more.

Wawrzyniak: You can't put a limit on good ideas. The IP market came from people with good ideas inside large companies that weren't interested, so they left and started new businesses. It may be more difficult today, but everyone will always need good ideas. In the next 18 months, the concept of sub-systems will get rolling. When people explore the concept, they will see the value in the reduction of effort. In general, design costs will continue to go up, so people are looking for ways to reduce them. Verification needs also must be addressed.

McGuinness: I agree that a full suite of capabilities is necessary for top-tier IP suppliers. This will drive all the IP companies to start looking more like each other in terms of their operations. They will all need a CPU, media processing, etc. So how do they differentiate?

Mostovoy: A shift to packaged solutions rather than individual IP is going to happen. The cost of failure in advanced nodes is high today. This is why it is important to have silicon-proven IP.
