Experts At The Table: IP Subsystems

First of two parts: Who’s responsible when two good subsystems create a bad system; adding a second spin to ensure the pieces work; ensuring quality and manufacturability.

By Ed Sperling
Semiconductor Manufacturing & Design sat down to discuss the transition to IP subsystems with Kevin Meyer, vice president of design enablement strategy and alliances at GlobalFoundries; Steve Roddy, vice president of marketing at Tensilica; Mike Gianfagna, vice president of marketing at Atrenta; and Adam Kablanian, CEO of Memoir Systems. What follows are excerpts of that conversation, which took place before a live audience at the Semico Impact Conference.

SMD: If we have two good memory subsystems and they no longer work when they’re put together, who’s responsible and how do we sort that out?
Gianfagna: The one who fixes it is always the system designer. But who’s responsible is a good question. Is it because the spec was wrong? Is it because the design was wrong? It could be the IP vendor, the subsystem vendor or the system designer. Natural selection will eventually weed out the problem, but at the end of the day it is the system vendor that has to take responsibility. They’re the general contractor. They have the responsibility of making sure everything works, which is a daunting task at this level of complexity. Even though things may comply with standards, compliance may not mean the same thing to everyone.
Roddy: There are two pieces to this. The first is that the subsystem has to be a naturally occurring subsystem. You can’t take two things that don’t normally go together, because no one will know how to put them together; expertise in one area isn’t the same as expertise in another. The second is that you see system designers doing an immensely larger amount of system simulation. If they can do it early enough, they can determine whether these things will work together within the system resources, like bandwidth and memory. Planning ahead is the key to avoiding problems.
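Roddy’s point about early system simulation lends itself to a concrete illustration. Below is a minimal sketch of the kind of pre-RTL resource-budget check a system designer might run, written in Python for brevity; every name and number in it (the subsystem list, the bandwidth and SRAM figures) is an invented placeholder, not anything from the panel.

```python
# Hypothetical early-planning check: do the candidate subsystems'
# combined peak demands fit within the shared memory interface and
# on-chip SRAM budget? All figures are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    peak_bw: float   # peak bandwidth demand, GB/s
    sram_kb: int     # on-chip memory footprint, KB

def check_budget(subsystems, mem_bw, sram_budget_kb):
    """Flag oversubscription of shared bandwidth or on-chip SRAM."""
    total_bw = sum(s.peak_bw for s in subsystems)
    total_sram = sum(s.sram_kb for s in subsystems)
    print(f"Bandwidth: {total_bw:.1f} / {mem_bw:.1f} GB/s")
    print(f"SRAM:      {total_sram} / {sram_budget_kb} KB")
    return total_bw <= mem_bw and total_sram <= sram_budget_kb

if __name__ == "__main__":
    plan = [
        Subsystem("video_decode", peak_bw=3.2, sram_kb=512),
        Subsystem("graphics",     peak_bw=4.8, sram_kb=768),
        Subsystem("audio_dsp",    peak_bw=0.4, sram_kb=128),
    ]
    if not check_budget(plan, mem_bw=6.4, sram_budget_kb=2048):
        print("Oversubscribed: rework the plan before committing to RTL.")
```

Even a crude check like this, run at the planning stage, surfaces the oversubscription cases Roddy warns about before they become silicon problems.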
Kablanian: The best thing is to avoid having two similar subsystems on the same chip. You need to understand how to test a memory, for example, how to integrate it into a common test methodology, and how to debug it. If you have two similar subsystems, it’s almost impossible to figure this stuff out.
Gianfagna: But if you buy two subsystems from the same vendor, will they work together? I would hope so.

SMD: Where does the foundry sit in all of this?
Meyer: If it comes down to a design issue, there’s not much we can do. What we can do is enable design components ahead of their integration at the system level. For instance, if it’s a third-party IP partner, we work with them as we go through the process of silicon validation. Increasingly, for more advanced technologies we’re actually building in a second spin. We try to get out in front of the process and work with our IP suppliers to get on early shuttles, so we get results back early. That gives us a learning step that makes the process more robust. We’ll do what we can relative to silicon validation with third-party IP. When we have a large customer that does its own IP, we have a team in design enablement that does co-optimization with them. That team works with our customers early. We often run their test chips as part of that process, as well, for their libraries, memory and IP. But we also have a team that goes in, does early co-optimization, and gives them our input on anticipated silicon effects. Whatever the problem is, we’ll get involved as much as we can, because until a design gets into production, everything we do is an investment to save time. We’re highly motivated to get all parties into production.

SMD: When you look at IP it’s a black box. Subsystems are bigger black boxes. Is it harder to verify and integrate them?
Roddy: If the vendor has done their work to validate the box, whether it has one element or four, it should be largely immaterial to the integrator. The inflection point is when you start stacking software on top of the multiple elements, that software goes out and addresses other resources, and you become dependent on bus traffic. You have to factor in the expected system behavior, including latencies and interactions between multiple blocks and the system. What are the potential problems from the graphics subsystem fighting with the video subsystem? That’s where we get into system modeling and analysis. If the vendor assumed a certain set of conditions that your SoC doesn’t adhere to, then all the characterization goes out the window.
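The graphics-versus-video contention Roddy describes can be illustrated with even a toy arbitration simulation. The sketch below is a deliberately simplified model, not any vendor’s methodology: two requesters share a single bus grant per cycle under round-robin arbitration, and the arrival rates and cycle counts are invented.

```python
# Toy contention model: two subsystems share one bus grant per cycle.
# Requests that lose arbitration queue up, inflating their latency.
# Rates and cycle counts are illustrative assumptions.

import random

def simulate(cycles, rates, seed=0):
    """Round-robin arbitration between named requesters.

    rates: {name: probability a new request arrives each cycle}
    Returns observed average latency (cycles) per requester.
    """
    random.seed(seed)
    queues = {name: [] for name in rates}      # pending request birth-cycles
    latencies = {name: [] for name in rates}
    order = list(rates)                        # round-robin priority order
    for t in range(cycles):
        for name, rate in rates.items():
            if random.random() < rate:
                queues[name].append(t)
        # Grant one request this cycle, then rotate the winner to the back.
        for name in order:
            if queues[name]:
                latencies[name].append(t - queues[name].pop(0))
                order.remove(name)
                order.append(name)
                break
    return {n: sum(v) / len(v) if v else 0.0 for n, v in latencies.items()}

if __name__ == "__main__":
    # In isolation either requester sees near-zero latency; together,
    # combined demand near the bus capacity makes latency climb.
    print(simulate(100_000, {"graphics": 0.55, "video": 0.40}))
```

This is exactly the effect a vendor’s stand-alone characterization cannot capture: each block looks fine alone, and the latency only appears when both compete for the shared resource.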
Kablanian: Testing the silicon is not sufficient because many of the IPs people use are configurable. What works in one case may not work in another. The crux of this is a verification methodology, and companies that figure that out will be successful with this strategy.
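One way to read Kablanian’s point is that a configurable IP has to be regressed across its configuration space, not just at the one silicon-validated point. A hypothetical sketch of such a sweep follows; the parameter names and the legality rule are stand-ins for a real memory compiler’s options and testbench, invented here for illustration.

```python
# Hypothetical sketch: enumerate the configuration space of a
# parameterized IP and run a regression at every point. The parameters
# and the check itself are stand-ins, not a real memory IP's options.

from itertools import product

PARAMS = {
    "depth":      [1024, 4096, 16384],
    "width_bits": [32, 64, 128],
    "ecc":        [False, True],
    "ports":      [1, 2],
}

def run_regression(cfg):
    """Stand-in for launching the vendor testbench on this config.
    Here we encode only one illustrative, invented legality rule."""
    # e.g., suppose ECC is unsupported below 64-bit words (invented rule)
    return not (cfg["ecc"] and cfg["width_bits"] < 64)

def sweep():
    keys = list(PARAMS)
    failures = []
    for values in product(*(PARAMS[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if not run_regression(cfg):
            failures.append(cfg)
    print(f"{len(failures)} failing configuration(s)")
    for cfg in failures:
        print("  FAIL:", cfg)

if __name__ == "__main__":
    sweep()
```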
Gianfagna: There’s a topic we’re touching on here that is very important, which is what it means to deliver a certain level of quality. There are not a lot of standards there. This is a big problem. There is no vocabulary we can use that is consistent between IP vendors. You need machine-generated metrics, which is something the IP community won’t like. But we don’t even have a common language that says this IP is of a certain level of quality and it has passed these tests.
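To make Gianfagna’s idea of machine-generated metrics concrete, here is a hypothetical scorecard that compares tool-reported numbers against published thresholds. The metric names and thresholds are invented for illustration; no such common standard exists today, which is exactly his point.

```python
# Sketch of a machine-generated IP quality scorecard. Metric names
# and thresholds are invented illustrations, not an industry standard.

THRESHOLDS = {
    "lint_clean_pct":     99.0,   # % of lint rules passing
    "cdc_violations":     0,      # clock-domain-crossing errors
    "functional_cov_pct": 95.0,   # functional coverage closure
    "code_cov_pct":       90.0,   # line/branch coverage
}

def grade(report: dict) -> dict:
    """Compare tool-reported numbers against published thresholds.

    Higher is better for *_pct metrics; lower is better otherwise.
    """
    results = {}
    for metric, threshold in THRESHOLDS.items():
        value = report.get(metric)
        if value is None:
            results[metric] = "MISSING"
        elif metric.endswith("_pct"):
            results[metric] = "PASS" if value >= threshold else "FAIL"
        else:
            results[metric] = "PASS" if value <= threshold else "FAIL"
    return results

if __name__ == "__main__":
    # Numbers as they might come out of lint/CDC/coverage tools.
    report = {"lint_clean_pct": 99.6, "cdc_violations": 2,
              "functional_cov_pct": 97.1, "code_cov_pct": 88.4}
    for metric, verdict in grade(report).items():
        print(f"{metric:22s} {verdict}")
```

The value of a scorecard like this is not the specific thresholds but the shared vocabulary: every vendor’s IP gets described by the same machine-produced numbers.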
Meyer: Ultimately, if it doesn’t work then GlobalFoundries is involved. We’re doing all we can. Obviously we have partners at the very leading edge working on this, including verification of their IP. When we start looking at 28 gigabit-per-second SerDes, that’s a whole subsystem. We are not the systems experts, but we have put in place a very good support structure for them. Now, when you start to move away from the bleeding edge, clearly the challenges are not as great. We’re working with customers on all types of IP, but for each application and each technology node we are working with customers and IP partners to make sure their IP is as risk-free as possible.

SMD: Even in the most advanced devices we’ve seen major errors crop up in subsystems. There is no way to test all of this complexity. Are subsystems really a step forward in reducing complexity or are they just adding new wrinkles to it?
Roddy: I don’t think subsystems change the overall equation. There are too many pieces. In the fully synchronous, synthesizable areas the processors interact with the software, so it becomes an issue of software correctness. The good news is that you do have software, so it can be fixed. If you can do an analysis, make sure the parts that consume data flowing through the device are properly calibrated, and model all the use cases, then you can fix the software to address everything else. It’s all about appropriate system-level design. It isn’t any different today than it was 15 years ago, when we worked on IP blocks instead of subsystems. A subsystem still gets treated like a block.
Gianfagna: There needs to be a methodology. You have to run tests on those blocks before you create masks. But if the quality of the incoming data is vetted you’re going to have fewer problems later on for everyone in the supply chain. From the front of the supply chain to the back, everyone has to step up.
Meyer: We look at a core as being a subsystem. It’s a complex set of registers, data paths and so forth. That’s an integral part of an SoC. Typically you bring up a technology node with very regular structures. We’ve taken an A9 core and integrated it into our test chip strategy. It gives us a different understanding of the technology problem, which is great for isolating defects. But you also now have to consider proximity effects, so we now use the A9 as part of our test strategy for bringing up our process technology. We started looking at these problems from the yield side. Once we see them work, we start increasing the frequency. We are trying to support the subsystems all the way down to our test strategy.


