Where Do We Stand With CDC?

Experts at the table, second of three parts: Use models; architectural choices and impacts; methodologies.


Semiconductor Engineering sat down to discuss where the industry stands on clock domain crossing with Charlie Janac, CEO of Arteris; Shaker Sarwary, VP of Formal Verification Products at Atrenta; Pranav Ashar, CTO at Real Intent; and Namit Gupta, CAE, Verification Group at Synopsys. What follows are excerpts of that conversation.

SE: What are the biggest use models for CDC verification today?

Gupta: When it was mentioned that users want to qualify a design, or a portion of it, down to a synchronizer and then go up a level and say it is verified — what we have seen is that this scheme really does not work for reuse, because when they reuse a block they change the configuration, and the verified thing is no longer verified in the new scheme of things. By way of background, CDC – the whole thing – started with 0-In 15 years back. They were the pioneers at that time, and they came up with an architecture that catered to the design sizes of that era, but you can see that they are not here at the table; that means something. Then came the Atrentas and the Real Intents. They came with a nice set of architectures, which catered to the next set of design sizes, but right now we are seeing a need to go to the next level because design sizes have crossed the 100 million mark. Here is the key thing when you cross that size. We have seen the hierarchical flows from the other vendors, which say you can verify a block, abstract it, come up to the higher level and verify just the connections. That works very well from the tool perspective, but from the design perspective, when designers integrate multiple IPs, they want to look at the flat view. They say, apply my block-level waivers, apply my block-level constraints, but if I have a hook-up error, I don't want to take any risk because of a limitation of the tool.
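The reuse pitfall Gupta describes — a block verified in one configuration being treated as verified after its parameters change — can be sketched abstractly. This is a hypothetical Python illustration (not any vendor's tool or API): a block-level "verified" result is keyed to a fingerprint of the block's full configuration, so any configuration change invalidates it.

```python
import hashlib
import json

# Hypothetical illustration: a block-level CDC "verified" result is only
# valid for the exact configuration it was verified with.
_verified = {}  # maps (block, config fingerprint) -> True

def _fingerprint(config: dict) -> str:
    # Hash the full configuration so any parameter change changes the key.
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def mark_verified(block: str, config: dict) -> None:
    _verified[(block, _fingerprint(config))] = True

def is_still_verified(block: str, config: dict) -> bool:
    return _verified.get((block, _fingerprint(config)), False)

mark_verified("uart_bridge", {"fifo_depth": 16, "sync_stages": 2})
# Same configuration: the block-level result can be reused.
print(is_still_verified("uart_bridge", {"fifo_depth": 16, "sync_stages": 2}))  # True
# Changed configuration: the old result no longer applies.
print(is_still_verified("uart_bridge", {"fifo_depth": 32, "sync_stages": 2}))  # False
```

The block names and parameters are made up; the point is only that a verification result is a property of one configuration, not of the block in the abstract.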

Ashar: Real Intent's philosophy, as far as the design and architecture of this tool is concerned, is that the final signoff is at the full-chip level, and the hierarchical flow is there to enable a methodology rather than to compromise and take shortcuts. Real Intent recognizes that at the SoC level, the CDC bugs that creep in come from assumptions made when integrating the IP, and from the paths that go across it. We believe firmly in the full-chip model of CDC verification. The hierarchical flow — verifying the IP at the block level and then going up the hierarchy — is a trust-but-verify kind of paradigm. That hierarchical flow is what enables a distributed workflow, but the moment you go up the hierarchy to verify that what you did was correct, the final signoff is full chip.
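The trust-but-verify hierarchy Ashar describes can be sketched in miniature: each block is verified in isolation and abstracted down to its boundary clock domains, and the top level then checks every inter-block connection for an unsynchronized crossing. The following Python is a toy illustration of that idea only — the data structures and names are invented, not Real Intent's flow.

```python
# Toy sketch of a hierarchical CDC check: each block abstract records the
# clock domain of each boundary port and whether that port already has a
# synchronizer behind it inside the block.
block_abstracts = {
    "cpu":  {"out_req": ("clk_cpu", False)},
    "dma":  {"in_req":  ("clk_dma", True)},   # synchronized at the input
    "uart": {"in_cfg":  ("clk_uart", False)}, # no synchronizer
}

# Top-level netlist: (driver_block, driver_port) -> (sink_block, sink_port)
connections = [
    (("cpu", "out_req"), ("dma", "in_req")),
    (("cpu", "out_req"), ("uart", "in_cfg")),
]

def check_top(connections, abstracts):
    """Flag crossings between different domains with no synchronizer at the sink."""
    violations = []
    for (src_blk, src_port), (dst_blk, dst_port) in connections:
        src_clk, _ = abstracts[src_blk][src_port]
        dst_clk, dst_synced = abstracts[dst_blk][dst_port]
        if src_clk != dst_clk and not dst_synced:
            violations.append((src_blk, src_port, dst_blk, dst_port))
    return violations

# The cpu->uart path crosses from clk_cpu into clk_uart unsynchronized.
print(check_top(connections, block_abstracts))
```

In the full-chip-signoff view argued for above, a result like this from abstracts is a screening step; the final check still runs over the whole design.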

Janac: The Arteris FlexNoC product has been used on probably 170 chips so far, and until I got to this panel I had no idea that there was a problem with domain crossings. We've been shipping GALS (globally asynchronous, locally synchronous) since 2006 in mobility, where the bulk of those chips are made, and those chips have always had power domains because power consumption is one of the biggest concerns. Back in 2009 we had people force us to use a domain-crossing verification tool at the unit level — a unit is a little IP that is part of the FlexNoC library and is then assembled into modules — so we verify at the unit level. We have a tool called FlexVerifier, which uses Synopsys or Cadence IP to test the domain crossings at the module level. When that passes, the customer then performs whatever full-chip verification they do, and if there had been big problems with domain crossings I would have known about it. So there is a methodology that customers have adopted. Where FlexNoC is used, all the domain crossings are in FlexNoC — they are all in the interconnect — and it seems to work. As long as the customer follows a methodology — the Snapdragon 800s and 805s, the OMAP 4s and OMAP 5s, the Samsung Exynos chips — all of that has multiple power domains with domain crossings, and they all work. So maybe the problems are with 'some assembly required' interconnect, but as long as you follow a methodology there doesn't seem to be a huge problem.

SE: I do find that the idea of a network-on-chip approach is recognized but not embraced by some of the bigger vendors. It's interesting, because the idea itself is well accepted and supported.

Ashar: Network on chip is an architectural choice, and for certain kinds of designs that have a clear data flow and so on, it makes sense to have an architecture like that with a predictable latency. But there is a latency associated with that choice.

Janac: That is an assumption.

Ashar: That's an assumption, but I think it's a valid statement that in many IP integrations you want to get the last cycle of performance out of the integration. And sometimes locking into a certain architecture that forces you to connect things up in a certain way may compromise that…

Janac: No. What you actually want is an interconnect that gives you control over the latency. If you have I/Os, you can use narrow 8-bit connections that process packets over two cycles — latency-tolerant connections. Then you have latency-critical connections, like the CPU to the memory controller, where you use extra wires to send the packet header alongside the payload. You pay in extra wires, as with an AXI bus, but you don't lose any latency. The benchmarks actually come out such that the latencies are roughly equal between a hybrid bus and a network on chip, but you do have control of the latency on a per-route basis.
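Janac's trade-off can be put in rough numbers: a narrow link serializes a packet over several cycles, while a wide link that carries the header alongside the payload can deliver it in one. A back-of-the-envelope sketch, where the packet and link widths are made-up illustration values rather than FlexNoC parameters:

```python
import math

def transfer_cycles(packet_bits: int, link_width_bits: int) -> int:
    # Cycles to push one packet across a link, ignoring arbitration
    # and synchronizer latency at any domain crossing.
    return math.ceil(packet_bits / link_width_bits)

packet = 64  # header + payload, illustrative size
print(transfer_cycles(packet, 8))   # narrow 8-bit I/O link: 8 cycles
print(transfer_cycles(packet, 64))  # wide CPU-to-memory link: 1 cycle
```

The point being illustrated is only that latency is a per-link budgeting decision: spend wires where latency matters, serialize where it doesn't.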

Sarwary: Charlie's explanation earlier was very interesting. If you look at the choices people make, they are not based on CDC tools; they make sound decisions about the architecture they choose for whatever purposes they have. For a tool to come in and make that choice for them — that it should be this way or the other way — is not only completely wrong, it's a poor choice from a business point of view. We have moved from the discussion of correct CDC design and verification I was referring to earlier, where I put users into three categories — the traditional one, probably 90% of users, doing what Charlie was describing, but also a growing number of users who look at it from a functional perspective, or who verify the individual components very closely. What is important is that we have moved from correct design to a methodology for large-SoC verification. There, our point of view is that we should not enforce a methodology on designers or projects. There are three possible flows and methodologies, and we support all three of them, starting with flat verification. You should not guide the design team to do it one way because of tool capability or capacity issues — because your tool cannot handle it. That was the case with 0-In.

Gupta: I have a slight disagreement here. Tool capability is a key criterion. The designers' time, the company's time, their money — everything is at stake when they select a tool. And if the tool cannot do justice to the design sizes, to me that's a big problem. An architecture always has a limit on the design sizes it can support, and when it hits that limit, either you have to re-architect or you have to develop a new tool. If I put my personal money on something, I want that toy, that game, that TV to work.

Sarwary: That's completely wrong. If you set a value on it and it doesn't work — obviously, if something doesn't work, no one will use it. So it must work, and capacity should not be a limiting factor. We have a big company in Asia that is asking its IP vendors to hand off hard IP, and the IP providers do not want to give them soft IP. You could say this hard IP is not verifiable — it's a black box, let's ignore everything at the boundary — but you must be able to verify it.

Gupta: I didn't mean to say that people do not use the tools. I was saying that for the last decade people have been using the tools, and I'm not questioning whether they are able to run Atrenta tools or Real Intent tools successfully. Obviously, if you are doing business, people are able to run them.

Sarwary: Let me finish. These three methodologies are extremely important. I absolutely agree: if you have a design and you want to do a signoff, it had better be correct. You cannot take the shortcut. I don't think any of us here is advocating, 'hey, take this shortcut because my tool is not working.' Everybody has to provide a tool that works and is signoff quality. However, you cannot force a methodology on users, and I gave the example of hard IP — this hard IP has to be handed off, and you cannot tell a team to ask their vendor to open up the IP so they can do their verification.

SE: What do you mean by methodology?

Ashar: What he means is that, in reality, designs are done in a distributed manner. IP comes from different groups within the same company and from outside the company, and you need to be able to sign off on the IP before you sign off on the SoC. But the final signoff has to be at the full-chip level, and you have to make sure that the assumptions made in integrating the IP — the connections between the IP, the paths that run through the chip — are all kosher as far as CDC is concerned. Basically, the CDC problem is important enough that you have to enable the distributed methodology, whatever is taking place in the real world in terms of IP integration, and you also have to do full chip.

I also wanted to respond to Charlie's point. Yes, there are design methodologies you can use to make sure a design is CDC-correct by construction. There will definitely be those methodologies, but the design market will determine which of them get used. The reality today is that it's a mix.

Sarwary: I still haven't finished my three methodologies. Number one is obviously flat, and that's a choice — for some SoCs, that is all they will do. Whether at the netlist level or at the RT level, they run flat, there is one person in the company responsible for validating the SoC, and nothing should come in the way. Number two is for when they are handing off IP, especially a hard macro, where the inside is not visible. At the boundary there can be crossings that can kill the chip. That requires some way of characterizing the hard IP and still verifying it in context. They know they are taking some risk with that. I think we have to be very practical. We are seeing ARM, or whatever third-party vendor, hand off a hard IP because they don't want to give away their IP. At a minimum there can be a delay — the team wants to verify the rest of the design while this single block is not yet ready. You cannot tell them, 'you know what, I don't want to verify that; go ahead, finish everything, come back when you're two weeks from tapeout…'

Gupta: I'm still not understanding. My understanding is that when a third-party provider hands over IP to a company, it is mandatory to provide encrypted RTL, and it is mandatory to provide a netlist. Without that, how can someone use the IP? And if you have the netlist, again I question why you need to characterize it. Is it a capacity problem? Is it performance? Is it an architecture limitation?

Sarwary: There is no question that the verification has to be complete. Our experience over the past 10 years is that you cannot force methodologies; you have to follow them. One other important thing is the functional verification aspect, because functional verification is just as important — structurally, you cannot guarantee that the intent is correct.


Related Stories:

Where Do We Stand With CDC, Part One

The Assertion Conundrum

Is Formal Ready To Displace Simulation?

Big Shift In SoC Verification

Is Verification At A Crossroads?