New Market Drivers

Experts at the Table, part 3: Verification, designer requirements, changing business models and immediate user pressures.


Semiconductor Engineering sat down to discuss changing market dynamics with Steve Mensor, vice president of marketing for Achronix; Apurva Kalia, vice president of R&D in the System and Verification group of Cadence; Mohammed Kassem, CTO for efabless; Matthew Ballance, product engineer and technologist at Mentor, a Siemens Business; Tom Anderson, technical marketing consultant for OneSpin Solutions; and Andrew Dauman, vice president of engineering for Tortuga Logic. Part one and part two can be found here and here. What follows are excerpts of that conversation.

SE: What about the impact on the verification flow? What will happen with the bifurcation of the new drivers?

Kalia: The immediate impact is scaling. Consider the size of these devices, especially given the confluence of AI and the compute power required by AI and sensor fusion. When you have 20 sensors all feeding in high-speed data, that implies a scale significantly greater than what we have today.

Kassem: Scale, capacity and power to be able to observe the complexity in a realistic way.

Kalia: Yes, scale both in terms of capacity and in terms of the complex scenarios that need to be modeled. We already are seeing this with customers, where the kinds of scenarios that they need to model are very different from what they had to model for a mobile device. That automatically implies the need for different verification paradigms than in the past.

Ballance: That is interesting because at the high end you need a lot more verification for reliability and security at the full system level, while on the IoT side you need a lot more agility and speed, where we have been used to fairly heavyweight verification processes.

Anderson: It is not just the traditional space of verification that becomes much harder, but the idea of what verification encompasses is changing. People can expect their EDA vendors to offer solutions for verifying post-silicon reliability. To meet standards such as ISO 26262, you have to encompass that.

Kassem: One of the things implied by off-the-shelf components is an implicit container of reliability, so your methodology and tools make certain assumptions about what is good. When you start doing very aggressive optimization across the disciplines, then good is a function of all of them.

SE: With ISO 26262, you can’t assume it is good.

Kassem: The tools have to comprehend some of the coding rules, such as the rails. In order to do a vertical for high reliability and high safety, it is going to have to go back to the people who have the ability and capital to do it. You are unlikely to find a small company that can do this. You need a large company with a suitable insurance policy around the liability.

Anderson: That’s the legal aspect behind the electronics.

Kassem: Nobody today is doing a fully integrated system that checks all of the thermal impacts. There are only point solutions that address these today. There needs to be the ability to model all of these together. Today, you have to move a lot of data around.

Dauman: You cannot have one tool do all of that.

Kassem: For smaller devices, there is the possibility of a platform that would enable a category of devices.

Ballance: Even the largest companies are not there yet.

Dauman: Look what happened when Meltdown and Spectre were announced. A dozen years of chips are out there and only in the past year has it come to light that they are vulnerable. Is that the last vulnerability in these chips? Before you can get to testing silicon for reliability you need something in the verification flow, something that can detect these things.

SE: Are you arguing for a fully integrated tool?

Kassem: One that allows you to integrate this for small-scale devices.

Kalia: For specific applications.

SE: How will EDA build something that is tailored for every vertical?

Kassem: I am not saying that. If you look at it, the communications piece is the same. The methodology is the key, not necessarily the tools.

Mensor: Methodology is the first step. Verification is overly biased toward functional verification today, so first the methodology has to include security, fault tolerance and reliability. These are of equal weight. Once that happens, then you need the tools. You build the technologies that address each of those components, and that becomes part of the design and verification methodology. People are investing today in system architecture and chip architecture for things like security. But it is all up front. By the time you get to design review it looks great, but for implementation you don't know. We all know what happens in chip design. In the last couple of weeks before tape-out, a designer finds a functional error, fixes it quickly, and reruns place-and-route. Nobody goes back and asks whether they opened a security hole. This is the pragmatic aspect of chip design, and it has to change.

Kalia: The first phase of EDA solutions that solve it all will come from the methodology side. As a tool developer, economically, I am driven by the same things that we were talking about earlier, which is that I want to build my core technology, my core engines, in a way that they are applicable everywhere.

Kassem: So you end up with the common denominator.

Kalia: Yes. It is not economically viable to produce an engine that is capable for only one application.

Ballance: The business model is changing.

Kassem: EDA is not growing because of the business model, and that model is not compatible with the product. If you change the business model and make it product-related or something that is scalable, you might be able to do that. A per-use business model might work.

Dauman: But you still have to introduce the technology in an incremental way. If you come out with a new methodology that disrupts how people do design and verification today, nobody will take that leap. They need to integrate new technology into their existing methodologies if they want to adopt it.

Ballance: Many of the drivers of IoT, for agility, could drive people to adopt something that is a fundamentally different experience. It would not fit the mobile players, but the requirements may be different enough that it would catapult them into a different use model.

Anderson: The ability to do rapid turns with minimal customization on an existing platform is very different. While phones turn over on 9-month or 12-month cycles, IoT may want to do that in one month.

Ballance: You only have to retest what is different.

SE: What is the biggest pressure that you are seeing from these changes?

Ballance: From a reliability and security aspect, people are looking at how to take the next step both in terms of traceability around testing and comprehensiveness. That is probably the biggest area. I am also seeing pressure for agility for new models of deploying verification where the teams are not responsible for developing a custom in-house methodology over a long period of time. They are looking for more targeted solutions.

Kassem: There is a lack of off-the-shelf capability to meet the cost or the feature balance for these needs. We are trying to find a business model that allows a community to deliver these solutions in a flexible way. You need a wide community of people to respond to the demand, as if it is natural selection. Tomorrow, you will be facing a different problem and you don’t want the same team of people, so the community model allows for this.

Kalia: I am looking specifically at automotive, and the biggest challenge I see is that the requirements of the space are so fundamentally different from what people have done in the past. That is the biggest challenge. People who have solved the problem in the mobile space want to take their capabilities and adapt their skills and knowledge and quickly come up with a solution in the automotive space. It is a chasm that a lot of people are finding very difficult to cross.

Dauman: We are primarily focused on hardware security at the silicon level and making sure that the chips themselves cannot be hacked. What we have seen change this year is the focus. If you think about cybersecurity over the past couple of years, all of the investment has been on the software side. But if the silicon is not secure, there always will be a way in. A lot of bugs have been exposed over the past year that have changed how people are looking at this, and there are real costs and liability when these things fail. They are not easily patched. So a lot of chip companies are working out how to prevent that prior to tapeout. They are asking how much they need to invest, how best to run the tools and methodologies, and how they are going to make that change. We are feeling pressure from companies to provide solutions to these problems. What we will see as an artifact of that is a lot more investment in hardware security tools and methodologies.

Mensor: When many people think about embedded FPGA, they think it would be nice to have a programmable ASIC there. But it doesn't work that way. There is a die-size consequence. Or they put an FPGA on the chip but find it is on the wrong side and can't talk to it. The business we are in is hardware acceleration, and that is where we are seeing the world change. There is AI, there is compute, and there is the proliferation that is driving demand on compute infrastructure, storage infrastructure and endpoints, and people are looking to build platforms that will allow for acceleration of these environments. Our challenge, even when building a full chip with an embedded FPGA, is that it is not a piece of IP. We are building GDSII. Our challenge is that people say they want the IP. They want it to go through all verification methodologies, and they want it delivered in two months. How do you take a methodology, a technology, and turn it into a chip in extremely short order so that someone can apply it as a useful engine, a hardware accelerator? Today we are at four to six months for an implementation. Within a year, we will be at two months. Within two years, we will be within four weeks. That opens up how people can leverage the methodology.

Anderson: That adds more pressure on traditional verification—more capacity, more complex problems. There is an extension of verification beyond pre-silicon verification into safety and reliability. People are expecting EDA vendors to step up to the bar and provide solutions. We are making progress. And specifically with safety, people are expecting the vendors, be they silicon or software vendors, to get themselves certified so that they can then build a path toward their own certification. You can’t assume that the pieces you are assembling, be it tools or hardware components, are safe. You have to prove that as part of your own certification process.

Related Stories
New Market Drivers
Experts at the Table, part 1: The industry is changing. Who is driving the market today and what new requirements do they have?
New Market Drivers
Experts at the Table, part 2: How and why automotive designs are changing existing methodologies, volumes and system-level safety.


