New Market Drivers

Experts at the Table, part 2: How and why automotive designs are changing existing methodologies, volumes and system-level safety.


Semiconductor Engineering sat down to discuss changing market dynamics with Steve Mensor, vice president of marketing for Achronix; Apurva Kalia, vice president of R&D in the System and Verification group of Cadence; Mohammed Kassem, CTO for efabless; Matthew Ballance, product engineer and technologist at Mentor, a Siemens Business; Tom Anderson, technical marketing consultant for OneSpin Solutions; and Andrew Dauman, vice president of engineering for Tortuga Logic. Part one can be found here. What follows are excerpts of that conversation.

SE: The nature of our industry is to make incremental changes rather than changing the foundational elements of a system. This includes architectures and design philosophies. Will that put companies who previously designed mobile phones at a disadvantage?

Kalia: Risk is an issue. It also brings into question things such as reliability, which is driving a lot of behavior. If an automotive OEM goes to their tier 1 or tier 2 supplier and asks for a system with a Failures In Time (FIT) rate of 1 per billion hours of operation, and you take an existing 100 million gate chip and place that same requirement on the supplier – we don’t even really know what that means. How do I take a 100 million gate chip and determine that it has a FIT of 1 per billion hours of operation? When I first found out how this is being done today, I tried to turn off all of the electronics in my car. If that was how they determined it was safe, then I was better off without the electronics.
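To make the FIT arithmetic concrete: FIT is defined as expected failures per billion (10^9) device-hours. The sketch below, using illustrative fleet numbers that are our assumptions rather than figures from the discussion, shows why a FIT of 1 is both an extremely strict requirement per device and still a non-zero failure count across a large fleet.

```python
# FIT = expected failures per 1e9 device-hours of operation.
HOURS_PER_BILLION = 1e9

def fit_to_mtbf_hours(fit: float) -> float:
    """Mean time between failures (in hours) implied by a FIT rate."""
    return HOURS_PER_BILLION / fit

def fleet_failures_per_year(fit: float, units: int,
                            hours_per_unit_per_year: float) -> float:
    """Expected failures per year across a deployed fleet."""
    device_hours = units * hours_per_unit_per_year
    return fit * device_hours / HOURS_PER_BILLION

# A FIT of 1 means one failure per billion hours per device.
assert fit_to_mtbf_hours(1.0) == 1e9

# Assumed fleet: 10 million cars, each driven ~400 hours/year.
# Even at FIT = 1, that implies ~4 expected failures per year fleet-wide.
print(fleet_failures_per_year(1.0, 10_000_000, 400))  # → 4.0
```

The point of the exercise is that demonstrating such a rate for a 100 million gate chip cannot be done by direct observation; it has to come from analysis and accelerated testing.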

Mensor: When we talk about automotive, there is another horizontal beyond safety that comes into this: machine learning. Handsets will eventually have their own forms of machine learning, but these will be vastly different from what happens in sensor fusion. That brings in a different domain in terms of compute power. The bandwidth, the amount of data, and what has to be done with that data are massive. From an architectural point of view, it is not just that ADAS is the problem and 5G is part of the solution. Part of the solution will be sensor fusion, compute bandwidth, how much is done at the edge or the endpoint, and how much is done in a centralized fashion. Some of these are not a matter of which do I choose, but which are feasible. It will be challenging to find an end solution.

Anderson: When you consider a self-driving car, you have to consider how much is being done in the cloud and how much in the car. California has new laws that will require truly autonomous cars to have someone, somewhere, monitoring them. What happens when the communications connection drops? Even that cannot be 100% reliable.

Kalia: Reliability is going to become very important.

Dauman: But the rate of change in automotive design and field upgrade has to model what happened in mobile, because of safety and reliability. Consider the traditional automotive tier 1/tier 2 model, where there was a global chipset available to anyone that had an ADAS system. It would take 5 to 7 years for that to make it into a car, which isn’t sustainable if you are doing ADAS, especially if you are having reliability and fault-tolerance problems. That is why you see new companies developing these systems, such as Tesla and Google, and not GM or Mercedes. The incumbents just don’t have the methodology, and they haven’t adopted the practices that allow that. I am not saying that Google and Tesla have it yet, but they are structured that way.

Anderson: But they are making progress. Going back to the transition that we talked about before – the previous generation of drivers was compute servers, before cell phones came along. The processor vendors were not the ones that did the integration for cell phones, and we are seeing that repeated: the people who were the experts in smartphones are not necessarily the ones moving on to IoT or automotive.

Kassem: The amount of innovation that had to happen during the period when cellphones were driving the industry has been the largest.

Ballance: Both from a design perspective and a verification perspective. It pushed us forward to thinking about the system level, but as we move to self-driving cars and ADAS, we have to move up yet another level. Hopefully some of the experience from mobile translates.

Dauman: In the early 2000s, before smart phones, verification stalled. Sales had been flat as well. Then smart phones drove it back into growth. You had to run software to verify chips. Test vectors were dead. To run software, you needed bigger-capacity, higher-performance platforms, but all biased toward functional verification. That is where we are today. Today, we don’t have the things we need for security or for functional safety. All effort goes into implementation and then functional verification. If we want systems that have this evolution and product lifecycle, then we need to build in a secure development lifecycle from start to finish – not just the architecture of the chip, but the whole chain from verification to delivery through field upgrades, so that you still have a secure system. The silicon is just the foundation.

Kassem: In a system as complex as an automobile, there has to be a move toward system verification. We started that trend with mobile phones, when we started to talk about scenarios and transaction-based verification, but I think there has to be a step beyond that as well. The only way certain companies are verifying autonomous cars is with a test track. That is not scalable. How many scenarios can you really capture on a physical test track? It is extremely limited.

Anderson: Even building a test track does not help with reliability, unless you can start throwing alpha particles at it. The thing you care about is whether it will last for 10 years under the full range of conditions it can experience – there is no way to replicate that today. It is just a rigorous, painful process and a bunch of standards that you have to try to meet, and then demonstrate to your customer that you have in fact met the requirements of being robust enough.

SE: Applications such as ADAS will have volumes lower than phones. IoT may have higher volumes, but the chips will be smaller. This suggests fundamental changes at the silicon level, because there is no single driver for process technology. We may need new nodes for ADAS, but who will pay for them?

Kassem: From the IoT perspective, the process technologies are bi-modal. One mode is around 180/130nm – people are building Bluetooth on 180nm. Around 55nm is the other mode. The difference is volume, features, and the cost of getting the product to market. At 180nm, a prototype is under $10,000 in terms of design cost. If you go to a more advanced node, you see that cost increase exponentially. We stay out of the 7nm business and fall back to older nodes for low-volume IoT. For the datacenter you will need a large number of consolidated resources – even Amazon now has an FPGA machine. Our bet for the company is low volume. Some customers don’t even want a whole wafer per year.

Mensor: When you say that IoT is low volume, I think you are talking about the long tail. Ring was just bought by Amazon. Is that what you are talking about – everyone who has an idea and wants to build their own IoT-type product? Eventually some of them become Ring.

Kassem: Statistics. You don’t know what the next successful IoT product will be.

Mensor: Isn’t this the same model as apps? Apps assumed you had a platform and the ability to distribute socially, so anybody could do their own app. I have an app for my garage door, and when I open it, I can ask if anyone else has opened it. That is low volume – perhaps hundreds of thousands – and that is standard for IoT. It is not hundreds of millions, but if you aggregate all of the folks who are saying they want this number, you get very close to your volume curve, and there will be commonality across those products.

Kalia: This only happens if aggregation is possible. The people designing these solutions want a few tens or hundreds of thousands of copies, and they want to do it in a manner where they can make a profit at that volume. The only way to do that is to stay away from the silicon aspect of it. IoT devices and solutions will continue to be built from off-the-shelf platforms.

Kassem: The cost to get a chip has two parts: the engineering cost to get to a prototype, and the cost of materials and masks. In a community model, there are cases where a company will do it for almost zero engineering cost, but you pay for the prototype (around $10,000), then you pay a larger fee once the prototype is working, and $1 for every chip. They see this as a way to share the risk.
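A minimal sketch of this risk-sharing model, with an assumed success fee (the discussion only says "a lot of money", so the $50,000 figure below is purely illustrative, not efabless pricing):

```python
# Illustrative cost split for the community risk-sharing model described.
PROTOTYPE_COST = 10_000.0   # paid up front by the customer (from the discussion)
SUCCESS_FEE = 50_000.0      # ASSUMED: fee due once the prototype works
ROYALTY_PER_CHIP = 1.0      # per-unit payment in production (from the discussion)

def total_cost(prototype_works: bool, units_shipped: int) -> float:
    """Total amount the customer pays under the risk-sharing model."""
    cost = PROTOTYPE_COST
    if prototype_works:
        cost += SUCCESS_FEE + ROYALTY_PER_CHIP * units_shipped
    return cost

# The customer's downside is capped at the prototype cost...
assert total_cost(False, 0) == 10_000.0
# ...while the design house is paid mostly on success and in volume.
assert total_cost(True, 100_000) == 160_000.0
```

The design choice the model encodes is that almost all of the engineering cost is deferred until the silicon is proven, which is what makes it attractive for the long-tail, low-volume IoT builders discussed above.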

Ballance: Part of what allows that is lowering the risk – lowering the cost of the engineering.

Kassem: Once you have the prototype, the risk goes down, but not to zero.

Mensor: Isn’t standardizing solutions a way to reduce cost? If you have standard solutions, that provides the lowest engineering cost. All I have to do is put my software stack on it.
