Executive Insight: K. Charles Janac

Arteris’ CEO talks about connected cars, heterogeneous coherency and finFET timing challenges.

K. Charles Janac, chairman and CEO of Arteris, sat down with Semiconductor Engineering to talk about what’s changing in the automotive market, the impact of big data, and heterogeneous cache coherency. What follows are excerpts of that discussion.

SE: What are the big changes you’re seeing in semiconductor design?

Janac: There are a lot of changes right now. Mobility is slowing down and becoming more concentrated. Intel’s exit from that market underlines that trend, and Intel may not be the last. Automotive is growing quickly. There are deep submicron issues, too. Second-tier SoC vendors are going to stay on 28nm for a very long time. Only the highest-volume, leading-edge designs are going to 16/14nm and on to 10/7nm.

SE: Where do you see the challenges?

Janac: We’re seeing them everywhere. The one we’re addressing is how to get cache coherency into more of the SoC. The issue there is heterogeneous coherency, and the need to simplify the software. There just isn’t enough of a base of programmers in the world to build hardware-aware software. Another problem we see is that timing closure in SoCs is becoming more and more difficult, which is causing them to fail. And we’re also seeing a big need for resilience in designs.

SE: In automotive, there are more sensors, greater connectivity, and there is a push to replace ECUs with SoCs. What are the biggest problems carmakers are wrestling with from an electronics standpoint?

Janac: Right now the average car has 145 microcontrollers. It doesn’t make sense to go to 200, and it certainly doesn’t make sense to go to 300. The problem is fundamentally power and cost. You’re starting to exceed the power of a 12-volt battery and it’s making the architectures too complex. SoCs will take over, and each of the major subsystems in the car will have one to four SoCs controlling a particular system instead of 10 to 40 microcontrollers. You’ll have 10 to 20 SoCs per car, and between 60 to 80 million cars per year. That’s a very interesting market, with about 800 million to 1 billion SoCs per year. This was a very sleepy market until Tesla arrived.

SE: Why?

Janac: Tesla has essentially turned the car into an IoT device. There’s a modem connection, a full-time Internet connection, and there’s a data center to analyze the state of major systems in the car. It knows your location. It’s able to sequence the key code to improve performance. That’s moving the entire industry to increase the electronic content of the car. So the nature of the silicon is changing, and there’s more silicon being put into the car.

SE: Is there enough commonality in those SoCs to get economies of scale, or is each one very different?

Janac: They’re all pretty different. There is an infotainment and dashboard SoC. Those typically look like smartphone application processors. Then there are the ADAS (advanced driver assistance systems) chips and deep-learning SoCs, which are related to self-driving capabilities. Those are much more safety-related. So a mobility design is not going to cut it for that application. Then there are the real mission-critical pieces, which are the body control and the engine control. Those have to be ASIL C and ASIL D. Those are very specialized chips. There’s a gamut of chips, from the modem to the infotainment to the dashboard control all the way to the mission-critical chips controlling the engine and the body.

SE: Do you expect this market to consolidate?

Janac: Right now the market is very segmented. There are hard-core engine guys, and then there are soft application guys. The question is whether there will be one major SoC supplier that can supply 10 to 15 SoCs. We don’t know that yet. Right now it’s segmented.

SE: Does that matter to your business if it’s one or a bunch of suppliers?

Janac: No, it absolutely doesn’t matter to us. We follow the customer. We can provide functionally safe interconnects for whoever chooses to participate in that market.

SE: We’ve talked in the past about IoT being a bunch of connected vertical markets. Are you starting to see that kind of verticalization across multiple markets?

Janac: Not yet. The car is becoming the first useful IoT device, but there certainly will be others. There will be a hierarchy of health care, robotics, industrial manufacturing. I’m not a huge believer in wearables. There is a lot of activity there, but so far there aren’t a lot of valuable applications. Maybe there will be in the future. But robotics, industrial, medical and automotive look very interesting.

SE: One of the approaches we’re hearing about more and more for the IoT is heterogeneous computing. How real is this beyond just the supercomputer world?

Janac: ARM has captured the bulk of the processor IP market. Heterogeneous computing is about making accelerators, which are often proprietary to individual customers or provided by different IP suppliers, coherent to ARM processors.

SE: What’s the problem that has to be solved?

Janac: They all work differently. One of the enablers is that the ARM interconnect standards have become fairly pervasive. That includes the AXI, AHB and ACE protocols, and maybe the CHI interface in the future. That has allowed other people using the ARM standards to essentially speak their vernacular, which makes it relatively simple for an interconnect to make these things communicate together efficiently even if they’re not all made by the same vendor.

SE: Does the data have to be in the same format?

Janac: No. We do the same thing we do in a non-coherent world. We translate the protocol of the processor or the accelerator into an internal packet format, and then convert it back out. Potentially we could have other coherency protocols, should that be required. At the moment, all we see out there in terms of market share is ARM protocols. But there will be others.
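
To make that translation step concrete, the sketch below shows one hypothetical way an interconnect could normalize a protocol request into a common internal packet and convert it back at the target. The InternalPacket structure, field names and helper functions are invented for illustration; Arteris’ actual transport format is proprietary and is not what is shown here.

    from dataclasses import dataclass

    @dataclass
    class InternalPacket:
        src_id: int        # initiating socket
        dst_id: int        # target socket
        opcode: str        # e.g. "RD", "WR", "SNOOP"
        address: int
        size: int          # total bytes requested
        payload: bytes = b""

    def axi_read_to_packet(src_id, dst_id, araddr, arlen, arsize):
        # AXI encodes a burst as ARLEN (beats - 1) and ARSIZE (log2 of bytes per beat).
        burst_bytes = (arlen + 1) * (1 << arsize)
        return InternalPacket(src_id, dst_id, "RD", araddr, burst_bytes)

    def packet_to_axi_read(pkt):
        # Regenerate AXI-style signal values at the target, assuming 8-byte beats.
        beats = max(1, pkt.size // 8)
        return {"ARADDR": pkt.address, "ARLEN": beats - 1, "ARSIZE": 3}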

SE: Cache coherency has a reputation for significant overhead in terms of performance and power. Is it becoming less of an issue than in the past?

Janac: Coherency does have a cost. But there are very clever ways to minimize that impact. For example, if there is a giant directory, you can have configurable snoop filters. You can have very small local caches in certain cases and one big cache. You can minimize the overhead by using these kinds of techniques. Obviously you only want to use coherency where you need it. But there are good coherency implementations and there are bad coherency implementations.
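
The snoop-filter idea can be illustrated with a minimal sketch, assuming a simple agent model that is not tied to any particular product. The filter tracks which agents may hold a copy of each cache line, so a write snoops only those agents instead of broadcasting to every cache.

    # Toy snoop filter: track possible sharers per cache line.
    class SnoopFilter:
        def __init__(self, line_bytes=64):
            self.line_bytes = line_bytes
            self.sharers = {}              # line index -> set of agent ids

        def _line(self, addr):
            return addr // self.line_bytes

        def record_read(self, agent, addr):
            # A read may leave a copy in the agent's cache.
            self.sharers.setdefault(self._line(addr), set()).add(agent)

        def snoop_targets(self, writer, addr):
            # Agents that must be snooped/invalidated before the write proceeds.
            targets = self.sharers.get(self._line(addr), set()) - {writer}
            self.sharers[self._line(addr)] = {writer}   # writer now owns the line
            return targets

Because most lines are held by few or no other caches, the filtered snoop list is usually short, which is where the traffic and power savings come from.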

SE: Is this process-specific?

Janac: No, but the really advanced nodes are more sophisticated and larger. They typically have higher performance. So for those kinds of chips there is a great use for coherency. But timing closure becomes much more difficult at those nodes. You have great distances between IP sockets, lower voltage thresholds, and you typically are trying to run at faster frequencies. Those three things make it more difficult to close timing.
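
A rough, illustrative calculation shows why distance alone can break timing closure; the wire delay, path length and clock frequency below are assumed numbers, not data from any particular node.

    import math

    wire_delay_ps_per_mm = 100.0   # assumed repeated-wire delay at an advanced node
    path_length_mm = 8.0           # assumed distance between two IP sockets
    clock_ghz = 2.0                # assumed target frequency

    period_ps = 1000.0 / clock_ghz                          # 500 ps
    wire_delay_ps = wire_delay_ps_per_mm * path_length_mm   # 800 ps
    stages = math.ceil(wire_delay_ps / period_ps)           # 2
    print(f"clock period {period_ps:.0f} ps, wire delay {wire_delay_ps:.0f} ps, "
          f"roughly {stages} pipeline stages needed on the path")

When the wire delay alone exceeds the clock period, the path has to be pipelined, which is one reason closing timing across a large die gets harder as frequencies rise and distances grow.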

SE: We’re seeing a lot more emphasis on fan-outs and 2.5D. Does that affect any of this?

Janac: 2.5D is still a planar interconnect that is connected to an interposer. In the future, you could see 3D where you have a sensor or an analog chip or a modem connected to a digital processing SoC that is made in the latest nanometer process, which in turn is connected to a memory in a package. There are a number of issues, such as how you make a known good die in a package. Eventually the interconnect becomes 3D, and that creates additional opportunities to deliver on those requirements. The interconnect becomes more important over time as these systems become more complicated.

SE: How do you keep an increasing amount of data coherent across all of these cores and potentially even across different chips?

Janac: One of the keys is to convert them into one common interconnect format so you can process that in a uniform fashion. In a coherent world, you really need a directory approach rather than a broadcast approach. Those directories have to be made efficient through snoop filters and other things that lower the cost of using coherency. There are a lot of technical details about how to make all of this efficient. ARM basically has pioneered coherency within the compute subsystem. What we’re trying to do is bring cache coherency to the rest of the SoC.
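
To see why a directory approach scales better than broadcast, here is a simple count of snoop messages per coherent request; the agent count and average sharer count are assumptions chosen only for illustration.

    def broadcast_snoops(num_agents):
        # Broadcast coherency: every other coherent agent is snooped on each request.
        return num_agents - 1

    def directory_snoops(avg_sharers):
        # Directory coherency: only agents listed by the directory/snoop filter are snooped.
        return avg_sharers

    agents = 16        # assumed number of coherent agents on the SoC
    avg_sharers = 1.5  # assumed average number of caches holding a given line
    print(broadcast_snoops(agents), "snoop messages per request with broadcast")
    print(directory_snoops(avg_sharers), "snoop messages per request with a directory")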

SE: What are you hearing in the market?

Janac: There are two classes of customers. One has a very efficient form of coherency for existing applications in mobility application processors. Then there is another class of customers with new use cases and workloads who want to make other types of accelerators coherent to the main CPU. Those are primarily applications that have to deal with tremendous amounts of data at very high bandwidth and low latency. So people want to make video traffic and graphics traffic coherent to the main CPU. What’s interesting here is that for all of these new IoT endpoints, like 60 million new cars, there is a corresponding ratio of endpoints to server blades in the data center. In mobility, it’s about 600 to 1. For cars, it may be 40 to 1 or 60 to 1. The growth at the edge of the network also will translate into very large growth in the data center.
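
Taking those ratios at face value (they are rough figures from the discussion, not measured data), the implied data-center build-out from automotive alone looks like this:

    cars_per_year = 60e6               # new connected cars per year, from the figure above
    for endpoints_per_blade in (40, 60):
        blades = cars_per_year / endpoints_per_blade
        print(f"{endpoints_per_blade}:1 ratio -> roughly {blades / 1e6:.1f} million server blades")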

SE: And the edge devices are not just dumb sensors, right?

Janac: No, when endpoints are useful, they’re very complicated and sophisticated. ARM CTO Mike Muller gave a presentation about what it took to make a coffee maker into an IoT device. It was a lot of work. Putting everything into the cloud doesn’t make sense. That means you’re putting everything into the hands of somebody else. So you’re going to see a hierarchy of storage. There will be mini-clouds inside companies behind the firewalls, with certain classes of data going out beyond the firewall.

SE: Does it matter for you if the software defines the hardware?

Janac: Not at all. And we are seeing more and more hardware defined by software use cases.


