The Race To Multi-Domain SoCs

Arteris IP’s CEO looks at how automotive and AI are altering chip design.

K. Charles Janac, president and CEO of Arteris IP, sat down with Semiconductor Engineering to discuss the impact of automotive and AI on chip design. What follows are excerpts of that conversation.

SE: What do you see as the biggest changes over the next 12 to 24 months?

Janac: There are segments of the semiconductor market that are shrinking, such as DTV and simple IoT. Others are going through an investment phase, including automotive, AI/machine learning and China. You really want to be focused on those segments.

SE: Let’s drill into some of these. Automotive has gone through a massive hype cycle. Where are we with self-driving and assisted-driving vehicles?

Janac: The highway scenario is starting to be realistic for automated vehicles. If you look at Tesla’s integration of mapping and ADAS, you’re starting to be able to go from exit to exit in an automated fashion.

SE: But you still can’t take your hands off the wheel yet.

Janac: If you look at the Mobileye test cars, they don’t require hands on the wheel. The car on the highway is pretty autonomous. The hands-on-wheel requirement is there to keep people from doing things like sleeping in the back seat. The city driving scenario is much more difficult, and those technologies are still under development. But the highway driving scenario is extremely useful.

SE: Where are we in terms of a system that really understands what’s going on around it?

Janac: My best guess is around 2025. There are a bunch of Level 4 chip developments coming to fruition from companies like Mobileye, Nvidia, NXP and Toshiba. You’ll start to see those making their way into cars. The difficulty isn’t the Level 4, Level 5 driving. The problem is that the roads haven’t been designed for automated driving, and mixing humans and automated traffic is extremely difficult. The first Level 4/5 scenarios are geo-fenced automated driving where you have segments of the road where only automated traffic will be allowed. Those scenarios will be realistic first.

SE: Where do you see geofencing happening?

Janac: China is designing a city for 8 million people south of Beijing. Pedestrian traffic will be on the lower level, and automated driving on the second level. If you think about how many vans will be needed to transport 8 million people, that’s a lot of chips. In 2012, when we started working on automotive interconnect features, there was probably about $200 to $300 worth of chips in a car. By 2025, you will see somewhere between $3,000 and $5,000 worth of semiconductors in a car. That will include infotainment, ADAS, radar, LiDAR and camera chips, and SoCs for chassis and motor control. You will have three modems: a WiFi system for car-to-car communication, a low-grade LTE or 5G modem to the infrastructure, and a fairly capable broadband modem to connect to the data center. There’s going to be a lot more silicon needed to make those systems aware of their environment.

SE: What comes after that?

Janac: At the Tokyo Olympics, there will be a 5G connection to allow automated driving to the Olympic Village. You also see this in cities like Florence, Italy, where the pollution is damaging the old buildings. In places like that, as well as central London, central Paris, and congested cities like New York, there will be a strong push for automated cabs where people have to park their cars on the outskirts of the city. You’ll see that by 2025. But the whole transportation revolution will unfold over the next 20 or 30 years. There will be new business models and additional deployments, and eventually you’ll see cities designed for the horse redesigned for the autonomous car, and cities designed for the internal combustion engine redesigned for electrified driving. Once revolutions get going, they’re pretty unstoppable. It’s now clear that the electric car will replace the gas-powered car, and autonomous vehicle economics, particularly in terms of safety and reduced accident rates, will drive another major shift.

SE: Is there enough electricity being generated to do that?

Janac: Power plants are most efficient if they run 24/7, and demand is lowest at night. If you charge electric cars at night, you can charge a lot of electric cars. We’re betting this happens: that there is a transportation revolution, that trucks get electrified, and even that drones are involved. How long it takes is anybody’s guess, but the hardware decisions are made much, much earlier.

SE: So does IP that’s being developed today look radically different than it did five years ago?

Janac: Yes, everything is getting amazingly complex. What people are building right now are multi-domain SoCs. The CPU, which used to do all the work, does relatively less work. There are accelerators for vision and data analysis outside of the CPU subsystem. There are machine learning sections, some general-purpose, some very specific, all on-chip. There is a memory subsystem with very high-bandwidth memory and low latency. There also is functional safety. You need tremendous performance because a car is a supercomputer on wheels. The car has to be very efficient, because you need to deliver that compute power without water cooling. Power management becomes very sophisticated. And then there are functional safety and security subsystems to keep these safe from environmental and man-made issues.
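
To make the “multi-domain” idea concrete, here is a minimal, purely illustrative Python sketch. The domain names and power budgets are assumptions chosen for illustration, not a description of any Arteris or customer design.

```python
# Illustrative only: a toy model of the domains listed in the answer above
# for a modern automotive SoC. Names and power budgets are hypothetical.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    role: str
    power_budget_w: float  # hypothetical per-domain power envelope

automotive_soc = [
    Domain("cpu_subsystem",    "general-purpose control; a smaller share of the work than before", 5.0),
    Domain("vision_accel",     "camera/vision processing outside the CPU subsystem",               8.0),
    Domain("ml_accel",         "machine-learning sections, some general-purpose, some specific",  10.0),
    Domain("memory_subsystem", "high-bandwidth, low-latency memory and caches",                    4.0),
    Domain("safety_security",  "functional-safety and security subsystems",                        2.0),
]

# The power-management angle: the whole chip has to fit an air-cooled envelope.
print(sum(d.power_budget_w for d in automotive_soc), "W total (illustrative)")
```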

SE: Where does the network on chip (NoC) fit into all of this?

Janac: All data goes through the NoC of the chip, so there are opportunities for generating value from that. But growing complexity is increasing the number and sophistication of the interconnect portions of the chip. Before, you may have had a few networks on a chip. Now you may have 20 or 30.

SE: Is there a super-NoC that runs the other NoCs?

Janac: There is a main system NoC. But there also are high-level controllers that manage the traffic, including things like power controllers.

SE: Some companies make their own NoCs. How does a commercial NoC compare?

Janac: We have much faster learning than internal designs because we see so many more designs. We also have more funding to apply to some of these very gnarly technical problems. The interconnect is very complicated, and it’s becoming even more complicated. So you need the scale to keep investing in those solutions customers need.

SE: As part of this change, AI and machine learning are booming. What impact will that have on chip design?

Janac: The interconnect for machine learning is very different from the interconnect for a traditional SoC. Generally, the heterogeneous SoC has a tree structure where you have initiators and targets, and request and response networks. With machine learning you have more homogeneous structures. In some sense, those can be thought of as neurons and the interconnect between them. They also are quite regular structures. So you have a lot of peer-to-peer communication between the various neurons or processors, which requires a different kind of interconnect. There are mesh structures, ring structures and toroid structures, and it’s possible that additional architectures beyond the CNN will be developed that require yet another type of interconnect.
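
To illustrate the contrast, here is a minimal sketch, assuming nothing beyond the topologies named above: it builds adjacency lists for a ring and for a 2D mesh, with an option to wrap the edges into a torus. The regular, peer-to-peer structure is what distinguishes these from a tree of initiators and targets; node counts are arbitrary.

```python
# A minimal sketch (not Arteris code) of the regular, peer-to-peer
# topologies used for machine-learning interconnects.

def ring(n):
    """Each node talks to its two neighbors on a ring."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh(rows, cols, wrap=False):
    """2D mesh; with wrap=True the edges wrap around, giving a torus."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if wrap:
                    nbrs.append((nr % rows, nc % cols))
                elif 0 <= nr < rows and 0 <= nc < cols:
                    nbrs.append((nr, nc))
            adj[(r, c)] = nbrs
    return adj

if __name__ == "__main__":
    print(ring(4))                              # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(len(mesh(4, 4)))                      # 16 processing elements, each with 2-4 neighbors
    print(len(mesh(4, 4, wrap=True)[(0, 0)]))   # 4 neighbors everywhere on a torus
```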

SE: The main challenge there is moving the data in a variety of different ways very quickly, right?

Janac: Correct. Throughput is the key.

SE: One of the problems with these chips is that all of these processors need to be active at all times to be efficient. Is it possible to manage the data flow efficiently, given that these algorithms and workloads are changing so frequently?

Janac: Yes, but one of the big issues is that the meshes are very difficult to generate manually. The meshes and rings and toroids are not as regular as you would expect. You also have to be able to get out to DRAM, and there are all kinds of deadlock issues, which are much more important in peer-to-peer communications. So you have to have algorithms for dealing with that. You need editing capabilities to make exactly the mesh you need for your particular software.
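
One standard technique NoC generators use against the deadlock problem described above is dimension-ordered (XY) routing. The sketch below shows the idea on a 2D mesh; it is a representative example, not a description of Arteris’s actual algorithms, and the node placement in the usage example is hypothetical.

```python
# A minimal sketch of dimension-ordered (XY) routing on a 2D mesh, one
# standard way to avoid routing deadlock: packets always travel fully in
# X before turning into Y, so no cyclic channel dependencies can form.

def xy_route(src, dst):
    """Return the list of mesh hops from src to dst, X dimension first."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                  # move along X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                  # then along Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # Route from a corner node (0, 0) to node (2, 3) -- hypothetical placement.
    print(xy_route((0, 0), (2, 3)))
    # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```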

SE: Does this have to be dynamically configurable software?

Janac: Yes. And you need very powerful tools to give you the ability to make those changes. And as the data moves between the CNN layers, you need to be able to do broadcast and multicast to update the CNN weights concurrently. You also need very high-bandwidth interconnects. We’re shipping end-to-end interconnects that are 1,024 bits wide or wider and can run at 2GHz. Multiply 2GHz by 1,024 bits and you get roughly 2 terabits per second of bandwidth. HBM2 memory becomes important with all of this, because there’s also a multi-level caching hierarchy. Last-level cache becomes important, too.
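
A quick back-of-the-envelope check of that bandwidth figure, using only the width and clock quoted above:

```python
# 1,024-bit-wide interconnect clocked at 2 GHz.
width_bits = 1024
clock_hz = 2e9

bits_per_second = width_bits * clock_hz
print(bits_per_second / 1e12)     # ~2.05 Tb/s, i.e. roughly 2 terabits per second
print(bits_per_second / 8 / 1e9)  # ~256 GB/s per link
```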

SE: Can that be done without a 2.5D package?

Janac: Yes, it can all be on a planar chip. But as these things become huge, you may have to divide them up into chiplets, so the interconnect becomes 3D. So there are a lot of specific features you need to develop for machine learning. You need a plan so that all of this stuff works together—coherent and non-coherent interconnect, mesh interconnect, functional safety with power management. All of these permutations have to work together.


