Where it works, where it doesn’t, and where the choices get fuzzy.
Programmable logic in automotive applications is essential, given the parade of almost constant updates and shifts in direction, but exactly where the technology will be used has become a moving target.
This isn’t entirely surprising in the automotive industry. Carmakers are moving into electrification and increasing levels of automation in fits and starts, sometimes with dramatic swings in direction. What kind of processor or accelerator works best, and how much companies are willing to trust to AI or over-the-air updates can vary by the day, by company, and by whatever the current competitive climate demands.
“This field is in continuous motion,” said Megha Daga, Cadence’s director of product management for AI inference at the edge. “Every day you will hear about a new AI network that’s coming up and that it’s the most apt for a particular performance and accuracy profile. The layers are also adapting. There was a time when it was all about fully connected layers. Then there was a move to more convolution layers with minimization of the fully connected layer. As we go further, we see there is an adaptability toward having some time series-based layers coming in, as well, which are the recurrent neural networks. But at the bottom of all of this is some kind of a matrix multiply.”
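Daga’s point that everything bottoms out in a matrix multiply is easy to see in code. The NumPy sketch below is an illustration of our own, not anything from Cadence; it shows how a fully connected layer, a convolution rewritten via im2col, and a single recurrent time step all reduce to the same matmul primitive:

```python
import numpy as np

def fully_connected(x, W, b):
    # A dense layer is one matrix multiply plus a bias.
    return x @ W + b

def conv2d_as_matmul(image, kernels):
    # im2col: unroll every receptive field into a row, so the
    # convolution becomes a single large matrix multiply.
    n, kh, kw = kernels.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    patches = np.stack([image[i:i + kh, j:j + kw].ravel()
                        for i in range(h) for j in range(w)])
    return (patches @ kernels.reshape(n, -1).T).reshape(h, w, n)

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrent (time-series) step: again, matrix multiplies.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)
```

An accelerator that does one thing well, multiplying matrices and feeding them efficiently, can therefore serve all three layer types. The programmability lies in how the data is arranged around that fixed primitive.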
AI has received a lot of attention as having the potential to move the world closer to fully autonomous vehicles. And with the AI algorithms undergoing almost constant change, it seems obvious that some part of the autonomous system should be implemented in a programmable device. But exactly which part isn’t clear, because, as carmakers are discovering, AI isn’t always perfect.
“AI is very promising, and it has its own magic and secrets in regard to accuracy,” said Burkhard Huhnke, vice president of automotive at Synopsys. “When it’s just about embedded vision, we know that the recognition rate is better than 97%, which is great. But what about the 3%? How can you judge where you are if you have no additional information? That’s the reason why OEMs usually prefer more deterministic solutions instead of the AI solutions as the basis for the recognition algorithm. A combination of both is actually the best solution, because AI gives you a very fast hint that something is not going well compared to the previous data. That’s a health monitor that is installed on board, continuously comparing production data and current field data.”
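One way to read Huhnke’s “combination of both” is as a runtime cross-check: the deterministic algorithm and the AI model run side by side, and the system escalates whenever they disagree or the AI’s confidence drops, rather than trusting the AI output alone. A hypothetical sketch, with thresholds and function names of our own invention rather than anything from Synopsys:

```python
def classify_frame(frame, ai_model, deterministic_detector,
                   confidence_floor=0.97):
    # ai_model and deterministic_detector are assumed callables,
    # not any vendor's actual API.
    label, confidence = ai_model(frame)        # fast AI hint
    reference = deterministic_detector(frame)  # rule-based result

    if confidence < confidence_floor or label != reference:
        # The "3%" case: low confidence or disagreement. Escalate
        # (e.g., queue the scene for cloud review) instead of
        # acting on the AI output alone.
        return reference, "escalate"
    return label, "ok"
```

Callers would route the “escalate” path to the onboard health monitor or the cloud check Huhnke describes.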
AI, at least in its current state of development, also is something of a black box. “You cannot define the failure rate exactly for AI and the precision or accuracy,” Huhnke said. “What’s important is to recognize new scenes that you deal with in situations that are unknown. If that is not embedded onboard, and you cannot learn from your installed system, you need to upload the scene into the cloud and check if your computers have recognized that before. Using the power of the cloud is helpful to add knowledge into the current situation.”
Programmable logic has been used in automotive electronic control units for a couple of decades, particularly for engine control. Software bugs are identified and the product is stabilized, after which the real scale-up begins.
“After six months, the OEM would change that to fixed electronics and chip design on ASIC, which couldn’t be programmed anymore,” Huhnke said. “But the car gets older and older over time. So how do you adjust to the new opportunities with updated software? Compared to the smartphone, it is limited.”
David Fritz, senior autonomous vehicle SoC leader at Mentor, a Siemens Business, agrees. He said programmable logic is primarily used today for automotive prototyping, very early on in research vehicles. “This is when the engineering team is trying to understand machine learning and trying to understand how artificial intelligence inferencing works under the premise that we don’t really know what we need, so let’s have something that’s reprogrammable and use that to figure out where we’re going. That process has mostly been replaced by using GPUs, meaning that instead of programming an FPGA we can get some very high performance by just changing the software and running the algorithms on a GPU. While GPUs are very high performance, they’re also very high power consumption, and most OEMs and Tier Ones are getting to the point where they want a solution that doesn’t require a GPU. Almost everybody we’re talking to now is looking at using accelerators. It could be a network processing unit, or it could be something that’s custom.”
But cars also are becoming more like consumer devices in that they are subject to almost constant over-the-air updates to add new features and fix bugs. And with open source moving into automotive, vulnerabilities need to be patched and doors closed securely so that any online connection is not hackable, he said.
And this is where FPGAs begin to look attractive. “Autonomous vehicles rely a great deal on machine learning, and every new vehicle in every new situation may contribute to the shared knowledge base,” said Tobias Welp, product manager at OneSpin Solutions. “FPGAs offer flexibility for many applications because both the hardware and the software can be reprogrammed. Reprogramming FPGAs when knowledge or algorithms are enhanced has the potential to keep autonomous driving in a state of continuous improvement.”
But there are tradeoffs. Verification in this case becomes a continuous process. “Every time the design changes, the full verification suite (static, formal, and simulation) must be run,” Welp said. “Formal equivalence checking also must be run to ensure that the FPGAs have no implementation errors, security vulnerabilities, or lurking hardware Trojans. Finally, the reprogrammed FPGAs must be extensively validated on test vehicles before updates are sent to the field.”
Stuck at a crossroads
These kinds of decisions are on hold for autonomous driving at the moment. The biggest carmakers have pushed back the rollout schedule for autonomous vehicles until at least the next decade.
“When we started in this space and we were talking to automotive customers a year ago, everybody was going straight to Level 4 and Level 5 autonomous,” said Geoff Tate, CEO of Flex Logix. “They were all going to do their own custom chips. They were all looking to license IP for inference acceleration. That’s changed dramatically. I don’t know of anybody who’s looking to do an ASIC in the automotive space right now. Everybody that was telling us they’re going to do their own chips has changed to buying off-the-shelf chips, and almost all the major car companies are focused more on driver assist.”
If there is going to be inference in the car today, it might be used more for something like an assist for the automatic braking system, Tate suggested. “If you see a pedestrian that’s coming in the path of the car, it puts on the brakes, which is different from autonomous driving. It’s aiding the driver, and that makes for a simpler solution. Autonomous vehicles are going to happen, but the initial enthusiasm has waned and full autonomous is a long way off, with a lot of effort shifted to doing something more practical. Additionally, the car companies figured they could just license some IP and slap a chip together, and it would work just fine. They’re finding that inference is a very challenging thing just by itself, and why should any car company expect to make a better inference chip than people who are focused and specialized on it? The AI part is really just a subset. You have to be able to detect and recognize objects, and that’s a complex task in itself. What do you do once you’ve detected an object? That’s maybe a more complex task. There’s a whole decision process that has to take place when you’re tracking this object. Suppose you’re just driving down the street and you see somebody on the sidewalk coming up to a crosswalk. At what point do you decide that you need to stop?”
Building in increasing numbers of driver-assist features moves toward autonomous vehicles, just at a more reasonable pace. “You’re getting everybody comfortable with the technology and comfortable with the feasibility of the technology,” he said. “Looking at Tesla, for example, when you put in enough features, such as lane departure where you can sense whether you’re in the lane or not, whether you’re leaving the lane, and you can detect objects, you can start becoming autonomous like Tesla. But you can’t let people actually sit in the backseat like some people have done. It’s not fully baked yet.”
Automotive AI accelerators
Technically, certain operations are constant, and what matters is how they are manipulated. Data manipulation is a very important aspect of automotive applications, and because the building blocks already exist, a full programmable logic device probably is not needed.
“How you arrange the data for the particular layer in the most efficient form is the programmability aspect of this,” said Cadence’s Daga. “That’s the way you can take care of these very prominent layers, just by single or a couple of formations of these building blocks. That’s also where, in the last 12 to 18 months, there’s a big trend in accelerators. Engineering teams are going toward making accelerators that are certified from the security and safety perspective, and they understand there are certain core functions that can be made in a non-programmable way and put on the side. There is also the bigger picture of programmability. There certainly must be some intelligence sitting next to that hardware logic, which is programmable. For certain functions where the performance-per-watt and constant absolute power are critical, you go with an embedded platform because that’s where you get the optimal power utilization, power efficiency, and energy use.”
Companies such as Toshiba, which is a supplier in the automotive ecosystem, have publicly stated they are using IPs that fall under this umbrella to address vision applications.
Programmable logic’s role in automotive
So where exactly does programmability fit into the automotive picture? The answer isn’t clear at the moment.
“Re-programmable means higher cost,” said Synopsys’ Huhnke. “It also means a higher risk of vulnerabilities. If you look at all the security aspects, I’m not sure if that’s the smartest thing or the most secure way to get hardware into your car, especially when we are talking about an autopilot taking over. I would be scared to add an additional variable, reprogrammable hardware, into this equation. At least for now, it’s so complicated that I would prefer to have the hardware platform fixed for a couple of years and then add my software updates on top of that. If I have flexibility on both sides, it adds a lot of additional complexity. Even within the organizations, working with this complexity, it’s really hard.”
Not everyone agrees. Flex Logix’s Tate maintains that FPGAs are going into cars. “Along with the Nvidia GPUs, Xilinx and Intel/Altera chips, the car companies have to pick some programmable solution,” he said. “All the driving algorithms are changing so rapidly you can’t use a hard-wired solution. Whether you’re using Mobileye or Nvidia or Xilinx, the focus is on having something programmable, because by the time the car actually gets into production, you’ve probably come up with much better algorithms.”
It’s possible OEMs may have to freeze the hardware, Tate said, but the software can continue to evolve. “There can always be updates, like Tesla updates their software while you have the car. That will probably be an increasing trend. For now, the market for a car is going to be programmable off-the-shelf chips. Nvidia uses GPU technology because that’s what they’re good at. NXP uses microprocessor technology because that’s what they’re good at. Xilinx uses FPGA technology because that’s what they’re good at. And they’re all fighting to get designed into cars. If an ASIC market develops, it will probably be down the road when designs become more stable, when the radical innovation stuff has passed and the algorithms are well understood. Then it’s an easier target for a car company to build their own chips, and for the volumes to get higher, as well.”
On the other hand, Kurt Shuler, vice president of marketing at Arteris IP, has seen that development is quickly moving in the direction of optimization with a lot of custom ASIC activity.
“Some companies are getting beyond the bounds of what can be done even in single die, looking at multidie solutions, but everything’s around optimization for power, bandwidth, latency, and functional safety,” Shuler said. “When you go to FPGA, the biggest issue is probably on the power side. Compared to a similar set of logic in ASIC versus doing an FPGA, you’ve got to basically turn on and off more transistors. That’s the underlying technical issue. We do see people doing things in FPGA, validating architectures, but not as much on the big processing that most of our customers do. They’re taking the sensor stuff and doing a whole bunch of object detection and classification, and then converting it into an intermediate format, data format, XML or otherwise. They’re sending it to a sensor fusion brain that’s taking all of this in. Those sensor fusion brains, as well as the sensors themselves, all have their own little brains, and there aren’t a ton of these big chips that they’re trying to optimize the heck out of. FPGAs are good for trying stuff out.”
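The pipeline Shuler describes, per-sensor detection and classification feeding an intermediate data format into a central fusion brain, can be sketched in a few lines. This is a hypothetical illustration; the message fields and function names are our own, and JSON stands in for his “XML or otherwise”:

```python
import json

def to_intermediate(sensor_id, detections):
    # Each sensor's "little brain" emits its detections in a shared
    # intermediate format (JSON here; the fields are hypothetical).
    return json.dumps({"sensor": sensor_id, "objects": detections})

def fuse(messages):
    # The sensor fusion brain merges per-sensor object lists into one
    # world model; a real system would also de-duplicate and track.
    fused = []
    for msg in messages:
        fused.extend(json.loads(msg)["objects"])
    return fused

camera = to_intermediate("front_cam", [
    {"cls": "pedestrian", "conf": 0.98, "pos": [12.4, -1.1]}])
radar = to_intermediate("front_radar", [
    {"cls": "pedestrian", "conf": 0.91, "pos": [12.6, -1.0]}])
print(fuse([camera, radar]))
```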
In the data center, it’s a different story, he said. “There are a lot of AI processing companion chips that have a big FPGA component to them like Microsoft talked about at Hot Chips. Even then, those things eventually become ASICs as they narrow down what they’re trying to do. When you’re in the exploratory stages, FPGAs make sense. But even in the data center, what we’re seeing now is they really care about power. What they do is, for instance, you have a whole bunch of processing engines and data flow engines within this hardware chip. Thousands, maybe millions. And there’s different permutations and combinations, different types. And some of this is programmable, some of this stuff is almost stochastic — you flip some control and status registers to set things up how you want it. You pipe the data in and the data comes out, and the hardware manages as much of that data flow as possible because every time you deal with software managing the data flow, you slow things down.”
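The register-driven setup Shuler describes can be modeled in miniature: software touches the engine only to flip control and status registers, and data then streams through without software in the loop. A toy sketch, with register names and semantics that are purely our own assumptions:

```python
class DataflowEngine:
    # Toy model of a register-configured dataflow block. The CSR
    # addresses and fields are invented for illustration.
    CSR_ENABLE, CSR_SCALE, CSR_STATUS = 0x00, 0x04, 0x08

    def __init__(self):
        self.csr = {self.CSR_ENABLE: 0, self.CSR_SCALE: 1,
                    self.CSR_STATUS: 0}

    def write_csr(self, addr, value):
        # Software's only involvement: set up the registers.
        self.csr[addr] = value

    def stream(self, samples):
        # The data path runs with no per-sample software control;
        # the hardware manages the flow end to end.
        if not self.csr[self.CSR_ENABLE]:
            raise RuntimeError("engine not enabled")
        out = [s * self.csr[self.CSR_SCALE] for s in samples]
        self.csr[self.CSR_STATUS] = 1  # done flag
        return out

engine = DataflowEngine()
engine.write_csr(DataflowEngine.CSR_ENABLE, 1)
engine.write_csr(DataflowEngine.CSR_SCALE, 3)
print(engine.stream([1, 2, 4]))  # [3, 6, 12]
```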
When it comes to cars, decisions tend to be much more conservative. “When they’re putting in those AI algorithms, there are multiple running at one time in any of these things, and they verify the heck out of them,” Shuler said. “That’s why there are standards like SOTIF and UL 4600. Let’s say you have a system that has a software build running on the hardware, and you are validating that to make sure that car will work. Yes, there may be over-the-air updates. But before those go out into the field, somebody has to test and validate them at the system level, not just at the software test level and hardware verification/re-verification level. There are not yet industry-standard best practices for how to do this, and there isn’t a checklist where you can say you passed. Each company or each player in the value chain has to do their own thing. But again, that’s a software update.”
Conclusion
At the end of the day, for autonomous functions, programmable logic is not power-efficient, cost-efficient, area-efficient, or thermal-efficient, said Mentor’s Fritz. “All of those are reasons not to use them in high-volume production.”
But one area where programmable logic really begins to make sense is in neural networks, he said. “They have sort of a fixed structure, but what can change with additional training of the network is the weights in each of the nodes. So the question is, ‘If I want to do an over-the-air update of my neural network weights, how would I do that?’ One option is flash, but flash has problems in that environment. Using an FPGA to essentially store the weights of the inferencing engine that result from training makes some sense. You can start to see this embedded FPGA being implemented in a lot of neural network computer chips. That way they have some flexibility with additional training down the road, but they don’t have to pay the cost of trying to do all of the calculations in the FPGA, which is not the most efficient way to do it.”
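The pattern Fritz outlines, a frozen network structure with field-updatable weights, maps to a simple discipline in software terms: an over-the-air update may swap the weight payload, but only after its integrity and its shapes against the fixed topology have been checked. A minimal sketch, assuming a hypothetical update format and checksum scheme of our own:

```python
import hashlib
import numpy as np

class FixedTopologyNet:
    # Inference engine whose structure is frozen; only the weights
    # (the part Fritz suggests storing in embedded FPGA) can change.
    def __init__(self, weights):
        self.weights = weights  # list of (W, b) arrays, one per layer

    def apply_ota_update(self, new_weights, expected_sha256):
        # Reject tampered or truncated payloads before touching anything.
        blob = b"".join(W.tobytes() + b.tobytes() for W, b in new_weights)
        if hashlib.sha256(blob).hexdigest() != expected_sha256:
            raise ValueError("checksum mismatch: update rejected")
        # The topology is fixed, so layer count and shapes must match.
        if len(new_weights) != len(self.weights):
            raise ValueError("topology mismatch: update rejected")
        for (W0, b0), (W1, b1) in zip(self.weights, new_weights):
            if W0.shape != W1.shape or b0.shape != b1.shape:
                raise ValueError("topology mismatch: update rejected")
        self.weights = new_weights  # in silicon: reload the eFPGA weight store

    def infer(self, x):
        for W, b in self.weights:
            x = np.maximum(x @ W + b, 0.0)  # fixed ReLU layers
        return x
```

The shape check is what keeps the update a pure weight swap: anything that would change the topology is rejected, matching the constraint that the structure in silicon stays fixed while the embedded FPGA holds only the weights.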