FPGAs Drive Deeper Into Cars

Automotive OEMs are leveraging programmability to keep pace with evolving algorithms and safety standards, and to add market-differentiating features.


FPGAs are reaching deeper and wider into automobiles, playing an increasingly important role across more systems as the electronic content of vehicles continues to grow.

The role of FPGAs in automotive cameras and sensors is already well established. But they also are winning sockets in a raft of new technologies, ranging from the AI systems that will become the central logic in autonomous vehicles to new types of sensing and communications technologies.

“There are lots of concepts around feet-off to hands-off to eyes-off to brain-off type of driving assistance applications,” observed Stuart Clubb, senior product marketing manager for Catapult HLS synthesis and verification at Mentor, a Siemens Business. “There have been various articles talking about how, first of all, it’s too darned expensive. You can’t put a $12,000 liquid-cooled Nvidia GPU-based box in a $20,000 car. Ford isn’t going to be able to write enough checks to do that.”

Automotive is a relatively low-margin, high-volume business. While volumes certainly don’t compare to those of smartphones, which have sustained Moore’s Law for the past decade, automakers have been diligent about squeezing costs out of their supply chains for decades. And as more electronics are added into vehicles, that price pressure has extended to chips and electronic subsystems, as well.

But the automotive world adds some major hurdles for chipmakers. In addition to trimming costs wherever possible, they also have to comply with rigorous standards such as ISO 26262 and its ASIL A, B, C and D safety levels, and satisfy requirements for resilience, aging and reliability over lifetimes of a decade or more. And this is where the problems really begin, because the technology and the standards are in an almost constant state of evolution. It’s also why automotive companies have come to rely on FPGAs as a processing architecture of choice.

“It’s not just, ‘We ran it for 30 minutes, it looks good, ship it.’ It’s a very different side of things,” said Clubb. “If we look at what’s happening in AI, people are talking about convolutional neural networks (CNNs) being the big thing right now in machine learning. There is the traditional ADAS, which is pedestrian detection, radar processing, and such, but CNNs are a huge area of experimentation because nobody really understands how they work. There is no mathematical proof as to why they work or how they work. They just do. It includes convolution, pooling, and training the network. You train a network for one thing and it looks like it’s good, then you throw it a couple of things, and it doesn’t work. Everybody at one time thought that the solution was going to be lots and lots of floating point, which may be why Intel went with all the floating-point units on its Stratix 10 device, because this was going to be the machine learning [platform]. It was either going to be inferencing or training, and this was going to be fantastic.”

That was before GPUs won the algorithm training market. GPUs have proven to be an inexpensive architecture for training because they are easily parallelized and familiar to most algorithm developers. That makes them ideal for data centers, which is where the training algorithms are developed. But it’s not the best architecture for inferencing, where power, performance and area are much more critical than for training.

The challenge now is quantization, said Clubb. “What kind of network? How do I build that network? What’s the memory architecture? You start off with networks where, even if you just have a few layers and you’ve got a lot of data going in and a few coefficients, it very quickly spins around to millions of coefficients. The memory bandwidth there is becoming quite frightening, and nobody really knows what the right architecture is.”
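To put rough numbers on that, the back-of-the-envelope sketch below shows how only a few convolutional layers already add up to more than a million coefficients and a steady stream of weight traffic. The layer sizes, 16-bit quantization and 30 frame-per-second camera are illustrative assumptions, not figures from Clubb:

    #include <stdio.h>

    int main(void) {
        /* Each row: input channels, output channels, kernel height, kernel width.
         * Layer sizes are hypothetical, chosen only to show how fast the count grows. */
        const long layers[][4] = {
            {  3,  64, 3, 3},
            { 64, 128, 3, 3},
            {128, 256, 3, 3},
            {256, 512, 3, 3},
        };
        const int num_layers = (int)(sizeof(layers) / sizeof(layers[0]));

        long total_coeffs = 0;
        for (int i = 0; i < num_layers; i++) {
            long c = layers[i][0] * layers[i][1] * layers[i][2] * layers[i][3];
            printf("layer %d: %ld coefficients\n", i, c);
            total_coeffs += c;
        }

        /* Assume 16-bit (2-byte) quantized weights and a 30 frame/s camera.
         * If every weight had to be re-fetched from external memory each frame,
         * the weight traffic alone would be: */
        double mb_per_s = total_coeffs * 2.0 * 30.0 / 1e6;
        printf("total: %ld coefficients, ~%.0f MB/s of weight traffic at 30 fps\n",
               total_coeffs, mb_per_s);
        return 0;
    }

With these assumed sizes the total comes to roughly 1.5 million coefficients, and that is before the activation data moving between layers is counted, which is typically the larger share of the bandwidth.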

These issues are resonating loudly with users. Tool providers across the EDA space report strong demand and attendance at seminars and events on topics related to AI, machine learning and deep learning. And when the answers aren’t obvious, designing a custom ASIC is too expensive a bet.

“The only thing you might do is buy a CPU with a whole bunch of accelerators,” he said. “But nobody has really figured out the right answer. Ford and GM have said they want the entire self-driving subsystem to be 100 watts or less, and right now the demonstrators are the equivalent of driving around with 100 laptops running in the back of your trunk. So there’s a long way to go, and the solution isn’t going to be a whole bunch of GPUs. Somebody will crack it, either with a generic solution or with very specific bespoke things that have some updatability. This is why we’re actually starting to see a resurgence on the embedded FPGA side of things.”


Fig. 1: Intel’s FPGA and acceleration stack. Source: Intel.

Growing role for eFPGAs
The problem with discrete FPGAs is that automotive companies can’t get data into and out of those chips fast enough. “FPGAs have a lot of SerDes on them to communicate, and they’re very high performance, but when you look at how much data you can transfer on chip on a 128-bit bus, a SerDes isn’t really that fast,” said Geoff Tate, CEO of Flex Logix. “So anytime you go into any chip, and out of any chip, this is generally a bottleneck. For the FPGA to be useful, it often has to be talking to something other than an FPGA. This is why Xilinx and Altera developed their SoC chips. It’s a step towards mitigating that. But the Zynq-type chips are quite big and expensive. So there’s a class of customers that would like more cost-effective solutions with FPGAs, but not necessarily millions of LUTs or hundreds of thousands of LUTs.”
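A rough comparison illustrates the mismatch Tate describes. In the sketch below, the 500 MHz bus clock and 10 Gb/s lane rate are assumptions chosen only to show the order of magnitude, not numbers from Flex Logix:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical on-chip bus: 128 bits wide, clocked at 500 MHz. */
        double bus_gbps = 128.0 * 500e6 / 1e9;   /* 64 Gb/s of raw bandwidth */

        /* Hypothetical single SerDes lane at 10 Gb/s, before line encoding
         * and protocol overhead shave off a few percent more. */
        double serdes_gbps = 10.0;

        printf("on-chip 128-bit bus: %.0f Gb/s\n", bus_gbps);
        printf("one SerDes lane:     %.0f Gb/s (%.1fx less)\n",
               serdes_gbps, bus_gbps / serdes_gbps);
        return 0;
    }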

Based on market observations, Tate believes there are a lot of SoC and microcontroller companies that would like to integrate FPGAs. “They see there’s value. We publish app notes showing how accelerators based on reconfigurable FPGA fabric can be faster than an Arm processor. But the challenge currently is that most microcontroller companies are used to programming in C. They generally do not know how to program in Verilog. Another challenge is that if you look at an FPGA, the programming model for those things is generally that one customer writes all the code. Today, with time-sharing and multicore computers, there are lots of programs running simultaneously under an operating system. We’re being asked how to move to a modular or multicore FPGA architecture, where there can be a library of applications that can run on multiple different SoCs and microcontrollers without people having to learn RTL. How do you make FPGAs look more like processors, where they can run chunks of code and several different chunks of code from different people simultaneously and get around the current model where it’s one big chunk of RTL, written by one person, at one time? That would make the embedded FPGA value proposition much more accessible to people who are not RTL experts.”
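As one flavor of what C-based FPGA acceleration can look like, the hypothetical kernel below is the sort of fixed-point multiply-accumulate loop an HLS flow could map onto eFPGA fabric. The function name, tap count and data widths are invented for illustration, and a real design would add tool-specific pragmas for pipelining and unrolling:

    #include <stdint.h>
    #include <stdio.h>

    #define TAPS 16

    /* Fixed-point multiply-accumulate over a window of samples. The inner
     * loop is what an FPGA implementation unrolls into parallel multipliers,
     * which is why such kernels can outrun a sequential Arm core. */
    static int32_t fir_step(const int16_t coeff[TAPS], const int16_t window[TAPS]) {
        int32_t acc = 0;
        for (int i = 0; i < TAPS; i++) {
            acc += (int32_t)coeff[i] * (int32_t)window[i];
        }
        return acc;
    }

    int main(void) {
        int16_t coeff[TAPS], window[TAPS];
        for (int i = 0; i < TAPS; i++) {
            coeff[i]  = (int16_t)(i + 1);   /* placeholder filter coefficients */
            window[i] = (int16_t)(100 - i); /* placeholder input samples       */
        }
        printf("accumulated result: %d\n", fir_step(coeff, window));
        return 0;
    }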

Discrete FPGAs have long been used in the auto segment, starting with the instrument console and entertainment features (collectively called infotainment), and moving into driver assistance. “Altera was winning a huge number of sockets in sensor fusion for LiDAR, sonar and radar, and FPGAs are perfect for that,” said Ty Garibay, CTO at Arteris IP. “You can bring the data on each interface in a different format, merge it, and spit it out on the other end. So FPGAs are almost universally used in every high-end car for things like 360-degree view and warning when you get too close to parking lines. There is no dedicated SoC that is able to do that. These run at almost 30 frames per second. Altera and Xilinx had almost all of the rear-view camera market.”
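Conceptually, the fusion front end Garibay describes does something like the sketch below, which normalizes samples arriving in different native formats into one common record before downstream processing. The struct layouts and scale factors are hypothetical, not taken from any real sensor interface:

    #include <stdint.h>
    #include <stdio.h>

    /* Native formats as they might arrive from two different interfaces. */
    typedef struct { uint16_t range_cm; int16_t doppler_raw; } radar_sample_t;
    typedef struct { uint32_t range_mm; uint8_t intensity;   } lidar_sample_t;

    /* Common record handed to downstream processing. */
    typedef struct { float range_m; float velocity_mps; } fused_point_t;

    static fused_point_t from_radar(radar_sample_t s) {
        fused_point_t p = { s.range_cm / 100.0f, s.doppler_raw * 0.05f };
        return p;
    }

    static fused_point_t from_lidar(lidar_sample_t s) {
        fused_point_t p = { s.range_mm / 1000.0f, 0.0f };  /* LiDAR: no velocity */
        return p;
    }

    int main(void) {
        radar_sample_t r = { 523, -40 };   /* 5.23 m, closing                  */
        lidar_sample_t l = { 5210, 180 };  /* 5.21 m, strong return            */
        fused_point_t a = from_radar(r);
        fused_point_t b = from_lidar(l);
        printf("radar: %.2f m, %.2f m/s\n", a.range_m, a.velocity_mps);
        printf("lidar: %.2f m, %.2f m/s\n", b.range_m, b.velocity_mps);
        return 0;
    }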

Garibay noted that FPGAs also are cost-efficient for automakers because even as the sensor technology evolves, the entire FPGA doesn’t need to be requalified.

But discrete FPGAs aren’t ideal for everything. “As long as driver assistance meant providing guidance or warnings to the driver, traditional FPGAs were sufficient,” said Kent Orthner, systems architect at Achronix. “The term ‘driver assistance’ is changing to mean operating the vehicle on the driver’s behalf, by applying brakes or throttle for adaptive cruise, automating lane changes and self-parallel-parking. With the FPGA being responsible for actual operation of the automobile, functional safety requirements become much more stringent and difficult to meet.”

A design team that has experience developing ASICs or SoCs that meet automotive functional safety requirements can implement the device, treating the eFPGA as just one component of the entire solution, he said. “That team can then apply their functional safety expertise in the design, verification, documentation and characterization of the automotive eFPGA SoC, leading to a device that can meet safety requirements much more easily than a traditional standalone FPGA. Furthermore, eFPGAs offer much more opportunity to customize the core of the FPGA to the application at hand. For automotive, this means that there can be specialized hardened circuitry to take care of resilience requirements and redundancy.”
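One common building block for that kind of hardened redundancy is a 2-out-of-3 majority voter. The reference model below is an illustrative sketch, not Achronix’s implementation; in a real device the equivalent logic would live in hardened circuitry or fabric with a safety monitor attached:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t value;     /* majority-voted result                  */
        bool     mismatch;  /* true if any of the three copies differ */
    } vote_result_t;

    /* 2-out-of-3 majority vote across three redundant copies of a result.
     * A mismatch is flagged for the safety monitor even when a majority exists. */
    static vote_result_t vote_2oo3(uint32_t a, uint32_t b, uint32_t c) {
        vote_result_t r;
        r.value    = (a == b || a == c) ? a : b;  /* b wins when b == c, or all disagree */
        r.mismatch = !(a == b && b == c);
        return r;
    }

    int main(void) {
        vote_result_t r = vote_2oo3(0x1234u, 0x1234u, 0xDEADu); /* one copy corrupted */
        printf("voted value: 0x%X, mismatch: %d\n", r.value, r.mismatch);
        return 0;
    }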

The other kind of power
The automotive space is no stranger to power-related issues. Power budgets can affect miles per gallon or range per charge, particularly as cars are increasingly electrified. Automakers still have to meet all of the government mandates for mileage while powering all of this new electronic technology.

“There are three cooling systems in a Tesla,” said Mentor’s Clubb. “There’s the regular cooling system, HVAC and whatnot. There’s a cooling system for the battery. And then there’s a cooling system for that big GPU-based display. It’s fine in a $100,000 car, but I don’t see that working well in a Honda Civic. By the same token, I’m not going to be putting a $5,000 FPGA in it, either. It used to be a rule of thumb that if the product you’re selling to the end customer cost less than $1,000, you were not putting an FPGA in it. Still, there is a resurgence of engineering teams saying, ‘I have this really complex algorithm to accelerate. I can’t do it in software. I certainly can’t do it on an Arm processor, a GPU is out of the question, so I need some custom hardware. I’m really not sure I have the right answer yet, so I need a halfway programmable solution.’ This is where these SoCs come into their own. The danger with the FPGA SoCs is that it’s very easy to get locked into the provided IP.”


Fig. 2: Tesla’s ribbon-shaped cooling tube. Source: Teslarati.com

That IP doesn’t necessarily play well anywhere else.

“Suddenly the barrier to leaving your locked-in FPGA platform becomes quite considerable, even though you may have your secret sauce in RTL that you’ve written for the FPGA or even HLS that you’ve done with the FPGA tools, which again is locking you in,” he said. “You’re not going anywhere with that design methodology. This is okay if you’re doing a proof of concept and you hope to get bought by Amazon or Facebook, but if you intend to produce a real product, the effort to get out of that environment is considerable because you’ve actually got to go and design the IP or go buy it. This suddenly makes a $5,000 FPGA not seem so expensive because you aren’t paying for that IP; it’s a question of how you’re paying for it. People make a great differentiation between production budgets and development budgets.”

The automotive market drives very specific requirements, so the good news/bad news with automotive is that while they have all these stringent requirements going in, they also want the chips to be working for at least 10 years, said Piyush Sancheti, senior director of marketing at Synopsys. “Until we all start using disposable cars, we’re going to at least expect our cars to work for 10 years.”

This dynamic itself brings new challenges to semiconductor companies that have been focused on consumer products.

For others, flexibility comes with a different approach, according to Avinash Ghirnikar, director of technical marketing for the Connectivity Business Group at Marvell. The company has been active in automotive since 2005, and it addresses changing automotive requirements by purpose-building devices for the market, such as its 88Q9098 wireless SoC. “This device is not something that we designed for the mobile phone and just slapped into automotive. As we have talked to Tier Ones and OEMs, a lot of them have expressed the desire to make customizable solutions, so we are providing a firmware SDK, which will allow them to customize their WiFi solution. What that means is that if GM wants a solution for a Cadillac and a Chevy Cruze, they can create some customization for the Cadillac, and it could be a different customization for the Chevy Cruze. Because their environments are different, the use cases might be slightly different.”

At the end of the day, flexibility from design approaches, as well as from technologies like FPGAs, gives automotive OEMs more choices than ever to tailor and customize vehicles, and to adapt as requirements mature.


