Decentralized in-car network architectures lack the ability to react in real time to the vast amounts of data that machine learning requires.
By some estimates, there are now more than 260 startups and established companies around the world scrambling to develop, qualify and bring to market chips and technologies for new ADAS (advanced driver-assistance systems) and autonomous driving applications.
Accordingly, venture capitalists, technology companies, carmakers, Tier 1 automotive suppliers and others are sharply ratcheting up their investments in this area. Venture capital investments alone in automotive and other AI-based applications grew to some $1.6 billion last year, up from $1.3 billion in 2016 and $820 million in 2015, according to research firm CB Insights.
What’s more, this activity is taking place globally. Among notable recent announcements was the news that Shenzhen, China-based self-driving start-up Roadstar.ai raised $128 million in Series A venture funding. It’s reportedly the single largest investment to date in a Chinese autonomous driving company, eclipsing the $112 million in funding announced earlier this year by another self-driving start-up, Guangzhou-based Pony.ai.
Why is interest in this space growing so strongly? On a consumer level, many drivers appreciate ADAS features such as collision avoidance, blind-spot warnings and adaptive cruise control, and because carmakers want to satisfy those customers, they are working to make these systems more sophisticated and increasingly available in cars at all price points.
On a societal level, driver-assistance/self-driving features have much more to offer. For example, there are about 40,000 deaths from motor vehicle accidents annually in the U.S. and over a million worldwide, with an additional 20-50 million people injured or disabled. Vehicles with greater autonomous capabilities have the potential to significantly reduce these numbers.
They also open up entirely new business opportunities, such as self-driving taxis.
A brain on wheels
The standards-setting organization SAE International has established a six-level classification system, Levels 0 through 5, to describe the degree of driving automation in cars, ranging from Level 0 (no automation; the system may issue warnings, but the driver does all the driving) to Level 5 (full automation, with no human intervention required).
As the industry moves toward Level 5, sensors such as cameras, lidar and radar will generate torrents of data that must be processed, integrated and transmitted in real time so that sophisticated machine learning algorithms, built on deep neural networks, can use it to recognize objects in the environment, predict their actions, communicate with other vehicles and make vehicle-control decisions.
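To make that data flow concrete, here is a minimal sketch in Python of the kind of perception loop such a system runs on every cycle. The sensor shapes and rates, and the `detect_objects` and `fuse` stubs, are purely illustrative assumptions for this example, not any particular vendor's pipeline.

```python
import numpy as np

# Hypothetical sensor frame shapes (illustrative assumptions, not real specs).
CAMERA_SHAPE = (1080, 1920, 3)   # one HD camera frame, 8-bit RGB
LIDAR_POINTS = 100_000           # points per lidar sweep (x, y, z, intensity)
RADAR_TRACKS = 64                # radar returns (range, azimuth, velocity)

def read_sensors():
    """Stand-in for real sensor drivers: returns one synchronized snapshot."""
    return {
        "camera": np.random.randint(0, 256, CAMERA_SHAPE, dtype=np.uint8),
        "lidar": np.random.rand(LIDAR_POINTS, 4).astype(np.float32),
        "radar": np.random.rand(RADAR_TRACKS, 3).astype(np.float32),
    }

def detect_objects(frame):
    """Stub for a deep-neural-net detector; a real system would run a
    trained model here, and spend most of its compute budget doing so."""
    return [{"class": "vehicle", "bbox": (100, 200, 150, 250), "score": 0.97}]

def fuse(detections, lidar, radar):
    """Stub for sensor fusion: associate camera detections with
    lidar/radar measurements to estimate distance and velocity."""
    return [{**d, "distance_m": 42.0, "velocity_mps": -3.1} for d in detections]

# The perception loop: every cycle (e.g., 30 times per second) the stack
# must finish sensing, detection and fusion in real time, leaving time
# for prediction and vehicle-control decisions.
for _ in range(3):
    sensors = read_sensors()
    detections = detect_objects(sensors["camera"])
    tracked = fuse(detections, sensors["lidar"], sensors["radar"])
    print(tracked)
```

In a real vehicle, every stage of this loop has a hard deadline; missing one means the planner acts on stale data.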
[Figure sources: NHTSA, GROM Audio, various industry and commercial sources, and GF internal assessments]
Some argue that this can best be accomplished with a decentralized in-car network architecture: it would be an evolution of existing ADAS systems and therefore would have the least impact on the design of automotive computing systems, it would accommodate the use of specialized processors, and it would allow new features to be added in a stepwise fashion.
The problem with this approach, according to Mark Granger, GlobalFoundries’ Vice President of Automotive, is that while local processors and limited network bandwidth may be adequate for Level 2 (partial, or “hands-off”) or perhaps Level 3 (conditional, or “eyes-off”) operation, they cannot handle in real time the vast amount of data that AI-based machine learning algorithms need to enable truly autonomous operation.
“Decentralized architectures might provide up to 5 TOPS (trillion operations per second) and about 10 Mbit/s of in-car bandwidth,” he said. “But to operate at Levels 3-5, a centralized network architecture is needed, with powerful, efficient processors that provide 50-100 TOPS and in-car data rates of 100 Gbit/s. To put that in perspective, in the year 2000 the world’s most powerful supercomputer could do only about 1 TOPS. So autonomous vehicles really will have to be brains on wheels, and a centralized architecture is the best way to achieve that.”
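A rough back-of-the-envelope estimate, sketched below in Python, shows where numbers like these come from. The sensor counts and data rates are illustrative assumptions for a hypothetical Level 4/5 sensor suite, not figures from GF or from Granger.

```python
# Back-of-envelope in-car bandwidth estimate. All sensor counts and
# rates below are illustrative assumptions, not measured figures.

CAMERAS = 8                                        # uncompressed 1080p RGB cameras
CAM_BITS_PER_S = 1920 * 1080 * 3 * 8 * 30          # ~1.5 Gbit/s each at 30 fps

LIDARS = 2
LIDAR_BITS_PER_S = 1_200_000 * 4 * 32              # 1.2M pts/s, 4 floats per point

RADARS = 5
RADAR_BITS_PER_S = 1_000_000                       # ~1 Mbit/s each

total = (CAMERAS * CAM_BITS_PER_S
         + LIDARS * LIDAR_BITS_PER_S
         + RADARS * RADAR_BITS_PER_S)

print(f"raw sensor traffic: {total / 1e9:.1f} Gbit/s")           # ~12.3 Gbit/s
print(f"vs. 10 Mbit/s decentralized: {total / 10e6:,.0f}x over")  # ~1,226x
print(f"headroom under 100 Gbit/s:   {100e9 / total:.0f}x")       # ~8x
```

Even this modest suite generates raw traffic three orders of magnitude beyond a 10 Mbit/s decentralized network, while still fitting comfortably under a 100 Gbit/s centralized backbone.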
Until now, the semiconductor technologies at the heart of ADAS/autonomous system development have been graphics processors (GPUs) and microprocessors (CPUs). But as developers move toward Level 5 automation, the proliferation of these chips in automotive systems becomes increasingly problematic: while they are powerful, they are also power-hungry.
“Self-driving cars are in their infancy, and unless something is done to reduce the power consumption of the processors in their AI-based systems, maybe they’ll never grow up,” said Granger. “The chips powering today’s versions of self-driving cars essentially require racks of server-class chips that draw maybe 7,000-10,000 watts of power. While that’s OK for development and testing purposes, it’s impractical for commercial products. Plus, you also have to consider the challenges and expense of cooling them, and they are relatively large physically. Everyone has a goal to get the power budget as low as possible for a given function.”
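Some simple arithmetic shows why a server rack’s power draw is a non-starter in a production electric vehicle. The battery capacity, average driving power and ASIC power target below are hypothetical assumptions chosen only for illustration:

```python
# Rough power-budget arithmetic. Battery size, drive power and compute
# figures are illustrative assumptions, not measured or vendor data.

battery_kwh = 75.0       # typical mid-size EV pack (assumption)
drive_kw = 15.0          # average traction power in mixed driving (assumption)
compute_kw_rack = 10.0   # server-rack prototype, upper end of the quoted range
compute_kw_asic = 0.1    # hypothetical 100 W target for an automotive ASIC

def drive_hours(compute_kw):
    """Hours of driving per charge when the AI system draws compute_kw."""
    return battery_kwh / (drive_kw + compute_kw)

base = battery_kwh / drive_kw
print(f"no AI compute:      {base:.1f} h per charge")
print(f"10 kW server rack:  {drive_hours(compute_kw_rack):.1f} h "
      f"({1 - drive_hours(compute_kw_rack) / base:.0%} range loss)")
print(f"100 W ASIC target:  {drive_hours(compute_kw_asic):.1f} h "
      f"({1 - drive_hours(compute_kw_asic) / base:.0%} range loss)")
```

Under these assumptions, the rack consumes roughly 40% of the vehicle’s driving time per charge, before even counting the cooling Granger mentions, while a 100 W-class budget is barely noticeable.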
ASICs (application-specific integrated circuits) designed specifically to meet the needs of automotive systems can be both powerful and extremely energy-efficient, and they allow an automotive customer to differentiate itself from the rest of the pack. They provide design flexibility and enable designs far more powerful than current GPUs.
Large CPU clusters and hundreds of thousands of multiply-accumulate (MAC) circuits on each die meet the heavy computational requirements of AI algorithms, while gigabits of embedded SRAM and interfaces to gigabytes of off-chip DRAM feed these hungry compute engines.
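To see how MAC counts translate into TOPS figures like those quoted earlier, consider the small illustrative calculation below; the layer dimensions, per-frame network cost, frame rate and camera count are all assumptions chosen for the example:

```python
# How MAC counts turn into TOPS requirements. Layer sizes, network
# totals, frame rates and camera counts are illustrative assumptions.

def conv_macs(h_out, w_out, k, c_in, c_out):
    """Multiply-accumulate operations for one 2D convolution layer."""
    return h_out * w_out * k * k * c_in * c_out

# One mid-network layer of a typical CNN backbone:
layer = conv_macs(56, 56, 3, 128, 128)
print(f"one 3x3 conv layer: {layer / 1e9:.2f} GMACs")    # ~0.46 GMACs

# Assume a full detection network costs ~50 GMACs per frame, runs at
# 30 fps on each of 8 cameras, and each MAC counts as 2 operations:
net_gmacs = 50
ops_per_s = net_gmacs * 1e9 * 2 * 30 * 8
print(f"perception alone: ~{ops_per_s / 1e12:.0f} TOPS")  # ~24 TOPS
```

Even with these conservative assumptions, perception alone lands in the tens of TOPS before prediction and planning are added, consistent with the 50-100 TOPS requirement cited above.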
GF offers 14nm and 7nm FinFET ASIC system-on-chip (SoC) devices that deliver optimal combinations of power, size and energy efficiency versus both GPUs and competing ASIC technologies, while meeting automotive quality standards such as the ISO 26262 functional safety standard.
FX-14 ASICs let users take advantage of an array of 64-bit and 32-bit ARM cores for system design, along with 56 Gbps high-speed SerDes (HSS); an embedded TCAM capable of billions of searches per second; density- and performance-optimized embedded SRAM; and 2.5D packaging options that maximize application flexibility. FX-7 ASICs extend the offering with up to 112 Gbps HSS, the densest on-chip SRAM and a broad set of off-chip DRAM interfaces (LPDDR, GDDR, HBM), plus 2.5D/3D packaging options for further flexibility.