
AiMotive Is EDA For Self-Driving Cars

Staying alive in the automated vehicle biz.


The team at aiMotive, a tool and IP company for OEMs making automated vehicles, isn’t waiting for smart infrastructure or 5G to make self-driving cars possible. The four-year-old startup based in Budapest, Hungary, is taking a self-sustainable route for the foreseeable future. The key to staying in business is not to compete with Waymo, Cruise or automotive companies, but to build the software, training datasets and neural network IP that other companies can use to implement their self-driving cars.

“We are not competing with them [Waymo, Cruise, Tier 1s]. We sell technologies and tools to help with accelerating their development,” said László Kishonti, CEO of aiMotive, in an interview with Semiconductor Engineering. Just as outfitters in a gold rush may ultimately make more money than the miners, aiMotive is hoping to provide essential tools for others.

Think Cadence and Synopsys. “We are very similar to them,” said Kishonti.

AiMotive makes tools that other companies can use to design the self-driving component of a car. The tools include aiDrive, aiSim and aiWare, interrelated parts of one system for creating the smarts that will drive cars. aiDrive is a full self-driving and ADAS software stack. aiSim is a simulator for testing, serving as a back end to see how the software reacts to a dataset of real road situations. aiWare is a neural network accelerator IP that is scalable (aiMotive says “from embedded systems to data centers”), with a customizable architecture. The IP comes with an SDK, tools and NNEF (Neural Network Exchange Format) support, which means it works with networks from Caffe, PyTorch and TensorFlow.
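The modular design described above, with modules such as the freeway and exit modules mentioned later in the article, can be sketched as interchangeable components behind one interface. This is an illustrative sketch only; none of the class or field names here are aiMotive APIs, and the control logic is placeholder.

```python
# Hypothetical sketch of a modular driving stack in the spirit of aiDrive:
# interchangeable, individually licensable modules sharing one interface.
# All names and logic are illustrative assumptions, not aiMotive code.

class DrivingModule:
    """Base interface: declare which contexts a module covers, and
    consume sensor data to emit a control command."""
    def handles(self, context: str) -> bool:
        raise NotImplementedError
    def step(self, sensors: dict) -> dict:
        raise NotImplementedError

class FreewayModule(DrivingModule):
    def handles(self, context):
        return context == "freeway"
    def step(self, sensors):
        # Keep the lane; scale throttle with the gap to the lead car (toy logic).
        return {"steer": 0.0, "throttle": min(sensors["lead_gap_m"] / 50.0, 1.0)}

class ExitModule(DrivingModule):
    def handles(self, context):
        return context == "exit"
    def step(self, sensors):
        # Slow down and bias toward the exit lane (toy logic).
        return {"steer": 0.2, "throttle": 0.3}

class Stack:
    """Dispatches each tick to whichever module covers the current context."""
    def __init__(self, modules):
        self.modules = modules
    def step(self, context, sensors):
        for m in self.modules:
            if m.handles(context):
                return m.step(sensors)
        return {"steer": 0.0, "throttle": 0.0}  # no module covers this: coast

stack = Stack([FreewayModule(), ExitModule()])
cmd = stack.step("freeway", {"lead_gap_m": 25.0})
```

An OEM licensing only the freeway module would simply construct the stack with fewer modules; contexts no module covers fall through to the safe default.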

aiMotive plugs the data it collects from test drives back into its simulator (aiSim) and code base (aiDrive). (Source: aiMotive)

The aiSim module is inspired by aviation simulators. “We started with aiSim because we saw that in the aviation industry everybody’s using simulators for exactly the same reasons why we needed them. It’s cheaper, it’s safer, it’s repeatable. Of course, you can run the same thing 100 thousand times, and it saves time,” said Kishonti. aiSim trains the chips and the algorithms. “[Simulators] are one of the reasons why they could test all the aircraft and pilots many times before going into production or into a real flying situation.”

The automotive industry sobered up after a self-driving Uber test vehicle killed a pedestrian in Arizona last year. “Everyone had recognized that they cannot solve everything. Probably Tesla is the only company that still thinks they can solve everything alone,” said Kishonti. AiMotive specializes in helping other companies get to their goals of self-driving cars.

Safety in numbers
The day before he was to speak at the Embedded Vision Summit about how to stay viable in the business while the technology evolves, Kishonti and his team led the press through aiMotive’s unassuming Mountain View headquarters, tucked into a mixed-use residential neighborhood. The enticement was a ride in a self-driving car, while the theme of the day was how aiMotive was carefully staying relevant. Catering to the mass market is a key part of that business plan.

“We see a lot of parallels with Tesla, because our focus is to build mass market products most efficiently. That’s why we focus on vision first, not LiDAR first like many of the robotics companies, because we are not necessarily developing robotaxis, but technologies which you can use immediately on mass market cars,” said Kishonti.

Not that there’s anything wrong with LiDAR, but with enough cameras, LiDAR might not be necessary. “We would be happy to use LiDAR,” said Kishonti, “but it’s so expensive at the moment.” Instead of investing in a $10,000 LiDAR, buy more $15 cameras. “For $100, you can buy six cameras, which is good for most of the use cases.”

Modularity and ecosystem agnosticism are among aiMotive’s other survival strategies. aiMotive’s modular software tools and IP need to run on any chip in any ecosystem, said Kishonti. On a macro level, the company is building a training dataset across three continents by driving test cars on real roads in California, Japan and Europe, which gives it worldwide road experience. On the micro level, an OEM is not locked into a particular semiconductor ecosystem. “We don’t think that’s important for us,” said Kishonti.

The modularity means all or parts of aiMotive’s three-pronged system can be licensed. Even the code stack is cut into modules. “We have a freeway module and just added an exit module,” said Kishonti, who admits to having a favorite exit on the freeways in the vicinity. Many of the older freeway exits around Mountain View are examples of how not to design an exit: they are short and dangerous, easily blocked by cars waiting at the cross-street lights at the end of the exit. In other words, a good test for any driver, let alone a self-driving car.

Again the simulator (aiSim) is key in the process. “We have a strong rule in our company. You cannot test anything on the road until you show that to your customers, colleagues and, of course, me that it runs perfectly in the simulator,” said Kishonti. “Obviously this simulator is always a little bit [more] of a limited environment than the real world. Maybe a simulated test can get you to 90 percent success. You still have 10 percent.”
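The gating rule Kishonti describes, that nothing goes on the road until it runs perfectly in the simulator, amounts to a pass/fail gate over a battery of repeated scenario runs. The sketch below is illustrative only; the function names and scenario strings are assumptions, not aiMotive’s tooling.

```python
# Illustrative sketch (not aiMotive code) of a simulation gate:
# a feature is approved for road testing only if it passes every
# scenario on every run, reflecting "runs perfectly in the simulator."

def road_test_approved(feature, scenarios, runs_per_scenario=3):
    """Approve only if the feature passes every scenario on every run."""
    for scenario in scenarios:
        for _ in range(runs_per_scenario):
            if not feature(scenario):
                return False  # one failure anywhere blocks road testing
    return True

def flaky_feature(scenario):
    # Toy stand-in that handles everything except a blocked short exit.
    return scenario != "short_exit_backed_up"
```

In practice a shop would run each scenario far more than three times (Kishonti mentions running the same thing 100 thousand times), but the gate logic is the same.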

It helps to have customers. Kishonti says aiMotive is different from other automotive AI companies because it had customers from day one. “We have revenue. This is quite unique in the space,” said Kishonti. Customers include Volvo, PSA Groupe and Kyocera. The company also does not build automobiles so much as it helps other companies create better chips for their automobiles.

But doing the dirty work for customers really is the product: writing the software (mostly done by the R&D team in Hungary), simulating it, road testing it and plugging the important road-test data back into the aiMotive system. Essentially, they are collecting data.

Road training
For testing purposes, the automobiles outfitted with aiMotive’s IP, software and system are average mass market cars. This is not an aftermarket system, however.

Inside the car, a relatively bulky computer/server box sits on the ledge by the back window. Three screens are set up in the car: one large screen at the front passenger seat and two smaller ones for the back seats.

A test pilot sits in the driver’s seat and drives; when the car is in self-driving mode, he holds his hands about an inch from the wheel as if he were driving, so he can quickly grab the wheel and take over. He is mentally driving alongside the self-driving mode, which turns on only on the freeway under certain conditions and on exits. The system also is trained to relinquish control in unsafe situations, even if it could handle them.
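The engagement policy described above is essentially a small state machine: engage only on the freeway under the right conditions, and hand control back to the pilot whenever a hazard appears, even one the system could cope with. Here is a minimal sketch under those assumptions; the condition names are illustrative, not aiMotive’s.

```python
# Minimal sketch (illustrative, not aiMotive code) of the engagement
# policy: self-driving only on the freeway under certain conditions,
# with an early handover to the test pilot when a hazard is sensed.

def drive_mode(on_freeway: bool, conditions_ok: bool, hazard_ahead: bool) -> str:
    if hazard_ahead:
        # Relinquish control even if the system might handle it.
        return "handover_to_pilot"
    if on_freeway and conditions_ok:
        return "self_driving"
    return "manual"
```

The key design choice mirrored here is that the hazard check comes first: the system prefers a conservative handover over demonstrating capability.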

The car also beeps warnings. “We have a lot of warnings, like if one of the computers’ chips slows down, for example, or one of the sensors has strange data, or simply the camera is dirty,” said Kishonti. “These are the things we recognize, and then [the car] starts beeping at our operator and driver, [who] recognizes that something is at fault. And again, getting the camera dirty… that’s gonna happen. We need to recognize if something is malfunctioning.”
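The three fault classes Kishonti lists, a slow compute node, implausible sensor data, and a dirty lens, lend themselves to a simple health-check sketch. Every field name and threshold below is an illustrative assumption, not aiMotive’s diagnostics.

```python
# Illustrative sketch of the in-car health checks described: each
# detected fault would trigger an audible warning for the operator.
# Field names and thresholds are assumptions, not aiMotive code.

def health_warnings(status: dict) -> list:
    warnings = []
    if status["chip_hz"] < status["chip_hz_min"]:
        warnings.append("compute slowdown")       # a chip is running slow
    if status["sensor_plausible"] is False:
        warnings.append("strange sensor data")    # readings fail sanity checks
    if status["lens_clarity"] < 0.8:
        warnings.append("camera dirty")           # lens obscured past threshold
    return warnings  # each entry would beep at the operator and driver
```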

One trick is to have the cameras inside the car, which aiMotive does for the front windshield camera.

The operator sits in the front passenger seat and looks at the large screen, which is filled with data coming from the cameras and sensors, some of it in graphical form. The operator and test pilot talk back and forth, rather like pilots in an airplane, confirming with each other when the car is changing lanes, exiting, and so forth. The system records everything that happens during self-driving, and the operator marks into the recording any issue that needs to be examined further.

“The lane changes are handled by the car, but the test pilot and the operator always check if everything is fine, for safety,” said Kishonti from the rear passenger seat. At one point during the drive, as the car approached a short exit where cars were backed up, the test pilot braked suddenly. He explained that the car had sensed the danger and relinquished control.


Inside aiMotive’s self-driving mode: The operator in the front seat has configured the screen to his preferences and watches how the car performs in self-driving mode. He can see the code, the views from the front cameras with an overlay from the self-driving car’s AI software, the rear cameras, and a graphical representation of the traffic flow around the car. (Source: Semiconductor Engineering/Susan Rambo)

At this point, aiMotive’s systems are not connected to the outside world, so hacking is not a concern just yet. “Software delivery, map delivery and getting the data out of the cars will require real time [connectivity]. That’s just like in the Teslas. So, this car can upload the data when it connects back to the garage WiFi, but during driving we have no connection, and so no one can hack it,” said Kishonti. “I think Cisco invested in us because they want to be part of that [developing security modules]. Obviously this is about safety, and you need to make sure that no one can get access to control of your car, because that’s super dangerous.”

The U.S. team goes out every day and clocks about 200 miles of driving. The purpose is to collect data for AI training, and the operator marks any incidents. “Only those pieces that are noted by the operator are uploaded, because we couldn’t handle so much data,” said Kishonti.
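The selective-upload policy, record everything but transfer only windows around the operator’s marks once the car reaches garage WiFi, can be sketched as follows. The window size and function names are illustrative assumptions, not aiMotive’s pipeline.

```python
# Illustrative sketch of the selective-upload policy described: the full
# drive is recorded, but only clips around operator-marked incidents are
# queued for upload on garage WiFi. Window size is an assumed parameter.

def clips_to_upload(marks, window_s=30.0, drive_end_s=7200.0):
    """Return (start, end) bounds of a clip around each operator mark,
    clamped to the span of the recorded drive."""
    clips = []
    for t in sorted(marks):
        clips.append((max(t - window_s, 0.0), min(t + window_s, drive_end_s)))
    return clips
```

The economics are the point: a 200-mile day of multi-camera video is far too large to ship in full, so only the marked slivers leave the car.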

Super computers need not apply
After the ride, Kishonti opened the back hatch to reveal the box in the back. “This is like a large computer,” he said. The data generated by one drive is so large that they only download the incidents marked by the operator.

The computer that runs the aiDrive stack in the car will get smaller. Quanta has agreed to help aiMotive slim down the computer box sitting in the back window well. “We will use their next-generation hardware to replace these big bulky computers.” An even smaller box, about the size of an Nvidia Jetson, is in the works.

“aiDrive is built to be completely modular, so that’s why you can scale very well to the different hardware architectures we have here on display,” explained Daniel Michael Seager-Smith, marketing manager at aiMotive, who demonstrated the software. “This Quanta V32 system has a Xeon-class processor and two 2080 GPUs. These can effectively replace the three 1080 Tis we have in the current vehicles, which are very large. The form factor is already a lot smaller, and we’ll be moving to an even smaller Quanta system than this,” said Seager-Smith.

The purpose of the demos is to show off aiMotive’s systems and demonstrate that, depending on the use, regular laptops and desktop computers are sufficient. Modules can run separately or in full simulation. The full simulation runs a six-camera view and needs the more powerful computer running the aiDrive stack. Lower-functionality simulations run one or two camera views from the aiDrive stack and can run on lower-performance computers, at lower pixel counts.

Visualization is a bottleneck for automotive-grade hardware. “Automotive-grade hardware isn’t actually built to visualize things, because when you have it in a car, it doesn’t have to show the driver so much of what’s happening. If there is visualization, that will often be a different ECU, part of the entertainment system,” said Seager-Smith. A slightly lower-performance platform is used for single-camera use cases and reverse cameras, which do not need multi-megapixel resolution because of the low speeds involved. “You don’t need such high resolution because you don’t want to see that far into the distance for safety,” said Seager-Smith. High-speed driving, such as freeway driving, does need high resolution, however.

aiMotive 2019, Semiengineering.com


Above, aiSim and an external graphical view can be run on a regular desktop computer, in this case on a single 1080 Ti GPU. The full aiDrive software stack is running a scenario on aiSim in a full simulation with six cameras. The car’s computer box in the foreground is the Quanta V32, a development platform made by Quanta that is smaller than the boxes used in the cars now. The processors, GPUs and memory in the car’s computer box are automotive grade, running the full aiDrive stack. The in-car computer system can be cooled by connecting its liquid port to the car’s air conditioning system (for demo purposes, it is connected to a fan). BMW is already using this type of cooling to keep its ECUs cool, said Seager-Smith.

“And the two computers are communicating with each other over a server architecture, basically as a server would communicate with the PCs around it,” said Seager-Smith. “It’s the same sort of architecture going on here. The simulator will provide the sensory inputs—it simulates all the sensors. You can see the different sensors that it is simulating. And then aiDrive will provide the control inputs, which will then control the vehicle in the simulator itself.”
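The closed loop Seager-Smith describes, simulator produces sensor frames, driving stack answers with control inputs that move the simulated vehicle, can be sketched in a few lines. Everything here is a toy stand-in under stated assumptions; none of the classes or numbers are the aiSim/aiDrive APIs.

```python
# Toy sketch of the simulator/stack closed loop described above.
# The simulator plays the server role, emitting sensor frames; the
# stack replies with controls that advance the simulated vehicle.
# All names and dynamics are illustrative assumptions.

class ToySimulator:
    def __init__(self):
        self.position_m = 0.0
        self.speed_mps = 20.0
    def sensor_frame(self):
        # In a real simulator this would be camera/radar renders; here, one value.
        return {"speed_mps": self.speed_mps}
    def apply_controls(self, controls, dt_s=0.1):
        self.speed_mps = max(self.speed_mps + controls["accel_mps2"] * dt_s, 0.0)
        self.position_m += self.speed_mps * dt_s

def toy_stack(frame, target_mps=25.0):
    # Proportional speed controller standing in for the full driving stack.
    return {"accel_mps2": 0.5 * (target_mps - frame["speed_mps"])}

sim = ToySimulator()
for _ in range(100):  # 10 simulated seconds at 0.1 s per tick
    sim.apply_controls(toy_stack(sim.sensor_frame()))
```

Because the stack never touches simulator internals, only frames in and controls out, either side can be swapped for real hardware or a different scenario, which is the point of the server-style split.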

Modules, such as the freeway and exit modules, can run separately and work on a laptop.

The little black box pictured is an Nvidia Jetson development platform, running limited-functionality aiDrive and aiSim modules. A more powerful Quanta system of similar size is in the works. “This is everything that can work on just one or two cameras facing in a single direction… basically a highway autopilot implementation… [such as] adaptive cruise control with lane-keeping functionality,” said Seager-Smith. It runs lane recognition, segmentation and bounding boxing. This implementation can run on the Nvidia Jetson hardware, but the Quanta system will have higher performance and be able to run more functionality, he said.

In another implementation of aiDrive, aiMotive uses the Renesas R-Car V3M, shown above. “This is a completely automotive-grade hardware product by Renesas, and it’s already built into some production vehicles by Japanese OEMs. It can run networks on about one camera, so it’s more of an ADAS functionality than anything else. For example, this is like maybe a collision warning system or something like that, or a reverse camera. Something similar. This is the resolution of the camera that would be seen by the Renesas system. So it’s actually a visualization limitation. That’s why the resolution is a lot lower on this screen, not because this is the quality of the simulator itself, but because of the visualization limitation of this chipset,” said Seager-Smith.

aiMotive was founded in 2014 in Budapest, Hungary. It has 220 employees worldwide, 140 of whom are engineers working in Budapest, where the R&D and programming take place. According to the company’s website, 30 of these engineers are specialized artificial intelligence researchers. aiMotive’s other offices are in Mountain View, California, and Yokohama, Japan, where the company tests its self-driving systems and collects data by driving test cars on the roads. “The Tokyo and Mountain View offices are more about testing and development and supporting the customers,” said Kishonti.

CEO Kishonti’s first company focused on the mobile and graphics chip sectors. “We helped the large tech companies like Google, Samsung, Nvidia, Qualcomm, Intel and even Nintendo to build better chips.” Partners for aiMotive include FEV, Samsung, Nvidia, GlobalFoundries, Quanta Computer, Roysan, Ryoyo, Intland Software, SAIC, Kyocera, PSA Groupe, Wipro, Wind River and VeriSilicon. Among the investors are Bosch, Cisco, B Capital, BC Fund, and Tim Draper.


