How Do Robots Navigate?

SLAM and AI ISPs help improve navigation in unfamiliar environments and adverse conditions.

Have you ever been amazed by the graceful movement of robots and self-driving vehicles in unfamiliar surroundings? Recent technological advances have brought us self-cleaning robots and autonomous vehicles with impressive navigation abilities. They navigate unfamiliar surroundings, capture clear video footage, and perform processing at the edge. What’s truly remarkable is their ability to navigate reliably in low ambient light and even with poor GPS signals. Read on to learn how these robots and self-driving vehicles move effortlessly and respond without delay.

How do we navigate? We see, sense, and rely on our memory to reach a known destination. But what about navigating an unknown environment with poor lighting and spotty GPS connectivity? In such situations, we look for landmarks or other markers to find our way. Robots and self-driving vehicles similarly rely on their vision and senses to navigate accurately, leveraging embedded vision and digital imaging to create or update maps of unfamiliar or indoor spaces for precise inside-out tracking.

Autonomous vehicles and robots rely heavily on simultaneous localization and mapping (SLAM) and digital imaging to navigate effectively through unfamiliar environments. SLAM is the computational technique of constructing or updating a map of an unknown environment while simultaneously keeping track of the device’s position and orientation within it.
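
To make the idea concrete, below is a minimal, illustrative sketch of the two halves of SLAM: propagating the device’s pose from motion estimates (localization) and registering observed landmarks in world coordinates (mapping). This toy Python example is not Kudan’s or any production SLAM pipeline; the names Pose, predict, and update_map are placeholders, and a real system would also correct the pose against the map, close loops, and model uncertainty.

```python
# Minimal 2D SLAM-style sketch (illustrative only, not a production pipeline).
# The robot integrates odometry to track its pose and registers observed
# landmarks into a map expressed in world coordinates.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0      # position in meters
    y: float = 0.0
    theta: float = 0.0  # heading in radians

def predict(pose: Pose, v: float, w: float, dt: float) -> Pose:
    """Propagate the pose with a simple unicycle motion model (localization step)."""
    theta = pose.theta + w * dt
    return Pose(pose.x + v * math.cos(theta) * dt,
                pose.y + v * math.sin(theta) * dt,
                theta)

def update_map(world_map: dict, pose: Pose, observations: dict) -> None:
    """Register range/bearing observations as landmarks in world coordinates (mapping step)."""
    for landmark_id, (rng, bearing) in observations.items():
        lx = pose.x + rng * math.cos(pose.theta + bearing)
        ly = pose.y + rng * math.sin(pose.theta + bearing)
        world_map[landmark_id] = (lx, ly)  # a real SLAM system would refine, not overwrite

# One iteration: move forward at 0.5 m/s while turning, then map two observed landmarks.
pose, world_map = Pose(), {}
pose = predict(pose, v=0.5, w=0.1, dt=0.1)
update_map(world_map, pose, {"corner_1": (2.0, 0.3), "door_2": (4.5, -0.8)})
print(pose, world_map)
```

In practice, the refinement steps omitted here, such as correcting the pose against previously mapped landmarks and closing loops, are what keep the estimate from drifting over time.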

With this technology, these machines can navigate seamlessly and with accuracy approaching that of humans. Design teams must account for conditions such as processing delays, bad weather, and poor GPS connectivity to keep these machines moving smoothly. For instance, imagine missing an exit on the highway because of a delayed navigation response and realizing the next one is several miles away. Or discovering that navigation is a fair-weather friend that ditches you and leaves you stranded in bad weather or poor lighting.

To avoid such situations, we need artificial intelligence (AI) in the image signal processor (ISP), because existing general-purpose and specialized processors suffer from issues such as:

  • The hardwired blocks within a conventional ISP that implement image/video functions lack flexibility and adaptability.
  • Specialized processors such as GPUs can deliver real-time processing and fast response, but their performance limits and high power dissipation restrict their use.
  • Fixed-function hardware implementations of these algorithms can also lose some of the information captured by the sensor.

ISPs with AI are used in various applications, from smartphones to automotive and beyond.
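
As a rough illustration of what an AI-based ISP stage does differently, the sketch below replaces a fixed denoising block with a small convolution whose weights would, in a real product, come from a trained neural network. The 3x3 averaging kernel and the synthetic low-light frame are placeholder assumptions for illustration only; this is not Visionary.ai’s implementation or any Tensilica API.

```python
# Hedged sketch of the idea behind an AI ISP stage: a hardwired denoise block is
# replaced by a filter whose weights come from training rather than being fixed
# in hardware. The kernel below is a hand-picked placeholder, not a trained model.
import numpy as np

def conv2d(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'same'-padded 2D convolution over a single-channel frame."""
    k = kernel.shape[0] // 2
    padded = np.pad(frame, k, mode="edge")
    out = np.zeros_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * k + 1, j:j + 2 * k + 1] * kernel)
    return out

# Simulate a dark, noisy sensor readout (low ambient light).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.1)                      # dim, flat scene
noisy = clean + rng.normal(0.0, 0.05, clean.shape)  # stand-in for sensor noise

# "Learned" denoise stage: in a real AI ISP these weights come from a trained network.
kernel = np.full((3, 3), 1.0 / 9.0)                 # placeholder: simple smoothing
denoised = conv2d(noisy, kernel)

print(f"noise std before: {noisy.std():.4f}, after: {denoised.std():.4f}")
```

The appeal of the learned approach is that the weights can be retrained for different sensors and lighting conditions, whereas a hardwired block is fixed at design time.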

How a Partner Ecosystem Helps Automotive, Robotics, IoT, and Mobile

Cadence collaborates with Kudan and Visionary.ai through its Tensilica software partner ecosystem to deliver strong performance across segments such as advanced automotive, mobile, consumer, IoT, and drones. Devices based on Cadence Tensilica IP run cutting-edge SLAM and AI ISP solutions efficiently and offer an excellent power-performance envelope.

“We’re very excited about our partnership with Cadence and the opportunity to work with the Tensilica platform to accelerate Kudan’s SLAM pipeline. Cadence’s Tensilica Vision DSPs provide specialized instructions that optimize various stages of the SLAM algorithm, delivering significant gains with power savings to the end customer. We look forward to improving the accessibility and adoption of our SLAM solution together,” said Juan Wee, CEO at Kudan USA.

The ongoing innovations in Tensilica IP and architecture are critical for smartphone manufacturers and providers of IoT systems and next-generation connected vehicles. Cadence’s partnerships with industry leaders such as Kudan and Visionary.ai help customers improve performance. Key benefits of these collaborations include:

  • The Tensilica Vision Q7 DSP delivers a 10X performance improvement and a 15% speedup compared to CPU-based implementations of Kudan’s proprietary SLAM pipeline.
  • Visionary.ai’s novel approach leverages AI to replace traditional hardwired ISP functions, enabling real-time, high-quality video even in the most challenging lighting conditions.
  • Customers can implement a camera pipeline at resolutions well above full HD at over 30fps by running Visionary.ai’s efficient AI ISP on the Tensilica NNA110 (see the rough throughput estimate below).
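
As a back-of-the-envelope check on what “above full HD at over 30fps” implies for the ISP, the short sketch below computes the raw pixel throughput for full HD and, purely as an illustrative assumption of our own, for 4K UHD; the specific resolutions targeted by Cadence or Visionary.ai are not stated here.

```python
# Rough pixel-throughput estimate for a camera pipeline at a given resolution
# and frame rate. The 4K figure is an illustrative assumption, not a quoted spec.
def pixel_rate(width: int, height: int, fps: float) -> float:
    """Pixels per second the ISP must sustain for a given resolution and frame rate."""
    return width * height * fps

full_hd = pixel_rate(1920, 1080, 30)   # ~62 Mpixel/s baseline
uhd_4k = pixel_rate(3840, 2160, 30)    # ~249 Mpixel/s, 4x the full-HD load

print(f"Full HD @ 30 fps: {full_hd / 1e6:.0f} Mpixel/s")
print(f"4K UHD  @ 30 fps: {uhd_4k / 1e6:.0f} Mpixel/s")
```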

“At Visionary.ai, we have developed a method of using AI to dramatically improve image quality in real-time, particularly in the most challenging lighting conditions. For this technology to reach its true potential, there is a need for fast and efficient neural network computations. Joining Cadence’s Tensilica ecosystem will help ensure that our customers have a very competitive solution that runs on some of the most efficient vision and AI platforms out of the box,” said Oren Debbi, CEO at Visionary.ai.

The proof is in the pudding: Cadence continues to strengthen its Tensilica vision and AI software partner ecosystem for advanced automotive, mobile, consumer, and IoT applications. The Tensilica Vision Q7 DSP helped achieve a nearly 15% speedup of Kudan’s proprietary SLAM pipeline compared to a CPU-based implementation, while the Tensilica NNA110 accelerator helps customers implement a camera pipeline at resolutions above full HD at over 30fps.


