10x faster embedded AI processing for autonomous systems.
As the working population shrinks due to falling birthrates and an aging society, advanced artificial intelligence (AI) processing, such as recognition of the surrounding environment, decision-making, and motion control, will be required across many areas of society, including factories, logistics, medical care, service robots operating in cities, and security cameras. These systems will need to perform such AI processing in real time, and in particular the processing must be embedded within the device itself so it can respond quickly to a constantly changing environment. This calls for AI chips that deliver advanced AI processing in embedded devices while consuming little power and staying within strict limits on heat generation.
To meet these market needs, Renesas developed DRP-AI (Dynamically Reconfigurable Processor for AI), an AI accelerator for high-speed AI inference that combines the low power consumption and flexibility required by edge devices. This reconfigurable processor technology, cultivated over many years, is embedded in the RZ/V series of MPUs targeted at AI applications. DRP-AI3 is the next generation of DRP-AI, achieving power efficiency approximately 10 times higher than the previous generation and responding to the continued evolution of AI and the increasingly sophisticated requirements of applications such as robots. This white paper introduces the key technologies developed for DRP-AI3 and demonstrates how it solves heat generation challenges, enables real-time processing, and delivers higher performance and lower power consumption for AI-equipped products.