Systems & Design

Embedded AI On L-Series Cores

Neural networks empowered by custom instructions.


Over the last few years, AI processing has shifted significantly from the cloud to the device. The ability to run AI/ML workloads is now a must-have when selecting an SoC or MCU for IoT and IIoT applications.

Embedded devices are typically resource-constrained, which makes running AI algorithms on them difficult. This paper looks at what can make it easier, from both a software and a hardware point of view, and how Codasip tools and IP help.

This whitepaper focuses on:

  • How TensorFlow Lite for Microcontrollers (TFLite-Micro), as a dedicated AI framework, helps developers, and how its support for domain-specific optimization aligns with Codasip design tools.
  • Examples based on the Codasip L31 processor core (which we announced in this press release) with both standard and custom extensions.
  • The benefits of custom instructions for neural networks.
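To illustrate the kind of workload such custom instructions target, here is a minimal sketch of an int8 dot product, the inner loop of a quantized fully-connected or convolution layer of the sort TFLite-Micro executes. The function name `dot_q7` is illustrative, not Codasip's actual kernel; the point is that a stock core executes several instructions per element, which a custom multiply-accumulate instruction can fuse.

```c
#include <stdint.h>
#include <stddef.h>

/* Reference int8 dot product: the hot loop of a quantized neural-
 * network layer. On a baseline RV32IM core each iteration compiles
 * to two loads, a multiply, and an add; a custom packed multiply-
 * accumulate instruction, added via Codasip Studio, could collapse
 * these into far fewer cycles. Illustrative sketch only. */
int32_t dot_q7(const int8_t *a, const int8_t *b, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; ++i) {
        /* Widen to 32 bits before multiplying to avoid overflow. */
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}
```

Profiling which loops dominate (as the whitepaper's L31 examples do) is what tells you whether such an instruction pays for its silicon.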

By Alexey Shchekin, Solutions Engineer

Read more here.
