Edge AI and Chiplets

Why chiplets are a better choice for many applications than monolithic designs.


In the near future, more edge artificial intelligence (AI) solutions will find their way into our lives. This will be especially true in the private sector, where applications such as voice input and the analysis of camera data will become well-established. These applications require powerful AI hardware to process the continuously accumulating data volumes. The demands on the hardware are growing, driven by ever-more optimized AI algorithms as well as the increasing quality of sensor data, from 4K camera images to high-definition audio, where AI is needed for reliable pattern recognition.

Data processing, and in particular the application of AI algorithms, requires a great deal of computing power. But current and future edge AI devices need other components as well. They must be connected to a central server infrastructure, either wirelessly or by wire, and this connection must offer very high performance, i.e., the latest generation of mobile communications or wire-based methods such as 100G or 400G Ethernet. This, in turn, requires implementing the corresponding interfaces on the edge AI devices.

Such systems can be built as monolithic circuits, but a chiplet approach is often the better choice. Here, the overall system is divided into smaller parts, each manufactured as an individual monolithic circuit, and these are then combined in a single package to provide the overall functionality. The advantage of this approach is the ability to select the best process technology for each subsystem. For the radio part, in particular the high-frequency section, a technology especially suited to high frequencies can be used. The same applies to all central components of the system. For the AI hardware in particular, the latest technologies such as 3nm or 5nm IC processes can then be used. This ensures the AI hardware achieves the required computing performance within the available power budget and space constraints.
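The "best process node per subsystem" idea can be made concrete with a small sketch. The subsystem names, node choices, and rationales below are hypothetical examples for illustration, not the bill of materials of any real chiplet product:

```python
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str          # subsystem implemented on this die
    process_node: str  # node chosen to suit that subsystem (assumed values)
    rationale: str     # why that node fits (illustrative)

# A hypothetical edge AI system partitioned into chiplets, each on the
# node best suited to its function rather than one monolithic process.
system = [
    Chiplet("AI accelerator", "5nm", "maximum compute per watt"),
    Chiplet("RF front end", "22nm RF-SOI", "suitability for high frequencies"),
    Chiplet("400G Ethernet PHY", "7nm", "high-speed serial I/O"),
    Chiplet("I/O and control", "28nm", "low cost; no need for a leading node"),
]

for c in system:
    print(f"{c.name}: {c.process_node} ({c.rationale})")
```

A monolithic design would force all four functions onto a single node, so the RF and control blocks would pay for a 5nm process they do not need.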

When new hardware for the AI part becomes available, this also provides the option of replacing only that part while keeping the rest of the components. Since developing the other components also takes considerable resources, reusing them saves a great deal of time and money. This applies especially to the components for the 100/400G Ethernet interface, but also to those for the radio interface, and thus makes for resource-efficient development.

Thanks to the platform idea, the chiplet approach also allows computing power to be scaled. If more power is needed to process AI algorithms, several of these circuits can be installed in the chiplet system. Compute power then increases almost linearly with the number of AI devices implemented on the chiplet system, so the circuit with the AI hardware only needs to be developed once, and no special adaptations for derivatives are required. These customizations are all done at the system level through the appropriate selection of components. Even a larger number of interfaces, such as additional radio or wired interfaces, can easily be implemented, and the system can be expanded as needed. The one important requirement is that all components of the overall system use uniform interfaces through which the individual chiplets communicate with each other. The BoW (Bunch of Wires) and UCIe standards, among others, are currently being developed for this purpose.
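The "almost linear" scaling claim can be sketched numerically. The throughput figure and the per-link efficiency factor below are made-up assumptions, chosen only to show why aggregate compute grows nearly, but not exactly, in proportion to the chiplet count once die-to-die communication is accounted for:

```python
def total_tops(n_chiplets: int,
               tops_per_chiplet: float = 50.0,
               link_efficiency: float = 0.95) -> float:
    """Aggregate throughput of n identical AI chiplets in one package.

    link_efficiency models an assumed loss per additional die-to-die
    hop; with link_efficiency = 1.0 the scaling is perfectly linear.
    All numbers are illustrative, not measurements.
    """
    return n_chiplets * tops_per_chiplet * link_efficiency ** (n_chiplets - 1)

# One AI die developed once, then instantiated as many times as needed:
for n in (1, 2, 4):
    print(f"{n} chiplet(s): {total_tops(n):.1f} TOPS")
```

The key point survives the simplification: scaling happens by adding more instances of the same die, not by redesigning a larger monolithic chip for each performance tier.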


