Trends shaping chip design and what might be ahead in the next year.
As the holiday season is in full swing, it’s retrospection and prediction time! Let’s look at what I thought 2023 would look like, review how it turned out, and take a first stab at 2024 predictions. As a spoiler, my biggest surprise was the intensity with which artificial intelligence and machine learning (AI/ML) have accelerated since Generative AI was put on the mainstream adoption map last year, making 2023 a pivotal year for AI/ML and generating even more advanced requirements for optimized on-chip and chip-to-chip/die-to-die transport architectures.
When preparing predictions for 2023 last year, I suggested we would witness a significant shift in computing, emphasizing high-performance, power-efficient technologies, evident across data centers, networks, and devices, driving advancements in semiconductor materials, designs, and manufacturing. Specifically, we would see the emergence of complex, integrated device architectures for the high-end computing realm, blending various components into systems of chiplets and systems on chips. The rise of AI and machine learning technologies would spur even more specialized, workload-optimized semiconductor devices. In automotive design, 2023 would bring innovations in electrification, autonomy, and personalization, with sustainability playing a crucial role. In particular, networks-on-chip (NoCs) would face heightened scrutiny to meet low-power demands, putting additional pressure on optimizing their power contribution by reducing elements like wires, registers, and base components such as switches.
By mid-year, I described how the Design Automation Conference (DAC) 2023 showcased three pivotal trends reshaping chip design. First, Artificial Intelligence (AI) is revolutionizing Electronic Design Automation (EDA), enabling AI/ML chip and system development and enhancing EDA productivity. Second, chiplets emerged as a key focus, with heterogeneous integration and various technologies like UCIe gaining attention, reshaping design methodologies and highlighting the importance of network-on-chip (NoC) protocols for inter-chip communication. Lastly, the integration trend, evolving from system-on-chip (SoC) to heterogeneous integration using chiplets, addresses challenges beyond electronics, including thermal and mechanical aspects.
Given my day-to-day focus on SoC integration automation and network-on-chip (NoC) IP with Arteris, the data I live by is visualized in the following two charts, illustrating how the number of building blocks for systems-on-chips (SoCs) and systems-of-chiplets (SoC^2!) and the complexity of NoC protocols have developed over time.
From an integration perspective, the number of IP blocks in SoCs has grown from tens to hundreds over two decades, requiring complex integration of the hardware blocks and integration automation, especially in the context of hardware/software interfaces. Disaggregation using chiplets adds even more design variants and will likely drive extensions to standards like IP-XACT, too.
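To make the hardware/software interface point concrete, here is a minimal, hypothetical sketch of the kind of automation involved: turning a simple register-map description into C definitions that firmware can consume. The block names, offsets, and output format are illustrative assumptions only, not Arteris tooling or a real IP-XACT flow.

```python
# Hypothetical example: generate C register definitions from a simple
# register-map description, illustrating hardware/software interface
# automation. Names, offsets, and layout are illustrative assumptions,
# not a real SoC or any specific IP-XACT flow.

REGISTER_MAP = {
    "DMA0":  {"base": 0x4000_0000, "regs": {"CTRL": 0x00, "STATUS": 0x04, "SRC": 0x08, "DST": 0x0C}},
    "UART0": {"base": 0x4001_0000, "regs": {"CTRL": 0x00, "STATUS": 0x04, "DATA": 0x08}},
}

def emit_c_header(register_map: dict) -> str:
    """Emit #define lines for every block/register pair in the map."""
    lines = ["/* Auto-generated register definitions (illustrative only) */"]
    for block, info in register_map.items():
        for reg, offset in info["regs"].items():
            addr = info["base"] + offset
            lines.append(f"#define {block}_{reg}_ADDR 0x{addr:08X}u")
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_c_header(REGISTER_MAP))
```

The value of automating this step grows with block count: keeping hundreds of such maps consistent between hardware and software by hand is exactly what breaks at SoC scale.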
In parallel, the complexity of NoC protocols has grown 10x when counting, for instance, the number of pages design teams need to read to understand AMBA, and the variety of relevant protocols has increased significantly, too. In the future, NoCs will have to evolve into Super-NoCs when distributed across chiplets.
(Source: Arteris, Inc.)
In line with the predictions, we strengthened our SoC integration automation capabilities this year by adding control and status register management technology. We also focused on low-power and cost optimization of NoCs by making them more physically aware.
Arteris technologies are critical to enabling automotive applications like ADAS and autonomy in general. We showed strong momentum with customers like ASICLAND, BOS, and Alchip, ensuring functional safety and compliance with ISO 9001 company certification and ISO 26262 certification for our coherent NoC and our Magillem SoC integration offerings, winning the Stevie and Autonomous Vehicle Technology of the Year awards along the way.
The following chart visualizes the data behind one of the critical challenges in building AI accelerators, and it conveniently also indicates what the semiconductor industry refers to as the memory wall.
(Source: Arteris, Inc.)
Overlaid on the transistor count progress, the graph visualizes how single-threaded CPU performance has flattened for over a decade. GPU-computing performance has doubled yearly, and doubling every year compounds to roughly 2^10, or about a 1000x improvement over single-threaded CPUs in a decade. Conveniently, GPUs also work for AI acceleration, but more dedicated implementations for the multiply-accumulate and multiply-add computations at the core of AI have emerged.
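As a reminder of what those dedicated implementations accelerate, here is a minimal sketch of the multiply-accumulate pattern that dominates AI workloads. Real accelerators execute thousands of these in parallel per cycle, but the arithmetic itself is this simple; the values below are purely illustrative.

```python
# Minimal sketch of the multiply-accumulate (MAC) operation at the core
# of AI workloads: every output of a matrix multiply or convolution is
# a long chain of multiply-adds over weights and activations.

def mac_dot(weights: list[float], activations: list[float]) -> float:
    """Accumulate weight * activation products into a single output value."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC per weight/activation pair
    return acc

# Example: one output neuron with four inputs (illustrative values).
print(mac_dot([0.5, -1.0, 2.0, 0.25], [2.0, 1.0, 0.5, 4.0]))  # -> 2.0
```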
However, as for GPUs, feeding these computations with data has become critical because memory bandwidth has not progressed accordingly, as shown at the bottom of the graph. External DRAM accesses limit performance and drive up power consumption. As a result, architects must balance local data reuse vs. sharing within the chip and into the system, using closely coupled memories and internal SRAMs as well as standard buffer RAMs managed at the software level. Early DRAM architecture analysis will become increasingly critical, too.
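The following back-of-envelope sketch illustrates why this balance matters. All device numbers (peak throughput, DRAM bandwidth, operand width) and matrix sizes are illustrative assumptions, not measurements of any particular chip; the point is the ratio of compute to DRAM traffic, often called arithmetic intensity in roofline analysis.

```python
# Back-of-envelope memory-wall estimate for a matrix multiply
# C[M,N] = A[M,K] @ B[K,N]. All device numbers are illustrative
# assumptions, not measurements of any particular chip.

PEAK_OPS_PER_S = 100e12     # assumed 100 TOPS (counting multiply + add as 2 ops)
DRAM_BYTES_PER_S = 100e9    # assumed 100 GB/s of external DRAM bandwidth
BYTES_PER_ELEMENT = 1       # assumed INT8 operands

def matmul_intensity(m: int, n: int, k: int) -> float:
    """Arithmetic intensity in ops per DRAM byte, assuming A, B, and C
    each cross the DRAM interface exactly once (perfect on-chip reuse)."""
    ops = 2 * m * n * k                                    # one multiply + one add per MAC
    dram_bytes = (m * k + k * n + m * n) * BYTES_PER_ELEMENT
    return ops / dram_bytes

# The device is compute-bound only above this intensity (the "ridge point").
ridge = PEAK_OPS_PER_S / DRAM_BYTES_PER_S   # 1000 ops/byte with the numbers above

for size in (64, 256, 1024, 4096):
    ai = matmul_intensity(size, size, size)
    bound = "compute-bound" if ai >= ridge else "memory-bound (DRAM limits performance)"
    print(f"{size}x{size}x{size}: {ai:7.1f} ops/byte -> {bound}")
```

Even with perfect reuse assumed, the smaller problem sizes stay memory-bound with these numbers, which is why architects lean on closely coupled memories and on-chip SRAM buffering to raise effective data reuse before touching external DRAM.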
Data transport architectures are the critical choice that makes or breaks AI acceleration. This year’s public announcements by Arteris underscore the necessity of NoCs in this context; they include Alchip, NeuReality, Tenstorrent, Axelera, ASICLAND, SiFive, and Semidynamics.
2023 has been pivotal for AI, and given the end-user demand, 2024 will likely bring much more innovation in AI-related data-transport architectures. The trend towards chiplets will further intensify, and the path to a truly open ecosystem will require the standardization of the underlying transport protocols. The industry looks at protocol options like CXL and CHI and physical connections like UCIe, BoW, and XSR. 2024 will likely get us closer to clarity on which standards will prevail. End-user requirements will further dictate horizontal aspects like safety and security for automotive and industrial applications (learn your ISO numbers beyond 9001 and 26262!) and reliability for data centers and others.
One aspect I didn’t yet touch on above: the “Game of Ecosystems,” driven by processor instruction set architectures (ISAs), will only intensify further. As an ISA-neutral provider of NoC IP, Arteris can be found everywhere. We are a close partner with Arm; recent examples include customers like SiMa.ai and Microchip. We also enable RISC-V-based designs, unifying the NoC protocol salad, connecting RISC-V clusters, and de-risking projects with reliable top-level SoC connectivity. See our 2023 announcements with Tenstorrent, Axelera, SiFive, and Semidynamics.
There has never been a more exciting time to be in semiconductors! Happy Holidays!