Dramatic Changes Ahead For Chips And Systems

In 2024, the industry will focus on new developments in AI/ML, RISC-V, quantum, security, and much more.

Early this year, most people had never heard of generative AI. Now the entire world is racing to capitalize on it, and that’s just the beginning. New markets such as spatial computing, quantum computing, 6G, smart infrastructure, sustainability, and many others are accelerating the need to process more data faster, more efficiently, and with much more domain specificity.

Compared to the days of waiting every couple of years for the next process node, the events of the past year, and those in coming years, will be every bit as significant as the introduction of the telephone or the car. But there won’t be just one innovative technology. There will be many, and they will intersect in ways that will continue to surprise the tech world.

We are entering an age of bespoke hardware, heterogeneous integration, and software-defined systems, all of which rely on semiconductors. But even chips are changing. They are becoming more targeted, more complex, and potentially much more of a security threat. All of these trends will force designers to rethink workflows, architectures, and business models, some of which became apparent in 2023, but which will begin to really accelerate in 2024.

AI/ML
The year 2023 in artificial intelligence/machine learning (AI/ML) wrapped up with the announcement of Google’s Gemini AI, both a catch-up to ChatGPT and a breakthrough push into multi-modal AI. Google’s new techniques should lead to even more design advances, as other companies also seek to incorporate images and videos into their generative AI efforts.

“Gemini is compelling for a couple reasons,” said Steve Roddy, chief marketing officer, Quadric. “First, it comes in numerous versions that scale from the data center (Gemini Ultra) all the way down to memory constrained, battery-operated devices (Gemini Nano at 1.8B parameters). Second, Google offers pre-quantized versions of Gemini that are ready-made for deployment in edge devices. Instead of forcing embedded device developers to do the float-to-int conversions, Google has taken care to make it deployment-ready, and thus doesn’t require users to play data scientist to do the conversion. Gemini Nano will be much simpler for edge device and mobile phone developers to deploy than many previous GenAI models, likely spurring more widespread adoption and integration in applications.”
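For readers unfamiliar with that float-to-int conversion, the sketch below shows the core idea of symmetric post-training int8 quantization: mapping float32 weights onto the int8 range with a single scale factor. This is a minimal illustration, not Google’s toolchain; the function names are ours.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0          # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, useful for checking quantization error."""
    return q.astype(np.float32) * scale

# Toy example: one layer's weights shrink 4x (float32 -> int8)
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max reconstruction error: {err:.4f}, scale: {scale:.6f}")
```

Production flows add per-channel scales, activation calibration, and accuracy checks, which is exactly the data-science work pre-quantized models spare edge developers.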

Google also has a bespoke ecosystem for AI, as do other hyperscalers, a trend that is expected to accelerate, with more companies across multiple domains developing their own AI chips.

“Where you’re going to see the big, exciting, new applications and breakthroughs is from companies developing their own custom AI chips,” said Tony Chan Carusone, CTO of Alphawave Semi. “There are a variety of use cases that are motivating people. For example, even companies like Tesla are developing their own custom AI chip to help enable training of autonomous driving. In the next five years, the most exciting breakthroughs and applications will be coming from people that are running on this tailored hardware.”

In fact, along with the usual mix of large corporations, many start-ups are taking advantage of the cloud to build AI chips that address one specific problem in fields as wide-ranging as automotive, photonics, space, and medical devices, noted Vikram Bhatia, head of cloud go-to-market and product strategy at Synopsys. He noted that Synopsys uses AI internally to help cut costs in the cloud spot market by alerting customers, for instance, that a job may be terminated by the cloud provider. It then automatically moves that job to a different virtual machine.
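A minimal sketch of what such spot-interruption handling can look like on AWS, which publishes a two-minute reclaim warning through its instance metadata endpoint. The endpoint URL is AWS’s real mechanism; the `job`, `checkpoint`, and `resubmit` objects are hypothetical placeholders, not any vendor’s API.

```python
import time
import requests

# AWS posts a spot interruption notice here ~2 minutes before reclaiming the VM.
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending() -> bool:
    """Return True once the cloud has scheduled this spot instance for reclaim."""
    try:
        return requests.get(SPOT_ACTION_URL, timeout=1).status_code == 200
    except requests.RequestException:
        return False  # endpoint 404s or is unreachable until a notice is issued

def run_with_migration(job, checkpoint, resubmit):
    """Run a long EDA job, checkpointing and resubmitting if preemption looms."""
    while not job.done():
        if interruption_pending():
            checkpoint(job)   # persist state to durable storage
            resubmit(job)     # relaunch on another VM or on-demand capacity
            return
        job.step()
        time.sleep(5)
```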

“AI is fundamentally changing the way we live and work, and this transformation will only accelerate over the next 12 months — and well beyond that,” said Gary Campbell, executive vice president for central engineering at Arm. “In 2024, the conversation around AI will become more nuanced, focused on the different types of AI, use cases and — crucially — what technological foundation we need to put in place to make the AI-powered world of the future a reality. Advanced, purpose-built chips will play a critical role in allowing today’s AI technologies to scale and in fueling further advancements in their deployment. The CPU is already vital in all AI systems, whether it is handling the AI workload entirely or in combination with a co-processor, such as a GPU or an NPU. As a result, there will be a heightened emphasis on the low power acceleration of these algorithms and the chips that run AI workloads in areas of intense compute power, like large language models, generative AI, and autonomous driving.”

Additionally, many are looking to AI to enhance existing processes and boost productivity. “In the chip design flow, there are several opportunities to optimize productivity by bringing in AI,” said Arvind Narayanan, senior director for product line management at Synopsys. “Decreasing process geometry is bringing its own set of challenges, but the time to design your chip either remains the same or is even shorter. The projection is that by 2030 there’s going to be a huge shortage in the workforce, on the order of 20% to 30% fewer designers. A transformative technology like AI can help fill the gap.”

Digital twins and data centers
There are many other examples of AI’s potential value to chip design engineers, such as integrating large language models with digital twins. “With the exponential growth of digital twins, a market set to reach $11.12 billion by 2030, more people will have access to, and want to use, digital twins,” said Dave King, senior product marketing manager at Cadence. “This is why we’re likely to see a greater desire to incorporate LLMs into digital twin technology — to make them an even more crucial element of workplace decision-making, because LLMs enable operators to ask questions of digital twins in a natural way.”
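In practice, the pattern is to ground the LLM in the twin’s live telemetry. The sketch below is purely illustrative: the twin-state dictionary and the caller-supplied `ask_llm` function stand in for any real digital twin platform or LLM API, not a Cadence product.

```python
import json

def ask_twin(question: str, twin_state: dict, ask_llm) -> str:
    """Answer a natural-language question grounded in current twin telemetry."""
    prompt = (
        "You are assisting a data center operator. Answer using only the "
        "facility state below.\n"
        f"State: {json.dumps(twin_state)}\n"
        f"Question: {question}"
    )
    return ask_llm(prompt)

# Example: operators query the twin conversationally instead of via dashboards.
state = {"rack_12": {"inlet_temp_c": 31.5, "power_kw": 14.2, "fan_rpm": 9000}}
# answer = ask_twin("Why is rack 12 running hot?", state, ask_llm=my_model.complete)
```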

In spite of the current headlong rush to AI, its use in the data center may stabilize. “AI will continue to be used in data centers to help solve smaller problems that it has been proven effective at. These include tasks like filling manual labor gaps, advising on energy management, or automating capacity management,” King said. “Still, AI will not yet be considered the primary solution when it comes to addressing large challenges, such as running data centers instead of human operators in light of current skills gaps. As a result of this, we’re likely to see investment in the areas of AI that come with more immediate small-scale gains.”

Data centers also are beginning to share training and inference with the edge, leading to more distributed intelligence and real-time responses. “AI inference on the edge means the smart edge is absolutely taking a step forward in enterprise operations, as well as across the board,” noted Michal Siwinski, chief marketing officer at Arteris. “For example, we’re going to see a lot more intersecting with automotive. Consumer and industrial are basically moving from unsophisticated electronics to fairly advanced electronics. That’s a huge disruption.”

AI’s influence is spreading
The types of chips that can run AI also are changing, which is especially important at the edge. DSPs, for example, are very efficient at doing one specific type of processing in vertical markets such as vision, audio, and lidar. Now, with the rollout of AI everywhere, there is a push to expand that capability.

“The motivations here include the need to reduce SoC area and keep a lid on overall power consumption,” observed Prakash Madhvapathy, director of product marketing, Tensilica audio/voice DSPs group at Cadence. “In edge-based and on-device AI applications, a standalone DSP, or a DSP in conjunction with an efficient accelerator, is highly desirable for a range of applications from tiny earbuds to autonomous driving. While the AI accelerator may work more or less independently, the trend is to pair it with an AI-capable and highly programmable DSP to act as an efficient fallback for future-proofing, in case the ever-evolving AI workloads introduce novel neural networks.”

Customization
At the same time, the drive toward customization, particularly for bespoke chips, is boosting interest in software-defined architectures (SDAs), in which functionality is defined by the software. “The product is really a software product,” noted Simon Davidmann, CEO of Imperas [now part of Synopsys]. “Just look at a Tesla. There’s tons of silicon, hundreds of processors, but it’s the software that is defining it all. The software is designed and architected up front, and then the silicon executes it. That means you have to do a lot of simulation upfront.”

This need for custom chips due to SDAs dovetails with RISC-V, which received a boost at the recent RISC-V Summit, when Meta announced it would use RISC-V for all the products in its roadmap.

“We see RISC-V becoming of more interest to chip designers because of the freedom that it gives them,” Davidmann said. “For example, we’re working with a company that’s building a new lidar chip that has 512 small cores. They couldn’t make it with what they could license, so they have to build it themselves. They don’t want to invent their own, so they went with RISC-V.”

Mark Himelstein, CTO of RISC-V International, is bullish in his outlook. “The RISC-V software ecosystem continues to grow, with a number of milestones, including the creation of the RISE effort to help expedite RISC-V open-source software, and the RISC-V International landscape and exchange, which enables developers to advertise the availability of software.”

But it’s not just about RISC-V. “People are building mixed-architecture SoCs, and that creates a whole different set of challenges,” said Arteris’ Siwinski. “All of a sudden you’re moving from a slightly more closed ecosystem to something that must be interoperable across all standards. How do we stitch it all together? That’s one of the key challenges.”

The answer may be chiplets, which eventually will become more standardized. “We’re going to see a lot more standardization about how chiplets will work together,” Siwinski said. “A chiplet is just going to become another type of IP that will have to be integrated.”

Others agree. “With foundry technologies advancing, and Moore’s Law slowing down, the semiconductor industry needs to find new ways of realizing performance gains, cost reductions, and yield improvements. This is why chiplets will be a focus across the industry in 2024,” said Richard Grisenthwaite, executive vice president and chief architect at Arm. “As the proliferation of this technology increases and the chiplet marketplace becomes more diverse, the focus will shift toward standardization and interoperability to ensure the quickest path to market for these more customized chips, enabling reuse in different markets. In 2024, we expect to see the industry come together to more clearly define the system-level capabilities and foundational standards that will enable chiplets to be used in a wider variety of systems without risk of fragmentation.”

Keep shifting left
One of the key reasons why chiplets and RISC-V are taking root is that performance and/or power improvements no longer are guaranteed at new process nodes. SoCs are being decomposed into various parts, and all of them need to behave as a system. That requires more customization, more co-design, and a better understanding of the overall architecture earlier in the flow.

“As software-defined architectures come into play, you have to focus more on systems design upfront,” said Nilesh Kamdar, senior director and portfolio manager for the RF/microwave, power electronics & device modeling EDA businesses at Keysight. “You cannot leave it to chance later. The entire work stream, from verification all the way to tape-out, has to be figured out upfront, defined, and designed for. You cannot just hope and pray that it’ll come together at the end.”

That doesn’t mean things get simpler, though. The trend of steadily increasing complexity will remain, noted Benjamin Prautsch, group manager for mixed-signal automation in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “As a result, any activity that helps with ‘shift left’ will be pursued. Here, a key pillar is EDA, which has a variety of challenges to work on, such as EDA for system partitioning — including chiplet-based systems — EDA for high-level digital synthesis, and EDA for analog design and verification.”

Fusing together all these pieces is a non-trivial affair, requiring a focus on true system-level design and how all the pieces fit together. “Users know they need to do true model-based system design, especially for software-defined, silicon-enabled systems,” said Neil Hand, director of strategy for IC verification solutions at Siemens EDA. “They want to know the starting point, because it’s very different than optimizing locally. When you’re trying to do system design and global optimization, it’s a whole new set of challenges.”

That will require new tools. “One of the challenges of true system-level design is how you communicate between domains,” Hand said. “How do you abstract a detailed model in a way that makes it usable in another domain? This is especially true when you start looking at shift left. How do I take process-level information and make it usable by a system design engineer? A lot of that is going to be enabled by AI/ML creating models that allow other areas of the system design to work. For example, it could be used to create an abstract system model of a complex SoC, so you can use it in your digital twin.”

Quantum computing
At the end of 2023 there were two significant breakthroughs in quantum computing. First, IBM debuted “IBM Quantum Heron,” the first in a new series of utility-scale quantum processors. The company also unveiled IBM Quantum System Two, IBM’s first modular quantum computer and a cornerstone of its quantum-centric supercomputing architecture. Second, a DARPA team created the first-ever quantum circuit with logical qubits, a critical missing piece in the puzzle to realize fault-tolerant quantum computing. Logical qubits are error-corrected to maintain their quantum state, making them useful for solving a diverse set of complex problems. IBM’s achievements were exciting enough to attract a 60 Minutes reporter, who gushed about biomedical breakthroughs enabled by never-before-possible calculations.

[Quantum computer closeup. Source: Adobe Stock / Bartek Wróblewski]

But the quantum age comes with a giant caveat. “Practically speaking, in our lifetimes, someone’s going to come up with a cryptographically relevant quantum computer,” said Scott Best, senior principal engineer at Rambus. “With such a machine, mathematicians have already figured out how to break all of the digital signature algorithms and the key exchange algorithm.”

Thus, anyone designing chips in 2024 has to address the likelihood that traditional encryption schemes will be broken by the time their new silicon hits the market. “Some of the simulations that people have done on hacking certain cryptography algorithms are legitimate. They could be in place by 2025,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “Decisions around security that are made today will inevitably be wrong. The only way to keep them from being wrong is to keep some kind of dynamic crypto, or as we call it, ‘crypto-agility’ in your device, such as an AES algorithm.”
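In software terms, crypto-agility usually means an indirection layer: stored data carries a tag identifying the algorithm that protected it, and implementations can be swapped by a (authenticated) firmware update without touching calling code. A minimal sketch under those assumptions, with illustrative names and the cipher implementations left to be registered:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple

@dataclass
class Cipher:
    encrypt: Callable[[bytes, bytes], bytes]   # (key, plaintext) -> ciphertext
    decrypt: Callable[[bytes, bytes], bytes]   # (key, ciphertext) -> plaintext

# Algorithm registry: new or replacement ciphers (e.g., post-quantum schemes)
# arrive via firmware update and are registered under a stable identifier.
CIPHERS: Dict[str, Cipher] = {}
DEFAULT_ALG = "aes-256-gcm"

def register(alg_id: str, cipher: Cipher) -> None:
    CIPHERS[alg_id] = cipher

def encrypt(key: bytes, plaintext: bytes,
            alg_id: Optional[str] = None) -> Tuple[str, bytes]:
    alg = alg_id or DEFAULT_ALG                # callers never hard-code a cipher
    return alg, CIPHERS[alg].encrypt(key, plaintext)

def decrypt(key: bytes, alg_id: str, ciphertext: bytes) -> bytes:
    return CIPHERS[alg_id].decrypt(key, ciphertext)  # dispatch on the stored tag
```

A real device would also authenticate the update path and handle per-algorithm key sizes, but the essential point is that no caller ever binds to a single algorithm.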

More data, higher performance, and sustainability
Data doesn’t just move through wires. It also moves through the atmosphere, and big improvements in technology are needed on both the transmitting and receiving sides to handle the massive increase in data. This is where 5G millimeter wave and 6G fit into the picture.

“In the RF and the millimeter wave space, there’s a huge jump happening, primarily driven by 6G,” said Keysight’s Kamdar. “As we look at some of the later stages of 5G, and then at the development of 6G, it’s going to be a huge jump. Essentially you’re going from carrier frequencies that are 6GHz or lower, to carrier frequencies of anywhere from 28GHz to 100GHz. That means the core semiconductor technology, the core modulation on your signals, and the figures of merit that you apply all may have to completely change. It’s such a huge step change in the market that most companies cannot continue using design techniques and workstreams from the past. They will have to come up with brand new ways of working things.”
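A back-of-the-envelope illustration (ours, not Kamdar’s) of why that frequency jump is so disruptive: free-space path loss grows 20 dB per decade of carrier frequency, so moving from 6 GHz to 100 GHz costs roughly 24 dB of link budget before millimeter-wave atmospheric absorption is even counted.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f_ghz in (6, 28, 100):
    print(f"{f_ghz:>5} GHz: {fspl_db(100, f_ghz * 1e9):.1f} dB over 100 m")
# ~88 dB at 6 GHz vs. ~112 dB at 100 GHz over the same 100 m:
# a 20*log10(100/6) ~= 24.4 dB penalty from the carrier frequency alone.
```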

This includes RF, and its basic workhorse — the mixed-signal chip. There will likely be design revamps, perhaps as soon as 2024. Here, DARPA said it plans to announce an initiative to establish an RF heterogeneous integration standard. Given all the interest in quantum, photonics, and WiFi, RF design is rising high on the list of the most “poachable” employee skillsets, said Kamdar. “I would really encourage more students and young career professionals to think about this space. Many companies, as soon as they find you have the right background in RF, will grab you.”

This is just one of the pieces in play. The demands of AI also will keep memory, plus power and performance, at the forefront of engineering concerns. “With the ongoing rollout of new generative AI, which is based on ever greater complexity, the need for high-performance processors is also increasing,” said Andy Heinig, head of department for efficient electronics at Fraunhofer IIS/EAS. “A key trend here is the need for more memory at all hierarchy levels. The need for highly specific hardware implementations is also being accelerated by the requirements of generative AI. As a result, the power density for the entire system will be drastically increased, with problems in terms of power delivery and heat removal.”

The need to solve these power challenges is being driven by both financial and environmental concerns. “Going into 2024, AI is an accelerant, where sustainability and efficiency will be the main obstacles to growing compute capacity,” said Jeff Wittich, Ampere’s chief product officer. “The lack of efficiency could halt growth if not solved, so companies will be prioritizing it more than ever.”

Sustainability initiatives are also being driven by regulatory concerns. “As we’re getting closer to some of the regulatory requirements, such as only electric vehicles moving forward in California or other specific carbon goals, systems companies are looking at the clock winding down and making sure that everything is built to sustainability standards,” said Arteris’ Siwinski. “A lot of innovation is accelerating because the deadlines are approaching.”

It’s possible that sustainability concerns will challenge, or even doom, one of the favorite AI scenarios, said Bob Beachler, vice president of product at Untether AI. “If the world moved to 100% autonomous vehicles today, the greenhouse impact would be larger than the current impact of all of the computers in all of the datacenters worldwide. If we want autonomous vehicles on the road, the traditional approach to AVs does not have the ‘AI horsepower’ required to make them work. We need a significantly more energy-efficient way of deploying AI to get autonomous vehicles on the road.”

Beyond 2024—reliability and rad-hardness
More electronics comes with a multi-faceted price tag. For example, shrinking digital devices costs more in dollars, in design time, and in reliability, and each of those categories can be broken down further. Higher density means a greater likelihood that a radioactive particle will strike a vital component. The solution is both radiation hardening and redundancy, which together could translate to 25% to 50% higher component costs.
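The classic redundancy scheme for storage elements is triple modular redundancy (TMR): three copies of each bit feed a majority voter, so a single upset is masked. Real implementations triplicate flip-flops in hardware; the sketch below just shows the voter’s logic in software terms.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies: each bit is taken from
    whichever value at least two of the three copies agree on."""
    return (a & b) | (b & c) | (a & c)

# A single-event upset flips one bit in one copy; the voter masks it.
stored = 0b1011_0010
copies = [stored, stored ^ 0b0000_1000, stored]   # copy 2 hit by a particle
assert majority_vote(*copies) == stored
```

The cost is the 3x storage plus voting logic, which is one reason rad-hardening shows up as a 25% to 50% premium on component costs.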

“At lower nodes, reliability can fall victim to alpha particles, which can lead to developers requesting more expensive, rad-hardened chips,” said Geoff Tate, CEO of Flex Logix. “We’re probably going to see more robust rad-hard design techniques for all types of storage elements and advanced nodes. You just can’t keep making the memory elements smaller and smaller. We’re not getting rid of alpha particles. They’re in our solar system.”

While this may not be an immediate need for many applications, Tate predicts requests will grow. “For most commercial customers, making components rad hard probably is not going to be an issue until the end of the decade. We’re just starting to see commercial customers requesting rad-hard storage elements for certain super-high reliability commercial applications.”

Siemens’s Hand agrees. “Reliability is going to become more important as we go into 2024. You’ll see more companies looking at what is needed,” he said. “Whether it’s rad-hard or functional safety or other techniques, it will come down to whichever mechanism is appropriate for the system at hand.”

Conclusion
Despite all the breakthroughs and excitement, the fundamentals of both engineering and business remain. “Everyone has to become more efficient in their design and implementation processes to get the job done,” Imperas’s Davidmann said. “If you don’t get there fast enough, somebody else is going to beat you to it.”

Corporate consolidation is another trend likely to continue as new problems arise, markets flood with startups seeking to solve them, and big companies acquire them for their technology, their talent, or both. What’s changing, though, is the pace at which all of this is occurring. It’s accelerating with the rollout of new technologies, and that will only continue throughout the decade and well into the next. Technology is here to stay, and semiconductors are the engines that will make it all work.


