What have you been reading over the past year? It is often a good indicator of the problems you most desperately need solved.
At the end of each year, I look back over the stories published and those that top the charts in terms of readership. I concentrate on the stories about EDA tools and flows, and the factors influencing them. These are good indicators of the problems designers and verification teams are facing today, and where they are looking for answers.
This year’s leading categories are fundamental changes within EDA; power, thermal, and efficiency issues; chiplets; and memory. Within those categories, stories that deal with the design side of an issue attract about 30% more readers than those that deal with verification. I have never quite worked out why this is the case, given that verification engineers outnumber designers in many companies. Perhaps the verification engineers are just too busy. (I would love to hear your views about this.)
Over the past few years, EDA has been stretched thin. Not only have EDA companies been racing to deal with new technology nodes and the introduction of new packaging options, but at the high end they are also facing a more systems-centric set of demands. The top story of the year, Shift Left Is The Tip Of The Iceberg, is one I wrote just a couple of months ago. On the surface, shift left may look like a minor change involving optimization strategies, or the introduction of new metrics, but it represents a fundamental move from an insular semiconductor perspective to the start of the systems age of design.
A surprising top story was Is There Any Hope For Asynchronous Design? In an era when power has become a fundamental design constraint, questions persist about whether asynchronous logic has a role to play. It is a design style said to have significant benefits, yet it has never resulted in more than a few experiments.
Looking at the industry in general, Ed Sperling wrote EDA Looks Beyond Chips. Top EDA executives have been talking about expanding into adjacent markets for more than a decade, but the broader markets were largely closed to them. In fact, the only significant step in that space happened in the reverse direction, when Siemens bought Mentor Graphics in 2016 for $4.5 billion. Three things have changed fundamentally since then.
There have been several announcements about the usage of AI within EDA. Karen Heyman wrote AI’s Role In Chip Design Widens, Drawing In New Startups. Using AI in EDA is reinvigorating the whole tools industry, prompting established players to upgrade their tool offerings with AI/ML features, while drawing in startups trying to carve out differentiated approaches to fill unaddressed gaps with new tools and methodologies.
Finally in this category, I wrote Will AI Disrupt EDA? Generative AI has disrupted search, it is transforming the computing landscape, and now it’s threatening to disrupt EDA. But despite the buzz and the broad pronouncements of radical changes ahead, it remains unclear where it will have an impact and how deep any changes will be.
An increasing number of designs are power-conscious today. This is no longer just the domain of battery-operated devices. Ed Sperling topped this category writing about The Rising Price Of Power In Chips. Transistor density has reached a point where these tiny digital switches are generating more heat than can be removed through traditional means. That may sound manageable enough, except it has created a slew of new problems that the entire industry must solve — EDA companies, process equipment makers, fabs, packaging houses, field-level monitoring and analytics providers, materials suppliers, research groups, and others.
A lot of people are concerned about designs for data centers. Ann Mutschler wrote about Architecting Chips For High-Performance Computing. The world’s leading hyperscaler cloud data center companies are launching heterogeneous, multi-core architectures specifically for the cloud, and the impact is being felt in high-performance CPU development across the chip industry.
Performance per watt is becoming the new optimization criterion. Ed Sperling wrote New AI Processor Architectures Balance Speed With Efficiency. Leading AI system designs are migrating away from building the fastest AI processor possible, adopting a more balanced approach that involves highly specialized, heterogeneous compute elements, faster data movement, and significantly lower power.
Heterogeneous integration is becoming more commonplace, but the industry has still not solved all of the problems necessary for a third-party chiplet market to develop. Ann Mutschler wrote the top-read story in this category with Chiplet IP Standards Are Just The Beginning. Data and protocol interoperability standards are needed for EDA tools, and there are more hurdles ahead. Customized chiplets will be required for AI applications. This was part of an experts at the table series.
Chiplets cannot exist without an interposer, and the industry is still grappling with 2.5D Integration: Big Chip Or Small PCB? I tackled the issue of whether a 2.5D device is a printed circuit board shrunk down to fit into a package, or a chip that extends beyond the limits of a single die. While that may seem like hair-splitting semantics, it can have significant consequences for the overall success of a design.
Another experts at the table series looked at Defining The Chiplet Socket and What Comes After HBM For Chiplets. The industry may have started with the wrong approach for enabling a third-party chiplet ecosystem, but who will step in and fix it? This was a lively discussion that brought together ideas from systems, foundry, EDA, and interconnect companies.
While memory may seem like a sleepy backwater, it is an area that is hitting all kinds of limitations. That means advances are more likely to be out-of-the-box. Karen Heyman conducted an experts’ discussion that looked at The Future Of Memory. From attempts to resolve thermal and power issues to the roles of CXL and UCIe, the future holds a number of opportunities for memory.
She also tackled the issue of SRAM Scaling Issues, And What Comes Next. The inability of SRAM to scale has challenged power and performance goals, forcing the design ecosystem to come up with strategies that range from hardware innovations to rethinking design layouts. At the same time, despite the age of its initial design and its current scaling limitations, SRAM has become the workhorse memory for AI.
Responding to the importance of this area, Semiconductor Engineering released the eBook Memory Fundamentals For Engineers, which covers nearly everything you need to know about memory. That includes detailed explanations of the different types of memory; how and where they are used today; what’s changing; which memories are successful now and which might be in the future; and the limitations of each memory type.
Well, that’s a wrap for 2024. Thank you to everyone who has contributed to all of the stories we have prepared this year. Without you, none of it would be possible, and it is you who make us look good. Well, maybe “good” is a stretch. Looking forward to another exciting year in 2025, which might have many surprises in store for all of us.