Automatically developing the most appropriate model and its fidelity for a given task remains a challenge.
The EDA industry has advanced by leaps and bounds through innovation. Every time we approach a new technology node, many algorithms have to be re-imagined. As the late Jim Ready often pointed out to me, compared to the world of software development, these semiconductor technology changes are the hardware equivalent of what Fred Brooks, in his seminal article “No Silver Bullet—Essence and Accident in Software Engineering,” referred to as “essential” changes. They break development methods, and we need to address them. Of course, semiconductor advancements have enabled the meteoric growth in complexity and the cost reduction of our day-to-day electronic gadgets.
The latest innovative development methods include the use of machine learning (ML) to increase productivity further. And they are pervasive throughout the flow, from ML-optimized simulation and formal verification, through implementation, library characterization, and design for manufacturing (DFM), to PCB design. While the initial efforts were somewhat hidden inside the tools, recent advancements capture the know-how of the engineers operating the design flows and make their work much more productive using the available compute power in the cloud. We often refer to these as ML-inside and ML-outside, respectively. Paul McLellan has summarized some of the recent advances nicely here and here.
We are just at the beginning of the advances that ML in EDA will enable. The process reminded me of what we referred to as parameter sweeps for architecture optimization back in the ’90s, with the clever twist of not simply running all options in brute-force fashion but using reinforcement learning to understand when to stop pursuing an approach and how to optimize the parameter set for the next run.
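To make the idea concrete, here is a minimal, purely illustrative Python sketch of such a sweep, using a simple epsilon-greedy bandit as a stand-in for the reinforcement learning. The run_flow function, the parameter names, and the scoring are invented for this example and do not represent any particular tool or flow.

```python
import random

# Toy stand-in for a full tool run: returns a quality-of-results score for a
# given parameter set. In practice this would be a complete flow run.
def run_flow(params):
    effort, target_freq = params
    return 0.6 * effort + 0.4 * (2.0 - abs(target_freq - 1.2)) + random.gauss(0, 0.05)

# Hypothetical candidate parameter sets: (effort level, target frequency in GHz).
candidates = [(e, f) for e in (0.25, 0.5, 0.75, 1.0) for f in (0.8, 1.0, 1.2, 1.4)]
scores = {c: [] for c in candidates}

def mean_score(c):
    return sum(scores[c]) / len(scores[c]) if scores[c] else float("-inf")

# Epsilon-greedy bandit: instead of sweeping every candidate equally (brute
# force), spend most of the run budget on the best-scoring candidates so far,
# while still exploring occasionally.
epsilon, budget = 0.2, 60
for _ in range(budget):
    if random.random() < epsilon or not any(scores.values()):
        choice = random.choice(candidates)        # explore
    else:
        choice = max(candidates, key=mean_score)  # exploit
    scores[choice].append(run_flow(choice))

print("best parameter set so far:", max(candidates, key=mean_score))
```

The point of the sketch is only the budget allocation: most runs go to promising parameter sets instead of being spread evenly across the whole sweep.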
So, what about architecture design and the underlying models it requires?
Recently, I find myself in more and more discussions about digital twins. And what becomes very clear, very fast in these conversations is how much the scope of the problem matters. For instance, with our partners in aerospace and defense, it sometimes takes significant time to align a discussion on the difference between considering an entire airplane, its subsystems, or the underlying chips and software.
I often return to the classic V-diagram to discuss the complexity of arguably one of the most complex monolithic “systems of systems,” the F-35. It comprises 200,000 parts made by 1,600 suppliers, houses 3,500 integrated circuits with 200 distinct chips, and runs well over 20,000,000 lines of software code. As a result, designers face hugely intricate hardware/software interactions as well as system-level interactions across multiple domains—mechanical, electronic, thermal, etc. It’s mind-boggling!
Ericsson’s Anders Forsen frequently pointed out to me in the late ’90s during discussions on the Felix System-Design Initiative: “Remember, your seemingly complex design only becomes a part of a much bigger system.” Anders was a mobile network specialist, correcting my assumption that a phone was complex. As complex as the F-35 is, it becomes “only” a component in its much more complex system environment.
This effect is illustrated in the V-diagram above with the successive loops of “design, partition, refine” and “verify, validate, integrate.” As an industry, we are building digital twins for various purposes, and we often lose track of their scope and the fidelity required to answer specific questions. We may not yet have enough computing power available to model the entire plane at the accuracy necessary for tasks at the subsystem and chip level, say, the chip itself. Emulation as well as virtual and physical prototyping are great vehicles to enable software development for chips and systems. Modeling the entire plane at that level of accuracy, however, is not feasible for various reasons. The overall complexity is one; the limited availability of the required Verilog or VHDL descriptions is another.
That’s where model abstraction comes in. Mesh models enabling computational fluid dynamics (CFD) work well for the entire plane but can abstract away the electronics inside. Transaction-level models of hardware blocks in a chip often “abstract away” detailed timing information. What is fascinating is that, so far, all attempts to automate the creation of more abstract models from more detailed descriptions have failed. Conversely, the synthesis direction (creating more detailed models from abstract ones) has only been “solved” for constrained cases. Higher-level models in C/C++ suitable for high-level synthesis are still fundamentally different from models that execute fast in a system-simulation environment. And the tragic consequence is that when two models of the same “object” exist, there are almost certainly defects present in one but not the other.
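As a toy illustration of what “abstracting away” timing means, and of why two hand-written models of the same object tend to drift apart, consider these two Python models of a hypothetical memory block. The classes, latencies, and addresses are invented for this sketch and stand in for real cycle-accurate and transaction-level models.

```python
class TimedMemory:
    """Cycle-approximate model of a hypothetical memory: timing is explicit."""
    READ_LATENCY = 3   # assumed wait cycles per read, for illustration only

    def __init__(self):
        self.data, self.cycles = {}, 0

    def write(self, addr, value):
        self.cycles += 1
        self.data[addr] = value

    def read(self, addr):
        self.cycles += self.READ_LATENCY   # the detail a TLM abstracts away
        return self.data.get(addr, 0)


class TransactionMemory:
    """Transaction-level model of the same block: same function, no timing."""

    def __init__(self):
        self.data = {}

    def write(self, addr, value):
        self.data[addr] = value

    def read(self, addr):
        return self.data.get(addr, 0)


# Functionally the two models agree ...
timed, tlm = TimedMemory(), TransactionMemory()
for m in (timed, tlm):
    m.write(0x10, 42)
assert timed.read(0x10) == tlm.read(0x10) == 42

# ... but only the timed model can answer "how many cycles did that take?",
# and since both are maintained by hand, a defect fixed in one can easily
# remain in the other.
print("timed model consumed", timed.cycles, "cycles")
```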
Consequently, hybrid setups are the best approach today, mixing levels of fidelity for system simulation and digital twins. They keep only the parts of a design that impact the functionality under investigation at a more accurate level, while representing the rest of the system with more abstract models. Developing and choosing the most appropriate model and its fidelity for a given task, whether in a digital twin or for hardware-software verification, thermal analysis, or computational fluid dynamics, is not yet automated.
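Here is a sketch of the hybrid idea, again purely illustrative: the block under investigation is represented by a detailed (and expensive) model, while the rest of the system stays abstract. The block names and relative costs are made up for this example.

```python
class AbstractBlock:
    """Fast functional model: minimal work per simulated step."""
    COST_PER_STEP = 1

    def step(self):
        return self.COST_PER_STEP


class DetailedBlock:
    """Detailed model of the same kind of block: far more work per step."""
    COST_PER_STEP = 1000   # assumed relative cost, e.g. running actual RTL

    def step(self):
        return self.COST_PER_STEP


def build_system(block_under_test):
    """Compose a small system that is detailed only where it matters."""
    blocks = ("cpu", "dsp", "modem", "memory_controller")
    return {name: DetailedBlock() if name == block_under_test else AbstractBlock()
            for name in blocks}


def simulate(system, steps):
    """Total simulation cost as a crude proxy for wall-clock runtime."""
    return sum(block.step() for block in system.values() for _ in range(steps))


# Investigating the DSP: only the DSP pays the detailed-model price.
hybrid_cost = simulate(build_system("dsp"), steps=100)
fully_detailed_cost = 4 * 100 * DetailedBlock.COST_PER_STEP
print(f"hybrid: {hybrid_cost}, fully detailed: {fully_detailed_cost}")
```

The design choice the sketch highlights is the one the text describes: fidelity is a per-component decision driven by the question being asked, not a global setting.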
By the way, the industry has also tried to apply ML techniques to the challenge of “abstraction of hardware models for software development,” so far with mediocre success, as far as I know from discussions with partners. Models created in this fashion work for cases within the realm of their training datasets but show substantially divergent behavior for cases they were not trained on.
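A small, invented numerical illustration of that failure mode: a surrogate fitted to behavior observed only under light load tracks the detailed model well inside that range but misses the saturation behavior it has never seen. The latency function, training range, and polynomial surrogate are made up for this sketch and do not reflect any specific tool or result.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_latency(load):
    # Hypothetical "detailed model" behavior: latency blows up near saturation.
    return 1.0 / (1.05 - load)

# Training data covers light loads only (0 .. 0.7).
train_load = rng.uniform(0.0, 0.7, 200)
train_latency = true_latency(train_load)

# Fit a simple polynomial surrogate -- the "abstracted" learned model.
surrogate = np.poly1d(np.polyfit(train_load, train_latency, deg=3))

for load in (0.3, 0.6, 0.95):
    print(f"load={load:.2f}  detailed={true_latency(load):6.2f}  "
          f"surrogate={surrogate(load):6.2f}")
# At loads 0.30 and 0.60 (inside the training range) the surrogate tracks the
# detailed model closely; at 0.95 it extrapolates and misses the saturation.
```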
Let’s celebrate the breakthroughs the industry has made in applying ML to EDA. It’s just the beginning, and there are still undiscovered territories, like the automated creation of abstracted models. Here’s to job security for us 21st-century engineers!