Chip design complexity is overwhelming spreadsheets, and they are prone to errors. However, they're still useful for some jobs.
Spreadsheets have been an invaluable engineering tool for many aspects of semiconductor design and verification, but their inability to handle complexity is squeezing them out of an increasing number of applications.
This is raising questions about whether they still have a role, and if so, how large that role will be. There are two sides to this issue. On one side are the users who see them as providing a quick, easy, and cheap way to capture, store, and perform limited analysis on data. On the other side are EDA vendors, which have developed an extensive array of specialized tools capable of taking over when spreadsheets are no longer up to the task.
In many cases, legacy keeps the old solution in use longer than perhaps it should be. “You’d be surprised how many people still use spreadsheets,” says Ian Rippke, system simulation segment manager at Keysight. “They just try to hide it. The status quo has been maintained in industries like aerospace and defense for a little bit longer than in the commercial, wireless, or consumer industries. The latter are trying to move faster than their competitors and get products out to market first. The challenge we’ve seen is that the complexity of systems has outgrown what you can efficiently model using a spreadsheet. At the same time, the alternatives have become much more accepted by the industry.”
Many of us continue to use spreadsheets personally. “Spreadsheets are a fundamental tool in the engineer’s toolbox,” says Simon Davidmann, founder and CEO of Imperas Software. “I see them used in many different ways, from simple things like checklists, all the way to being utilized as a verification plan. For example, when someone builds a RISC-V core, we want to know how their model is configured. It’s a way of storing data and conveying information. The tabular, columnar format is perfect for static types of engineering information.”
Frank Schirrmeister, senior group director for solutions and ecosystem at Cadence, says that in addition to verification, “the other major area is budgeting, architecture, prediction, and analysis. This is broad stroke type of analysis of your design. The real value is in the data being captured.”
Checklists are used in many ways. “A lot of people use them as enhanced to-do lists,” says Paul Graykowski, senior technical marketing manager for Arteris IP. “This is especially true at the block level, but not usually for a full system. I’ve seen things like coverage goals, testing goals, things that we want to make sure that we’ve exercised. But where they seem to lose steam is when you’re getting into the real nitty-gritty details. To really get at the information you have to peel that onion and go deeper, and that’s where the spreadsheet falls short.”
Keysight’s Rippke agrees. “If you’re my customer and I’m your provider, it’s a great mechanism to make sure that we’re all on the same page of what we expect of each other. It is a place to define what it is meant to do, how I know if I have met the performance requirements. The spreadsheet does make a very nice way to tabulate that, and makes it easy to see. But you’ve got to quickly transition into a tool where you can start doing ‘what if’ scenarios, and start exploring different ways to partition the design so you can allocate gain and noise and other parameters, such as power.”
For verification planning, the level of abstraction remains high enough. “It enables them to start documenting things,” says Imperas’ Davidmann. “It’s quite a nice, structured format, and this can be a living document. It becomes a set of requirements at the beginning — these are the things we’ve got to cover, and then you can use its calculating facilities to show you the metrics of where you are. This is not detailed coverage like functional coverage tools provide. But if you’ve got these 171 things to do, have you got 140 ticks or 150? It is an engineering management tool.”
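Davidmann’s tally of ticks against a requirements list is simple enough to sketch in a few lines. A minimal, hypothetical illustration (the item names and counts are invented, not from any real plan):

```python
# Hypothetical verification-plan tally, mirroring the "171 things to do,
# have you got 140 ticks?" style of tracking described above.
def plan_progress(items):
    """Return (ticked, total, percent complete) for a list of (name, done) pairs."""
    total = len(items)
    ticked = sum(1 for _, done in items if done)
    pct = 100.0 * ticked / total if total else 0.0
    return ticked, total, pct

plan = [("reset sequence exercised", True),
        ("illegal opcode trap", True),
        ("CSR access modes", False)]
done, total, pct = plan_progress(plan)
print(f"{done}/{total} ticks ({pct:.0f}%)")  # → 2/3 ticks (67%)
```

This is deliberately the level of arithmetic a spreadsheet’s calculating facilities handle well; the point of the quote is that it is management-level tracking, not functional coverage.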
System analysis
Spreadsheets have long been used by architects and systems analysts, providing quick estimates for things such as area and power, and enabling early tradeoffs. While it was understood these estimates were not perfect, delaying such decisions until more accurate data became available would be expensive. “It’s the easiest way to get started,” says Tim Kogel, principal applications engineer at Synopsys. “With some experience, they can be developed into an analytical performance model to explore theoretical boundaries and high-level tradeoffs. Later, this data can be used for the calibration of simulation models, as well as for plausibility checks and initial validation of simulation results.”
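As a hedged illustration of the kind of analytical performance model Kogel describes, a roofline-style estimate bounds attainable throughput by peak compute and by memory bandwidth. All numbers here are hypothetical, chosen only to show the shape of the calculation:

```python
# Illustrative spreadsheet-style analytical model (all numbers hypothetical):
# roofline estimate of whether a workload is compute- or bandwidth-bound.
def attainable_gflops(peak_gflops, peak_gbps, arithmetic_intensity):
    """Attainable throughput = min(peak compute, bandwidth * flops-per-byte)."""
    return min(peak_gflops, peak_gbps * arithmetic_intensity)

# e.g. a 100 GFLOP/s core with 25 GB/s memory, running a 2 FLOP/byte kernel
print(attainable_gflops(100.0, 25.0, 2.0))  # → 50.0, i.e. bandwidth-bound
```

A one-line model like this is exactly what lives comfortably in a spreadsheet cell; the article’s point is that once timing becomes data-dependent, such closed-form bounds stop being trustworthy.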
This approach has limitations, however. “You want to start out with some simple estimates to do your design, but then refine those estimates throughout the entire design process,” says Tony Mastroianni, advanced packaging solutions director at Siemens EDA. “If it’s a spreadsheet, that is harder to do. Who’s going to update that data? It would make sense to have tools that, as you have more accurate data and more information, just update those models. And then, having a convenient mechanism to look at that data throughout the whole process.”
Floor-planning used spreadsheets in the past. “Area estimation is really important,” says Priyank Shukla, senior staff product manager for Interface IP at Synopsys. “A 10% error in total area is a significant cost. A spreadsheet will just assume a block is a hard boundary, but there is some significant space — in some cases about 20% — where you can overlay digital logic and reduce the total area. That is just with one hard macro and digital logic. In a modern SoC, with a mobile application processor, USB, MIPI, DDR, PCIe etc., the errors with the spreadsheet calculations are reaching about 40% to 50%, plus or minus.”
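Shukla’s point about overlaying digital logic can be made concrete with back-of-envelope arithmetic. This sketch uses invented macro and logic areas and treats his “about 20%” figure as a single overlay fraction, which is a simplification of real floor-planning:

```python
# Back-of-envelope area estimate (hypothetical numbers): a plain spreadsheet
# sum treats each hard macro as an opaque box, while overlaying digital logic
# over a fraction of macro area (the ~20% cited above) shrinks the footprint.
def naive_area(macros_mm2, logic_mm2):
    return sum(macros_mm2) + logic_mm2

def overlaid_area(macros_mm2, logic_mm2, overlay_fraction=0.20):
    # Assume overlay_fraction of each macro's area can host digital logic.
    reusable = overlay_fraction * sum(macros_mm2)
    return sum(macros_mm2) + max(logic_mm2 - reusable, 0.0)

macros, logic = [4.0, 6.0], 5.0
print(naive_area(macros, logic))     # → 15.0 mm^2
print(overlaid_area(macros, logic))  # → 13.0 mm^2
```

The gap between the two numbers is the kind of systematic error the spreadsheet cannot see, and it compounds as the macro count grows.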
And performance analysis is now dependent on many factors. “Automotive ECUs used to be deeply embedded systems with a restricted set of functions,” says Synopsys’ Kogel. “Now, a growing number of functions are integrated at zonal and central compute nodes. It becomes very hard to predict the KPIs of heterogeneous computer architectures with data-dependent timing, like distributed NoCs, DDR memories, cache hierarchies, cache coherency, etc. These application and architecture trends lead to a high degree of uncertainty during the design specification process. Relying only on static spreadsheets, the choices are to compensate uncertainty with over-design, or to accept the risk of missing KPI goals due to under-design. The solution is to use architecture modeling to validate the design specification before committing to an implementation.”
No single type of model can deal with all of it. “Why do we have dynamic simulators?” asks Davidmann. “Why do we need instruction accurate simulators, and performance simulators, and RTL simulators, and gate level simulators? Why don’t you do it on a spreadsheet? There are certain things you can’t use spreadsheets for. Why do we have Verilog? Because you need that to describe what you’re doing. With performance analysis for complex multi-processor systems, you can say that there is contention here, or a resource constraint there, but it is difficult to model how it’s all going to interact. That’s why you need sophisticated tools.”
The dependencies have become so intricate that it’s no longer possible to model all of them and maintain necessary performance. “Power is clearly something where the intricacies and the dependencies have become so complex that you will not do all the calculation within your spreadsheet,” says Cadence’s Schirrmeister. “But it’s a valid start to look at the budget. Then, you refine as you get more accurate data, but you won’t simulate all the activity data. You get data out of tools, and you may still have a spreadsheet somewhere that will keep the min and max values, but it doesn’t do the calculations for you. There remains a role for the spreadsheet, but its role has been changing over the past couple of decades in terms of what it holds.”
Packaging
Packaging is one area where spreadsheets are used extensively. “In a prior design services company that I worked for, we used spreadsheets for package planning,” says Siemens’ Mastroianni. “They served us well for the first 15 or 16 years. We went from simple wire bonds to flip chip. That was kind of a jump. You can get away with spreadsheets, even if it’s using an advanced packaging technology, but once you start dealing with hundreds of thousands of terminals, the spreadsheet becomes unwieldy. Once we hit 2.5D, you can’t do it in a spreadsheet.”
The spreadsheet worked in the past because it was primarily a data transfer function. “The disciplines of packaging and ASIC were so different, used totally different tools, had different geometries, and there really was not a huge amount of collaboration required,” says Mastroianni. “It was a convenient, readable mechanism. We used it to track I/O type of information, capacitive loading, and test mapping. This pin is a functional pin in one mode, but in a test mode it may have a different role.”
Multiple dies seem to create the roadblock. “As more package designs become multi-chip(let) systems in a package (SiP), relying on spreadsheets for connectivity is failing at many levels,” says John Park, product management group director for IC packaging and cross-platform solutions at Cadence. “For designers that already have moved to multi-chip(let) packaging, the old days of error-prone manual checks based on multiple spreadsheets have been replaced with top-level planning and optimization tools. These tools provide the ‘golden’ netlist needed to sign off the package design, along with providing capabilities to optimize and manage multi-chip(let) connectivity, floorplan and stack. The most advanced planning tools also provide early-stage thermal and power modeling, before detailed place-and-route.”
There are increasing numbers of interactions within the package, as well. “It’s not enough to bring in a model of a piece of the system, of the package,” says Al Lorona, solution engineer for Keysight. “You have to model the interaction between that thing that you just brought in, and other things nearby that are listening to that guy. You have to be looking at the EM effects in the package. As frequency goes up, this becomes more challenging. Nothing is passive at those frequencies. Everything is finding ways to radiate and propagate. Those are all pushing modern tools toward doing stuff they didn’t do before, but a spreadsheet may not be able to do at all.”
Spreadsheet limitations
There are a few fundamental issues with spreadsheets that are not easy to fix. “If you don’t use revision control, then you’re looking for trouble,” says Mastroianni. “If you have somebody not using the latest revision, or if somebody makes a change and doesn’t notify everyone that they changed the pin name, then nobody updates the spreadsheet. That’s another type of error where you are relying on human intervention. Whenever you have to rely on somebody to update information manually, that’s where those human types of errors can sneak their way in. The more complex the design, the higher the probability you’re going to have these kinds of problems.”
This has been apparent for some time. “In a prior life, there were collaborative projects, and we would use spreadsheets as a common denominator way of sharing issues,” says Arteris IP’s Graykowski. “It was effective, but it was ugly. We maintained a lot of information through there. You had all the connections, the memory map information, and it was cool to have all that defined in a given spot. But the reality was, something would change, and somebody inevitably would not make that update, or not look to see if it had been updated. There were too many versions of that document floating around. You never knew which one was which, and so the data would get stale. It was just very painful.”
And that leads to the next problem: debug. “One of the biggest problems with spreadsheets for complex problems is it is very hard to debug,” says Davidmann. “How do you do regression testing on a spreadsheet? How do you even interactively test it? In one of my previous companies, we used spreadsheets to plot the finances of the company and do the cash flow forecasts. There were always bugs in it. There were always holes, because you can’t really test it and check the evolution of it with regression testing.”
Mastroianni provides a simple example that almost led to a chip failure. “There was a chip where one column would be the pin name in the chip, one would be for the package. If it had the same name, it was implied that those were connected. We had one case where everything looked good and we manually checked it. We wrote some Visual Basic to do some rudimentary checking. We taped out the design and we found out, before manufacturing, that we had a problem. One pin was called ABC_XYZ, and that was the pin name in the package and the chip — or so we thought. It turns out there was a space in one of them. In the spreadsheet, one was ABC and the other one was ABC space, which you couldn’t even see. We ended up with a pin that was floating.”
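The trailing-space bug Mastroianni describes is trivial to catch programmatically once you stop relying on eyeballs. This is a sketch of such a check, not the Visual Basic his team actually wrote; the pin names below are hypothetical apart from the ABC_XYZ example from the quote:

```python
# Sketch of the kind of check that would have caught "ABC_XYZ " vs "ABC_XYZ":
# flag chip/package pin pairs that differ only in (invisible) whitespace.
def whitespace_mismatches(pin_pairs):
    """pin_pairs: list of (chip_name, package_name). Returns suspect pairs."""
    suspects = []
    for chip, pkg in pin_pairs:
        if chip != pkg and chip.strip() == pkg.strip():
            suspects.append((repr(chip), repr(pkg)))  # repr() makes spaces visible
    return suspects

pairs = [("ABC_XYZ", "ABC_XYZ "), ("CLK", "CLK"), ("RST_N", "RSTN")]
print(whitespace_mismatches(pairs))  # → [("'ABC_XYZ'", "'ABC_XYZ '")]
```

Note that a genuinely different name (RST_N vs. RSTN) is intentionally not flagged here; the check isolates the one failure mode that the spreadsheet renders invisible.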
Build or buy
When something gets to be too complicated, the choice is essentially build or buy. “At some point everyone decides there is a better way, but is it worth the trouble of building a tool?” asks Mastroianni. “Thankfully, the CSV format provides a transition opportunity. I’d rather generate the data in a format than have someone type it. Or, if you have a tool, you can generate and export that data and run the analysis externally. If you can automate that, then it’s a convenient format that is pretty agnostic, so different tools can use that for transferring information conveniently.”
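The hand-off Mastroianni describes is easy to sketch with the standard csv module. The column names and values here are hypothetical, standing in for whatever pin data a real tool would export:

```python
# Minimal sketch of CSV as a tool-agnostic hand-off format: one tool exports
# pin data, another loads it for external analysis. Columns are hypothetical.
import csv
import io

exported = io.StringIO()  # stands in for a file written by the exporting tool
writer = csv.DictWriter(exported, fieldnames=["pin", "mode", "load_pf"])
writer.writeheader()
writer.writerow({"pin": "ABC_XYZ", "mode": "functional", "load_pf": "2.5"})

exported.seek(0)  # the consuming tool reads the same stream back
rows = list(csv.DictReader(exported))
print(rows[0]["pin"], rows[0]["load_pf"])  # → ABC_XYZ 2.5
```

Because both ends agree only on column names, either side can be a spreadsheet, a script, or an EDA tool, which is exactly the agnosticism the quote values.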
That can be highly valuable. “In an ecosystem, you have component vendors who produce blocks, and they’ve done this deep design and characterized it well, and built it, and taken it to the lab and measured it over all these different parameters, and then provided their customers with a table or a simple graph,” says Rippke. “There hasn’t been a nice way to take all of that painstakingly extracted data and provide it to the next person in the design chain. That necessitates a data translation, which spreadsheets are fantastically good at. You want to take a huge amount of data, dump it somewhere very quickly, and be able to look at it and correct it.”
Spreadsheets are often built into tools, as well. “Some of our tools have spreadsheet wizards that allow you to perform things like RF budget analysis in a spreadsheet-like manner,” says Schirrmeister. “You can add RF blocks, and then do the calculations of noise, cascaded gain, and things like that. It is a contained problem, but it’s about putting the right data in, so that’s why we have a wizard to create it.”
Scripting is often a middle ground. “One of the things that we’ve been seeing recently is a lot of people are dumping stuff into Python, and using Python algorithms to operate on data,” says Davidmann. “One reason for that is Python frameworks have lots of libraries, and you can do really sophisticated analysis and graphics on datasets. If you capture your design datasets to do with testing or to do with performance, you can get good visualization of it using Python libraries. You could write programs to do it, but most of the time it is unnecessary.”
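As a small, hedged example of the Python-side analysis Davidmann describes, the standard statistics module can summarize a dataset in ways a single spreadsheet formula tends to hide. The runtimes below are invented:

```python
import statistics

# Hypothetical per-test runtimes from a regression dataset, including one
# obvious outlier -- the kind of data the quote suggests dumping into Python.
runtimes_s = [12.1, 11.8, 13.0, 45.2, 12.4]

mean = statistics.mean(runtimes_s)
median = statistics.median(runtimes_s)
stdev = statistics.stdev(runtimes_s)

# A median far below the mean flags skew that a lone average cell conceals.
print(f"mean={mean:.1f}s median={median:.1f}s stdev={stdev:.1f}s")
```

From here it is one step to the library-backed visualization the quote mentions, which is where Python pulls decisively ahead of spreadsheet charting.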
Conclusion
The spreadsheet was one of the primary tools that ushered in the age of the PC. It is a tool that has a lot of hooks, and it is capable of creating a lot of functionality. Many people are familiar with it, and it is intuitive enough to perform simple functions very quickly.
But the reality is that it’s a general-purpose tool, one not necessarily targeted exactly at what you are trying to achieve. It is not possible to extend or stretch it into something that satisfies everyone or every task.
For every task where the spreadsheet is replaced by something more sophisticated, there is probably a new task for which the spreadsheet will be the ideal tool. The spreadsheet is not dead.