The informal structural hierarchy used in semiconductor design is imperfect but adequate for most tasks, yet other hierarchies are needed.
For decades, a form of structural hierarchy has been the principal means of handling complexity in chip design. It is not always perfect, and there is no single ideal way to divide and conquer, because the best division depends on the analysis being performed. In fact, most systems can be viewed through a variety of different hierarchies, each equally correct, which together form a heterarchy.
The easiest way to think about the form of hierarchy in use today is to ask an engineer to conceptually design a system. They likely will start to draw a block diagram containing big blocks with labels such as CPU, encoder, display subsystem, etc. This is not a functional hierarchy, even though many of the divided blocks are considered to provide a function. Neither is it a pure structural decomposition, because within a chip everything becomes an amorphous sea of transistors.
That diagram roughly follows the hierarchy developed for printed circuit board design, which was a pin hierarchy. At the lowest level you probably used a logic library, such as the Texas Instruments 7400 series. Those devices had pin maps. The next level of hierarchy was the board, and the pins that connected to the backplane. There was rarely any hierarchy in between, with schematics just being spread across multiple ‘sheets.’ Later, as systems became more complex, structural hierarchies also were supported.
Fig. 1: A typical block diagram for a complex chip, circa 2013. Source: Texas Instruments
This form of hierarchy provides an encapsulation that allows development of each of the blocks to proceed somewhat in isolation, with interdependencies minimized. The top level becomes the means by which those blocks are interconnected. Each of the divided blocks can then go through a similar decomposition.
There are a number of reasons why hierarchies are used. “Capacity is one,” says Marc Swinnen, director of product marketing at Ansys. “The problem gets too big, and you have to break it up into pieces. Another is concurrent engineering. You have multiple teams that want to work on the design at the same time, so you break it up and work on the parts separately. Third is reuse. You want to re-use blocks that others have designed. A subtle form of that is the standard cell library, which itself is a form of hierarchy. The fourth reason is manageability of data volume. A fifth reason is repetitive structures, such as memories or multiple cores, where you just have natural reuse. A sixth reason is mixed domain, such as analog/digital, where you have different design styles in each block. You will be using different tools, and so you break them up hierarchically.”
Capacity
As designs get larger, many algorithms take increasing amounts of time to run. Breaking them down can result in faster execution times and require fewer resources. “For very large designs, when you go into physical implementation, it can take days or even weeks to finish,” says Jim Schultz, senior product manager for the Synopsys digital implementation group. “If you mess something up, the costs can be significant. Hierarchy is a way to divide and conquer. It allows us to know we can close one section of the design at a time. As we keep adding functionality and chips keep getting bigger and bigger, it’s not only about runtime, but also about running out of resources. The amount of memory that it takes to place all these reusable instances exceeds capacity.”
To do hierarchical analysis, the boundary conditions must be set correctly for each block, and the boundaries themselves must then be analyzed at the top level.
“Flat analysis takes a lot of time but provides full accuracy,” says Rimpy Chugh, staff product manager for Synopsys. “Taking a black-box approach across the hierarchy gets you more speed, but you’re losing accuracy. This calls for a specialized solution (see figure 2) where interface logic to a block may get preserved so you are getting the benefit of speed and accuracy at the same time. It is possible to generate an abstract model at the IP level, and then use it at the SoC level.”
Fig. 2: Hierarchical flow using abstract models. Source: Synopsys
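As a rough illustration of the interface-logic idea, and not any vendor’s actual flow, the Python sketch below prunes a hypothetical block’s gate graph down to the logic within a couple of levels of its boundary pins and discards the rest. This is what lets an abstract model stay small while keeping the boundary context intact:

```python
from collections import deque

def interface_logic(fanout, boundary_pins, depth=2):
    """Keep only the gates within `depth` hops of a block's boundary pins.

    fanout maps each pin/gate to the gates it drives; the traversal here is
    undirected, so logic feeding the outputs is preserved as well.
    """
    # Build an undirected adjacency view of the gate graph.
    adj = {}
    for src, dsts in fanout.items():
        for dst in dsts:
            adj.setdefault(src, set()).add(dst)
            adj.setdefault(dst, set()).add(src)

    keep = set(boundary_pins)
    frontier = deque((p, 0) for p in boundary_pins)
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in keep:
                keep.add(nxt)
                frontier.append((nxt, d + 1))
    return keep

# Hypothetical block: A/B are input pins, Z is an output pin.
gates = {
    "A": ["g1"], "B": ["g1"],
    "g1": ["g2"], "g2": ["g3"], "g3": ["g4"], "g4": ["g5"], "g5": ["Z"],
}
print(sorted(interface_logic(gates, {"A", "B", "Z"}, depth=2)))
# Deep internal logic (here g3) drops out; only boundary-adjacent gates remain.
```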
Concurrent engineering
Design teams are no longer contained within one building. “A lot of designs are being partitioned so that one team will work on one hierarchy and another team works on another hierarchy, and they could be in the same building, or they could be spread around the world,” says Simon Rance, director of product management for data & IP management at Keysight EDA. “This can create challenges because teams now work at different rates. It becomes important to have a stable hierarchy, because it’s difficult to make changes later. Changing it usually happens only as a last resort, but instead we see teams gluing or fudging things to make it work. It can be ugly, but we’re seeing more of this challenge with chiplets.”
Reuse
Hierarchy needs to be useful in both a top-down and a bottom-up manner. “In the human body, specialized cells organize into systems and organs which make up the building blocks that form a person,” says Brian LaBorde, senior product marketing manager at Cadence. “Similarly, groups of transistors form circuits or logic gates that make up macros that are assembled into a system. Over the last few decades we have seen larger and larger ICs with many different specialized circuits all integrated on a single chip. The partitioning of these layouts is virtual and represented by hierarchy in a layout database.”
Data management
All designs create a lot of data, and that has to be managed. “With engineering lifecycle management and the need to meet functional safety standards, whether it’s ISO 26262 for automotive or the MIL-STD-882 standard for military and aerospace, you have many assets, from documents to verification and test criteria, verification test plans, and results,” says Keysight’s Rance. “All of that needs to be retained with the hierarchy so there’s full traceability. Tracking everything in a hierarchy is difficult anyway, but you’ve also got everything outside of the design, like verification and test plans. When something fails in test, or even worse in the field, you go back and do discovery to find what might have failed. If you haven’t got all of that data and metadata attached to the hierarchy, you’re never going to find it.”
Repetitive structures
Many designs contain repetitive structures, be they memory cells, small processor arrays, or interfaces. But there are dangers hidden in these arrays. “Let’s say you have 16 CPU cores arranged in a 4×4 grid,” says Ansys’ Swinnen. “In principle, these are all the same, but in fact they have different boundary conditions. The environment around the ones on the edge is different for each one. If you want to do optimization, you need to uniquify each one, because they all have unique parasitics at the borders. There’s always this trade-off. How do you preserve the reusability, and yet find the ones that are unique? It gets worse when you look at things like thermal.”
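A toy sketch of that trade-off, with invented instance names: the 16 cores are grouped by which chip edges they touch, so only one variant per distinct boundary environment needs to be uniquified rather than all 16:

```python
from collections import defaultdict

ROWS, COLS = 4, 4  # 16 nominally identical CPU cores in a 4x4 grid

def boundary_environment(row, col):
    """Describe which chip edges an instance touches; interior cores see only
    neighboring cores, so their boundary parasitics match each other."""
    edges = []
    if row == 0: edges.append("north")
    if row == ROWS - 1: edges.append("south")
    if col == 0: edges.append("west")
    if col == COLS - 1: edges.append("east")
    return tuple(edges) or ("interior",)

variants = defaultdict(list)
for r in range(ROWS):
    for c in range(COLS):
        variants[boundary_environment(r, c)].append(f"core_{r}_{c}")

for env, insts in sorted(variants.items()):
    print(f"cpu_core__{'_'.join(env)}: {len(insts)} instance(s)")
# Nine environments (four corners, four edge runs, one interior) instead of
# 16 unique masters -- or a single master if boundary effects are ignored.
```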
Multiple domains
While analog and digital are very different, there are other aspects of the development flow that also utilize separation of tools. “The whole idea of EDA is to take this complex problem, simplify it into a structural problem, cut it up, and make the problem really simple,” says Ron Press, director for technology enablement at Siemens Digital Industries Software. “That’s what scan does for DFT. It used to be, even if they had separate cores, they tried to do everything in one big flat image. Then you have to wait until later in the design and you have a much bigger problem. Now, with distributed design teams and cores that are re-used, people need as much plug-and-play as possible. They’ll finish their design for the DFT, they’ll make their patterns for that core, and then that just gets plugged in at the top level. As long as it’s got some type of isolation such as wrapper chains, they can work on that piece separately, finish the DFT design, and finish their patterns. That makes these teams independent, and it makes the whole process much easier.”
Problems with structural hierarchy
No system is perfect, and this form of hierarchy does create some problems. “There’s definitely an overhead related to the design of constraints at those boundaries,” says Synopsys’ Schultz. “You have to break up the constraints, define them correctly, and push them down to the block boundaries. Making sure that you’ve defined those boundaries correctly is a big issue. The other knock against it is that when you physically break something up into pieces and say these are my partitions, then when I go to physically implement it, I am not going to be optimizing across that boundary. You cannot optimize; that boundary now is fixed. If there’s an optimization that needs to take place across that hierarchy, you cannot do it. You’re limiting yourself.”
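A highly simplified sketch of what pushing constraints down to block boundaries can mean for a single cross-partition path; the block names, delay estimates, and the proportional split are all invented for illustration:

```python
def budget_cross_boundary_path(clock_period_ns, estimates_ns):
    """Split one clock period across the blocks a path crosses, proportional
    to each block's estimated share of the delay. Once the budgets are frozen,
    each block closes timing against its own number only."""
    total = sum(estimates_ns.values())
    return {blk: clock_period_ns * est / total for blk, est in estimates_ns.items()}

# Hypothetical path from a core, across the top-level interconnect, into a cache.
budgets = budget_cross_boundary_path(
    clock_period_ns=1.0,
    estimates_ns={"cpu_core": 0.45, "top_route": 0.15, "l2_cache": 0.30},
)
for blk, ns in budgets.items():
    print(f"{blk}: {ns:.3f} ns budget")
# If the core later needs 0.55 ns, the tools cannot borrow slack from the
# cache across the frozen boundary; the budgets have to be renegotiated.
```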
This can affect several tools and flows. “If they do a top-level plan with hierarchical DFT, they might plan to have so many pins go to a core,” says Siemens’ Press. “Then it turns out that core doesn’t need that many patterns, and this other core, for which they allocated a similar number of pins, needs way more patterns. If they have frozen their design early on, from the top level, then their pattern delivery is not going to be that efficient.”
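The sketch below puts rough numbers on that point; the cores, pattern counts, flop counts, and scan-chain budget are hypothetical, and the cycle estimate is deliberately crude:

```python
def core_test_cycles(patterns, flops, scan_chains):
    """Rough scan test length: shift depth (flops per chain) times pattern count."""
    shift_depth = -(-flops // scan_chains)  # ceiling division
    return patterns * shift_depth

cores = {                 # hypothetical cores: (patterns, flops)
    "dsp_core": (2_000, 200_000),
    "gpu_core": (12_000, 200_000),
}
total_chains = 32         # fixed by the frozen top-level pin plan

# Even split, decided early when both cores were assumed to be similar.
even = {name: core_test_cycles(p, f, total_chains // 2)
        for name, (p, f) in cores.items()}

# Split proportional to pattern count, possible only if the plan isn't frozen.
total_patterns = sum(p for p, _ in cores.values())
prop = {name: core_test_cycles(p, f, max(1, round(total_chains * p / total_patterns)))
        for name, (p, f) in cores.items()}

print("even pin split (cycles):", even, " worst:", max(even.values()))
print("pattern-weighted split :", prop, " worst:", max(prop.values()))
# With cores tested concurrently, the pattern-heavy core dominates the even
# split; reallocating chains roughly balances the two.
```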
Establishing the wrong hierarchy can limit you in multiple ways. “One of the big problems, especially with large SoCs, is that networking and communication can create congestion,” adds Schultz. “We see congestion across the chip especially when the design has been poorly partitioned. I see blocks talking through other blocks, and you have to create feedthroughs. That can cause a lot of congestion on the design. Plus, it’s much harder to meet your timing requirements when you do something like that, because you cannot easily optimize that full path. You have to optimize each block individually and hope that the paths all work out.”
Subtle changes can take place at the boundaries. “When you abut two blocks, you have a logical connection between them, but there’s nothing physically there,” says Swinnen. “The pins just touch each other, but there’s no wire. Yet in your netlist, you’ve got a wire that’s supposed to be there. It is supposed to have a resistance, a capacitance. You have a logical wire but no physical wire. Then you have feedthroughs, where a wire comes in one side of a block, runs through the block, and comes out the other end. There are pins, there are physical wires, but no logical wires. Logically, it doesn’t exist. You don’t draw that on your schematics.”
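The toy netlist check below makes those two mismatches concrete, using made-up block, pin, and net names: a logical net with no physical wire because the blocks abut, and a routed feedthrough that appears in no logical net:

```python
# Logical netlist: net name -> the pins it connects (block.pin).
logical_nets = {
    "data_bus": ["blkA.out", "blkB.in"],   # blocks abut: pins touch, no wire drawn
    "clk":      ["top.clk", "blkA.clk"],
}

# Physical routes: wire name -> (length in um, pins it touches).
physical_wires = {
    "clk_route": (120.0, ["top.clk", "blkA.clk"]),
    "ft_1":      (450.0, ["blkC.ft_in", "blkC.ft_out"]),  # feedthrough across blkC
}

routed_pin_sets = {tuple(sorted(pins)) for _, pins in physical_wires.values()}
for net, pins in logical_nets.items():
    if tuple(sorted(pins)) not in routed_pin_sets:
        print(f"logical net '{net}' has no physical wire -> zero-length abutment, "
              f"no RC to extract")

logical_pin_sets = {tuple(sorted(pins)) for pins in logical_nets.values()}
for wire, (length, pins) in physical_wires.items():
    if tuple(sorted(pins)) not in logical_pin_sets:
        print(f"physical wire '{wire}' ({length} um) appears in no logical net -> "
              f"feedthrough, invisible on the schematic")
```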
Some tools can deal with bad hierarchies, but fixing them creates other problems. “You have a logical hierarchy when you’re developing the RTL and you synthesize it,” says Schultz. “When you do your physical design, those logical hierarchies have to map one-to-one to a physical partition. What ends up happening is that, in my logical hierarchy, I may have a child under one parent that really is talking a lot to something that’s under another parent, and those two parents become their own physical partitions. Those two parents can be physically placed on opposite sides of the chip. The logical hierarchy is not conducive to the physical implementation. The way this is handled is through RTL restructuring. Now we’re starting to move things and repair the logic, but that’s not something a pure RTL or logical designer is going to know about. That information only comes about when you take into account the physical hierarchy. There needs to be communication between the two in order to really optimize that physical hierarchy.”
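A minimal sketch of the kind of connectivity check that motivates such restructuring, with made-up module names, parent assignments, and net counts: leaf blocks that talk heavily across a logical parent boundary are flagged as candidates to move:

```python
parent_of = {            # logical hierarchy: leaf module -> parent partition
    "decoder": "video_ss",
    "dma":     "video_ss",
    "l2_ctrl": "cpu_ss",
    "cpu0":    "cpu_ss",
}

# Hypothetical connectivity: (leaf, leaf) -> number of nets between them.
traffic = {
    ("cpu0", "l2_ctrl"): 900,
    ("decoder", "dma"):  400,
    ("dma", "l2_ctrl"):  1200,   # heavy traffic that crosses parents
}

THRESHOLD = 500
for (a, b), nets in sorted(traffic.items(), key=lambda kv: -kv[1]):
    if parent_of[a] != parent_of[b] and nets > THRESHOLD:
        print(f"{a} ({parent_of[a]}) <-> {b} ({parent_of[b]}): {nets} nets cross "
              f"the partition boundary -- candidate for restructuring")
```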
This happens at other places in the flow. “A NoC sits at the top cockpit level, where you have the integration aspects of the full hierarchy,” says Frank Schirrmeister, vice president of solutions and business development at Arteris. “When changes to the hierarchy are needed, perhaps because two blocks differ in non-functional properties such as their power domains, re-factoring the RTL can be straightforward. Having a higher level of integration for the hierarchy helps you to re-factor and restructure the RTL accordingly, and you really don’t want to do this by having to change all your modules manually.”
Keeping track of this can be a nightmare. “Think of revision control for a document or a file that is part of a hierarchy,” says Rance. “You may then have multiple versions or revisions of that hierarchy, depending on what you’re doing. You may have a verification team that does PPA analysis and finds that if they tweak this a little bit and create another version of this hierarchy, it performs better. You need to keep track of that.”
As much as hierarchies help us divide and conquer, some things defy any attempt at such simplification. “Things like thermal analysis need to be done chip-wide,” says Arteris’ Schirrmeister. “But you need to be able to correlate this with what’s going on in the chip functionally, with the data that’s running through it. You want to be able to take a chip photo, look at the hotspots, and see where each piece of functionality is located and which parts, potentially from a lifecycle perspective, are getting impacted the most. Correlating this back to the data is far from trivial.”
Other hierarchies
Other hierarchies do exist, such as a functional hierarchy. The closest we have to this today is a requirements tracking system that starts with high-level definitions of what a system should do. These are broken down into simpler and simpler tasks until, eventually, the logic or other circuitry that actually provides each one is identified, along with the testbenches that verify the appropriate requirement is met.
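As a sketch of what such a functional hierarchy might look like as data, with invented requirement text and block names, each requirement decomposes until a leaf points at an implementing block and a verifying testbench, and untraceable leaves are easy to flag:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Requirement:
    text: str
    children: List["Requirement"] = field(default_factory=list)
    implemented_by: Optional[str] = None   # block in the structural hierarchy
    verified_by: Optional[str] = None      # testbench proving the requirement

spec = Requirement("Play 4K video", children=[
    Requirement("Decode HEVC stream",
                implemented_by="video_ss/decoder", verified_by="tb_hevc_conformance"),
    Requirement("Sustain 60 fps output",
                implemented_by="display_ss", verified_by=None),  # traceability gap
])

def untraced(req, path=""):
    """Yield leaf requirements missing an implementing block or a testbench."""
    here = f"{path}/{req.text}"
    if not req.children and not (req.implemented_by and req.verified_by):
        yield here
    for child in req.children:
        yield from untraced(child, here)

for gap in untraced(spec):
    print("untraceable requirement:", gap)
```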
Some hierarchies come and go through the design process. “You may have a hierarchy for the clock tree,” says Schirrmeister. “There’s the hierarchy for the power distribution system. And then there’s a hierarchy seen by system analysis tools for how they connect everything together for a full-chip perspective. What we thought about with ESL (electronic system level) was this notion of an executable functional spec describing the whole thing. This is something that has still not emerged. Somehow, we seem to be getting away with it, which is surprising.”
Physical layout provides another hierarchy. At the highest level is floor-planning, which utilizes the structural hierarchy as a starting point. These blocks are placed, and the interconnect is routed between them. Each block is laid out using physical synthesis, which again deals with the local interconnect. 3D-IC will add a new dimension to this, where routing may now exist in the Z direction.
“As we start to see chiplet-based 2.5D and 3D systems replace system-on-a-chip (SoC) designs, hierarchy won’t be as much of a strategic construct as a representation of physical reality,” says Cadence’s LaBorde. “The macros in a schematic might represent chips in a system, each in their own unique process. The connections between them will be solder bumps rather than symbolic pins on a layout.”
Conclusion
While not perfect, the informal structural decomposition in use today has proven to be a workable hierarchy. Some aspects of the flow suffer because of it, but most are able to use it effectively, and tools can compensate for its inadequacies. A certain amount of optimization potential is lost along the way, but that is probably one of the small sacrifices made in the name of productivity.