Structural Vs. Functional

System decomposition is necessary to handle complexity, but thinking in the purely functional space is difficult.


When working on an article about PLM (product lifecycle management) and semiconductors, I got to revisit a favorite topic from my days in EDA development – verification versus validation. I built extensive presentations around it and tried to persuade people within the EDA industry, as well as customers, of the advantages of top-down functional modeling and analysis. The V diagram that everyone uses is flawed and leads to unnecessary amounts of simulation and emulation.

I also got heavily involved in many of the research efforts associated with system-level design, the creation of new levels of abstraction, and how to map between those and the abstractions currently in use by the industry.

One of the most difficult areas in both cases is the distinction between functional partitioning and structural partitioning. The EDA industry was born out of the PCB industry, where everything had already been structurally partitioned. You buy components that serve as building blocks and assemble them to create the functionality you want. Designers of the day knew the Texas Instruments TTL 7400 series Databook by heart, and I can remember the day as a college student when I got my first copy. It was the bible.

That firmly established the notion of structural hierarchy in the industry, and it remains in place today. It also happens to be the preferred way that PLM systems work, which should not be a surprise given the nature of mechanical systems, the domain from which that technology emerged.

The big problem that high-level design and verification tools had was that they were functional in nature – not structural. Behaviors are not siloed in the same way. When you describe the behavior of, for example, a processing system, you do not partition the processor from the bus, from the memory, from the I/O. Those are structural pieces that may or may not exist in the implementation. You define the capabilities that a system should have. The first problem was that few people knew how to think functionally. Even most specifications, which should be the ultimate functional documents, often start by providing block diagrams that attempt to partition the problem structurally.
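To make the contrast concrete, here is a minimal, hypothetical sketch in C++ (all names are invented for illustration). The functional view states what the system must do as a single behavior; the structural view commits to a specific set of blocks and their wiring, which may or may not survive into the implementation.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Functional view: one behavior, with no commitment to processor, bus,
// memory, or I/O. Given a request, produce a response.
struct Request  { uint32_t addr; uint32_t data; bool write; };
struct Response { uint32_t data; };

Response process(std::map<uint32_t, uint32_t>& state, const Request& req) {
    if (req.write) { state[req.addr] = req.data; return {0}; }
    auto it = state.find(req.addr);
    return { it != state.end() ? it->second : 0 };
}

// Structural view: the same capability pre-partitioned into blocks
// (left as stubs here) that are then wired together.
struct Memory    { std::vector<uint32_t> cells = std::vector<uint32_t>(1024, 0); };
struct Bus       { Memory* mem = nullptr; };
struct Processor { Bus* bus = nullptr; };

struct System {
    Memory    mem;
    Bus       bus{&mem};
    Processor cpu{&bus};
};
```

The functional version says nothing about how many blocks exist or how they are connected; the structural version has already made those decisions before any behavior is attached to them.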

The second problem with starting from a behavior is that, at some point in the flow, you have to map function onto structure. This is, in essence, one aspect of high-level synthesis (HLS), and it is tough. With HLS you are defining custom silicon for a function, but it is already assumed that the function has been partitioned, and what you synthesize is a leaf-level function. Ever wondered why HLS has problems scaling up to larger parts of the system?
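For context, this is roughly what current HLS tools take as input: a small, untimed C/C++ function whose scope was already fixed by an earlier structural partitioning. The sketch below is a generic, representative leaf-level function, not tied to any particular tool or its pragmas.

```cpp
#include <cstdint>

// A representative leaf-level HLS input: an untimed C++ description of one
// already-partitioned piece of the system (here, a 4-tap FIR filter).
// The tool chooses the structure - how many multipliers, how deep a
// pipeline - but only within the boundaries of this one function.
constexpr int TAPS = 4;

int32_t fir(int32_t sample, const int32_t coeff[TAPS]) {
    static int32_t shift_reg[TAPS] = {0};   // delay line, preserved across calls

    // Shift in the new sample.
    for (int i = TAPS - 1; i > 0; --i) shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    // Multiply-accumulate across the taps.
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) acc += shift_reg[i] * coeff[i];
    return acc;
}
```

Deciding that this function, rather than some other cut of the behavior, was the right unit to synthesize is exactly the partitioning step that had to be done by hand beforehand.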

If you were to take two functions and try to come up with an optimal solution, you would have to ascertain which parts of the functions are common and could be shared, and you would have to work out all of the timing dependencies between them. This process does not scale.
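A toy, hypothetical example of what that sharing analysis has to find:

```cpp
// Two behaviors that happen to share work: both need the product a * b.
int f1(int a, int b, int c) { return a * b + c; }
int f2(int a, int b, int d) { return a * b - d; }

// An optimal implementation might use a single multiplier for both, but only
// if their schedules never need it in the same cycle. Discovering such
// opportunities, and proving the timing works, across every pair of functions
// in a system is the combinatorial explosion that prevents scaling.
```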

In the ’90s, when much of this research was being conducted, notions of platform-based design were also emerging. This was essentially a predesigned platform onto which a small amount of custom logic could be added to gain the required performance. Again, the mapping problem was difficult. Some parts were mapped to fixed functions, other parts became software that would run on one of the available processors, and yet other parts were synthesized into custom logic.

Similar problems are faced with modern FPGA devices, which contain a large array of hardened blocks – CPUs, AI processors, various communications blocks, etc. Mapping a design onto those is either obvious, because the input has already been structurally partitioned to align with them, or the user is left to provide the mapping manually.

To get better verification than we have today, you have to add a functional view. Verification is fundamentally the comparison of two models. One of those is the design, and the other is the testbench developed by the verification team. You do not want any common point of failure between them. Otherwise, the same problem can be created in both models and bugs escape. Black-box verification means you cannot see what the hardware is doing. Any verification expert will tell you this is an inefficient means of verification and that, for some aspects, white-box verification can be better.
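As a concrete illustration of comparing two models, here is a minimal, hypothetical sketch (all names invented): an independently written functional reference model and a stand-in for the design are driven with the same random stimulus, and any divergence is flagged. Real testbenches are far richer, but the principle is the same.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>

// Functional reference model: what the block is supposed to do (here, a
// trivial 32-bit adder with wrap-around).
uint32_t ref_model(uint32_t a, uint32_t b) { return a + b; }

// Stand-in for the design under test, e.g. values read back from an RTL
// simulation. Hypothetical: it contains a deliberate corner-case bug and
// saturates instead of wrapping.
uint32_t dut_model(uint32_t a, uint32_t b) {
    uint64_t sum = static_cast<uint64_t>(a) + b;
    return sum > UINT32_MAX ? UINT32_MAX : static_cast<uint32_t>(sum);
}

int main() {
    std::mt19937 rng(1);                             // fixed seed for repeatability
    std::uniform_int_distribution<uint32_t> dist;    // full 32-bit range
    int mismatches = 0;

    for (int i = 0; i < 100000; ++i) {
        uint32_t a = dist(rng), b = dist(rng);
        uint32_t expected = ref_model(a, b);
        uint32_t actual   = dut_model(a, b);
        if (expected != actual && mismatches++ < 5)
            std::printf("mismatch: a=%u b=%u expected=%u actual=%u\n",
                        a, b, expected, actual);
    }
    std::printf("%d mismatches seen\n", mismatches);
    return mismatches ? 1 : 0;
}
```

The value comes from the two models being written independently. If both were derived from the same description, the same mistake could appear in both and never be flagged, which is the common point of failure described above.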

It is too late for design to change from being structural to functional, but verification has been heading in that direction, and that should be a goal. A functional specification – potentially one that comes from a PLM system – should be what drives verification, and it becomes the focus for safety, reliability, and quality. It is the thing that can be continuously tracked, and it is the only thing that can provide definitive metrics for progress, risk, etc. The design flow cannot provide those.

The semiconductor industry is unique in several ways. The biggest divergence from the other parts of a system managed by PLM is the cost of failure. Going to manufacturing with a product that doesn’t work costs a lot of money and, more importantly, a lot of time. At the same time, we all know that getting to 100% confidence means you will never go to manufacturing.

The other difference is that what is expected of the electronics changes over time. It is not enough to verify that the initially specified functionality works. The electronics are expected to work for any function conceived in the future. That means verification can never be purely functional. Part of it can be black-box, top-down functional verification, but if we expect hardware to be resilient to change and to new demands added after manufacturing, then an aspect of white-box, bottom-up verification has to remain.

I am sure every engineering team thinks what they do is unique, special, and different, and that they should therefore be able to do things their own way. When I look at PLM, I see something that is inadequate to deal with the complexities of the semiconductor industry and what is asked of it. I am not sure the simplified view the C-suite may desire really exists, but that will not stop them from slowly driving toward it.

If they do, in my opinion, they may achieve more first-pass silicon success. But they will also have deferred issues that will only be discovered when attempts are made to change anything in the future.


