It’s All About The Data

When it comes to statistical calibration, it’s all about the data and what you do with it.

Ah, the wonders of math and physics — they do make the world go ‘round, do they not? And even in the world of chip design, of course, it is the fundamental understanding of the physics of electricity, materials, metals, etc., that makes it all possible.

I just wanted to stop for a moment to marvel at how those fundamental understandings of the math and physics of things have been abstracted to such a degree that a human being can sit down at a workstation and, with a set of software tools, design a chip that will then be manufactured at unfathomably small dimensions. To that, I say, bravo.

It is in this vein that I have been researching uncertainty and risk in the context of chip design, as well as bigger systems such as automobiles. The technology to analyze and assess that risk is essentially the application of certain math and physics equations. At the same time, this is all colliding with massive amounts of data — the likes of which we’ve not had to contend with quite on this scale.

So how can data be combined in a statistically meaningful way, in order to reduce risk and uncertainty?

This is the world of Peter Qian, Chief Scientist at SmartUQ and professor in the Department of Statistics and the Department of Industrial and Systems Engineering at the University of Wisconsin-Madison, who explained there are basically two issues in the space I specifically asked about: thermal analysis in 3D packaging. “One is taking many data points from simulation, fewer points from physical testing. But when you take points from both sources, you should be very careful because at the end of the day, you want to align simulation with testing, and if we are only taking those data points randomly, nothing is going to happen. How can we collect good data sets and crunch numbers — how can we use simulation results that represent the underlying physics? Moreover, how can we use the underlying physical parameters to tune the simulation? At the end of the day, you want to tune the simulation more by using information from the physical process. This is the whole idea of statistical calibration.”

(As a side note, for more on statistical calibration, I found an interesting paper here.)
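To make that idea a little more concrete, here is a minimal sketch in Python of tuning a simulation parameter so the simulation lines up with a handful of physical measurements. The stand-in `simulate` function, the parameter theta, and all of the numbers are invented for illustration; they are not from Qian or SmartUQ.

```python
# Minimal calibration sketch: many cheap simulation runs, few physical
# test points, and a tuning parameter adjusted so simulation aligns
# with the physical measurements. Everything here is illustrative.
import numpy as np
from scipy.optimize import least_squares

def simulate(power_w, theta):
    # Stand-in for an expensive thermal simulation of a 3D package:
    # predicted junction temperature for a given power draw, with
    # theta acting as an effective thermal resistance (C/W).
    return 25.0 + theta * power_w

# A few carefully chosen physical test points (power in W, measured temp in C).
power_test = np.array([1.0, 2.5, 4.0])
temp_test = np.array([33.1, 45.6, 58.9])

def residuals(theta):
    # Mismatch between the simulation and the physical measurements.
    return simulate(power_test, theta[0]) - temp_test

# Tune the simulation using information from the physical process.
fit = least_squares(residuals, x0=[5.0])
theta_hat = fit.x[0]
print(f"calibrated theta: {theta_hat:.2f} C/W")

# The calibrated simulator can now be run cheaply over far more
# operating points than physical testing could ever cover.
print(simulate(np.linspace(0.5, 5.0, 10), theta_hat))
```

A full statistical calibration, as in the paper linked above, also models the remaining discrepancy between simulator and reality, but the basic move is the same: let the sparse physical data tune the plentiful simulation data.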

Part of this whole discussion of uncertainty and risk analysis and prediction involves the difference in thinking from either a deterministic or a probabilistic perspective.

According to an aerospace information report from SAE Aerospace that SmartUQ pointed me to, “Many people, if not most, never seriously reflect on matters of probability and statistics. Nonetheless, there is a general understanding on some level that there are risks involved with common activities such as driving and flying, but a comfortable belief that these risks are ignorably low. There are two types of uncertainty. If 1,000 tickets are sold in a raffle, then the odds of a particular ticket being drawn are 1:1000, but it is a certainty that one ticket, and only one, will be drawn; one ticket will definitely win. Simple statistical methods compute the probabilities. By contrast, while there is a chance that airplanes will crash, it is not known for sure that one will, and one crash does not preclude a second or third. A quoted risk describes a potential situation. This type of uncertainty is often manifested in less dramatic contexts; for example, weather forecasting. Uncertainty in a forecast is due partly to uncertainties in the multitude of variables which affect the weather and partly to the complexity of the predictive models. Each variable has its own probability of playing an influential part in the final outcome, and that probability can be affected by other variables.

Trying to take a deterministic approach to weather prediction — i.e., assigning single values to the variables — is not realistic. Deterministic calculations can prejudice important decisions. The use of probabilistic methods has been gaining support over recent years to address situations which are influenced by multiple parameters, some of which may be interdependent and all of which are subject to some level of variability. In all cases, the probabilistic approaches have been developed as extensions of existing deterministic analyses, and caveats are inherited. However, if the deterministic models are inadequate, then neither deterministic nor derivative probabilistic methods can be expected to yield reliable answers.

Early applications of probabilistic methods typically assessed the risks associated with deterministically designed hardware. In some cases, this was driven by problems encountered in manufacture or in service where the level of variability in component or system response was greater than could be accommodated by a deterministic model. This required a closer look at the sources of variability. In other cases, it was impossible to validate a deterministic model by testing — e.g., nuclear power station pressure vessels — and hence the need for a failure risk analysis became apparent.

As familiarity was gained in using probabilistic methods for risk assessment it was natural to suggest that probabilistic approaches could be used in the design stage. In particular, it was recognized that the deterministic design philosophy could be excessively conservative due to the application of multiple safety factors, e.g., worst material properties, worst tolerances and most adverse loading conditions. Probabilistic methods allow a realistic assessment to be performed of the probability of occurrence of combinations of these worst cases and, when combined with a sensitivity analysis, indicate how a design should be adjusted to achieve robustness, i.e., minimum sensitivity to variations in design input parameters.

There are a number of issues that are often raised concerning probabilistic analysis: can risk be computed accurately? Can the variables be correctly and accurately characterized? Is the methodology sound? These are good questions, and they are being addressed by a growing number of practitioners interested in using probabilistic methods in a wide variety of applications from the oil and solid fuels industries to design of buildings and gas turbine engines.”
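To see the deterministic-versus-probabilistic point in code, here is a rough Monte Carlo sketch. The distributions, the limits, and the notion of a “margin” are all made up for illustration and are not from the SAE report; the idea is simply to compare stacking every worst case at once against asking how often those worst cases actually coincide.

```python
# Hedged sketch: deterministic worst-case stacking vs. a probabilistic
# (Monte Carlo) look at how often the worst cases really combine.
# All distributions and limits are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative variability in three design inputs.
strength = rng.normal(100.0, 5.0, n)    # material strength
tolerance = rng.normal(0.0, 0.5, n)     # dimensional tolerance shift
load = rng.normal(80.0, 6.0, n)         # applied load

# Deterministic check: worst material, worst tolerance, and worst load
# all assumed to occur together (3-sigma on each).
worst_margin = (100.0 - 3 * 5.0) - (0.0 + 3 * 0.5) - (80.0 + 3 * 6.0)
print(f"deterministic worst-case margin: {worst_margin:.1f}")  # negative, so it 'fails'

# Probabilistic check: how often does the combination actually fail?
margin = strength - tolerance - load
prob_fail = np.mean(margin < 0)
print(f"estimated failure probability: {prob_fail:.4f}")

# A crude sensitivity indicator: how strongly each input moves the margin.
for name, x in [("strength", strength), ("tolerance", tolerance), ("load", load)]:
    print(name, round(np.corrcoef(x, margin)[0, 1], 2))
```

The deterministic stack-up says the design fails, while the Monte Carlo run suggests the combination of worst cases is rare, and the sensitivity numbers point at which input variability matters most. That, in miniature, is the trade the report describes.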

Again, what does all of this have to do with chip design? Well, everything, actually. Particularly when you consider the cost of manufacturing SoCs today, you want to do everything possible to reduce the risk and uncertainty of your design.

For more on this, watch for my Special Report on finite element analysis next week. For now, consider also that when we talk about data, we can’t help but think about the best way to process all of it, and one technology being used today is a convolutional neural network-type architecture.


