5 Reasons Why In-Chip Monitoring Is Here To Stay

From identifying hot spots to individualizing optimization schemes, it’s important to know what’s going on inside a chip.


When the first car rolled off his moving assembly line in 1913, Henry Ford may already have envisioned just how prolific the automobile would become. But would he have foreseen the extent to which monitors and sensors would become critical to the modern internal combustion engine?

The demand for energy efficiency, performance and reliability in high-volume manufactured vehicles has driven monitoring and sensor systems to grow in number and complexity, both to manage dynamic conditions and to understand how each engine has been made. By the same principle, in-chip monitors are here to stay.

Understanding dynamic conditions (supply voltage and junction temperature), as well as how the chip has been made (process), has become a critical requirement for advanced-node semiconductor design. So we should not only get used to in-chip monitors and sensors but also understand the problems they solve and the key attributes of a good in-chip monitor.

Here are five reasons why in-chip monitoring is here to stay for small-geometry designs on technologies such as 40nm, 28nm, 16nm, 12nm and 7nm.

1. Gate density
The benefits of increased gate density drive the modern world by allowing greater complexity in our electronics for a given area. However, there are drawbacks: higher gate density leads to greater power density, and hence localized heating within the chip, or hot spots. It also leads to larger drops in the supply voltage feeding the circuits. High-accuracy temperature sensors and supply voltage monitors distributed throughout the chip allow the system to manage and adapt to such conditions.
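As an illustration, the manage-and-adapt loop such sensors enable can be sketched in a few lines of Python. Everything here is invented for illustration: the sensor readings, the thresholds and the clock-divider scheme are a minimal sketch, not any particular vendor's implementation.

```python
# Hypothetical thresholds, chosen purely for illustration.
TEMP_LIMIT_C = 105.0      # junction temperature ceiling
VDD_MIN_V = 0.72          # minimum acceptable supply voltage

def next_clock_divider(readings, current_divider):
    """Return the clock divider for the next control interval.

    `readings` is a list of (temperature_c, vdd_v) tuples, one per
    monitor placed around the die. A larger divider means a slower
    clock and therefore lower power density.
    """
    hottest = max(t for t, _ in readings)
    lowest_vdd = min(v for _, v in readings)

    if hottest > TEMP_LIMIT_C or lowest_vdd < VDD_MIN_V:
        # Hot spot or supply droop detected: throttle to protect the chip.
        return current_divider + 1
    if current_divider > 1 and hottest < TEMP_LIMIT_C - 10.0:
        # Comfortable thermal margin: recover performance.
        return current_divider - 1
    return current_divider
```

The point of the sketch is that the decision quality depends entirely on the accuracy and placement of the monitors feeding it; a single die-level sensor would miss the hot spot that the per-region readings catch.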

2. Product differentiation
Capitalizing on high-accuracy monitors and sensors distributed throughout the design will give products a leading edge in the marketplace. Sure, semiconductor design teams will be judged on product features and other ‘bells and whistles’, but reliability and performance relative to competitors count just as much.

3. Accepting greater chip process variability
The variability in how each semiconductor device is manufactured widens as geometries shrink. We’ve discussed the benefits of understanding how the dynamic conditions of voltage and temperature change on chip, but what about the fixed conditions of how each device has been manufactured? Process monitoring, which determines the speed of the digital circuits, how they react to dynamic changes and how they will age, allows for optimization and compensation schemes that make the most of how each particular chip has been made.
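To give a flavor of such a compensation scheme, here is a simplified and entirely hypothetical sketch: a ring-oscillator process monitor reading classifies the die as slow, typical or fast, and a supply-voltage trim is chosen accordingly. The nominal frequency, tolerance and trim values are illustrative assumptions, not figures from any real process.

```python
# Illustrative assumption: expected ring-oscillator frequency for
# typical-process silicon.
NOMINAL_FREQ_MHZ = 500.0

def classify_corner(measured_freq_mhz, tolerance=0.05):
    """Label the die slow/typical/fast relative to the nominal frequency."""
    ratio = measured_freq_mhz / NOMINAL_FREQ_MHZ
    if ratio < 1.0 - tolerance:
        return "slow"
    if ratio > 1.0 + tolerance:
        return "fast"
    return "typical"

def vdd_trim_mv(corner):
    """Per-corner supply trim (in millivolts): raise VDD on slow silicon
    to recover speed, lower it on fast silicon to save power and
    reduce stress. Values are invented for illustration."""
    return {"slow": +30, "typical": 0, "fast": -30}[corner]
```

This is the essence of making the most of each individual die: rather than margining every chip for the worst-case corner, the measured process point drives a per-chip operating setting.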

4. Increased reliability
The innate intolerance to electronic faults in the automotive and telecoms sectors is an attitude now spreading to the enterprise and consumer sectors. Accurate monitoring allows for fault detection and lifetime prediction, primarily by sensing the main contributors to circuit stress, such as prolonged high supply voltage and localized heating, which accelerates electromigration.
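For a flavor of how lifetime prediction from sensed conditions works, the textbook starting point for electromigration is Black's equation, MTTF = A * J^(-n) * exp(Ea / (k*T)). The sketch below computes an acceleration factor between a use condition and a stress condition; the default activation energy and current-density exponent are common textbook values, not measurements, and real reliability models are considerably more involved.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_acceleration_factor(t_use_k, t_stress_k, ea_ev=0.9, j_ratio=1.0, n=2.0):
    """Electromigration acceleration factor from Black's equation.

    Returns how much faster wear-out proceeds at the stress condition
    relative to the use condition. `j_ratio` is J_stress / J_use;
    `ea_ev` and `n` default to typical textbook values.
    """
    # Temperature term: exp(Ea/k * (1/T_use - 1/T_stress))
    thermal = math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))
    # Current-density term: (J_stress / J_use)^n
    current = j_ratio ** n
    return thermal * current
```

The practical point is the exponential temperature dependence: a sustained hot spot just 20K above the assumed junction temperature can shorten the local interconnect lifetime several-fold, which is exactly why accurately sensing where and how long that heating occurs matters for lifetime prediction.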

5. Generational design improvement
Knowing how your last 10 million semiconductor devices served their market applications through their lifetimes is key information for your next generation of product design. How do your end customers actually use your devices? Understanding the environmental conditions under which the devices are placed allows designers to set their design margins appropriately next time round.

In summary, our desire for more in-chip information, not just raw data, to differentiate products will drive monitoring and sensing systems to evolve beyond what we can predict today. It is safe to say that in-chip monitoring, particularly for advanced node technologies, is here to stay.
