Current methodologies are inadequate to address increasingly complex thermal issues.
Thermal interface materials (TIMs) are becoming more important in all application areas and between different component parts. Semiconductors of every kind, from LEDs to high-power electronics, are becoming smaller while producing more power. In many ways, packaging has reached its physical design limits, allowing entire components to have a total thermal resistance of less than 0.1 K/W. However, once components are mounted on PCBs, heatsinks, or other parts of the application, this thermal resistance may increase ten-fold!
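To put that ten-fold figure in perspective, here is a minimal sketch of how interface resistances stack up in series along the heat path. All values below are illustrative assumptions, not measured data.

```python
# Series thermal-resistance stack from junction to ambient.
# All values are illustrative assumptions, not measured data.
stack_K_per_W = {
    "junction-to-case (package)": 0.10,  # a well-designed modern package
    "TIM, case-to-heatsink":      0.50,  # unfilled interface gaps dominate
    "heatsink-to-ambient":        0.40,
}

r_total = sum(stack_K_per_W.values())
for name, r in stack_K_per_W.items():
    print(f"{name:28s} {r:4.2f} K/W")
print(f"{'total':28s} {r_total:4.2f} K/W "
      f"({r_total / stack_K_per_W['junction-to-case (package)']:.0f}x the package alone)")
```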
Thermal interface materials are becoming all the more important – something like the last frontier in conquering thermal issues in applications. Well, not entirely, but it sounds epic.
Several factors concerning the testing of TIMs should be considered when reading datasheets and applying the materials to the desired application. More questions could certainly be raised, but the ones below cover the most basic problems within the industry – the problems that turn TIMs into something like the holy grail of application engineering. I will try to step through them one by one to provide a basic overview.
How was the material tested?
This seems like the most obvious question to ask, but in the case of TIMs it is not necessarily the most important one. Although most TIM providers have their own internal standards, they generally lean on ASTM D-5470 (see Figure 1). The principle of the device is quite simple:
Figure 1: ASTM D-5470 type device containing two metal contacts, with multiple thermocouples on each side to measure the heat flux. Source: 3M.
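In essence, the thermocouple readings in each metal bar are extrapolated to the two contact surfaces, the heat flux follows from the bar's known conductivity, and the ratio of temperature difference to heat flow gives the thermal impedance. A minimal sketch of that data reduction, with assumed readings and geometry:

```python
import numpy as np

# Sketch of ASTM D-5470-style data reduction; all readings are assumed values.
k_bar = 390.0          # W/(m*K), conductivity of the copper meter bars
area = 1e-4            # m^2, 1 cm^2 contact area
z = np.array([0.005, 0.015, 0.025])    # m, thermocouple positions from surface

t_hot = np.array([80.2, 82.1, 84.0])   # deg C, hot-side thermocouples
t_cold = np.array([49.8, 47.9, 46.0])  # deg C, cold-side thermocouples

# Linear fit of temperature vs. position; extrapolate to the surface (z = 0).
hot_fit = np.polyfit(z, t_hot, 1)
cold_fit = np.polyfit(z, t_cold, 1)
t_surf_hot = np.polyval(hot_fit, 0.0)
t_surf_cold = np.polyval(cold_fit, 0.0)

# Heat flow through the bar follows from Fourier's law: Q = k * A * dT/dz.
q = k_bar * abs(hot_fit[0]) * area     # W

r_th = (t_surf_hot - t_surf_cold) / q  # K/W
print(f"R_th = {r_th:.3f} K/W (thermal impedance, contacts included)")
```

Note that the result is a thermal impedance: it includes whatever contact resistance the setup itself introduces, which is exactly the problem discussed below.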
However, the standard leaves room for interpretation in how the pressure is applied, what the sample size should be, and how the thermocouples are placed. Of course, there are best practices, but this is where companies can define their own methodology.
The interesting part is that the method of testing is the least important question compared to the problems the method itself introduces. Confused? Let me clarify.
The crux
The purpose of thermal interface materials is to fill the gaps between uneven surfaces. If two rough surfaces are placed on top of each other, they produce microscopic air gaps that can increase the thermal resistance significantly. So why not make the surfaces smooth? Manufacturability and cost are the driving factors here. The solution is the TIM: it is cheaper and can be applied selectively. This all seems fine and dandy. However, when testing the material itself, the air gaps that it can’t fill are also being measured (see Figure 2).
Figure 2: Added contact thermal resistance in test setup.
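One standard way to separate the material's bulk resistance from those parasitic contacts is to measure the same TIM at several bond-line thicknesses: the slope of impedance versus thickness gives the bulk conductivity, while the intercept at zero thickness is the contact resistance the setup adds. A minimal sketch with assumed data:

```python
import numpy as np

# Assumed measurements: thermal impedance of one TIM at several thicknesses.
area = 1e-4                                      # m^2 (1 cm^2 sample)
thickness = np.array([0.2e-3, 0.4e-3, 0.6e-3])   # m, bond-line thickness
r_measured = np.array([0.90, 1.55, 2.20])        # K/W at each thickness

# R(t) = t / (k * A) + R_contact, so a linear fit separates the two terms.
slope, intercept = np.polyfit(thickness, r_measured, 1)

k_bulk = 1.0 / (slope * area)   # W/(m*K), since slope = 1 / (k * A)
print(f"bulk conductivity ~ {k_bulk:.2f} W/(m*K)")
print(f"contact resistance (both interfaces) ~ {intercept:.2f} K/W")
```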
The ideal solution would be to wrap it all up, send it to space, and test it in a vacuum. However, considering the probability of that setup ever reaching the design sheet of a car or cell phone, this solution is as far from reality as galaxy EGS-zs8-1 is from Earth.
In addition, the test system in question presumes a 1D heat-flow path. So unless you plan on using surface areas of less than 1 cm², this measurement may not come close to the application (which brings us to the second question). Not only is the size of the sample important because of the geometry of the test versus the application, but so is the applied pressure. The larger the sample, the more accurately the pressure has to be applied – perhaps even over multiple points.
This requirement presents multiple practical issues, and all of these unanswered questions leave us with discrepancies between measured data and vendor data, as seen in Figure 3.
Figure 3: Datasets comparing the ASTM D-5470 method (Statim) with the Mentor Graphics DynTIM and with vendor data for the same TIMs.
The solution
Naturally, new testing solutions are always on the horizon, and while I was writing this article, an ambitious engineer probably came up with an elegant solution to one of these problems. Nonetheless, I would like to present a novel approach to this dilemma.
By combining the test methodology with in-situ characterization, we may not be able to compensate for every single issue above, but we can correlate test results with real-world applications. When thinking of designs, this is sometimes worth more than addressing each problem individually. So how would it work?
1. Gather data from my TIM tester (the Mentor Graphics DynTIM, which performs similarly to an ASTM D-5470 device but adds proprietary structure function technology).
Figure 4: DynTIM, using a diode as heating source, measuring the thermal resistance of a material by using structure function analysis instead of thermocouple measurements.
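The measurement behind this is a thermal transient: the diode's forward voltage acts as the temperature sensor via a calibrated temperature coefficient, and the junction's response to a power step yields the thermal impedance curve Zth(t). A minimal sketch of that evaluation, assuming a pre-calibrated sensitivity and made-up readings:

```python
import numpy as np

# Sketch of a thermal-transient evaluation; all readings are assumed values.
k_factor = -2.0e-3      # V/K, calibrated diode forward-voltage sensitivity
p_step = 2.0            # W, heating power step applied to the diode
v_initial = 0.6500      # V, forward voltage before the power step

t = np.logspace(-5, 1, 7)                         # s, sample instants
v_f = np.array([0.6498, 0.6492, 0.6480,
                0.6460, 0.6430, 0.6405, 0.6400])  # V, measured transient

delta_t = (v_f - v_initial) / k_factor   # K, junction temperature rise
z_th = delta_t / p_step                  # K/W, thermal impedance curve
for ti, zi in zip(t, z_th):
    print(f"t = {ti:8.1e} s   Zth = {zi:5.2f} K/W")
```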
2. Gather data on my desired materials.
Figure 5: DynTIM tests performed on different types of materials.
3. Perform structure function analysis.
Figure 6: Structure function analysis on the gathered data from DynTIM in Figure 5.
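The cumulative structure function itself is the cumulative thermal capacitance plotted against the cumulative thermal resistance along the heat path. The deconvolution that derives it from the Zth curve is mathematically involved, but its output is easy to picture from a discrete Cauer RC ladder; a sketch with assumed stage values:

```python
import numpy as np

# Assumed Cauer-ladder stages along the heat path; each stage has a
# thermal resistance and a thermal capacitance. Values are illustrative.
stages = ["chip", "die attach", "package", "TIM", "heatsink"]
r = np.array([0.05, 0.10, 0.20, 0.50, 0.40])   # K/W per stage
c = np.array([1e-4, 5e-4, 5e-3, 2e-3, 5e-1])   # J/K per stage

r_cum = np.cumsum(r)   # x-axis of the cumulative structure function
c_cum = np.cumsum(c)   # y-axis (usually plotted on a log scale)

for name, rc, cc in zip(stages, r_cum, c_cum):
    print(f"{name:10s} R_sum = {rc:4.2f} K/W   C_sum = {cc:8.2e} J/K")
```

The TIM shows up as a long, flat section – large added resistance with very little added capacitance – which is exactly what makes it identifiable in the curve.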
Benefit
Don’t fret if the benefit is not immediately obvious. The structure function is the common denominator between the application and the material testing. Using this common denominator, we can gather theoretical data on our material, place the material in the application, and compare the performance in a new structure function. This comparison provides data on many different levels.
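As a hypothetical illustration of that comparison: evaluate the material-only and the in-application structure functions at matched cumulative capacitance, and find where the resistance values diverge; the offset beyond that point approximates the interface resistance the real application adds. All points below are assumed.

```python
import numpy as np

# Two cumulative structure functions (R_sum vs. C_sum): lab (material only)
# and in-situ (same TIM mounted in the application). All points are assumed;
# real curves come from deconvolving the measured Zth transients.
c_cum = np.array([1e-4, 6e-4, 5.6e-3, 7.6e-3, 5.1e-1])  # J/K, shared grid
r_lab = np.array([0.05, 0.15, 0.35, 0.85, 1.25])         # K/W
r_app = np.array([0.05, 0.15, 0.35, 1.15, 1.55])         # K/W

diff = r_app - r_lab
idx = int(np.argmax(diff > 0.05))   # first stage where the curves separate
print(f"curves diverge at C_sum = {c_cum[idx]:.1e} J/K, R ~ {r_lab[idx]:.2f} K/W")
print(f"added interface resistance in the application ~ {diff[-1]:.2f} K/W")
```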
Overall, the point I am trying to get across is that the current methodologies are insufficient. While each company may have its own benchmarks, the buyer of the materials has to benchmark the benchmarks. However, much of that work can be eliminated if we can compare the lab data (material only) with the in-situ performance.