SPONSOR BLOG

Thermal Interface Materials: The Unknown Entity?

Current methodologies are inadequate to address increasingly complex thermal issues.


Thermal interface materials (TIMs) are becoming more important in all application areas and between different component parts. Semiconductor devices, from LEDs to high-power electronics, keep getting smaller while producing more power. In many ways the physical design limits of packaging have been reached, allowing entire components to have a total thermal resistance of less than 0.1 K/W. However, once a component is mounted on a PCB, a heatsink, or another part of the application, this thermal resistance may increase ten-fold!
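To put that ten-fold figure in perspective, here is a back-of-the-envelope estimate. Thermal resistances in series simply add, so a poor interface dominates an otherwise excellent package. All numbers below are illustrative assumptions, not vendor data:

# Illustrative junction-temperature estimate; all numbers are assumptions.
P = 50.0           # dissipated power, W
R_package = 0.1    # junction-to-case resistance, K/W (a modern package)
R_interface = 1.0  # case-to-heatsink resistance, K/W (a poor TIM joint)
R_heatsink = 0.5   # heatsink-to-ambient resistance, K/W
T_ambient = 25.0   # ambient temperature, °C

# Series thermal resistances add, like electrical resistors in series.
T_junction = T_ambient + P * (R_package + R_interface + R_heatsink)
print(f"Junction temperature: {T_junction:.1f} °C")   # 105.0 °C

# The same stack with a good 0.1 K/W interface lands 45 degrees lower.
T_junction_good = T_ambient + P * (R_package + 0.1 + R_heatsink)
print(f"With a good TIM joint: {T_junction_good:.1f} °C")  # 60.0 °C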

Thermal interface materials are thus becoming all the more important – something like the last frontier in conquering thermal issues in applications. Well, not entirely, but it sounds epic.

Several factors concerning the testing of TIMs should be considered when reading datasheets and applying the materials to the desired application:

  • How was the material tested?
  • What is the size of the test sample, compared to the application?
  • What type of material is being used?

Of course, more questions are raised, but these cover the most basic problems within the industry, which turn TIMs into something like a holy grail of application engineering. I will try to step through the problems one by one, to provide a basic overview.

How was the material tested?
This seems like the most obvious question to ask, but in the case of TIMs it is not necessarily the most important one. Although most TIM providers have their own internal standards, most of them lean on ASTM D-5470 (see Figure 1). The principle of the device is quite simple (a sketch of the resulting arithmetic follows Figure 1):

  • Place the desired TIM between two metal arms;
  • Each metal arm should contain multiple thermocouples;
  • Apply a heat source to the top plate and let the heat flow from top to bottom;
  • Record the change in temperature between each thermocouple;
  • Repeat the process, applying different pressures to the arms and, consequently, to the TIM.

Figure 1: ASTM D-5470 type device containing two metal contacts, with multiple thermocouples on each side to measure the heat flux. Source: 3M.
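For the curious, the arithmetic behind the device reduces to two linear fits. Here is a minimal sketch; the bar properties, thermocouple positions, and readings are made-up illustrative values, not a reference implementation of any particular tester:

# Minimal sketch of the ASTM D-5470 arithmetic; all values are invented.
import numpy as np

k_bar = 385.0   # thermal conductivity of the metal (copper) bars, W/(m·K)
A = 25e-4       # contact area, m² (a 50 mm × 50 mm sample)

# Thermocouple positions (m, measured from each bar's contact surface)
# and readings (°C) in the hot (top) and cold (bottom) bars.
x = np.array([0.005, 0.015, 0.025])
T_hot  = np.array([80.2, 82.1, 84.0])   # rises away from the TIM, toward the heater
T_cold = np.array([69.8, 67.9, 66.0])   # falls away from the TIM, toward the sink

# A linear fit of T(x) in each bar: the gradient gives the heat flux,
# the intercept extrapolates to the bar surface touching the TIM.
grad_hot, T_surf_hot = np.polyfit(x, T_hot, 1)
grad_cold, T_surf_cold = np.polyfit(x, T_cold, 1)

Q = k_bar * A * abs(grad_hot)              # heat flow through the stack, W
R_tim = (T_surf_hot - T_surf_cold) / Q     # measured interface resistance, K/W
print(f"Q = {Q:.1f} W, R = {R_tim:.3f} K/W")

Note that R_tim here is everything between the two bar surfaces: the bulk material plus whatever air gaps remain at both contacts. That detail is the crux of the next section.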

However, the standard leaves room for interpretation in how the pressure is applied, what the sample size should be, and where the thermocouples are placed. Of course, there are best practices, but this is where companies can define their own methodology.

The interesting part about this is that the method of testing is the least important question, compared to the problems the method itself implies. Confused? Let me clarify.

The crux
The purpose of thermal interface materials is to fill the gaps between uneven surfaces. If two rough surfaces are placed on top of each other, microscopic air gaps form between them that can increase the thermal resistance significantly. So why not make the surfaces smooth? Manufacturability and cost are the driving factors here. The solution is the TIM: it is cheaper and can be applied selectively. This all seems fine and dandy. However, when testing the material itself, the air gaps that it can’t fill are also being measured (see Figure 2, and the sketch that follows it).

Figure 2: Added contact thermal resistance in test setup.
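One common way to untangle the two contributions (not mandated by the standard, and the numbers below are invented) is to measure the same material at several bond-line thicknesses: the total resistance grows linearly with thickness, the slope of the fit yields the bulk conductivity, and the intercept at zero thickness is exactly the contact resistance of Figure 2:

# Hedged sketch: separating bulk and contact resistance by measuring
# the same TIM at several bond-line thicknesses (BLT). Data are invented.
import numpy as np

A = 25e-4                                  # contact area, m²
blt = np.array([50e-6, 100e-6, 200e-6])    # bond-line thicknesses, m
R_meas = np.array([0.016, 0.022, 0.034])   # measured total resistances, K/W

# R_total(BLT) = BLT / (k * A) + R_contact  ->  a straight line in BLT.
slope, R_contact = np.polyfit(blt, R_meas, 1)
k_bulk = 1.0 / (slope * A)

print(f"Bulk conductivity  ≈ {k_bulk:.2f} W/(m·K)")        # ≈ 3.33 W/(m·K)
print(f"Contact resistance ≈ {R_contact * 1e3:.0f} mK/W")  # ≈ 10 mK/W

Even so, the extracted contact resistance belongs to the polished test rig, not to the rougher surfaces of a real application.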

The solution is to wrap it up, send it to space, and test it in a vacuum. However, considering the probability of that scenario ever reaching the design sheet of a car or cell phone, this solution is about as far from reality as galaxy EGS-zs8-1 is from Earth.

In addition, the test system in question presumes a 1D heat-flow path. So unless you plan on using surface areas of less than 1 cm², the measurement may not come close to the application (which brings us to the second question). Not only is the size of the sample important because of the geometry of test vs. application, but so is the applied pressure. The larger the sample, the more accurately the pressure has to be applied – perhaps even over multiple points.

This requirement presents multiple issues:

  • A large surface area means a non-1D heat-flow path.
  • Different pressure means varying TIM performance over the same material.
  • At which boundary conditions will the TIM actually be applied in the application?
  • How will the different materials perform under the test conditions?
  • Is it a soft material that fills the gaps?
  • Is it a harder one, so that a more constant pressure is applied?

All these unanswered questions provide us with discrepancies between measured and vendor data, as seen in Figure 3.

Figure 3: Datasets comparing the ASTM D-5470 method (Statim) with the Mentor Graphics DynTIM and with vendor data for the same TIMs.

The solution
Naturally, new testing solutions are always on the horizon, and while I was writing this article an ambitious engineer probably came up with a salient solution to one of these problems. Nonetheless, I would like to present a novel approach to this dilemma.

By combining the test methodology with in-situ characterization, we may not be able to compensate for every single issue above, but we can correlate test results with real-world applications. When designing, this is sometimes worth more than addressing each problem individually. So how would it work?

1. Gather data from my TIM tester (the Mentor Graphics DynTIM, which performs similarly to an ASTM D-5470 device but adds proprietary structure function technology).

Figure 4: DynTIM, using a diode as heating source, measuring the thermal resistance of a material by using structure function analysis instead of thermocouple measurements.

2. Gather data on my desired materials.

Figure 5: DynTIM tests performed on different types of materials.

3. Perform structure function analysis (a toy construction of a structure function follows Figure 6).

Figure 6: Structure function analysis on the gathered data from DynTIM in Figure 5.
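To make the structure function concept concrete: it plots cumulative thermal capacitance against cumulative thermal resistance along the heat-flow path, so a thin, low-capacitance TIM layer shows up as a long flat run. The sketch below builds one from an invented Cauer RC ladder; it illustrates the idea only and is not the DynTIM algorithm (which derives the ladder from a measured thermal transient):

# Toy structure function from an invented die -> TIM -> heatsink path.
# (R in K/W, C in J/K) per section; all values are assumptions.
import numpy as np
import matplotlib.pyplot as plt

ladder = [
    (0.05, 1e-3),   # silicon die: low R, low C
    (0.40, 1e-5),   # TIM layer: high R, almost no C -> flat plateau
    (0.30, 5.0),    # heatsink: moderate R, huge C -> near-vertical rise
]

R_cum = np.cumsum([r for r, _ in ladder])   # cumulative resistance
C_cum = np.cumsum([c for _, c in ladder])   # cumulative capacitance

plt.semilogy(np.insert(R_cum, 0, 0.0), np.insert(C_cum, 0, 1e-6),
             drawstyle="steps-post")
plt.xlabel("cumulative thermal resistance (K/W)")
plt.ylabel("cumulative thermal capacitance (J/K)")
plt.title("Toy structure function: the flat run is the TIM")
plt.show()

Reading the plot from left to right is literally walking along the heat-flow path from heat source to ambient; the width of the flat run is the TIM's resistance.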

Benefit
Don’t fret if the benefit is not immediately obvious. The structure function is the common denominator between the application and the material testing. Using this common denominator, we can gather theoretical data on our material, place the material in the application, and compare its performance in a new structure function. This provides data on many different levels (a toy comparison follows the list):

  • The relation between theoretical and practical performance;
  • The quality of the application (poor surfaces causing contact issues, or different pressures);
  • Self-verification of materials that are yet to be purchased;
  • Uncover material parameters that may not be immediately obvious from datasheets or simulations.
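As a toy illustration of the first bullet (with invented numbers): overlay the lab and in-situ structure functions, find where they diverge, and the horizontal offset between the curves is the extra resistance picked up in the application:

# Hedged sketch: reading the in-application penalty off two structure
# functions. Cumulative R contributions (K/W) are invented illustrations.
R_lab     = {"die": 0.05, "TIM": 0.40, "heatsink": 0.30}
R_in_situ = {"die": 0.05, "TIM": 0.65, "heatsink": 0.30}

# The curves match up to the die, then diverge at the TIM; the shift in
# cumulative resistance localizes the problem to the interface itself.
penalty = R_in_situ["TIM"] - R_lab["TIM"]
print(f"Extra interface resistance in the application: {penalty:.2f} K/W")
# -> 0.25 K/W of avoidable resistance (e.g. voids or the wrong pressure).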

Overall, the point I am trying to bring across is that the current methodologies are insufficient. While each company may have its own benchmarks, the buyer of the materials has to benchmark the benchmarks. However, much of that work can be eliminated if we can compare the lab data (material only) with the in-situ performance.


