3 Technologies That Will Challenge Test

It’s not entirely clear exactly what those tests will need to verify, but they could have a big impact on the whole testing process.


As chips are deployed in more complex systems and with new technologies, it’s not clear exactly what chipmakers and systems vendors will be testing.

The standard tests for voltage, temperature and electrical throughput still will be needed, of course. But that won’t be sufficient to ensure that sensor fusion, machine learning, or millimeter-wave 5G/6G will be functioning properly. Each of those raises issues for which there currently is no clear roadmap.

Sensor fusion has come into focus most recently in the automotive market, where multiple functions will need to be combined in order to limit the amount of data that needs to be moved around a vehicle and processed centrally. If an object is in the road, it’s not feasible for three different live streaming data feeds to report that to a central logic command center in the vehicle. That would take too much energy and time, and it would require a much more expensive central computer.

While standard checks are needed to ensure a chip is functioning properly, the real challenge with sensor fusion is prioritization and partitioning. Under ideal conditions, that likely will work as expected. However, most accidents don’t happen under ideal conditions. And if sensors are combined into a single module, then those modules will require redundancy and regular health checks.
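As an illustration of what a regular health check on such a module might look like, here is a minimal Python sketch. The channel fields, staleness limit, and noise threshold are hypothetical placeholders for this article, not a real automotive API.

```python
# Illustrative sketch only: a periodic health check for a hypothetical
# multi-sensor module with redundant camera/radar/lidar channels.
import time
from dataclasses import dataclass

@dataclass
class SensorChannel:
    name: str
    last_frame_ts: float   # timestamp of the most recent valid frame
    noise_floor: float     # running estimate of the channel's noise level

def is_healthy(ch: SensorChannel, now: float,
               max_staleness_s: float = 0.1,
               max_noise: float = 0.25) -> bool:
    """A channel counts as healthy if it still produces frames on time
    and its noise estimate stays within an expected envelope."""
    return (now - ch.last_frame_ts) < max_staleness_s and ch.noise_floor < max_noise

def module_status(channels: list[SensorChannel]) -> str:
    """Redundancy allows degraded operation when only some channels fail."""
    now = time.time()
    healthy = [ch.name for ch in channels if is_healthy(ch, now)]
    if len(healthy) == len(channels):
        return "OK"
    if healthy:
        return f"DEGRADED (healthy: {', '.join(healthy)})"
    return "FAIL"
```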

But even then, what exactly is a multi-sensor module being tested for? Is it the functioning of the hardware and software, or the ability to recognize an object? And how do you test whether one sensor’s input should be prioritized over another’s, particularly in fog or a whiteout?
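One way to make that question concrete is to treat prioritization as a condition-dependent weighting and write test vectors against it. The sketch below is a toy model; the weights and condition labels are assumptions for illustration, not any vendor’s fusion algorithm.

```python
# Toy model: each sensor reports a detection confidence for the same object,
# and the fusion weight given to each sensor depends on conditions.
WEIGHTS = {
    "clear": {"camera": 0.5, "radar": 0.3, "lidar": 0.2},
    "fog":   {"camera": 0.1, "radar": 0.6, "lidar": 0.3},
}

def fused_confidence(detections: dict[str, float], condition: str) -> float:
    """Weighted combination of per-sensor confidences for one object."""
    w = WEIGHTS[condition]
    return sum(w[s] * c for s, c in detections.items())

# The kind of test vector a module-level test might assert on: in fog,
# a strong radar return should outweigh a strong camera detection.
assert fused_confidence({"camera": 0.2, "radar": 0.9, "lidar": 0.5}, "fog") > \
       fused_confidence({"camera": 0.9, "radar": 0.2, "lidar": 0.5}, "fog")
```

Even this trivial example shows the problem: the test is no longer about whether the hardware toggles correctly, but about whether a policy decision behaves as intended under conditions that are hard to reproduce on a tester.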

AI plays a role here, and much has been written on the reliability of this technology. AI/ML/DL are supposed to adapt over time to optimize performance and power, but how that happens remains opaque, despite efforts to shed light on the process. There is no way to look into an algorithm gone awry and troubleshoot exactly what happened, and so far there is no obvious way to test for that to make sure nothing does go wrong. It’s possible to hit the reset button, but even that may not produce the expected results, because some circuits may age faster than others under unique circumstances.
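There is no general answer today, but one partial, pragmatic check is to replay a fixed set of “golden” inputs after every adaptation or over-the-air update and flag when outputs drift. The sketch below assumes a hypothetical run_inference() callable and an arbitrary tolerance; it only detects that behavior changed, not why.

```python
# Illustrative drift check, not a solution to the explainability problem.
import numpy as np

def drift_check(run_inference, golden_inputs: np.ndarray,
                golden_outputs: np.ndarray, tol: float = 0.05) -> list[int]:
    """Return indices of golden vectors whose outputs drifted beyond tol
    relative to the reference outputs captured at qualification time."""
    drifted = []
    for i, (x, y_ref) in enumerate(zip(golden_inputs, golden_outputs)):
        y = run_inference(x)
        if np.max(np.abs(y - y_ref)) > tol:
            drifted.append(i)
    return drifted
```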

A third troubling technology for test is 5G/6G, which can vary greatly depending upon the wireless spectrum allotted by various regions and countries. Each has its own frequencies, which makes it difficult to determine whether a handset developed in one country will function equally well in another. A cell tower at the lower end of the mmWave band will behave very differently than one at the higher end. At the highest frequencies, signals can be disrupted by weather. So when something fails to connect, is it due to the chips, the antennas, or something else? And given all of the variables, how does one go about testing for all of that?
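A small worked example shows part of the problem. Even before weather and regional band plans enter the picture, free-space path loss alone rises with frequency, so the same link budget cannot be assumed across the band. The frequency points below are illustrative, not any particular country’s allocation.

```python
# Friis free-space path loss at a few example mmWave frequencies.
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

for freq_ghz in (24.0, 39.0, 60.0):          # low, mid, high mmWave examples
    loss = fspl_db(freq_ghz * 1e9, 100.0)    # 100 m link
    print(f"{freq_ghz:5.1f} GHz: {loss:.1f} dB free-space loss at 100 m")

# Roughly 8 dB more loss at 60 GHz than at 24 GHz over the same distance,
# before accounting for rain fade or oxygen absorption near 60 GHz.
```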

While test is still critical for ensuring that chips, packages, and systems function to spec at manufacturing, it no longer can be viewed just as a final step in the fab and packaging house. It may take years before new technologies are stable enough to be tested with certainty, and even then they will likely change and adapt with some form of AI and continual over-the-air updates. As a result, test will need to be viewed as a function as well as a process, and as a competitive add-on/add-in. It will need to be factored into designs, both for power and performance reasons, and, assuming privacy concerns can be worked out, leveraged to improve design and manufacturing processes.
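To make the “function as well as process” idea concrete, consider a self-test hook that the system itself can invoke in the field and after updates, rather than only on a tester at manufacturing. This is a minimal sketch under assumed thresholds; the check names and limits are placeholders, not a standard interface.

```python
# Minimal sketch of an in-field self-test hook ("test as a function").
def in_field_self_test(read_temp_c, read_vdd_v, run_logic_bist) -> dict:
    """Run a small suite of in-field checks and return a pass/fail report."""
    report = {
        "temperature_ok": read_temp_c() < 105.0,    # assumed junction limit
        "supply_ok": 0.72 <= read_vdd_v() <= 0.88,  # assumed 0.8 V nominal rail
        "logic_bist_ok": run_logic_bist(),          # built-in self-test result
    }
    report["pass"] = all(report.values())
    return report
```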

Put simply, the limitations of test today may be the start of something much bigger.


