How Secure Is The Package?

Making all the components work properly is only the beginning.


Advanced packaging is a viable way of extending the benefits of Moore’s Law without the excessive cost of shrinking everything to fit on a single die, but it also raises security issues for which there are no clear answers at the moment.

For the better part of a decade, OSATs and foundries have been working out the kinks in how to put the pieces together in the most cost-effective and reliable way. The next challenge is making sure all of those pieces are secure, both together and individually.

There are three main security challenges that need to be addressed. The first involves heterogeneity. For the most part, the multi-chip packages used in products today are composed of single-vendor IP blocks. The exception is the off-chip memory, which is made by one of several DRAM vendors and is unlikely to cause a security issue because those components are volatile. But when the parts in a package are developed by different vendors, the potential attack surface widens.

While the responsibility ultimately rests with the system integrator, which may be a systems company, foundry, OSAT, or even an IDM, the problem is that no infrastructure exists to guarantee those dielets/chiplets do only what they’re supposed to do, and nothing more. That requires a leap of faith at the moment, because most of them will be sold as black boxes. At a minimum, they likely will carry some sort of embedded code or marking to guarantee authenticity and point of origin. Ideally, system architects also will require continual monitoring of thermal and electrical signals at the interfaces and across the chip to determine whether there is unusual activity, or whether there are problems in one chiplet or part of a chiplet that could create an issue, security-related or otherwise.
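To make the monitoring idea more concrete, here is a minimal sketch in Python of what chiplet authentication and telemetry checking might look like at the system level. The vendor MAC scheme, chiplet names, and operating envelopes are all hypothetical, invented for illustration; in a real design this logic would live in firmware or a hardware root of trust rather than application code.

```python
import hashlib
import hmac
from dataclasses import dataclass

# Hypothetical provenance record a chiplet vendor might embed in each part.
@dataclass
class ChipletID:
    vendor: str
    part_number: str
    serial: bytes
    tag: bytes  # MAC over the fields above, issued by the vendor

def authentic(chiplet: ChipletID, vendor_key: bytes) -> bool:
    """Check the embedded marking against the vendor's key (illustrative only)."""
    msg = chiplet.vendor.encode() + chiplet.part_number.encode() + chiplet.serial
    expected = hmac.new(vendor_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, chiplet.tag)

# Hypothetical per-chiplet operating envelopes (temperature in C, current in A).
ENVELOPES = {
    "cpu_die": {"temp": (0.0, 95.0), "current": (0.0, 12.0)},
    "io_die":  {"temp": (0.0, 85.0), "current": (0.0, 3.5)},
}

def check_telemetry(chiplet: str, temp: float, current: float) -> list[str]:
    """Flag readings that fall outside the expected envelope as anomalies."""
    env = ENVELOPES[chiplet]
    alerts = []
    if not env["temp"][0] <= temp <= env["temp"][1]:
        alerts.append(f"{chiplet}: temperature {temp}C out of range")
    if not env["current"][0] <= current <= env["current"][1]:
        alerts.append(f"{chiplet}: current {current}A out of range")
    return alerts

# Verify a made-up part, then flag a current spike on the I/O die.
key = b"hypothetical-vendor-key"
tag = hmac.new(key, b"AcmeXPU-7\x00\x01", hashlib.sha256).digest()
print(authentic(ChipletID("Acme", "XPU-7", b"\x00\x01", tag), key))  # True
print(check_telemetry("io_die", temp=62.0, current=4.1))             # anomaly
```

Even a scheme this simple makes the trust question explicit: the integrator needs a key from every vendor and an expected envelope for every chiplet, which is precisely the infrastructure that does not yet exist.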

Second, because devices are expected to remain in the market longer these days, they need to be updated throughout their lifetimes. It’s highly unlikely that all of the components in a package will be updated equally or simultaneously. Done right, this can be accounted for with a flexible architecture. Done wrong, it could create mismatches at every level, which in turn could mask attacks on even the best-designed and best-monitored devices.
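As a rough illustration of what “done right” might look like, the sketch below keeps an explicit record of which component versions interoperate and blocks an update that would violate a declared constraint. The component names, version numbers, and compatibility rules are invented for this example.

```python
# Minimal sketch: each proposed update declares minimum versions it needs
# from other components, and the update is rejected if installing it would
# create a mismatch. All names and constraints here are hypothetical.

installed = {"compute": (2, 1), "memory_ctrl": (1, 4), "io": (3, 0)}

# (component, proposed version) -> minimum versions required of other parts
compat = {
    ("compute", (3, 0)): {"memory_ctrl": (1, 5)},
}

def can_update(component: str, version: tuple[int, int]) -> bool:
    """Allow an update only if every declared constraint is already met."""
    for other, minimum in compat.get((component, version), {}).items():
        if installed[other] < minimum:
            print(f"blocked: {component} {version} needs {other} >= {minimum}")
            return False
    return True

print(can_update("compute", (3, 0)))  # False: memory_ctrl must be updated first
print(can_update("io", (3, 1)))       # True: no declared constraints
```

The mechanism matters less than the bookkeeping. Without an explicit, machine-checkable record of which versions work together, uneven updates produce exactly the silent mismatches described above.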

Third, continual over-the-air updates can introduce complexity and incompatibilities into software and firmware, as anyone who has owned a smartphone or PC for more than a couple of years can attest. The more elements that are updated unevenly and independently, the greater the possibility of unexpected vulnerabilities, not to mention outright failures.

There is much work underway in standards bodies to make sure the various pieces in a heterogeneous design can work together in a plug-and-play manner. It’s a hard problem, and a lot of engineering time is going into solving it. But it’s equally important that these standards efforts drill down into how to build enough flexibility and resiliency into multi-chip architectures from a security standpoint.



1 comment

Johann Knechtel says:

Thanks for that, great opinion article!

It’s a very good question how the system integrator should deal with commodity chiplets, which might have security flaws in their firmware or hardware but have to be used and trusted as black boxes. The notion of monitoring is also a great idea, both for functional signals and for thermal, power, or other channels.

One can argue for using active interposer technology to address these issues, with the interposer manufactured at a trusted facility and serving as a root of trust or security backbone for the integration of untrusted chiplets.
