
Who Owns In-Chip Monitoring Data?

Rules are still being formulated even though the technology is already deployed.


In-chip monitors provide unprecedented visibility into the inner workings of complex integrated circuits for everything from process control to fine binning, preventive system maintenance, and failure analysis. But there may be many consumers of different slices of the data at very different phases of the chip lifecycle, raising questions about who controls and owns all of that data.

The answers are surprisingly complex.

“There’s this joint ownership of the data, with different people at different layers doing different things with the data for their own purposes,” said Kurt Shuler, vice president of marketing at Arteris IP. Contracts and regulations further complicate things.

Access to data is an important part of the business model that monitoring companies set up between themselves and the various data stakeholders. Nevertheless, subscriptions to that data may be independent. One company’s access to the data may not depend on the subscription status of any other company.

Opinions vary, in particular, when it comes to some of the delicate fab-related data that could be exposed with the right monitors.

Where the data originates
In-chip monitors start out under the control of one entity — the company building an SoC. Throughout this piece, we’ll refer to that company as the “chipmaker” (even though they don’t physically build the chip if they’re fabless). That company selects a monitoring partner — or more than one — and pays a fee to purchase the monitor IP. The design and monitors are then verified and go into manufacturing. Only when the first chips get to the test station do the monitors wake up.

The monitor data is transported to some data center (setting aside, for this discussion, the fact that it also may be used on a tester). That data center may be under the control of the monitor vendor or, more likely, it will be a private cloud belonging to the chipmaker that's running the analytics software from the monitoring company.

At this point, only two companies are involved — the chipmaker and the monitoring company. Of course, if more than one monitor is used, then each monitor company is involved. But other possible stakeholders might want access to the data. Foremost among these is the system builder, which buys the chips and installs them in a device.

That system could have multiple chips with monitors on them, meaning there could be multiple sources of data. And each source could be from a different chipmaker. And yet it’s the chipmakers, not the system maker, that decide what monitoring will be available based on the monitors they choose to include.

A chipmaker is free to consult with system-building customers to ensure that the monitors are of value. In all likelihood, though, the monitoring capability will be part of the value of the chip, and chips with the most useful monitoring choices and data will be more attractive to buyers.

“At some point it will become competitive differentiation,” said Steve Pateras, senior director of marketing for test products at Synopsys. “You can sell data about your chips, as well as your chips, to your customer.”

Other possible interested parties include fabs — even though they have lots of other tools for measuring chip characteristics — as well as test houses and assembly houses. Within these companies, different departments may have an interest. Binning and performance selection, preventive maintenance, and failure analysis are classic applications for monitoring data.

But how do these teams get access? Does the chipmaker grant access to the data that already has been harvested? Does the monitor-maker, by virtue of the analytics software, grant that access? If the chipmaker controls things and ends a data subscription, does that end it for everyone? And do the chips stop gathering data then?

There is no single answer to these questions. Each monitoring company is free to build its own technical and business model. And those models are likely to evolve as the industry figures out what works best. But there are policies in place now with each company that provide answers, even if they are only temporary.

Subscribers mostly pull data
It’s easy to picture these chips out in deployment, busily generating data that accumulates somewhere. In such a scenario, each subscriber would be pulling from that data trove, not from the device itself. But that’s not how it works.

For one thing, pushing out data when no one explicitly is requesting it keeps the monitors active, which draws power and may take bus cycles. “We don’t take power from the chip and make it work more than it needs,” said Uzi Baruch, chief strategy officer at proteanTecs. “We are sensitive to the actual operation itself. We don’t want to consume energy. We don’t want to make any changes to the way that you operate.”

The timing can vary even with metrics that may need regular monitoring, making it easier to schedule data pulls in software rather than data pushes in hardware. “If you’re interested in the aging of the chip, you don’t need to pull it every second and ask what’s going on,” noted Baruch. “Even if you use an interval of every few days or weeks, the ability to track aging does not fundamentally change.”

In a data-pull design, data is not pushed out by some schedule designed into the chip. Instead, data is always pulled, at least from the standpoint of the chip. However, that pull could come from a number of levels. The most common one would be requests built into the system software. That code would manage all of the “regularly scheduled” data pulls the various stakeholders might want. So at the system level, this would look like data being pushed out, while at the chip level it’s the system pulling data.

This puts control at the level of software, which can be updated and changed if necessary, making it more flexible. It also means that if there are no subscribers, the firmware can be updated to stop pulling the data and sending it to the data center.
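As a rough illustration, the pull model might look like the sketch below. The scheduler class, monitor IDs, and callbacks are all hypothetical names, not any vendor's actual API. The key property is that removing a subscription simply deletes its schedule entry, so no further data leaves the device:

```python
import time

# Hypothetical sketch of a firmware-level pull scheduler. Monitor IDs,
# intervals, and the read/upload callables are illustrative assumptions.
class PullScheduler:
    def __init__(self, read_monitor, upload):
        self.read_monitor = read_monitor   # callable: monitor_id -> reading
        self.upload = upload               # callable: (monitor_id, reading)
        self.schedule = {}                 # monitor_id -> (interval_s, next_due)

    def subscribe(self, monitor_id, interval_s):
        self.schedule[monitor_id] = (interval_s, time.monotonic())

    def unsubscribe(self, monitor_id):
        # Dropping the entry stops all pulls for this monitor. Because the
        # chip itself never pushes, no further data leaves the device.
        self.schedule.pop(monitor_id, None)

    def tick(self):
        # Called periodically by the system software's main loop.
        now = time.monotonic()
        for mid, (interval, due) in list(self.schedule.items()):
            if now >= due:
                self.upload(mid, self.read_monitor(mid))
                self.schedule[mid] = (interval, now + interval)
```

At the chip level every read is a pull; at the system level the scheduler makes it look like a regular push toward the data center, which matches the two viewpoints described above.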

With respect to one-off data — as opposed to the regularly scheduled data — the data pull would originate with the requester in the cloud, with the request communicated down to the chip for fresh data. Historical data, of course, would come from the existing archive rather than the device.

Fig. 1: An abstracted view of the system-level architecture for in-chip monitoring. Each chip may be from a different vendor. In addition, the chips may have a mix of monitors from different monitor vendors. Pre-scheduled data is simply delivered to the cloud. The API deals with one-off data requests. Some monitors may be able to initiate a data push in the form of an interrupt, typically for some system-critical situation. Source: Bryon Moyer/Semiconductor Engineering


There may be situations that call for a data push, however. “You could have something like a catastrophic trip monitor that says, ‘Look, if the temperature ever gets above this point, send out an alert,’” said Randy Fish, director of marketing for silicon lifecycle management in the Digital Design Group at Synopsys. “You may want to pre-define it, which may mean that it’s fused during manufacturing, when you calibrate it. You may have other catastrophic monitors you’re putting in place that are programmable.”
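One way to picture such a trip monitor is the sketch below. The class name, the fused/programmable distinction, and the alert callback are illustrative assumptions based on the description above; the point is that the monitor stays silent until its threshold is crossed and only then initiates a push:

```python
class TripMonitor:
    """Sketch of a catastrophic-trip monitor (hypothetical names).

    A fused limit models calibration at manufacturing and can't be
    changed afterward; a programmable one can. Either way, the monitor
    generates no traffic until the threshold is crossed, at which point
    it initiates a push (an interrupt-style alert) on its own.
    """
    def __init__(self, limit_c, alert, fused=False):
        self.limit_c = limit_c
        self.alert = alert     # callable invoked on trip, e.g. raise an IRQ
        self.fused = fused

    def set_limit(self, limit_c):
        if self.fused:
            raise PermissionError("limit was fused during manufacturing")
        self.limit_c = limit_c

    def sample(self, temp_c):
        if temp_c > self.limit_c:
            self.alert(temp_c)  # the only unsolicited data this monitor sends
```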

Some companies allow for both push and pull models. “Whether the monitor is in pull or push mode is a runtime configurable feature,” said Gajinder Panesar, fellow at Siemens EDA, with respect to their monitors.

Given the pull nature of most of the data, the system designer is in the unique position of bringing all of that data out into the world. The system software is the ultimate arbiter of what data is pulled when — at least for any regularly scheduled data.

But even ad-hoc requests have to percolate through the system, and in theory that software has the ability to accept or reject a request. So as much as chip designers may want to be in control, it’s ultimately the system designer who makes the final decisions, since they may not query all of the available monitors in the firmware.

While the in-chip monitors can’t force the system to allow access, they can override decisions to access data based on who is requesting it even if the system approves it. “We have monitor controllers on the chip, and they control certain groups of sensors and monitors,” said Pateras. “And so we have ways of controlling access to each of those controllers as part of our infrastructure.”
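A minimal sketch of that idea, with hypothetical names, might gate each controller's group of monitors behind a requester check that holds regardless of what the system software forwards:

```python
# Sketch of per-controller access control. Each controller guards a group
# of monitors and checks the requester's identity before serving data,
# independent of whether the system approved the request. Names, the
# credential scheme, and the monitor map are illustrative assumptions.
class MonitorController:
    def __init__(self, monitors, allowed_requesters):
        self.monitors = monitors                 # monitor_id -> read callable
        self.allowed = set(allowed_requesters)   # e.g. {"chipmaker", "oem"}

    def read(self, requester, monitor_id):
        if requester not in self.allowed:
            # Even a system-approved request is refused at the chip level.
            raise PermissionError(f"{requester} may not access this group")
        return self.monitors[monitor_id]()
```

A real implementation would authenticate the requester cryptographically rather than trusting a label, but the layering is the same: the system decides what gets forwarded, and the on-chip controller decides what gets answered.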

The business side of things
Technical capabilities aside, much of this becomes an issue of business model and contracts. There are definitely different views of what might work best here.

In the case of proteanTecs, anyone with legitimate interest in the data can be a subscriber. But there’s a catch. Access must be approved. Where that approval comes from depends on the scenario. If a chipmaker is creating a chip that will be sold on the open market, then they are the approver. On the other hand, if a system builder specifies the design of a new chip that will include monitors and that chip is exclusively for them, then they become the approver.

If a company wants to subscribe, first they go to the approver and get the okay, and then they can go to proteanTecs to purchase access to the software portal. Assuming such approvals, there could be many subscribers.

That subscription may eventually lapse, too, either because the subscription term ends or because the approver rescinds access. If that happens, there are two implications. One is the ability to pull new data, which ends. The second is the ability to use the software to access existing data, which could continue even if no new data is being delivered. If the subscription to the software platform itself is ending, then the data can be exported to the subscriber’s own storage for future use.

As to who owns the data, proteanTecs says each subscriber owns the data to which they subscribed, which means the data may have many owners. Data already received — if stored elsewhere — can’t be pulled back if the subscription lapses.

There’s also a privacy issue regarding timing. An “owner” of the chip — and this applies mostly to the end user of the system containing the chip — gets data access (assuming this is approved) only for the time when they own the system. If they sell the system to another user on the secondary market, then the new user can’t go back in and look at what the previous user might have done.
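That ownership window could be enforced with a simple timestamp cut-off, as in this illustrative sketch (the record shape and function name are assumptions, not any vendor's scheme):

```python
from datetime import datetime

def visible_history(records, ownership_start):
    """Return only records generated at or after the viewer took ownership.

    `records` is a list of (timestamp, payload) tuples. The cut-off models
    the rule that a secondary-market buyer cannot look back at what the
    previous owner of the system did.
    """
    return [(ts, p) for ts, p in records if ts >= ownership_start]
```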

Some data is particularly sensitive
Of particular concern is data that would normally be protected and restricted by fabs and test houses. “If you’re the fab, are you losing control over the perception of your product by somebody dinking around with the data?” asked Shuler.

This is where things get tricky. “Almost any fab data is a touchy subject,” said Fish. “Some foundries or fabs are touchier than others.”

In-circuit monitors also add security concerns. “When it comes to adding monitors and stuff like that, there is the potential that fabless people can put secret structures in that the foundry can’t access,” said Guy Cortez, staff product marketing manager for silicon lifecycle management in Synopsys’ Digital Design Group.

It’s a delicate subject, and some folks did not want to go on the record for this story. Others acknowledge there may be different restrictions that apply in different situations.

“Fabless companies don’t have the right to reverse engineer [the silicon process] and sell that information,” explained Fish. “They’re protected under all sorts of non-disclosures. But they have the right to build things in. If we started crafting monitors that were effectively pulling out process information, we’d have to defer to the agreement that the customer has with the foundries. You’re in a gray area.”

Who gets what data may depend to some extent on the size of the company requesting access. “Depending on the size of the fabless business, requests for fab data are met on a sliding scale,” said Mike McIntyre, director of software product management at Onto Innovation. “Largest customers may get a fairly complete data set, but only for specifically requested or individually named material. Significant customers may get partial data only for specifically identified wafers, and lower-order customers may not get any additional data beyond what was contractually obligated.”

This is business as usual for many companies. “There’s already data exchanged between foundries and their customers in both directions,” added Pateras. “We do that already for yield improvement and reliability improvement.”

Siemens views those partners as having some ownership of that data. “I would expect that yield data will still be owned by the fab companies,” said Panesar. “But it will be accessible via a secure API so that aging and preventive maintenance can be provided as a service.”

Nevertheless, not all data is available. “System-level and architectural data on how the system is behaving in deployment will be the province of the system provider,” Panesar added. “This also will be fed back, in a controlled way, to the chip manufacturer to architect next-generation products and possibly to monitor service-style agreements on uptime.”

Much of the data ends up in a trove that may include other data, like test or fab metrology results. That data may be restricted. Monitor data taken after the device is live typically isn’t, although there may be restrictions on which monitors can be accessed.

ProteanTecs has no blanket restrictions, and reports no pushback from foundries concerned that data might reach parties who shouldn’t see it. “Since customers own the data, they can decide what and how much to reveal,” said Baruch.

It’s also possible that low-level software could see all of the raw data directly, but that data would be abstracted before being presented to the viewer. In that manner, viewers could draw overall conclusions without being able to dig into the confidential details.

“Massaging the data, creating some form of metadata that gives more trend information or more generalized information about what’s going on, may hide some of the dirty laundry underneath,” said Pateras.
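A toy version of that kind of data massaging might reduce raw readings to per-window means and a trend direction, so viewers see the overall picture without the underlying values (an illustrative reduction, not any vendor's actual scheme):

```python
import statistics

def trend_summary(raw_samples, window=10):
    """Reduce raw monitor readings to coarse trend metadata.

    Viewers get window means and an overall direction rather than the
    raw samples, so process-sensitive detail stays hidden. The window
    size and output shape here are assumptions for illustration.
    """
    windows = [raw_samples[i:i + window] for i in range(0, len(raw_samples), window)]
    means = [statistics.fmean(w) for w in windows]
    direction = "rising" if means[-1] > means[0] else "flat-or-falling"
    return {"window_means": means, "trend": direction}
```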

Monitor companies get paid for the monitor IP, possibly for subscriptions, and possibly for storing data. But while it’s sometimes vague knowing who owns the data, they are all clear that it isn’t them. “We don’t own the data,” said Pateras. “We’re not in the data brokerage business. We give you a tool that creates a database, that manages a database of data, and that analyzes the data, but we don’t see the data — and we don’t want the data.”

Security is important
Because much of this data is sensitive, regardless of who owns it, security is critical. “Without the proper levels of security, it’s a non-starter,” said Keith Schaub, vice president of technology and strategy at Advantest.

Synopsys, for instance, lays down security infrastructure as a part of the IP for a monitoring system.

Part of this is about ensuring that only authorized parties can access the data. The data itself must also be protected while stored, although that’s more likely to be a data-center issue rather than a system issue.

Data in motion — the deliveries of monitor data — also must be protected. While encryption is the most secure approach, some companies may opt for watermarking or obfuscating as a lighter touch. This applies regardless of whether the data is moving within the system for operational decisions or being delivered up to the cloud for analytics.
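As one lightweight illustration of protecting data in motion, an HMAC tag makes tampering detectable without hiding the contents; full encryption (e.g., TLS) would sit at the heavier end of the spectrum the text describes. The message shape is an assumption, and key management is out of scope for this sketch:

```python
import hashlib
import hmac
import json

def seal(payload: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 tag so tampering in transit is detectable.

    This provides integrity/authenticity only; it does not hide the
    payload the way encryption would.
    """
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(message: dict, key: bytes) -> bool:
    # Constant-time comparison avoids leaking tag bytes via timing.
    expected = hmac.new(key, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```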

It’s all tentative
Because these are the early days of in-chip monitoring, business arrangements are often one-off agreements.

“The proper incentives need to be in place — many new business models are being negotiated,” said Advantest’s Schaub. Different customers may negotiate different deals when it comes to all of the details of how the data is used and paid for.

Siemens EDA’s Panesar sees the potential for an entire ecosystem evolving around the various kinds of available data. “Just as with the smartphone, an ecosystem will definitely build up around in-chip data,” he predicted.

So whatever the situation is now, it’s likely that things will change over time as the industry adapts to the availability of monitor data. “I’d challenge anyone to give definitive answers,” he said. “And the answers tomorrow almost certainly won’t be the same as they are today. Usage and business models are changing.”

While monitoring may be maturing from a technical standpoint, the business models are still evolving. It remains to be seen whether they simply stabilize with each individual company, or whether the industry as a whole settles on a standardized way of dealing with this.

Related
IC Data Hot Potato: Who Owns And Manages It?
Dealing with a deluge of data in IC inspection, metrology, and test.
Designing Chips For Test Data
Getting the data out is only part of the problem. Making sure it’s right is another challenge altogether.
Ins And Outs Of In-Circuit Monitoring
Techniques to predict failures and improve reliability.
In-Chip Monitoring Becoming Essential Below 10nm
Complex interactions and power-related effects require understanding of how chips behave in context of real-world use cases.



1 comment

Michael Kanellos says:

Fascinating. I think it will evolve similar to real estate law with concurrent (but different) interests. Testing companies will give their data to chip designers as part of their contract but retain the right to anonymize it for optimizing their own services. (Utility testing companies do this now.) Fabs will have primary rights on some data, but not non manufacturing data, and designers will have rights of refusal. etc.
