More Semiconductor Data Moving To Cloud

A year ago many companies were unwilling to ship their data offsite. What changed?


The cloud is booming. After years of steady growth it has begun to spike, creating new options for design, test, analytics and AI, all of which have an impact on every segment of the semiconductor industry.

The initial idea behind the cloud was that it would supplement processing done on premises, adding extra compute power wherever necessary, such as in the verification and debug stages of design, and in processing large quantities of manufacturing process and test data. What’s changed is that the cloud increasingly is becoming the primary location for that data, both for processing and for storage. In addition, processed data is even being looped back into the supply chain post-production to improve reliability and fix problems that show up in real-world use of products.

Several things have changed to make this happen:

  • Confidence has grown in the capabilities and security of cloud providers to the point where more companies are trusting the cloud and leveraging it to process more data in more ways than in the past.
  • Alongside that growing confidence, far more data is being collected by more sensors, both externally in manufacturing equipment and internally in devices as they are being used.
  • All of that data is being crunched in ways that matter to individual products and processes. In the past, massive amounts of data were distilled into reports for top executives, many of whom gave the reports only cursory reads. Today, some of that data is being fed directly back into machines as manufacturing recipes are refined and improved.

This represents a fundamental shift on many fronts. On the design side, chipmakers paid scant attention to cloud offerings until last year, despite the best efforts of Synopsys, Cadence and Mentor, a Siemens Business, to promote their cloud capabilities. But several things have occurred to make this more promising. First, TSMC endorsed design in the cloud last October, when it announced that its Open Innovation Platform for RTL-to-GDSII would be available on the Amazon Web Services (AWS), Microsoft Azure, Cadence and Synopsys clouds.

On top of that, a growing percentage of the companies developing chips these days are either startups or systems companies rather than traditional chipmakers. Those companies have less design infrastructure in-house, and in the case of startups they lack the resources to buy emulators or simulators large enough for a reticle-sized AI chip. In addition, the growing complexity of chips and more heterogeneous architectures are adding to the difficulty of designing at the most advanced nodes, where even large chipmakers are constrained by the compute resources in their server farms.

“A major effort is underway to move everything to the cloud,” said Mike Gianfagna, vice president of marketing at eSilicon. “We will have everything in the cloud by the end of this month. We’ve been working with Google to move our heterogeneous design flow to the cloud. But what’s different is we need multiple tools from multiple vendors, so you may have synthesis tools on one cloud and CAD on another. What we’re doing is running an entire design environment on the cloud. The big question is whether EDA companies will adjust their licensing to time-based to provide that level of flexibility. The fine-grained model works better for everyone.”

Image by Pete Linforth from Pixabay

IP licensing already has begun to shift in this direction. Arm changed its licensing scheme to allow companies to experiment with different processing options and only pay for IP after chips go into production.

“This is a new engagement model,” said Dipti Vachani, senior vice president of automotive at Arm. “What we’re seeing, particularly with machine learning and AI algorithms, is that decisions are being made at different points in the chain. You want to know how much data you need, how much power and how much performance. This enables experimentation and testing, and we’re seeing a lot of startups with new hardware, as well as traditional OEMs making silicon.”

Analytics in the cloud
That’s one piece of the picture. In addition, much of the data analytics for the design and manufacture of those chips is moving to the cloud. That kind of analysis was never allowed off-premises in the past, because the data could be used for competitive purposes if it leaked out.

Some chipmakers and fabs still have a rule that data never leaves the site. USB slots are disabled, cell phones are prohibited because they can be used to take screen shots, and workers are barred from surfing the Internet. But even that is beginning to loosen up as the market consolidates, and TSMC’s endorsement of cloud-based design tools represents a major step in that direction.

“What’s changed is more people are starting to believe their data is much safer in the cloud,” said John O’Donnell, CEO of yieldHUB. “One of our largest customers had moved some financial systems to the cloud, and they decided that if they trusted their financial information to the cloud, then they could move everything else. Another company was a great believer in servers in its own facility until a year ago, when one of their employees left the company with all of the passwords. They believe that won’t happen with a trustworthy cloud vendor.”

Others are seeing the same trend.

“The cloud is taking off like a rocket,” said David Park, vice president of marketing at PDF Solutions. “Customers who a year ago said they would never go to the cloud have changed direction. They’re already looking to optimize licenses on the cloud. The problem is that everyone thinks the cloud should cost less. The reality is that it will be more expensive because there are significant capital investment and operation dollars involved. But there also are big advantages. For on-premises compute, you always had to plan for the worst-case scenario, and the rest of the time was just excess capacity. The cloud gives you flexibility. So you have more efficiency, even though you pay for that.”
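Park’s tradeoff, higher per-unit cloud pricing versus better utilization than peak-provisioned on-premises hardware, can be sketched with a toy cost model. All of the rates and demand numbers below are hypothetical, purely for illustration:

```python
# Toy model of the provisioning tradeoff described above. On-premises capacity
# must be sized for the worst-case (peak) demand around the clock, while cloud
# capacity can track actual hourly demand. All figures here are hypothetical.
hourly_demand = [40] * 20 + [400] * 4        # core-hours needed in each hour of a day
onprem_cost_per_core_hour = 0.03             # assumed amortized hardware + ops rate
cloud_cost_per_core_hour = 0.10              # assumed pay-per-use rate (higher per unit)

peak = max(hourly_demand)                    # on-prem must be provisioned for this
onprem_daily = peak * len(hourly_demand) * onprem_cost_per_core_hour
cloud_daily = sum(hourly_demand) * cloud_cost_per_core_hour

print(f"on-prem (peak-provisioned): ${onprem_daily:.2f}/day")
print(f"cloud (pay-per-use):        ${cloud_daily:.2f}/day")
```

With this spiky demand profile the cloud comes out cheaper despite the higher per-unit rate; with a flatter profile the on-premises option would win. That is why the comparison hinges on utilization, exactly the flexibility-versus-cost balance Park describes.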

That also puts much more pressure on cloud providers to get this model right.

“Cloud has evolved from a tool that leverages and optimizes infrastructure to a core component in companies’ growth strategies, offering agility, efficiency, cost-effectiveness and collaboration,” said Shai Cohen, CEO of proteanTecs. “This, in turn, raises the bar for cloud providers to maintain a hyper-reliable service, given that confidence remains a key selling point for customers. Cloud outages can be devastating, resulting in halted operations and financial loss for the companies running on them. Cloud service providers must continuously adopt leading-edge technologies to keep up with crucial uptime requirements.”

New use cases
In the past, much of the momentum for the cloud was driven by the largest companies. Those companies continue to broaden their cloud engagements, adding more pieces of their processes into the cloud and using it either as their primary compute operation or as extra capacity for on-premises data processing.

In addition, there are more companies looking to the cloud that never considered that approach in the past.

“The Fortune 500 were late in moving to the cloud,” said Michael Schuldenfrei, corporate technology fellow at OptimalPlus. “But all of a sudden, in the last 12 to 18 months, there has been a massive shift to the cloud. A lot of this is massively parallel kind of computing, where you can have almost unlimited processing in the cloud. But it’s a big shift in where the industry was going, and the cloud will win. There is tremendous industry momentum to leverage.”

At least for the moment, there doesn’t seem to be any reluctance to move. “Now, even for sensitive applications like test data, the whole thing is running on the cloud,” Schuldenfrei said. “If you go back three to five years, there is no way they would have deployed those kinds of solutions on the cloud. They didn’t trust the security. But the reality is that cloud providers do better security than doing it all on-premise.”

In fact, the recent breach at Capital One was caused by the way Capital One configured its firewall, not by the underlying cloud technology. That attack compromised the data of about 100 million customers in the United States and another 6 million in Canada.

Cloud for storage
The cloud is becoming particularly important in the automotive world for a different reason. As more electronics go into cars, carmakers need to keep track of data for the lifetime of a vehicle. That way, if anything goes wrong, they can go back and figure out exactly what caused the problem by analyzing the original data and understanding what may have slipped through the cracks.

In the case of automotive OEMs, liability is attached to accidents caused by technology. Having detailed information about the manufacturing process, the supply chain, and the exact versions of the tools used to simulate and test a design is essential for determining the cause and for averting other potential accidents through recall notices.

“Keeping data in the cloud is very cheap, and it’s very reliable,” said yieldHUB’s O’Donnell. “They’re looking for 11 nines (99.999999999%) reliability. So your data is going to be far more reliably held in the cloud than you could potentially hold onto on-premise. A tape could accidentally be destroyed or there could be an earthquake or some other natural disaster. In the cloud, they tend to replicate across the world. The cloud is the future, and the finance people and CIOs are trusting the cloud with mission-critical data. Manufacturing data is the next step.”
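To put “11 nines” in perspective, a back-of-the-envelope calculation helps. This assumes, purely for illustration, that the figure is an annual per-object durability and that losses are independent:

```python
# Back-of-the-envelope: expected annual object loss at "11 nines" durability.
# Illustrative assumptions, not a provider guarantee: durability is quoted per
# object per year, and object losses are independent events.
durability = 0.99999999999           # 11 nines
loss_prob = 1 - durability           # probability a given object is lost in a year
objects = 1_000_000_000              # e.g., a billion stored test-data records

expected_losses = objects * loss_prob
print(f"Expected objects lost per year: {expected_losses:.2f}")
```

Even with a billion stored records, the expected loss works out to roughly one object per hundred years, which is why O’Donnell argues replicated cloud storage beats tapes that can be destroyed in a single site disaster.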

The edge
A big unknown is where the edge will fit into this picture. While the cloud is good for many things, it still takes time and resources to send massive amounts of data to the cloud. This is particularly true for real-time video generated by cameras in automotive applications, which is too resource-intensive to ship to the cloud for instant processing and too time-consuming to wait for a response in a critical situation—and that’s assuming there is enough bandwidth and a good-enough connection to make that happen in the first place.

This makes the edge a critical piece of the infrastructure, but it has yet to be fully defined. Nevertheless, there is almost universal agreement that the edge will play a central role in the processing of data, whether that happens close to the sensor where data is generated, inside a vehicle, or at some nearby server that may or may not be connected to the cloud.

“The key is universal chip telemetry,” said proteanTecs’ Cohen. “You’ve got operational sensing, performance degradation monitoring, and classification/profiling.”

That requires both standard interfaces for the data and a consistent format for the data itself, but the upside of that can be significant. “This can save a lot of time for testing,” Cohen said.

That data also can be used for a variety of purposes that go beyond just standard testing. ProteanTecs, UltraSoC, ANSYS and Arteris IP are using internal sensing of data movement and thermal signatures to map when activity in a chip is either unusual or happens when no activity is expected. This effectively becomes a motion detector for data, which can be flagged locally and processed at the edge, using the cloud to collect and process broader instances of behavior to determine if an issue is widespread or localized.
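As an illustration only, and not any of these vendors’ actual products, the “motion detector for data” idea can be sketched as a simple baseline-deviation check over on-chip sensor readings, with a separate rule for activity during windows when the chip should be idle:

```python
# Illustrative sketch of the "motion detector for data" idea: flag sensor
# readings that deviate sharply from a rolling baseline, and flag any
# meaningful activity during windows where none is expected. This is a
# generic anomaly-detection pattern, not any vendor's actual algorithm.
from statistics import mean, stdev

def flag_anomalies(readings, expected_idle, window=20, k=3.0):
    """Return indices of suspicious samples.

    readings      -- sequence of (activity_level, is_idle_window) tuples
    expected_idle -- activity level above which an idle window is suspicious
    window        -- number of recent active samples forming the baseline
    k             -- sigma multiplier for the deviation test
    """
    anomalies = []
    baseline = []
    for i, (level, is_idle) in enumerate(readings):
        if is_idle:
            # Any real activity when none is expected is flagged immediately.
            if level > expected_idle:
                anomalies.append(i)
        else:
            # Once enough history exists, flag large deviations from baseline.
            if len(baseline) >= window:
                mu, sigma = mean(baseline), stdev(baseline)
                if sigma > 0 and abs(level - mu) > k * sigma:
                    anomalies.append(i)
            baseline = (baseline + [level])[-window:]
    return anomalies

# Usage: steady activity near 1.0, then a spike, then activity while idle.
readings = [(1.0 + 0.01 * (i % 3), False) for i in range(30)]
readings += [(5.0, False), (0.8, True)]
suspicious = flag_anomalies(readings, expected_idle=0.1)
```

Deviations flagged locally like this could be handled at the edge, with the cloud aggregating flags across many devices to decide whether an issue is widespread or localized, as the paragraph above describes.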

The movement of more data to the cloud is a monumental change for the computing world, and one that will continue to have ramifications for years to come. But while its importance cannot be overstated, it is just one of a number of changes underway in computing and compute architectures.

“The edge is supplemental to the cloud,” said OptimalPlus’ Schuldenfrei. “It’s not clear who will win there yet. There are the cloud players, of course. Amazon is talking about servers at the edge, and others are talking about similar capabilities. But none of that changes what’s happening in the cloud.”

The big question now is how that computing gets partitioned, how smoothly that data will move back and forth and how that will affect the underlying architectures. At this point there are a lot of unknowns. The only clear trend at the moment is that much more data is headed to the cloud, and that is expected to continue for the foreseeable future.

