Data Confusion At The Edge

Disparities in processors and data types will have an unpredictable impact on AI systems.

Disparities in pre-processing of data at the edge, coupled with a total lack of standardization, are raising questions about how that data will be prioritized and managed in AI and machine learning systems.

Initially, the idea was that 5G would connect edge data to the cloud, where massive server farms would infer patterns from that data and send it back to the edge devices. But there is far too much data being generated by a rapidly growing army of edge sensors, including streaming video, to make that approach workable. Instead, processing has to be done at the end point, or close to it, in an area that is today vaguely defined as the edge.

A recent report from Cisco estimates that by 2022, monthly Internet protocol traffic will reach 396 exabytes, up from about 122 exabytes per month in 2017. In addition, there will be more devices—an estimated 3.6 networked devices per person by 2022, versus 2.4 in 2017, with half of those connections being machine-to-machine—and more sensors per device. There also will be more sensors at every stage of industrial processes and in the manufacturing equipment itself.

“Three or four years ago, we were collecting 5 million data points per second from 3D non-contact optical sensors,” said Subodh Kulkarni, CEO of CyberOptics. “Today we have 75 million data points. All of that has to be analyzed and stored.”

This flood of data has caused a radical shift in what gets processed where. A year ago the edge concept was barely on any company’s technology radar. Today it is a key piece of almost everyone’s roadmap. But so far, no single instruction set architecture dominates this space, and no company has a dominant position. That hasn’t prevented companies from staking a claim, though. The magnitude of this opportunity has created a flood of competitors looking to grab market share—companies ranging from the big cloud providers such as Amazon, Google and Microsoft, to systems companies such as Cisco and Apple, as well as processor makers such as Intel, Arm, Xilinx, Achronix, Flex Logix, and a number of RISC-V licensees.

But because this market is so new, there is no consensus about how and where to process that data, how much of it should be processed at any particular location, or whether some of it should be processed at all. That has resulted in inconsistencies in both hardware and software architectures, and those are likely to persist until this market matures, and perhaps even long after that.

“It’s not clear if there will be platforms on hardware or software, or whether there will be platform pieces for both,” said Duane Boning, professor of electrical engineering and computer science at MIT. “But what is clear is there are no platforms for interaction. The focus is still on transfer and actuate.”

Instead, what’s needed is a methodology to weight data according to how much processing has been done on it and how valuable it is. That doesn’t exist today, and it is far too early to impose standards on this market segment because at this point the potential problems are not even fully understood.

“You either have 1 million sensors talking to a server directly, or you have a hierarchy of systems with different patterns done at different levels,” said Rob Aitken, an Arm fellow. “From a hardware standpoint this isn’t a problem, but from a software standpoint it’s a potential nightmare. When all of this data is moving to the cloud, you have a bunch of CPU-centric objects. Then you add a layer of security on it. But with localized services, now you need analytics to clean up the data and a time series to figure out whether there is an outlier.”
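The cleanup-and-outlier step Aitken describes can be sketched with a rolling z-score over a sensor time series. This is a minimal illustration, not any vendor's method; the function name, window size, and threshold are all assumptions.

```python
import statistics

# Hypothetical sketch: flag readings that deviate strongly from the
# recent local mean, the kind of cleanup an edge node might do before
# forwarding data. Window and threshold values are illustrative.
def rolling_zscore_outliers(readings, window=20, threshold=3.0):
    """Return indices of candidate outliers (not confirmed faults)."""
    outliers = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            outliers.append(i)
    return outliers

# Steady cyclic signal with one injected spike at index 30:
data = [10.0 + 0.1 * (i % 5) for i in range(60)]
data[30] = 25.0
print(rolling_zscore_outliers(data))  # → [30]
```

The point of doing this locally is that only the flagged indices (and perhaps their surrounding context) need to travel upstream, rather than the full stream.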

And this is where things start to get really fuzzy, because AI and machine learning are in a continuous state of development. Algorithms are being updated almost daily, and new hardware architectures are constantly rolling out for faster inferencing using less power. Alongside this, momentum is increasing to use private clouds and vertical clouds for both security and privacy reasons. That is beginning to add inconsistencies into what gets processed where, which can vary from one company to the next, and even within the same company.

“This is a very good area for new product development,” said Anirudh Devgan, president of Cadence. “These are matrix multiply/accumulate kinds of things, and there are 50 or so companies working with that. But the key thing is the software part of that, and right now a lot of these companies are doing it themselves. What’s missing is a framework that goes across all of them. TensorFlow does some of it, but that’s not enough because you need data management. There are no really good solutions today.”

Partitioning data
A key issue is how to partition data between various systems and even between components within those systems.

“There are two fundamental approaches to data,” said Michael Schuldenfrei, corporate technology fellow at Optimal Plus. “One is to throw out a lot of data, then ask questions of that data. So you index, organize and arrange, and that works for a lot of use cases. But it breaks down with complex relationships because you can’t look at machines individually, which is important in systems of systems. The second approach is all about system and data analytics, and you look at the whole story. A lot of the problems we see with data partitioning are around data retention. You need to store and retrieve data over time, and you need to do that cost-effectively.”

This is particularly important in manufacturing, where that data can be used to spot defects or irregularities. But it also has to be coupled with an understanding of what the data really indicates, and that requires a deep understanding of the application as well as market nuances. This is why analytics companies are beginning to focus so heavily on hiring or training vertical market experts, who can begin to decipher and weight patterns as the data is processed.

“A lot of companies are getting stuck at what to do with all the data they’re collecting,” Schuldenfrei said. “With the Tier 1’s in car manufacturing, this is a recurring theme. This is where you need to bring in domain expertise, because you need to extract meaning from the raw data and test data. This is domain-driven engineering, which is how to take raw data and make it meaningful. In semiconductor manufacturing, if you feed raw data into a machine algorithm, you might find something useful. But if you really understand the ‘x’ and ‘y’ data, you can determine the distance from the center of the wafer and determine whether failures are really random failures or predict where they are likely to occur.”
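Schuldenfrei's wafer example can be made concrete with a small sketch: given die (x, y) positions relative to the wafer center, radial distance separates edge-correlated failures from truly random ones. The edge-zone radius and data layout here are illustrative assumptions, not anyone's production analytics.

```python
import math

def radius_mm(x_mm, y_mm):
    """Distance of a die from the wafer center."""
    return math.hypot(x_mm, y_mm)

def edge_failure_rate(dies, edge_start_mm=120.0):
    """Compare failure rates inside vs. outside an (assumed) edge zone
    on a 300 mm wafer. dies: list of (x_mm, y_mm, failed) tuples."""
    def rate(group):
        return sum(1 for _, _, failed in group if failed) / len(group) if group else 0.0
    inner = [d for d in dies if radius_mm(d[0], d[1]) < edge_start_mm]
    outer = [d for d in dies if radius_mm(d[0], d[1]) >= edge_start_mm]
    return rate(inner), rate(outer)

# Toy data: all failures sit near the wafer edge.
dies = [(0, 0, False), (10, 10, False), (140, 0, True),
        (0, 140, True), (100, 100, True)]
print(edge_failure_rate(dies))  # → (0.0, 1.0)
```

If the outer rate is far above the inner rate, the failures are radially correlated rather than random, which is exactly the distinction raw data alone doesn't reveal.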

What is also required is a way of accessing all of that data in a consistent way. “We see a big need for data semantics, interoperability across different types of devices, communications protocols, and networks across services,” said Apurba Pradhan, vice president of marketing for Adesto’s Embedded Systems Division. “We need a way of gluing together the data, and that includes discovery and provisioning, where you assign names, as well as schedules, alarms, and the ability to retrieve data for a bunch of services.”

Comparing data
One of the easiest ways to understand data is to compare it to other data. This is the whole idea behind a digital twin, which serves as a reference point. It also is the driver behind Intel’s “copy exactly” approach to minimize variation between different fabs.

But that doesn’t necessarily work as the number of sources of variation increases at each new node and with different packaging approaches.

“What’s required here are models for the data,” said John Kibarian, president and CEO of PDF Solutions. “There are levels of representation, and you align the data for context. This is the whole idea behind a digital twin.”

The problem, though, is doing this with the volume of data being generated by edge sensors, which could include everything from streaming data from cameras to heat, vibration and other types of industrial sensors. This is especially true in semiconductor manufacturing, where equipment makers are adding in a variety of sensors.

“There will have to be a tremendous amount of processing at the edge in the foundry,” said Kibarian. “The big bang comes from analytics from lots of sources.”

It also comes from making comparisons at various levels throughout the manufacturing process.

“You need to segment data at the sensor level, then at the system level, and then at the factory level,” said CyberOptics’ Kulkarni. “This is why it makes sense for Fortune 500 or Fortune 50 companies to have their own ecosystem. If you look at the big IDMs, they have their own layers of software in factories. They are collecting massive amounts of data at the raw sensor level. Then, at the system level, they apply algorithms that make more sense for them. But this also varies from fab to fab. They deploy different technology, so it’s not apples to apples.”
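Kulkarni's three-level segmentation—sensor, system, factory—amounts to a hierarchical rollup. The sketch below is a hypothetical illustration of that structure; the dict layout and the choice of a simple mean are assumptions, not any IDM's actual software stack.

```python
from statistics import fmean

def summarize(factories):
    """Roll raw sensor readings up to system- and factory-level summaries.

    factories: {factory: {system: {sensor: [readings]}}}
    Returns per-system means plus a per-factory mean of system means.
    """
    report = {}
    for factory, systems in factories.items():
        system_means = {
            system: fmean(r for readings in sensors.values() for r in readings)
            for system, sensors in systems.items()
        }
        report[factory] = {
            "systems": system_means,
            "factory_mean": fmean(system_means.values()),
        }
    return report

factories = {"fab1": {"sysA": {"s1": [1.0, 3.0]}, "sysB": {"s2": [5.0]}}}
print(summarize(factories))
```

Each level discards detail the level above doesn't need, which is why the raw-sensor layer can stay local while only summaries move up the hierarchy—and why two fabs running different algorithms at the system level stop being apples to apples.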

It’s hard to overestimate the value of this kind of data, because it’s critical for reducing the number of random failures. While random failures are a real problem at advanced nodes, not all failures are actually random. The problem is finding them and identifying patterns that show the causes of those failures.

“You can’t test for the random failures,” said Gert Jørgensen, vice president of marketing at Delta Semiconductor. “You go through this screening of devices, and if they pass a lot of acceptance tests—which take 168 hours, that’s a week—you’ve judged them as good devices. And if the failure happens out there in the field, of course, we do a failure analysis. I know the car manufacturers are registering all failures to see if each failure is a periodic failure or a random failure. They have fast reporting systems, so when we have found the failure, they will detect whether it has influence on the rest of the population. If they say, okay, this is a random failure, we will store it and see if there’s more coming. If it’s a failure that can be cured, of course, they do something about it.”

That means storing data when necessary, but there is a limit to how much data can be stored, which is why it is critical to process more of that data closer to the edge to identify patterns.

“There are quality measurements on how to deal with random failures and procedures and which data we should store at car manufacturers,” said Jørgensen. “They know exactly when it fails out there, when it was produced, how it was produced, which person was involved, etc. So everything is logged and registered to the same level as an airplane.”

Divide, conquer and share
The big challenge is connecting all of this data into a cohesive picture, which then can be used to segment it into more digestible pieces.

“The whole idea of designing ICs works because you can ignore other stages,” said Joe Sawicki, executive vice president at Mentor, a Siemens Business. “Otherwise, the amount of knowledge you would need and the awareness of all the other pieces would be overwhelming. You can localize the data so you don’t have to train six people who don’t talk to each other. When you begin to cross boundaries with data, you have to look for a way where you don’t have to send people back to school. So with industrial IoT we have in-system test, where you can tie that back to the design process.”

While synchronizing data at the edge is important, there are other complicating factors, such as willingness to share data across the supply chain.

“If you think about probe cards, which are microscopic, there may be 10,000 to 30,000 probes on them, and yields are 99% or higher,” said Keith Schaub, vice president of U.S. applied research and technology at Advantest. “So maybe a handful of chips are bad, but a lot of them still work. So how do you find one bad probe out of 20,000? AI can do this. It can be used throughout the manufacturing process to look for defects, and it can learn about RF signals over-the-air. The data is owned by the customers and goes into their cloud. We’ve been trying to work with them to have different test insertions to develop that data so we can have a bunch of data at the wafer for adaptive testing, feed forward testing and outlier detection. But customers are hesitant to share that data. So the IDMs will accelerate all of this in the beginning with their own stack and they have structures in place to utilize that data. With OSATs, it’s a much more complex supply chain and that data is more fragmented.”
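The statistical intuition behind Schaub's probe-card example: a single bad probe shows up as one site whose failure count, accumulated over many touchdowns, sits far above the card-wide baseline. This sketch is hypothetical; the threshold factor and data shape are assumptions, not Advantest's method.

```python
def suspect_probes(fail_counts, touchdowns, factor=10.0):
    """Flag probes whose failure rate is far above the card-wide mean.

    fail_counts: {probe_id: failures observed over `touchdowns` tests}.
    Returns sorted probe ids whose rate exceeds `factor` x the baseline.
    """
    total = sum(fail_counts.values())
    baseline = total / (len(fail_counts) * touchdowns)  # mean per-probe rate
    return sorted(
        pid for pid, fails in fail_counts.items()
        if fails / touchdowns > factor * baseline
    )

# 100 probes with background noise of 1 failure each; probe 7 is bad.
fail_counts = {i: 1 for i in range(100)}
fail_counts[7] = 80
print(suspect_probes(fail_counts, touchdowns=1000))  # → [7]
```

Because the baseline is computed from the card itself, the method adapts to whatever the ambient yield happens to be—but it only works if the per-probe data is actually shared, which is exactly the sticking point Schaub describes.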

In addition, it’s not entirely clear if all data needs to be fused together, or whether that will vary by market segment. The industrial IoT, for example, only rarely uses streaming video, but it does include things like temperature and vibration sensors. “There is still a lot of data,” said Adesto’s Pradhan. “In a commercial building, every second there may be 100,000 data points. The big question is what to do with that data. It’s not possible to process all of that in the cloud.”
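One common way an edge node avoids shipping 100,000 points per second upstream is deadband reporting: forward a reading only when it moves meaningfully away from the last reported value. This is a generic technique sketched as an illustration; the function name and deadband value are assumptions.

```python
def deadband_filter(readings, deadband=0.5):
    """Keep the first reading and any reading that differs from the
    last kept value by more than `deadband` (same units as readings)."""
    kept = []
    for r in readings:
        if not kept or abs(r - kept[-1]) > deadband:
            kept.append(r)
    return kept

# Temperature readings in deg C: small jitter is dropped, real moves kept.
print(deadband_filter([20.0, 20.1, 20.2, 21.0, 21.1, 19.0]))  # → [20.0, 21.0, 19.0]
```

For a slowly changing building sensor this can cut traffic by orders of magnitude, at the cost of losing fine-grained history—one concrete instance of the "what to do with that data" decision varying by segment.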

Conclusion
The definition of what constitutes the edge is still evolving, and with that definition come multiple levels of data analytics. So far, this is a brand new space with very little progress in terms of partitioning data across multiple systems in a consistent way.

But for analytics to really be effective at the edge, this data needs to be collected and parsed in ways that make sense for a variety of industry segments. So far, the data analytics industry has only scratched the surface in this space. But this is a huge opportunity for those with the expertise to tackle it, and the willingness to adapt to almost perpetual changes in both hardware and software.
