Far Out AI In Remote Locations

How far out can AI on the edge go? What lessons in data management and AI can be gleaned from the semiconductor industry already dealing with remote locations on Earth?

There really isn’t anything you can do with electronics on Earth that you can’t do in space, but it certainly can be a lot harder, and it takes longer to fix if something goes wrong. And as more intelligent electronics are launched into space, the concern over potential failures is growing.

AI inferencing has been pushing out further for some time, and it is starting to redefine what constitutes the edge. The edge can be the top of an urban light pole 50 feet in the air, or an outbound rocket or satellite 50 miles above the Earth. And as more devices become capable of inferencing without being tethered to a massive data center, the opportunities for independent computing and analysis in remote locations are growing, and so are the risks.

Space is already the über edge for sensors sending data back to Earth. But satellites, extraterrestrial telescopes, and spacecraft may do their own inferencing someday, or at least a limited version of machine learning. That has profound implications for where and when decisions are made, and it makes it enormously important to ensure that these devices are well-behaved and responsive throughout their projected lifetimes.

“A lot more obviously is happening on the ground, like supercomputers and a massive amount of data crunching — not something, as far as space, weight, and power, you are going to build to fly,” said Josh Broline, director of marketing and applications, Industrial and Communications Business Division at Renesas, whose parts are in the recent Mars rover and many other space electronics. “That’s just too expensive, cumbersome, risky as far as bit flipping and data process corruptions go. I’m sure there’s a limitation as far as how much actual data processing you’re ultimately going to do in space, but [computing in space] is definitely orders of magnitude behind what we’re doing on Earth.”

Reliability is critical. “Satellites traditionally have not been on the leading edge of technology, except when they absolutely had to be, because of the reliability issues,” said Marc Swinnen, director of product marketing at Ansys. “They typically have 10, 15, 20 years of reliability records, and of course the latest stuff never has that. Technology for space always has been a generation or two behind because they were more concerned about rock-solid reliability, and because they have special requirements for temperature, and so on.”

So there are no data centers in space yet, but as chips improve, AI inferencing in orbit or on other space missions is on the horizon. A big part of the appeal is getting closer to real-time data and actions based on that data.

Work on AI in satellites is happening now, although terrestrial AI remains the most tried-and-true answer. Use cases are plentiful in military/national security, communications networks, agricultural monitoring, weather and climate tracking, and geographical mapping. In all cases, getting accurate ground measurements is vital, and all of that data could go through selective gathering and sorting in space before being sent to the ground for more processing.

Consider Blackjack, for example, an AI-embedded satellite project from DARPA, the research arm of the U.S. Department of Defense (DoD), which is working with the U.S. Space Force. The project will send satellites with supercomputing chips into low Earth orbit (LEO) later this year and into 2021.

Self-driving car startup AiMotive and satellite electronics company C3S are collaborating to adapt AiMotive’s self-driving car software — its aiWare NN hardware acceleration technology — to C3S’s space electronics platform, adding high-performance AI to small, power-constrained satellites. In one use case, a satellite chip running a neural network could look for certain features before taking pictures, reducing the extraneous data on the downlink.
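That gating use case can be sketched in a few lines of Python. This is purely illustrative: `detect_feature()` is a hypothetical stand-in for an on-board neural network, and none of these names come from AiMotive’s or C3S’s actual software.

```python
# Hypothetical sketch of on-board feature gating. detect_feature() stands in
# for an on-board neural network; here it simply scores a frame by brightness.

def detect_feature(frame):
    """Return a 0-1 confidence that the frame contains a feature of interest."""
    return sum(frame) / (255 * len(frame))

def select_for_downlink(frames, threshold=0.5):
    """Keep only frames the on-board model scores above the threshold,
    so the downlink carries a fraction of the raw data."""
    return [f for f in frames if detect_feature(f) >= threshold]

frames = [[10, 20, 30], [200, 220, 240], [5, 5, 5]]  # toy 3-pixel "images"
print(select_for_downlink(frames))  # only the bright frame survives
```

The point is where the filtering happens: the score is computed on the satellite, so only frames worth transmitting ever touch the bandwidth-limited downlink.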

The Jet Propulsion Laboratory’s Artificial Intelligence Group is working on autonomous spacecraft that will be capable of making some decisions on their own.

And some rad-hard chips are already available. One example is Xilinx’s Kintex UltraScale KU060, which has built-in support for machine learning inferencing in space.

It makes sense that edge computing is the next frontier. “In [the] beginning, when the cloud first came up, everything was going to be handled in the cloud,” said Swinnen. “But people quickly found that the amount of data that needs to be shipped back and forth for this would be unrealistic. We’ve come to the next phase in that development — compute on the edge, which means that the devices at the forefront doing the actual sensing do a lot of the processing, at least the pre-processing, themselves, and only send back a filtered amount of data. These devices on the edge [are] often using machine learning and have to do the processing where they are, at low power, low memory, and low cost. Machine learning is creeping into the edge, which is much more distributed.”

Space requirements
Launching that concept into space requires chips to be radiation-hardened (rad-hard), because space has many more sub-atomic particles and more severe environmental conditions.

“Rad-hard is important for the space industry, [as are] high temperature ranges and low pressures and strange atmospheres — the environmental conditions that they have to deal with,” said Swinnen.

Sub-atomic particles can flip a bit on a chip and change whatever result the chip is computing. Bit flipping happens on Earth, too, but it happens much more frequently in space, because Earth’s magnetic field shields the surface from many of these particles.
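One classic mitigation for such single-event upsets is triple modular redundancy (TMR): store three copies of a value and take a bitwise majority vote, so a particle strike that flips a bit in one copy is outvoted by the two unaffected copies. A minimal sketch of the voting logic, illustrative only and not tied to any specific flight design:

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies of a value.
    For each bit position, the output bit is whatever at least two
    of the three copies agree on."""
    return (a & b) | (a & c) | (b & c)

stored = 0b1011_0010
upset = stored ^ 0b0000_0100      # a particle strike flips one bit in copy 2
recovered = tmr_vote(stored, upset, stored)
print(recovered == stored)        # True: the flipped bit was outvoted
```

Real rad-hard designs implement this in hardware (triplicated flip-flops and voters) rather than software, but the voting principle is the same.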

“Certainly, it does take a little bit longer to get kind of the latest technology into space because you have to be concerned about radiation effects,” said Broline. “Memory is important, as well, because they’re going to have to store all this data. [The space industry engineers] are pushing more and more into the DDR, which is higher densities of one gig, two gigs, four gigs — gigabits of memory. Memory is one of those things where you don’t want a lot of bits being flipped and corrupted on your mission. That’s something that they can deal with, but don’t want a lot of.”

Along with reliability, space (area), weight, and low power are other concerns for electronics in space.

Moving the payload
The payload is data. A use case closer to home demonstrates some of the issues of data as a payload.

The Pacific Ocean still serves as a barrier, for instance, for semiconductor manufacturing. Many fabs are in Asia, whereas test data analytics companies, such as California-based PDF Solutions and EMEA-based yieldHUB, proteanTecs and OptimalPlus, are on the other side of the globe.

Getting the data together is complex because multiple participants and locations are involved in manufacturing and testing ICs, and the security of the data produced through these processes is always a concern. Doing edge inferencing on that data makes sense. It used to be that the data would be shipped back to the analytics house to run the models.

“They would do an inference on that data and send your predictions back to the OSAT. And the headquarters might be here in California, so there’s a lot of data transfer, back and forth, over the Pacific Ocean,” said Jeff David, vice president of AI Solutions at PDF Solutions. “And that’s a problem for those three reasons — latency, security, and data loss. What we’ve come up with is something that we call edge prediction. Instead of sending the data back and forth across the ocean, you take that model and you deploy it at the OSAT, so you take care of all three of those problems, or you address them significantly. That means you have to have the ability to infer a prediction at the ‘edge’ — and I’m referring to edge here as at the OSAT — to have the ability to aggregate all your data necessary to make a prediction at the OSAT.”

That’s not as simple as it sounds, because data may be coming in from different sites, as well as the foundry, and all of that data is required to make that prediction at a given test insertion point. To complicate things even further, all of these have different types of time constraints on them.
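The edge-prediction workflow David describes can be sketched as follows: train the model centrally, ship only the serialized model (not the raw test data) across the ocean, and run inference locally at the OSAT. The toy threshold “model” and all function names here are invented for illustration; they are not PDF Solutions’ actual software.

```python
import json

def train_model(history):
    """Central training: derive a pass/fail threshold from historical
    (measurement, label) pairs gathered at headquarters."""
    passing = [value for value, label in history if label == "pass"]
    return {"threshold": min(passing)}

def serialize(model):
    """Only the model crosses the ocean, not the test data."""
    return json.dumps(model)

def edge_predict(serialized_model, measurement):
    """Runs at the OSAT: local inference, no round trip to headquarters,
    which addresses latency, security, and data loss at once."""
    model = json.loads(serialized_model)
    return "pass" if measurement >= model["threshold"] else "fail"

history = [(9.8, "pass"), (10.1, "pass"), (7.2, "fail")]
payload = serialize(train_model(history))
print(edge_predict(payload, 10.0))  # pass
print(edge_predict(payload, 8.0))   # fail
```

The design choice is what gets transferred: a model is small and reveals far less than the raw measurements, while the latency-critical inference step stays next to the tester.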

“How the solution is deployed will depend on what the time constraint is,” David explained. “And that’s where we’re working very closely with Advantest. If predictions need to be made on the tester, we’re working closely with them to be able to enable that. If the predictions are less time-critical, those can be done outside of the tester on the DEX node itself (PDF’s Data Exchange Network), which resides in the facility. There are different flavors of these things, depending on the different time constraints that are needed.”

This kind of encrypted handoff is becoming a necessity as data is collected from more sources than in the past, but it is particularly difficult to navigate when data quality is inconsistent. That data needs to be cleaned up and structured, and by whom and where that is done matters.

“Data quality is always going to be an issue,” he said. “Data cleansing is done through machine learning. You could say that’s why you do the machine learning in the first place — to ditch the data that you don’t need to download.”

Identifying missing or corrupted data, and training the system to take some action, is important on the ground, but it may be far more important in space. Different checks can be done at different phases to assess the health or completeness of data, to filter out columns that don’t have enough data, or to impute what’s missing.

“Machine learning algorithms can make those imputations, depending on your server bandwidth,” David said. “The easiest way to impute missing data is maybe just take the median or the most common value and put it in there. Or, you can employ more advanced techniques that use different machine learning algorithms to come up with an imputation based on other [historical] data for that.”
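Those two checks, dropping columns that are too sparse to be useful and filling the remaining gaps with the median, might look like this minimal sketch. The column names and coverage cutoff are invented for illustration:

```python
from statistics import median

def clean_columns(table, min_coverage=0.5):
    """table maps column name -> list of readings, with None for missing.
    Columns below the coverage cutoff are dropped; gaps in the rest are
    imputed with the column median (the 'easiest way' described above)."""
    cleaned = {}
    for name, values in table.items():
        present = [v for v in values if v is not None]
        if len(present) / len(values) < min_coverage:
            continue  # too sparse to trust; filter the column out
        fill = median(present)
        cleaned[name] = [v if v is not None else fill for v in values]
    return cleaned

raw = {
    "vdd":  [1.01, None, 0.99, 1.00],   # one gap -> filled with the median
    "temp": [None, None, None, 42.0],   # 25% coverage -> dropped entirely
}
print(clean_columns(raw))
```

More advanced imputation, as David notes, would replace the median with a model trained on historical data, but the pipeline shape stays the same: check completeness first, then fill.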

Recently, inferencing was used to fill in missing data from a sensor on NASA’s Solar Dynamics Observatory (SDO) that failed a couple of years into the mission. The team devised a way to fill in the data with an educated guess, based on the data the sensor collected during the years it was working well.

Monitoring health of chips
Monitoring the health of these devices from inside a rad-hard chip is another option that is gaining attention. This already is being done in server and automotive chips, but it requires some extra planning in space.

For one thing, there is power/performance overhead. The most data-intensive readings are external measurements, such as temperature and power, which require sequences of measurements and therefore more bandwidth. Checking how a chip behaves internally against its spec and its known behavior requires less energy. That comparison always will show some deterioration relative to the first pristine test, but after that it will show whether the chip’s internal circuitry is still functioning as needed or is degrading faster than anticipated.

In predicting a rate of failure, two things are required. One is to understand what is normal. The second is to compare the degradation in performance or power against that baseline reading.
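A minimal sketch of that two-step idea, with an invented linear aging allowance standing in for a real model of expected degradation:

```python
def degradation(baseline, reading):
    """Step two: fractional drop in performance relative to the
    baseline reading that established what 'normal' looks like."""
    return (baseline - reading) / baseline

def flag_early_wearout(baseline, readings, expected_rate=0.01):
    """Compare each interval's reading against a simple linear aging
    allowance (expected_rate per interval). Returns the first interval
    where the chip degrades faster than anticipated, else None."""
    for interval, reading in enumerate(readings, start=1):
        if degradation(baseline, reading) > expected_rate * interval:
            return interval  # degrading faster than the expected curve
    return None

baseline = 100.0                 # step one: the known-good reference
readings = [99.5, 99.0, 96.0]    # the third reading falls off the curve
print(flag_early_wearout(baseline, readings))  # 3
```

A production monitor would use a fitted aging model rather than a flat rate, but the structure is the same: establish normal first, then track the gap against it.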

Conclusion
Plenty of projects use ground-based AI to solve problems for space agencies, such as NASA, which are looking to speed up data analysis and problem solving. But many of those involve terrestrial number crunching, and on the ground the issues are similar for electronic systems and chips.

Space exaggerates some issues for electronics. Remoteness and radiation are more extreme there. And with Amazon and SpaceX each launching their own armies of satellites, along with other companies and countries, orbits around the Earth are getting crowded.

That opens up all sorts of new opportunities for advancements in space electronics, such as how to avoid objects approaching at 17,000 miles per hour without having to call home first.

 

Related Stories:

Challenges In Using AI In Verification

Apples, Oranges & The Optimal AI Inference Accelerator

Making Everything Linux-Capable

The Challenge Of Keeping AI Systems Current

Monitoring Chips After Manufacturing


