System Bits: Feb. 13

Autonomous 3D scanner; AI biases found; computer vision for patients.

Enabling individual manufacturing apps
Researchers at the Fraunhofer Institute for Computer Graphics Research IGD, working on Industrie 4.0, recognize that manufacturing is turning toward batch sizes of one and individualized production, sometimes referred to as 'highly customized mass production.'

The scanning system is able to measure any component in real time, making protracted teaching processes a thing of the past.
Source: Fraunhofer IGD

And although individual manufacturing is still some way off, the team is taking the vision of batch sizes of one a big step closer to reality with a new type of 3D scanning system.

Pedro Santos, department head at Fraunhofer IGD, explained that what makes this 3D scanning system special is that it scans components autonomously and in real time. For the owner of, say, a vintage car with a broken part, this means the defective component is glued together and placed on a turntable beneath a robot arm carrying the scanner. Everything else happens automatically: the robot arm moves the scanner around the component so that it can register the complete geometry with a minimum number of passes. Depending on the size and complexity of the component, this takes anywhere from a few seconds to a few minutes.

While the scan is still running, intelligent algorithms construct a three-dimensional image of the object in the background. A material simulation of the 3D image then checks whether a 3D print would satisfy the relevant stability requirements. In the final step, the component is printed on a 3D printer and is ready to be fitted to the vintage car.
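The article describes this as an end-to-end pipeline: scan, reconstruct, simulate, print. As a rough illustration only, here is a minimal Python sketch of how such a pipeline might be orchestrated; every function here is a hypothetical placeholder, not Fraunhofer IGD's actual software.

```python
# A rough illustration only: every function here is a hypothetical
# placeholder, not Fraunhofer IGD's actual software.

def capture_scan(pose):
    """Placeholder: point cloud captured from one scanner pose."""
    return {"pose": pose, "points": []}

def reconstruct_mesh(scans):
    """Placeholder: fuse partial scans into one 3D mesh."""
    return {"triangles": [], "source_scans": len(scans)}

def simulate_stability(mesh, material):
    """Placeholder: material simulation of the printed part."""
    return True  # assume the part passes, for this sketch

def scan_to_print(view_poses, material="PLA"):
    # 1. The robot arm drives the scanner through a series of poses;
    #    in the real system, reconstruction already runs during scanning.
    scans = [capture_scan(pose) for pose in view_poses]
    # 2. Build the three-dimensional image of the object.
    mesh = reconstruct_mesh(scans)
    # 3. Check that a printed copy would meet stability requirements.
    if not simulate_stability(mesh, material):
        raise ValueError("printed part would fail stability requirements")
    # 4. Hand the mesh to the 3D printer (omitted here).
    return mesh

print(scan_to_print(["front", "top", "left", "right"]))
```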

The real achievement here is not so much the scanner itself, but the combination of the scanner with view planning to form an autonomous system, which was also created by Fraunhofer IGD.

During an initial scan, algorithms calculate what further scans are necessary so that the object can be recorded with as few scans as possible. Thanks to this method, the system is able to quickly and independently measure objects that are entirely unknown to it.
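As a loose illustration of the idea, the sketch below implements a simple greedy next-best-view loop: after the initial coverage estimate, it repeatedly picks the candidate view that would add the most still-unseen surface. The coverage model, with surface patches as set elements, is an assumption made for illustration and is not Fraunhofer IGD's algorithm.

```python
# Hedged sketch of greedy next-best-view planning; the coverage model
# (surface patches as set elements) is a simplifying assumption, not
# Fraunhofer IGD's actual method.

def plan_views(candidate_views, estimated_coverage, target=0.99):
    """Greedily pick views until estimated surface coverage hits target.

    candidate_views    -- list of view identifiers (e.g. robot poses)
    estimated_coverage -- dict: view -> set of surface patches it sees
    """
    all_patches = set().union(*estimated_coverage.values())
    seen, plan = set(), []
    while len(seen) < target * len(all_patches):
        # Choose the view that adds the most still-unseen surface patches.
        best = max(candidate_views,
                   key=lambda v: len(estimated_coverage[v] - seen))
        gain = estimated_coverage[best] - seen
        if not gain:  # no remaining view adds anything new; stop
            break
        plan.append(best)
        seen |= gain
    return plan

# Example: four poses with overlapping coverage of ten surface patches.
coverage = {
    "front": {0, 1, 2, 3},
    "back":  {5, 6, 7, 8},
    "top":   {2, 3, 4, 5},
    "side":  {8, 9},
}
print(plan_views(list(coverage), coverage))  # ['front', 'back', 'top', 'side']
```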

The researchers expect this to be a unique selling point: previous scanners either had to be taught the scanning procedure or required a CAD model of the component so they could recognize the object's position relative to the scanner.

If a scanner were taught to scan a car seat for quality control (target/actual comparison), it would also be able to scan the next 200 car seats, because they would be largely identical under mass production conditions. Conventional scanners are therefore not suited to handling batch sizes of one, but this one is. The scanner can also be used as a manufacturing assistant to improve cooperation between humans and machines, the team added.


Gender, skin-type biases found in commercial AI systems
Researchers from MIT and Stanford University found that three commercially released facial-analysis programs from major technology companies exhibit both skin-type and gender biases, illustrating that neural networks still have a long way to go in terms of training and algorithm refinement.

Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group.
Source: MIT


According to the researchers, experiments showed that the three programs' error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned to more than 20 percent in one case and more than 34 percent in the other two.

The team said these findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated.

For example, while researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed, the data set used to assess its performance was more than 77 percent male and more than 83 percent white.
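A quick back-of-the-envelope calculation shows why such a skewed evaluation set can mask subgroup failures. The subgroup shares and error rates in this sketch are invented for illustration and are not figures from the study:

```python
# Back-of-the-envelope illustration: a benchmark dominated by one subgroup
# can report high overall accuracy while hiding large subgroup errors.
# These shares and error rates are made up for illustration only; they
# are NOT figures from the MIT/Stanford study.

groups = {
    # group: (share of benchmark, error rate on that group)
    "light-skinned men":    (0.55, 0.005),
    "light-skinned women":  (0.30, 0.01),
    "darker-skinned men":   (0.10, 0.05),
    "darker-skinned women": (0.05, 0.30),
}

overall_error = sum(share * err for share, err in groups.values())
# Prints "97.4%": near-97% overall despite a 30% error rate on one subgroup.
print(f"overall accuracy: {1 - overall_error:.1%}")
```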

Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group and first author on the new paper, explained, “What’s really important here is the method and how that method applies to other applications. The same data-centric techniques that can be used to try to determine somebody’s gender are also used to identify a person when you’re looking for a criminal suspect or to unlock your phone. And it’s not just about computer vision.” She is hopeful that this will spur more work looking into other disparities.

Using computer vision to protect patients
Researchers at UC Berkeley noted that Alzheimer’s disease not only robs people of memories and cognitive skills but, over time, also increases their vulnerability to falling and suffering head injuries, and they want to do something about that. State regulations require an MRI of the head any time a patient suffers an unwitnessed fall, and about a fourth of all Alzheimer’s-related hospital visits are triggered by a fall.

With five million Americans currently living with Alzheimer’s, the task of preventing, tracking and treating fall-related injuries has become daunting and costly, amounting to more than $5 billion in annual costs to Medicare. And the number of people with Alzheimer’s is expected to double in the next 15 years.

So, a few years ago, UC Berkeley computer science PhD student Pulkit Agrawal began researching how computer vision could improve medical care for people with Alzheimer’s. “There are no effective drugs yet to treat Alzheimer’s, so until we have them, we have to help patients where they are. Developing computer vision systems to detect falls and fall vulnerability seemed like a good way to improve healthcare for a growing patient population.”
A system capable of detecting falls by autonomously monitoring patients could help therapists by sending them video clips only when a patient stumbles or falls, he said.

A small video camera records patients’ activities as part of the automated, online system to detect and record falls. To assure maximum patient privacy, only videos of falls are saved for review. Source: UC Berkeley
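As a rough sketch of how such privacy-preserving, event-gated recording might work, the following keeps video only in a short rolling in-memory buffer and persists frames solely when a fall detector fires; `detect_fall` is a placeholder standing in for the team's actual model, which is not described in this article.

```python
# Hedged sketch of event-gated clip saving in the spirit of the Berkeley
# system: routine footage stays only in a short rolling buffer, and frames
# are written out only when the (placeholder) fall detector fires.

from collections import deque

CLIP_SECONDS, FPS = 10, 30

def detect_fall(frame) -> bool:
    """Placeholder for the learned fall classifier."""
    return frame.get("label") == "fall"

def monitor(frame_stream, save_clip):
    buffer = deque(maxlen=CLIP_SECONDS * FPS)  # rolling pre-event context
    for frame in frame_stream:
        buffer.append(frame)
        if detect_fall(frame):
            save_clip(list(buffer))  # persist only the clip around the fall
            buffer.clear()

# Example with synthetic frames: only the clip containing the fall is saved.
frames = [{"t": t, "label": "fall" if t == 500 else "ok"} for t in range(1000)]
monitor(iter(frames), save_clip=lambda clip: print(f"saved {len(clip)} frames"))
```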

Agrawal partnered with Alexandre Bayen, a professor of electrical engineering and computer sciences and an expert in mobile sensing, to design and eventually market a reliable, inexpensive computer vision technology for improving fall detection and risk assessment.

Along with team members Julien Jacquemot, a visiting scholar at UC Berkeley, and PhD student George Netscher, they saw the potential of passive vision-based sensing technology to overcome the limitations of previous approaches.

With support from the Signatures Innovation Fellows program, they are already developing an automated online detection system for use in large networks of memory care facilities.  

Agrawal employed the artificial intelligence method of deep learning to train a computer to identify falls and distinguish them from unsteady movements and other patient behavior. The deep learning algorithms are taught to detect falls through a process of trial and error. In contrast to earlier computer vision methods, deep learning methods are known for steadily improving their performance as they are fed more data, which means this fall detection system only gets better with time.
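For readers unfamiliar with the “trial and error” framing, this is ordinary supervised training by gradient descent: the model predicts, the error is measured against labeled examples, and the weights are corrected. Below is a minimal sketch using PyTorch as an assumed framework; the toy model and synthetic features are illustrative only, since the article does not describe the team's actual architecture or inputs.

```python
# Minimal sketch of training a binary fall classifier by "trial and error"
# (supervised gradient descent). PyTorch is an assumed framework; the tiny
# MLP and synthetic features are illustrative, not the Berkeley system.

import torch
import torch.nn as nn

model = nn.Sequential(           # toy classifier over, e.g., pose features
    nn.Linear(34, 64), nn.ReLU(),
    nn.Linear(64, 1),            # one logit: fall vs. no fall
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in for labeled video-derived features (1 = fall).
x = torch.randn(256, 34)
y = (x[:, 0] > 0).float().unsqueeze(1)

for epoch in range(20):          # trial: predict; error: correct via gradients
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```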

The team also anticipates a broader potential for the technology, such as preventing bed sores and detecting unsafe wandering.


