Power/Performance Bits: Jan. 29

Neural nets struggle with shape; managing data centers with music; nanogenerator bandages.


Neural nets struggle with shape
Cognitive psychologists at the University of California, Los Angeles investigated how deep convolutional neural networks identify objects and found a fundamental difference between the way these networks and humans perceive objects.

In the first of a series of experiments, the researchers showed the VGG-19 deep learning network, which was trained on ImageNet, color images of animals and objects whose surfaces had been altered to carry a different pattern. For example, the surface of a golf ball was displayed on a teapot; zebra stripes were placed on a camel; and the pattern of a blue and red argyle sock was shown on an elephant. VGG-19 ranked its top choices and chose the correct item as its first choice for only five of 40 objects.

VGG-19 thought there was a 0 percent chance that the elephant was an elephant and only a 0.41 percent chance the teapot was a teapot. Its first choice for the teapot was a golf ball, which shows that the artificial intelligence network looks at the texture of an object more so than its shape, said Nicholas Baker, a UCLA psychology graduate student.

Philip Kellman, a UCLA distinguished professor of psychology, adds, “It’s absolutely reasonable for the golf ball to come up, but alarming that the teapot doesn’t come up anywhere among the choices. It’s not picking up shape.” Humans, he said, identify objects primarily from their shape.

An artificial intelligence network thought there was a 0.41 percent chance this object is a teapot. Its first choice was a golf ball, which is quite reasonable. (Source: Nicholas Baker / PLOS Computational Biology / UCLA)

In the second experiment, the team showed images of glass figurines to VGG-19 as well as the AlexNet network. Neither network identified any of the glass figurines correctly as its first choice. An elephant figurine was ranked with almost a 0 percent chance of being an elephant by both networks. Most of the top responses were puzzling to the researchers, such as VGG-19’s choice of “website” for “goose” and “can opener” for “polar bear.” On average, AlexNet ranked the correct answer 328th out of 1,000 choices.
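To make the “ranked 328th out of 1,000” metric concrete, the small sketch below computes the rank of the correct class within a vector of classifier scores. The class count and score values are hypothetical, and this is an illustration of the metric, not the researchers’ code:

```python
def rank_of_correct_class(scores, correct_idx):
    """1-based rank of the correct class when scores are sorted high-to-low."""
    correct_score = scores[correct_idx]
    return 1 + sum(1 for s in scores if s > correct_score)

# Hypothetical softmax scores over four classes; the correct class is index 0.
scores = [0.004, 0.62, 0.30, 0.076]
print(rank_of_correct_class(scores, 0))  # prints 4: the correct class ranks last
```

Averaging this rank over all test images gives the mean-rank figure reported for AlexNet.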

In subsequent experiments, the networks were shown drawings of objects in white and outlined in black, and then objects in solid black. With the white objects, the networks did poorly, but fared better on the solid black images, producing the correct object label among their top five choices for about 50 percent of the objects.

In the fifth experiment, the images were scrambled and shown to both the networks and a group of students. The scrambled images were, for the most part, much more difficult for the humans to recognize, while the networks did well in identifying images they had previously gotten correct.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

The researchers think their findings apply broadly across deep learning systems.

Managing networks with music
Researchers at Saint Louis University propose a new way to manage network tasks in huge data centers: Music-Defined Networking, a model in which network functions can be programmed in response to specific sound sequences coming from real or virtual devices.

The researchers explored both active applications, where network devices were programmed to emit a certain sound, and passive applications, where sounds produced by devices such as datacenter fans are monitored to identify when they may have failed.

“Unlike light, sound is not high speed but instead travels slowly. So, rather than looking at sound as a means of sending lots of data around a network, we’re looking at it for the network management tasks that happen, for example, in the physical space of the datacenter,” said Flavio Esposito, an assistant professor of computer science at SLU.

The team used low-cost speakers, microphones, and Raspberry Pis to augment existing network components with sound capabilities, with the aim of determining how music could be used for several network tasks, including datacenter server fan failure detection, authentication, load balancing, and congestion notification.

“Nobody’s incorporating the capabilities of the human ear into network management,” Esposito said. “Sound has its limits – it’s noisy and doesn’t travel very far – but it’s almost completely underused right now. In addition to the human ear, machines can recognize a tune that serves as a signal.”
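A common building block for this kind of tone-based signaling is the Goertzel algorithm, which measures the energy of a single frequency in a sampled audio signal far more cheaply than a full FFT. The sketch below is illustrative only, not the SLU team’s implementation, and the sample rate and signal frequencies are assumptions:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel algorithm: energy of one frequency bin in a sampled signal."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin to the target
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthesize 0.1 s of a 1 kHz "signal tone" at an assumed 8 kHz sample rate.
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(800)]

# The tone's energy at 1 kHz dwarfs its energy at an unused 2 kHz bin,
# so a simple threshold can decide whether the signal tone is present.
assert goertzel_power(tone, fs, 1000) > 100 * goertzel_power(tone, fs, 2000)
```

In a setup like the one described, a device could emit a distinct tone for each event of interest (congestion, fan failure, authentication), and a monitoring node could run one such filter per tone.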

The researchers say sound-based network management has potential to be further explored as an effective and inexpensive out-of-band orchestration technique, and they plan to continue studying the use of sound for other network tasks, including malicious intrusion detection.

Nanogenerator bandages
Researchers at the University of Wisconsin-Madison, University of Electronic Science and Technology of China, and Huazhong University of Science and Technology developed a self-powered electronic bandage to speed wound healing. Chronic and slow-healing wounds such as diabetic foot ulcers, venous-related ulcerations, and nonhealing surgical wounds are a major healthcare issue and cost an estimated $25 billion per year.

While electrical stimulation has been known to help skin wounds heal since the 1960s, the equipment for generating the electric field is often large and may require patient hospitalization. Instead of bulky equipment, the researchers took advantage of advances in nanogenerators to create the electric bandage.

The e-bandage’s wearable nanogenerator consisted of overlapping sheets of polytetrafluoroethylene (PTFE), copper foil, and polyethylene terephthalate (PET). The nanogenerator converted skin movements, which occur during normal activity or even breathing, into small electrical pulses using piezoelectric and triboelectric effects. This current flowed to two working electrodes that were placed on either side of the skin wound to produce a weak electric field.
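As a back-of-the-envelope illustration of the field between the two electrodes, a uniform-field approximation gives E = V/d. The voltage and electrode spacing below are hypothetical round numbers, not figures from the study:

```python
def field_strength_v_per_m(voltage_v, gap_m):
    """Uniform-field approximation between two electrodes: E = V / d."""
    return voltage_v / gap_m

# Hypothetical example: 2 V pulses across electrodes 1 cm apart.
print(field_strength_v_per_m(2.0, 0.01))  # prints 200.0 (V/m)
```

The closer the electrodes flank the wound, the stronger the field a given pulse amplitude produces across the tissue between them.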

The team tested the device by placing it over wounds on rats’ backs. Wounds covered by e-bandages closed within 3 days, compared with 12 days for a control bandage with no electric field. The researchers attribute the faster wound healing to enhanced fibroblast migration, proliferation and differentiation induced by the electric field.

The team noted that rats and similar mammals heal wounds differently than humans, and future, more clinically relevant studies will need to shift to swine and human skin models.
