System Bits: May 22

AI benefits, disruptions; VR drone testing; wearable smart tech control.


AI disruptions and benefits in the workplace
According to Stanford University researchers, artificial intelligence offers both promise and peril as it revolutionizes the workplace, the economy and personal lives.

Visiting scholar James Timbie of the Hoover Institution, who studies artificial intelligence and other technologies, said that in the workplace of tomorrow, many routine jobs now performed by workers will increasingly be assumed by machines, leaving more complicated tasks to humans who see the big picture and possess interpersonal skills. “Artificial intelligence and other advancing technologies promise advances in health, safety and productivity, but large-scale economic disruptions are inevitable.”

Visiting scholar James Timbie says that the artificial intelligence revolution will involve humans and machines working together, with the best results coming from humans supported by intelligent machines.
Source: The Hoover Institution

He points out that AI combined with other advancing technologies – such as robotics and 3D printing – will lead to more efficient production of goods and services since machines can be trained to perform a wide range of non-routine cognitive tasks, and advanced robotics can increasingly perform manual tasks. And while society as a whole will benefit from increased productivity and lower costs, many individual workers will be adversely affected. Specifically, research indicates that on the order of half of today’s workers are in industries vulnerable to disruption in the near term.  In some cases – truck drivers – machines will replace workers. In other fields – education and medicine – work will be transformed, with machines assuming some tasks in close coordination with skilled humans performing other tasks.
When it comes to well-paying ‘cognitive’ jobs, many of these are vulnerable to disruption, perhaps more over time than the well-paying factory jobs that were lost to globalization, Timbie noted. These jobs, which have traditionally been filled by well-educated, well-paid workers, include tax preparers, radiologists, paralegals, loan underwriters, insurance adjusters, financial analysts, translators, and even some journalists and software engineers.
Still, humans and machines can work together for greater efficiency and productivity, in such areas as medical diagnosis, particularly because a diagnosis is a determination of how information on a patient fits into a pattern characteristic of a disease, which is something machines do well.

Machines can take into account far more data and keep up with the latest research better than any doctor, whose primary role would then be to convey the outcome to the patient and help the patient understand and accept it, so that the patient follows through with the treatment plan.
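The pattern-matching framing above can be made concrete with a toy sketch: a patient's measurements are compared against reference patterns characteristic of each condition, and the closest pattern wins. The feature vectors, condition labels, and numbers below are all invented for illustration; real diagnostic systems use far richer models and data.

```python
import math

# Hypothetical reference patterns: normalized feature vector -> condition.
# Features might be, say, (temperature, white-cell count, heart rate).
REFERENCE_PATTERNS = {
    "influenza":   (0.9, 0.7, 0.8),
    "common cold": (0.4, 0.3, 0.4),
    "healthy":     (0.1, 0.1, 0.2),
}

def euclidean(a, b):
    """Distance between a patient's feature vector and a reference pattern."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diagnose(patient):
    """Return the condition whose pattern lies closest to the patient's data."""
    return min(REFERENCE_PATTERNS,
               key=lambda c: euclidean(patient, REFERENCE_PATTERNS[c]))

print(diagnose((0.85, 0.65, 0.75)))  # closest to the influenza pattern
```

In this framing the machine performs the matching over far more data than any one person could track, while the doctor's role shifts to interpreting and communicating the result.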


Virtual-reality drone testing ground
Training drones to fly fast around even the simplest obstacles is a crash-prone exercise that can have engineers repairing or replacing vehicles with frustrating regularity. To address this, MIT engineers have developed a new virtual-reality training system for drones that enables a vehicle to “see” a rich, virtual environment while flying in an empty physical space.

The team has dubbed the system “FlightGoggles,” which they expect could significantly reduce the number of crashes that drones experience in actual training sessions, as well as serve as a virtual testbed for any number of environments and conditions in which researchers might want to train fast-flying drones.

MIT engineers have developed a new virtual-reality training system for drones that enables a vehicle to “see” a rich, virtual environment while flying in an empty physical space. 
Source: MIT

Sertac Karaman, associate professor of aeronautics and astronautics at MIT, said, “We think this is a game-changer in the development of drone technology, for drones that go fast. If anything, the system can make autonomous vehicles more responsive, faster, and more efficient.”

Karaman was joined in the work by colleagues from MIT’s Laboratory for Information and Decision Systems, MIT’s Computer Science and Artificial Intelligence Laboratory, and Sandia National Laboratories.

FlightGoggles comprises a motion capture system, an image rendering program, and electronics that enable the team to quickly process images and transmit them to the drone.

The actual test space — a hangar-like gymnasium in MIT’s new drone-testing facility in Building 31 — is lined with motion-capture cameras that track the orientation of the drone as it’s flying.

With the image-rendering system, Karaman and his colleagues can draw up photorealistic scenes, such as a loft apartment or a living room, and beam these virtual images to the drone as it’s flying through the empty facility.

The virtual images can be processed by the drone at a rate of about 90 frames per second — around three times as fast as the human eye can see and process images. To enable this, the team custom-built circuit boards that integrate a powerful embedded supercomputer, along with an inertial measurement unit and a camera. They fit all this hardware into a small, 3-D-printed nylon and carbon-fiber-reinforced drone frame.
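The pipeline described above — track the real drone, render the virtual scene from its pose, and stream the frame onboard at roughly 90 Hz — can be sketched as a simple loop. Every function name below is a hypothetical stand-in, not the actual MIT software; the only number taken from the article is the 90 fps target.

```python
import time

TARGET_FPS = 90
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~11.1 ms available per frame

def get_pose_from_motion_capture():
    # Stand-in: the real system reads the drone's tracked position and
    # orientation from the motion-capture cameras lining the test space.
    return {"x": 0.0, "y": 0.0, "z": 1.5, "yaw": 0.0}

def render_virtual_scene(pose):
    # Stand-in: the real system renders a photorealistic scene (e.g. a
    # loft apartment) from the drone's current viewpoint.
    return f"frame@{pose['x']:.2f},{pose['y']:.2f},{pose['z']:.2f}"

def transmit_to_drone(frame):
    # Stand-in: the real system beams the image to the drone's onboard
    # embedded computer for processing.
    pass

def run(num_frames):
    for _ in range(num_frames):
        start = time.monotonic()
        pose = get_pose_from_motion_capture()   # 1. track the real drone
        frame = render_virtual_scene(pose)      # 2. draw what it "sees"
        transmit_to_drone(frame)                # 3. send the image onboard
        # Sleep off any leftover budget to hold the 90 Hz frame rate.
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

run(3)
```

The key design point is that only the imagery is virtual: the physics of flight stay real, so a crash into a virtual wall costs nothing, while the drone's perception and control stacks are exercised exactly as they would be in a cluttered room.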

Wearable ring, wristband allows smart tech control with hand gestures
FingerPing, a new technology created by Georgia Tech researchers, could make controlling text or other mobile applications as simple as “1-2-3.” The system uses acoustic chirps emitted from a ring and received by a wristband, such as a smartwatch, and is able to recognize 22 different micro finger gestures that could be programmed to various commands — including a T9 keyboard interface, a set of numbers, or application commands like playing or stopping music.

FingerPing can recognize, with a high rate of accuracy, hand poses involving the 12 bones of the fingers, as well as the digits ‘1’ through ‘10’ in American Sign Language (ASL), the team said.

The team views this as a preliminary step toward recognizing ASL for translation in the future. Other techniques use cameras to recognize sign language, but cameras can be obtrusive and are unlikely to be carried everywhere.

The researchers pointed out that unlike other technology that requires the use of a glove or a more obtrusive wearable, this technique is limited to just a thumb ring and a watch. The ring produces acoustic chirps that travel through the hand and are picked up by receivers on the watch. There are specific patterns in which sound waves travel through structures, including the hand, that can be altered by the manner in which the hand is posed. Utilizing those poses, the wearer can achieve up to 22 pre-programmed commands.
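The mechanism described above — a chirp takes different acoustic paths through the hand depending on its pose, leaving a distinct signature at the watch's receivers — can be sketched as a simple template-matching step. The signatures, pose names, and command mappings below are invented purely for illustration; the real FingerPing system uses actual acoustic measurements and recognizes up to 22 commands.

```python
# Hypothetical received-amplitude signatures, one value per watch receiver,
# mapped to the hand pose that produced them.
POSE_SIGNATURES = {
    (0.9, 0.2, 0.1): "thumb-to-index",
    (0.3, 0.8, 0.2): "thumb-to-middle",
    (0.1, 0.3, 0.9): "fist",
}

# Each recognized pose maps to a pre-programmed command.
COMMANDS = {
    "thumb-to-index": "play music",
    "thumb-to-middle": "stop music",
    "fist": "type '1'",
}

def classify(signature):
    """Match a measured signature to the nearest stored pose template."""
    def squared_distance(template):
        return sum((a - b) ** 2 for a, b in zip(signature, template))
    best_template = min(POSE_SIGNATURES, key=squared_distance)
    return POSE_SIGNATURES[best_template]

measured = (0.85, 0.25, 0.15)       # a noisy reading from the receivers
pose = classify(measured)
print(pose, "->", COMMANDS[pose])   # thumb-to-index -> play music
```

Because the hand itself is the transmission medium, the wearer needs only the thumb ring and watch — no glove and no camera pointed at the hand.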
