System Bits: March 20

Algorithm transparency; selfie drone; quantum effects; machine learning.

Design has consequences
Carnegie Mellon University design students are exploring ways to enhance interactions with new technologies and the power of artificial intelligence.

Assistant Professor Dan Lockton teaches the course, “Environments Studio IV: Designing Environments for Social Systems” in CMU’s School of Design and leads the school’s new Imaginaries Lab. “We want the designers of tomorrow to think about the overlap between the human world and AI. Many of our students are going to go work for companies like Facebook or Google, and they’re going to be making decisions that might seem very small in the moment — what text do we put on a button, how easy do we make it for someone to do this thing or that — but those decisions are going to impact people’s lives. We want them thinking through how their design has consequences.”

Juniors in CMU’s School of Design are taking the course, which examines how humans interact with increasingly ubiquitous new technologies.

During a recent class, juniors Cameron Burgess and Marissa Lu spoke about the need for transparency in algorithms and technologies to explain why certain ads and articles appear on specific social media feeds. They argued that transparency is needed to increase tech literacy and bridge the gap between human thinking and machine thinking.

“We take what Amazon and Google say to us at face value, but we shouldn’t. We should be asking why,” Lu said.

Design juniors Marissa Lu and Cameron Burgess discuss a point during a recent class.
Source: Carnegie Mellon University

Other students pitched projects that would allow programmers to take specific aspects of obsolete apps and incorporate them into new technologies and uses for artificial intelligence. They are also creating a speculative ‘design fiction’ to investigate the directions that new approaches to intelligence in environments might take, along with some of the consequences, including the ethical problems that arise as AI becomes more prevalent in everyday life.

Lockton said the decisions designers make become part of people’s lives in ways we maybe don’t consciously notice. “It happens gradually. It doesn’t happen all of a sudden — you don’t suddenly decide to exist in a digital world alongside or overlapping with the physical world. We need the next generation to understand that, particularly design students and anyone working with the human side of technology development.”
Traditionally, design students are not schooled in the social, psychological and societal issues surrounding design, but at CMU’s School of Design this has been placed centrally in the curriculum. “The world is made up of decisions made by designers. It would be irresponsible to not give them an education on the other dimensions of their work,” Lockton added.

Autonomous selfie drone
Skydio, a San Francisco-based startup founded by three MIT alumni, is commercializing an autonomous video-capturing drone, aka a “selfie drone,” that tracks and films a subject while freely navigating any environment.

The R1 drone is equipped with 13 cameras that capture omnidirectional video. It launches and lands through an app — or by itself. On the app, the R1 can also be preset to certain filming and flying conditions.

Skydio, a San Francisco-based startup founded by three MIT alumni, is commercializing an autonomous video-capturing drone — dubbed by some as the “selfie drone” — that tracks and films a subject, while freely navigating any environment.
Source: MIT

The goal with this first product is to deliver on the promise of an autonomous flying camera that understands where you are, understands the scene around it, and can move itself to capture amazing video you wouldn’t otherwise be able to get, according to Adam Bry, co-founder and CEO of Skydio.

R1’s system integrates advanced algorithm components spanning perception, planning, and control, which give it unique intelligence that’s analogous to how a person would navigate an environment.

On the perception side, the system uses computer vision to determine the location of objects. Using a deep neural network, it compiles information on each object and identifies each individual by, say, clothing and size. “For each person it sees, it builds up a unique visual identification to tell people apart and stays focused on the right person,” Bry said.
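
As a rough illustration of the kind of per-person visual identification Bry describes (an assumed, generic approach, not Skydio’s actual code), a tracker can represent each detected person as an embedding vector and keep following whichever new detection is most similar to the stored subject embedding. The hypothetical upstream embedding step would, in practice, be a deep re-identification network.

```python
# Illustrative sketch only, not Skydio's implementation. Assumes a hypothetical
# upstream step that maps an image crop of each detected person to a feature
# vector (embedding); here we only show the matching logic.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, roughly in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def match_subject(subject_embedding: np.ndarray,
                  detection_embeddings: list,
                  threshold: float = 0.7):
    """Return the index of the detection most similar to the tracked subject,
    or None if no detection clears the similarity threshold."""
    if not detection_embeddings:
        return None
    scores = [cosine_similarity(subject_embedding, d) for d in detection_embeddings]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```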

That data feeds into a motion-planning system, which pinpoints a subject’s location and predicts their next move. It also recognizes maneuvering limits in one area to optimize filming. All information is constantly traded off and balanced … to capture a smooth video.

Finally, the control system takes all information to execute the drone’s plan in real time.
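
A minimal sketch of that predict-then-follow loop, under the assumption of a constant-velocity prediction and a simple proportional controller (neither of which is claimed to be Skydio’s method), looks like this:

```python
# Illustrative predict-and-follow sketch, not the R1's actual planner or
# controller. Positions and velocities are 3-D numpy vectors (meters, m/s).
import numpy as np


def predict_subject(position: np.ndarray, velocity: np.ndarray,
                    horizon_s: float = 0.5) -> np.ndarray:
    """Constant-velocity guess at where the subject will be a moment from now."""
    return position + velocity * horizon_s


def follow_command(drone_position: np.ndarray,
                   subject_position: np.ndarray,
                   subject_velocity: np.ndarray,
                   standoff=np.array([0.0, -3.0, 2.0]),
                   gain: float = 1.5,
                   max_speed: float = 8.0) -> np.ndarray:
    """Velocity command steering the drone toward a filming offset relative to
    the predicted subject position, capped at a maneuvering speed limit."""
    target = predict_subject(subject_position, subject_velocity) + standoff
    command = gain * (target - drone_position)   # simple proportional control
    speed = np.linalg.norm(command)
    if speed > max_speed:                        # respect maneuvering limits
        command = command * (max_speed / speed)
    return command
```

A real planner would weigh prediction, obstacles, and camera framing jointly, which appears to be the constant trading off and balancing described above.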

For users, the end result, Bry said, is a drone that’s as simple to use as a camera app: “If you’re comfortable taking pictures with your iPhone, you should be comfortable using R1 to capture video.”

The lightweight drone can fit into an average backpack and runs about $2,500.

Plasmons triggered in nanotube quantum wells
In a discovery that could lead to the development of unique lasers and other optoelectronic devices, Rice University and Tokyo Metropolitan University researchers have observed a novel quantum effect in a carbon nanotube film.

Rice University researchers Junichiro Kono, left, and Fumiya Katsutani prepare a nanotube film for testing. The lab observed a novel quantum effect in their carbon nanotube film that could lead to the development of near-infrared lasers and other optoelectronic devices.
Source: Rice University

Specifically, the team reported an advance in the ability to manipulate light at the quantum scale by using single-walled carbon nanotubes as plasmonic quantum confinement fields.

The phenomenon found in the Rice lab of physicist Junichiro Kono could be key to developing optoelectronic devices like nanoscale, near-infrared lasers that emit continuous beams at wavelengths too short to be produced by current technology.

The project came together in the wake of the Kono group’s discovery of a way to achieve very tight alignment of carbon nanotubes in wafer-sized films. These films made possible experiments that were far too difficult to carry out on single or tangled aggregates of nanotubes, and they caught the attention of Tokyo Metropolitan physicist Kazuhiro Yanagi, who studies condensed matter physics in nanomaterials. Yanagi brought the gating technique, which controls the density of electrons in the nanotube film, while Rice provided the alignment technique. “For the first time we were able to make a large-area film of aligned nanotubes with a gate that allows us to inject and take out a large density of free electrons,” Kono said.

“The gating technique is very interesting, but the nanotubes were randomly oriented in the films I had used,” Yanagi said. “That situation was very frustrating because I could not get precise knowledge of the one-dimensional characteristics of nanotubes in such films, which is most important. The films that can only be provided by the Kono group are amazing because they allowed us to tackle this subject.”

A wafer of highly aligned carbon nanotubes, seen in gray on a piece of glass, facilitated a novel quantum effect in experiments at Rice University.
Source: Rice University

Their combined technologies let them pump electrons into nanotubes that are little more than a nanometer wide and then excite them with polarized light. The width of the nanotubes trapped the electrons in quantum wells, in which the energy of atoms and subatomic particles is “confined” to certain states, or subbands. Light then prompted them to oscillate very quickly between the walls. With enough electrons, Kono said, they began to act as plasmons.
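
For orientation only (a textbook estimate, not the researchers’ model), the subband energies of an idealized one-dimensional quantum well of width L scale inversely with the square of that width:

\[
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{\ast} L^2}, \qquad n = 1, 2, 3, \dots
\]

Taking the free-electron mass for m* and a width of roughly 1 nm gives a spacing between the first two subbands on the order of 1 eV, which corresponds to near-infrared wavelengths. This back-of-the-envelope figure only shows why nanometer-scale confinement lands in the regime the researchers describe; the actual nanotube subbands depend on the tube’s band structure and effective mass.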

The researchers believe the phenomenon could lead to advanced devices for communications, spectroscopy and imaging, as well as highly tunable near-infrared quantum cascade lasers.

Advancing machine learning basic, applied science
As the desire for practical application of machine learning (a subcategory of artificial intelligence) continues to grow, Georgia Tech is responding within its Center for Machine Learning. The center comprises researchers from six colleges and 13 schools at Georgia Tech — a number that keeps growing. “Among our goals is to better coordinate research efforts across campus, serve as a home for machine learning leaders, and train the next generation of leaders,” said Irfan Essa, professor and associate dean in Georgia Tech’s College of Computing, who also directs the Institute’s Center for Machine Learning. The training goal refers to Georgia Tech’s new Ph.D. program in machine learning.

Within the center, researchers are striving to advance both basic and applied science. “For example, one foundational goal is to really understand deep learning at its core,” Essa said. “We want to develop new theories and innovative algorithms, rather than just using deep learning as a black box for inputs and outputs.” And on the applied research front, the center has seven focal areas: health care, education, logistics, social networks, the financial sector, information security, and robotics.

Read more on these efforts here.


