
System Bits: Oct. 2

AI robots develop prejudice; cognitive science + information theory; ML aids environmental monitoring.


Computer algorithms exhibit prejudice based on datasets
Researchers at Cardiff University and MIT have shown that groups of autonomous machines are capable of demonstrating prejudice by identifying, copying, and learning this behavior from one another.

The team noted that while it may seem that prejudice is a human-specific phenomenon that requires human cognition to form an opinion of, or to stereotype, a certain person or group, some types of computer algorithms have already exhibited prejudice, such as racism and sexism, based on learning from public records and other data generated by humans. This new work demonstrates the possibility of AI evolving prejudicial groups on their own.

Source: Cardiff University

The findings are based on computer simulations of how similarly prejudiced individuals, or virtual agents, can form a group and interact with each other. For example, in a game of give and take, each individual makes a decision as to whether they donate to somebody inside of their own group or in a different group, based on an individual’s reputation as well as their own donating strategy, which includes their levels of prejudice towards outsiders.
As the game unfolds and a supercomputer racks up thousands of simulations, each individual begins to learn new strategies by copying others either within their own group or the entire population.
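The donation-game dynamic described above can be sketched in a few lines of code. This is a toy illustration of the general idea, not the Cardiff team's actual model; the payoff values, group counts, and copying rule here are invented for demonstration. Each agent carries a "prejudice" level (the probability it refuses to donate outside its group) and, after each round, copies the prejudice level of a randomly chosen higher-payoff peer.

```python
import random

# Toy sketch of a donation game with social learning (illustrative only;
# all parameters below are assumptions, not the study's actual values).
N_AGENTS = 40
N_GROUPS = 4
ROUNDS = 200
DONATION_GAIN = 3   # assumed payoff to the recipient
DONATION_COST = 1   # assumed cost to the donor

random.seed(0)
agents = [{"group": i % N_GROUPS,
           "prejudice": random.random(),  # 0 = fully open, 1 = in-group only
           "payoff": 0.0}
          for i in range(N_AGENTS)]

for _ in range(ROUNDS):
    for donor in agents:
        recipient = random.choice(agents)
        if recipient is donor:
            continue
        same_group = recipient["group"] == donor["group"]
        # Always donate in-group; donate out-group only past the prejudice bar.
        if same_group or random.random() > donor["prejudice"]:
            donor["payoff"] -= DONATION_COST
            recipient["payoff"] += DONATION_GAIN
    # Social learning: copy the strategy of a randomly chosen higher earner.
    for agent in agents:
        model = random.choice(agents)
        if model["payoff"] > agent["payoff"]:
            agent["prejudice"] = model["prejudice"]

mean_prejudice = sum(a["prejudice"] for a in agents) / N_AGENTS
print(f"mean prejudice after {ROUNDS} rounds: {mean_prejudice:.2f}")
```

Even in a crude sketch like this, prejudice levels drift as agents imitate whoever is currently earning more, which is the mechanism the researchers studied at supercomputer scale.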

Professor Roger Whitaker, from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said, “By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it. It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.”

The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short-term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.

Many of the AI developments that the researchers observed involve autonomy and self-control, meaning that the behavior of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. This study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource, the team reported.

Interestingly, the researchers also found that under particular conditions, which include more distinct subpopulations being present within a population, it was more difficult for prejudice to take hold.
“With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group,” Professor Whitaker concluded.

What a cell phone camera can tell about the brain
Driving down a dark country road at night, you see a shape ahead on the roadside. Is it a deer or a mailbox? Your brain is structured to make the best possible decision given its limited resources, according to Rensselaer Polytechnic Institute research that combines cognitive science with information theory, the branch of mathematics that underlies modern communications technology.

Chris Sims, a Rensselaer Polytechnic Institute assistant professor of cognitive science, said the findings are an outgrowth of National Science Foundation-supported research into improving pedagogy in STEM fields, which often rely heavily on perceptual abilities and perceptual expertise. “Understanding how the brain works can help us build better classroom training exercises that teach those abilities more efficiently.”

A canonical law of cognitive science—the Universal Law of Generalization, introduced in a 1987 article also published in Science—says that the brain makes perceptual decisions based on how similar the new stimulus is to previous experience. Specifically, the law states that the probability you will extend a past experience to a new stimulus depends on the similarity between the two experiences, with an exponential decay in probability as similarity decreases. This empirical pattern has proven correct in hundreds of experiments across species including humans, pigeons, and even honeybees.
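The exponential gradient the law describes is simple to state numerically. The sketch below is an illustration of the general form (the decay constant `k` and the distances are made-up values, not figures from the research): generalization probability falls off exponentially with the psychological distance between two stimuli.

```python
import math

def generalization_probability(distance: float, k: float = 1.0) -> float:
    """Shepard-style exponential generalization gradient.

    P(generalize) = exp(-k * distance), where k (assumed here) sets how
    quickly generalization decays as stimuli become less similar.
    """
    return math.exp(-k * distance)

# Very similar stimuli are almost always treated alike...
print(generalization_probability(0.1))  # ~0.90
# ...while dissimilar ones are rarely lumped together.
print(generalization_probability(3.0))  # ~0.05
```

The key empirical claim is the shape of this curve: across species and stimulus types, the falloff is exponential rather than, say, linear or Gaussian.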

Source: Rensselaer Polytechnic Institute

“This is a fundamental equation that is universal in nature, and it’s held up very well. But while the law describes the empirical pattern, it doesn’t adequately explain why this pattern should appear in nature. And that’s what I set out to do,” said Sims.

In his research, Sims turned to information theory, a branch of mathematics founded at Bell Labs in the 1940s that makes it possible to predict the best possible performance of a communication system given the limits of the system. For example, information theory makes it possible to predict the best possible voice quality a telephone wire can carry given a specific level of noise in the signal. He built on the evident parallels between noisy telephone wires and noisy neurons, and has been using information theory to understand the biological communication systems of perception and memory.

The idea is that visual perception is a communication channel: there is information in the world, and that information must be transmitted from the eyes to the brain. Just as there are limits on a mechanical system like a telephone line, there are limits on a biological system, and Sims said he looked to information theory to describe and predict the optimal performance that could be achieved from the human visual system. That led to a serendipitous connection between the Universal Law of Generalization long studied in cognitive science, and the mathematical framework of information theory.

When Sims described the visual system using the information theory framework, he found that a well-known aspect of information theory known as efficient coding predicted the same exponential generalization gradient as that predicted by the Universal Law of Generalization. His work connected the dots between two foundational laws within disparate fields, and suggests that evolution has given us a perceptual system that approaches the optimum predicted by the mathematical laws of information theory.

“I set out to explain why this pattern appears in nature, and the answer according to information theory is that nature has given us perceptual systems that are as efficient as possible, given the constraints and limits they have to work with. It’s a simple explanation for why this pattern exists everywhere and that’s promising.”

The finding might be used to help develop more accurate measurement of perceptual expertise and progression, but Sims said at heart, he’s pleased to have advanced the foundational science.

“I’m excited that now we have mathematical laws we can use to better describe and understand information processing in the brain, and the nature of intelligence in general,” he added.

Machine learning aids environmental monitoring
According to Stanford University researchers, cash-strapped environmental regulators have a powerful, inexpensive new weapon: machine learning methods that could more than double the number of violations detected.

As Hurricane Florence ground its way through North Carolina, it released what might politely be called an excrement storm. Massive hog farm manure pools washed a stew of dangerous bacteria and heavy metals into nearby waterways.

Satellite images of river outflows to the Atlantic Ocean in the wake of Hurricane Florence show water discolored by debris and pollutants.
Source: NASA

More efficient oversight might have prevented some of the worst effects, but even in the best of times, state and federal environmental regulators are overextended and underfunded. More could be done with machine learning, the practice of training computers to automatically detect patterns in data, the researchers said.

Their study finds that machine learning techniques could catch two to seven times as many infractions as current approaches, and suggests far-reaching applications for public investments.

Elinor Benami, a graduate student in the Emmett Interdisciplinary Program on Environment and Resources (E-IPER) in Stanford’s School of Earth, Energy & Environmental Sciences, said that especially in an era of decreasing budgets, identifying cost-effective ways to protect public health and the environment is critical.

Machine learning methods can help optimize that process by predicting where funds can yield the most benefit. The researchers focused on the Clean Water Act, under which the U.S. Environmental Protection Agency and state governments are responsible for regulating more than 300,000 facilities but are able to inspect less than 10 percent of those in a given year.

Using data from past inspections, the researchers deployed a series of models to predict the likelihood of failing an inspection, based on facility characteristics, such as location, industry and inspection history. Then, they ran their models on all facilities, including ones that had yet to be inspected.

Their technique generated a risk score for every facility, indicating how likely it was to fail an inspection. The group then created four inspection scenarios reflecting different institutional constraints – varying inspection budgets and inspection frequencies, for example – and used the score to prioritize inspections and predict violations.
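The score-then-prioritize step can be illustrated in miniature. This is a hypothetical sketch, not the study's actual models: the feature names, weights, and facility data below are invented, and a toy logistic function stands in for the researchers' trained models. The structure is the same, though: compute a risk score per facility, then spend a fixed inspection budget on the highest-risk sites.

```python
import math

def risk_score(facility):
    """Toy logistic risk model (weights are invented for illustration)."""
    # Assumed features: past violations and time since the last inspection
    # push risk up; recent clean inspections push it down.
    z = (1.2 * facility["past_violations"]
         + 0.4 * facility["years_since_inspection"]
         - 0.8 * facility["recent_clean_inspections"])
    return 1.0 / (1.0 + math.exp(-z))  # squash z into a (0, 1) risk score

# Hypothetical facilities (made-up data, not from the study).
facilities = [
    {"id": "A", "past_violations": 3, "years_since_inspection": 2,
     "recent_clean_inspections": 0},
    {"id": "B", "past_violations": 0, "years_since_inspection": 1,
     "recent_clean_inspections": 2},
    {"id": "C", "past_violations": 1, "years_since_inspection": 4,
     "recent_clean_inspections": 1},
]

BUDGET = 2  # how many inspections the agency can afford this cycle
ranked = sorted(facilities, key=risk_score, reverse=True)
to_inspect = [f["id"] for f in ranked[:BUDGET]]
print(to_inspect)  # → ['A', 'C']
```

Varying `BUDGET` and how often scores are recomputed corresponds to the different institutional-constraint scenarios the researchers examined.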

Under the scenario with the fewest constraints – unlikely in the real world – the researchers predicted catching up to seven times the number of violations compared to the status quo. When they accounted for more constraints, the number of violations detected was still double the status quo.

At the same time, despite its potential, machine learning has flaws to guard against, the researchers warn. “Algorithms are imperfect, they can perpetuate bias at times and they can be gamed,” said study lead author Miyuki Hino, also a graduate student in E-IPER.

For example, regulated parties, such as hog farm owners, may manipulate their reported data to influence the likelihood of receiving benefits or avoiding penalties. Others may alter their behavior – relaxing standards when the risk of being caught is low – if they know how likely the algorithm is to select them. Institutional, political and financial constraints could limit machine learning’s ability to improve upon existing practices. The approach could potentially exacerbate environmental justice concerns if it systematically directs oversight away from facilities located in low-income or minority areas. Also, the machine learning approach does not account for potential changes over time, such as in public policy priorities and pollution control technologies.

The researchers suggest remedies to some of these challenges. Selecting some facilities at random, regardless of their risk scores, and occasionally re-training the model to reflect up-to-date risk factors could help keep low-risk facilities on their toes about compliance. Environmental justice concerns could be built into inspection targeting practices. Examining the value and trade-offs of using self-reported data could help manage concerns about strategic behavior and manipulation by facilities.

The team suggested future work could examine additional complexities of integrating a machine learning approach into the EPA’s broader enforcement efforts, such as incorporating specific enforcement priorities or identifying technical, financial and human resource limitations. In addition, these methods could be applied in other contexts within the U.S. and beyond where regulators are seeking to make efficient use of limited resources.

“This model is a starting point that could be augmented with greater detail on the costs and benefits of different inspections, violations and enforcement responses,” said co-author and fellow E-IPER graduate student Nina Brooks.
