Uncertain Intelligence: Can Programming Uncertainty into Machines Make Them Smarter—and More Ethical?

By Marty Graham, Contributor

Forty years ago, Star Trek’s creators foresaw the unintended consequences of artificial intelligence. It began innocently enough, when the crew led by Captain Kirk chased V’Ger across the galaxy, trying to halt it as it destroyed the civilizations it encountered. When they caught V’Ger, however, they discovered it was Voyager 6, a U.S.-launched space probe believed lost that had instead been reprogrammed to a higher level of intelligence. V’Ger had intelligently worked a twist into its own mission of seeking out civilizations: It destroyed the civilizations it found.

The problem illustrated by V’Ger is that a smart algorithm that learns from all it encounters while pursuing its mission may get smart enough to do a curiously enhanced version of what it was originally programmed to do—something that may run counter to the vision of the programmers (seemingly) in charge.

Limited Ethical Bounds

Researchers, theorists, engineers, and some outspoken CEOs are increasingly concerned with the practical question of how to control algorithms that have been designed to teach themselves. Many technology leaders, including Michael Dell, see the challenge as one of the responsibilities that come with technology that will change how the world works—for the better. All acknowledge the need to understand and mitigate unintended consequences and unexpected steps that some machines may take as they work to achieve the objective they were programmed to reach.

“As long as the algorithm has no boundaries, then it can get to its goal any way it figures out,” says Mark Halverson, who co-chairs the Institute of Electrical and Electronics Engineers’ (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. “We have to acknowledge that our ability to put moral and ethical bounds around our technology is not that great.”

AI researchers have compiled a spreadsheet of some astonishing deviations made by AI bots. For example, in 1997, an algorithm designed to play Tic-Tac-Toe achieved victory by making moves that crashed its opponent’s program, winning the game by forfeit.

Another machine, taught to distinguish poisonous mushrooms from those that are safe to eat, correctly observed during training that every other mushroom it was shown was safe. The problem was that when it went to work sorting mushrooms on its own, it applied the same every-other pattern, essentially classifying safety based on how, rather than what, it had learned.
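To see how easily that kind of shortcut can creep in, consider a minimal sketch (the data, names, and numbers below are invented for illustration and are not a reconstruction of the original experiment). If anything correlated with presentation order leaks into what the model keys on, a rule that merely alternates its answers can look perfect in training and collapse in the field:

```python
# Illustrative sketch of shortcut learning: a "classifier" that latches onto
# presentation order instead of mushroom features. All data here is synthetic.
import random

random.seed(0)

# Training examples alternate strictly: even positions edible, odd positions poisonous.
# Each record is (position_in_training_set, cap_width_cm, label); the real signal
# is cap width, but the ordering happens to explain the labels just as well.
train = [(i, 3.0 + (i % 2) * 4.0 + random.random(), i % 2) for i in range(100)]

def predict(position, cap_width):
    """The shortcut rule the learner effectively discovered: just alternate."""
    return position % 2  # 1 = poisonous

train_acc = sum(predict(p, w) == y for p, w, y in train) / len(train)

# In the field, mushrooms arrive in no particular order, so the shortcut collapses.
shuffled = random.sample(train, len(train))
field = [(i, w, y) for i, (_, w, y) in enumerate(shuffled)]
field_acc = sum(predict(p, w) == y for p, w, y in field) / len(field)

print(f"training accuracy: {train_acc:.0%}")  # 100% -- the alternation fits perfectly
print(f"field accuracy:    {field_acc:.0%}")  # ~50% -- no better than a coin flip
```

The toy rule never looks at the mushroom at all, which is exactly the failure the researchers recorded: the model fit the shape of its training regimen rather than the thing it was supposed to recognize.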

The reason for the errors is fairly intuitive: Algorithms do what they are programmed to do, not necessarily what we intend, explains Katja Grace of the Machine Intelligence Research Institute in Berkeley. That means the deviations are, simply and plainly, the result of bad programming, her colleague Stuart Russell says.

The problem of unintended consequences isn’t new, but as machine learning capabilities become more sophisticated, researchers and experts have begun to pay closer attention to ethics, pondering how to keep algorithms aligned with human goals and values.

Values, in this case, doesn’t mean a constitution or any other codified civil mandate; it means understanding how people feel about things.

Unethical Supper

Stuart Russell, a University of California, Berkeley computer science professor and AI pioneer, tells the hypothetical story of the family robot preparing supper and learning there’s no source of protein in the refrigerator—just as the family cat walks by.

“The robot can’t understand that the sentimental value of the cat outweighs the nutritional value,” he says. “That’s a human value we have to make sure the robot possesses.”

The call for more discussion around machine ethics has come from groups ranging from the British Parliament to the Future of Life Institute, which in 2017 crafted the Asilomar AI Principles—23 tenets that outline how humans should govern artificial intelligence. The groups broadly agree on some simple things, such as transparency, robots’ subservience to humans, and the requirement that each algorithm tie back to an accountable human. They also call for design that minimizes the risk of misuse and state strongly that AI must exist for the betterment of humans.

“We’re at an inflection point in society where some of these technologies are going to change everything—are going to change what it means to be human,” Halverson says. “It feels like there are not enough people minding the store on these technologies.”

According to Halverson, there are no design standards, and very few boundaries are being carefully and precisely set. Plus, many algorithms are black boxes that don’t open. As in Star Trek, as long as the algorithm achieves its goal, we may not know what route it took to get there or how it came up with its approach—the algorithms remain sealed.

“They need a ‘WHY’ button, where you can hit the button and find out how it got to where it is,” Halverson says.

Agents of Uncertainty

AI thinkers like Russell and Grace increasingly argue that building in protective systems at the end of the design process is too late. Instead, there should be an underlying core of self-definitions for the intelligence, starting with the idea that it exists to maximize the human experience.

An emerging school of thought proposes that programming—instead of being focused on tasks—should be zeroed in on humans’ well-being. For Halverson, that means starting at the basics. “There needs to be a purpose defined by autonomy, and it needs to be auditable against that purpose,” he says. “What are the parts, the subset of values, that allow it to reach its goal the right way? That, we can look at.”

Russell has spoken at TED about a key element that can keep algorithms within the bounds of human control: uncertainty. According to Russell, programming the algorithm to hold a measure of uncertainty about what humans actually want as it tries to assist them reduces the chance that the rational agent will go off on a V’Ger-like tear, or that it will become so task-oriented that it learns to block its own off switch (reasoning that it can’t succeed if it is turned off).

Programming and training the algorithm that its mission is to make life better for humans, whose goals and desires are not linear and not easily sorted into ranked values (the cat over nutrition, for example), means the algorithm must continuously check that its efforts align with what people care about.

“Agents with uncertainty about the utility function they are optimizing will have a weaker incentive to interfere with human supervision,” Russell concludes. “The robot should be altruistic and only want to achieve our objectives, but doesn’t know what we want. It has to maximize those values but doesn’t know what they are.”
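A minimal sketch of the arithmetic behind that claim, loosely patterned on the off-switch thought experiment Russell has described (the prior over outcomes below is an assumption invented for illustration, not anyone’s real model of human values): a robot that is unsure whether its planned action actually helps the human calculates, by its own expected values, that leaving the switch in human hands beats both acting unilaterally and shutting itself down.

```python
# Toy off-switch calculation: why uncertainty about what the human wants gives
# the robot itself a reason to defer. The prior below is purely illustrative.
import random

random.seed(1)

def sample_u():
    """Robot's belief about the value (to the human) of its planned action.
    The action might help or harm, and the robot genuinely does not know which."""
    return random.uniform(-1.0, 1.0)

N = 100_000
beliefs = [sample_u() for _ in range(N)]

# Option A: act immediately, ignoring the human. Worth E[u] to the robot.
act_now = sum(beliefs) / N

# Option B: switch itself off. Worth 0 by definition.
switch_off = 0.0

# Option C: defer -- propose the action and let the human decide.
# A human who knows their own preferences permits it only when u > 0,
# so the robot expects E[max(u, 0)], which is never less than A or B.
defer = sum(max(u, 0.0) for u in beliefs) / N

print(f"act immediately : {act_now:+.3f}")   # ~0.00 under this symmetric prior
print(f"switch self off : {switch_off:+.3f}")
print(f"defer to human  : {defer:+.3f}")     # ~0.25, the best of the three
```

The numbers are made up; the inequality is the point. As long as the robot’s uncertainty leaves room for its action to be harmful, deferring is worth more to the robot than blocking its off switch, which is the “weaker incentive to interfere with human supervision” Russell describes.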

Through experience and learning, the intelligent machine, he believes, will come to understand and absorb those values. And it will become more certain after it’s pointed in a direction that serves and advances humans.

“The upside of this tech is it can take us places we’ve never been, but the natural tension to this is how we keep it bounded,” says Halverson. “Our hope is that we can have it grow in a contained way with humans at the center.”