We’ve imagined foisting our daily chores on helpful robots for decades. But reality hasn’t yet lived up to our imagination.
Robots are definitely not ready for prime time. They can’t even fold towels. Of course, that’s going to change someday.
But before it does, robots will need to grasp something that it took humanity tens of thousands of years to develop: morality.
And teaching robots morals is going to be big business.
After all, a robot that didn’t share our morals might think it’s fine to slide your cat into the oven for dinner, says Stuart Russell, a computer science professor at the University of California, Berkeley. He’s a world-renowned expert on artificial intelligence, or AI.
“You would want that robot preloaded with a pretty good set of values,” Russell says. “So presumably the robot companies will get their values loaded in the robot from a values company.”
But we’re not there yet.
“At the moment, we don’t know how to give the robot what you might call ‘human values,’” Russell says. “As the problems become clearer, it’s only natural that people will start to focus their energy on solving the ‘value alignment’ problem.”
A World Ruled By Robots?
Without morals, robots not only couldn’t live with us, they could turn on us.
“There’s always been a concern that robots can take over the world,” Russell says. “The very first use of the word ‘robot’ was in a play in which robots take over the world.”
That play, Karel Čapek’s R.U.R., was written in 1920.
“[The] normal response to those kinds of things is to say, ‘Oh well, you know, it’s a long way off in the future, so we don’t have to worry about this,’” Russell says.
But that attitude is changing. In the past couple of years, scientists have become more vocal about the dangers AI could pose to humanity.
In January 2015, Russell, Stephen Hawking, and hundreds of AI researchers signed an open letter warning that if the industry doesn’t start building safeguards into AI, it could spell doom for humanity. Tesla CEO Elon Musk, who also signed the letter, gave $10 million to the cause. He’s said AI could be humanity’s “biggest existential threat.”
Russell’s view is “somewhat less apocalyptic.”
Not to be flip, he says, but nobody is going to buy a robot that cooks a cat. So it’s just a matter of time before tech companies, universities, and the government start pouring resources into programming robots with morals.
“In some sense, their only purpose in existing is to help us realize our values to a greater extent. And perhaps it’ll make people better,” Russell says.
This story was originally reported by Queena Kim out of KQED in San Francisco as part of The California Report’s “Big Think” series.