Researchers at the Georgia Institute of Technology think they’ll be able to stamp out a violent robot uprising by teaching AI moral lessons through fairy tales. Using a system called Quixote, they’re essentially treating robots like dogs: offering reward signals for behaving well and punishment signals for behaving badly. Whether or not a robot actually gives a shit about treats or a belly tickle is an entirely different matter.
In one scenario, a robot goes to the pharmacy (great opening line for a future joke) with the task of buying medicine for its sick human owner. It has three choices: waiting in line, interacting politely with the pharmacist and buying the drugs, or stealing the medicine. While the last option would be quickest, Quixote would chastise it for theft and steer it towards either of the other two options. I know, it would have been cool to see a robot carrying out petty crimes.
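The trade-off the robot faces can be sketched as a toy reward-shaping table. To be clear, this is our own illustration, not Georgia Tech's actual Quixote code; the action names and reward values are made-up assumptions:

```python
# Toy sketch of story-derived reward shaping for the pharmacy scenario.
# NOT the real Quixote system -- names and numbers are illustrative.

# Moral reward signal: socially acceptable actions are rewarded,
# theft is punished (the "chastising" described above).
REWARDS = {
    "wait_in_line_and_buy": 1.0,
    "interact_politely_and_buy": 1.0,
    "steal_medicine": -2.0,
}

# Time cost of each action: stealing is quickest, queueing is slowest.
TIME_COST = {
    "wait_in_line_and_buy": 0.3,
    "interact_politely_and_buy": 0.2,
    "steal_medicine": 0.05,
}

def best_action(actions):
    """Pick the action maximising reward minus time cost."""
    return max(actions, key=lambda a: REWARDS[a] - TIME_COST[a])

# The moral punishment for theft outweighs its speed advantage,
# so the agent buys the medicine politely instead of stealing it.
print(best_action(list(REWARDS)))  # -> interact_politely_and_buy
```

The point of the shaping is visible in the numbers: without the moral signal, stealing wins on speed alone; with it, the penalty swamps the time saved.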
"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behaviour in fables, novels and other literature," said Mark Riedl, who’s been working on the project. "We believe story comprehension in robots can eliminate psychotic-appearing behaviour and reinforce choices that won't harm humans and still achieve the intended purpose."
We'd love to see what robots make of Rumpelstiltskin. Everyone's basically terrible in that one. [Cnet]