One of the major difficulties in creating a general artificial intelligence (okay, one of several major difficulties) is teaching it what normal behavior is. Say you're in possession of a sentient robot butler and, after a heavy night out drinking Future Beer™, you ask it to pop to the shops and pick up some Alka-Seltzer. What if the robot decides that, given your obvious need, the quickest way to carry out this order is to rob the store and kill anyone who stands in its way? "Robots today," you tut at your bloody bot as it hands you the pills. "No sense of right or wrong."
But avoiding this mess and teaching a computer to follow normal human behavior could be as simple as getting it to read stories, say AI researchers Mark Riedl and Brent Harrison of the Georgia Institute of Technology. "Stories encode many types of sociocultural knowledge: commonly shared knowledge, social protocols, [and] examples of proper and improper behavior," they write in a recently published paper. "We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, 'reverse engineer' the values tacitly held by the culture that produced them." Of course, humanity has been doing this for millennia. Think of Aesop's fables, the myths of the ancient Greeks and Romans, and even the Bible. These are texts that aim to distill society's values and teach future generations how to behave. (Even if we don't always agree with what they say.)
Teaching values using the written word is as old as civilization
It's a fascinating concept, but Riedl and Harrison's work is only a bare-bones structure of how this sort of machine learning might work. There are numerous limitations and challenges to their proposal, including the fact that even the most detailed stories leave gaps a computer would have to work out by itself, and that translating even morally approved and socially acceptable actions into real physical movements is tricky. (Just think of all those expensive DARPA robots that struggle to open doors without falling over.) There's also an ominous-sounding aside in the study noting that if certain actions were accidentally given the wrong "reward signal," it has "the potential to cause psychotic-appearing behavior."
This invites the question: how would we know what stories to trust? Not all narratives have praiseworthy protagonists, and even if the straightforwardly evil are eliminated, this still leaves a whole lot of ambiguous characters doing ambiguous deeds. Riedl and Harrison's theory is that these potentially corruptive stories are in the minority, and that if the sample size of books is large enough (they suggest using every story possible), then "subversive or contrarian texts will be washed out by those that conform to social and cultural norms." Perhaps. Or perhaps some fluke of literary statistics — the glut of pulp paperbacks in the first half of the 20th century, for example — will skew the whole experiment completely, leaving us with a bunch of AIs that act and talk like tough detectives. That wouldn't be so bad, I guess, but would such a robot bother to get your Alka-Seltzer for you?
Five stories to start your day
A new Japanese PC giant could be about to emerge. Bloomberg News reports that VAIO, the spinoff from Sony's PC brand, is nearing a deal to combine itself with Toshiba's and Fujitsu's PC businesses....
But it wasn't easy. "We busted a lot of balloons," moonshot chief Astro Teller said during a talk at the TED conference.
Samsung's Indonesian arm may have shown off the Galaxy S7 almost a week earlier than planned. An unlisted commercial marked #TheNextGalaxy and posted to the branch's official YouTube channel shows...
After being teased as a "very controversial" performance by host LL Cool J, Kendrick Lamar hit the 2016 Grammy stage and did not disappoint. The rapper delivered the performance of the night,...
Google's latest Android ad aired tonight at the Grammys, and it was only fitting that it use music to play up Google's drive for originality. Here, we watch a pianist play on a normal piano. Each...