IBM wants to see researchers writing programs for the digital equivalent of a human brain. It's a lofty, almost sci-fi goal that's likely a long way off, but IBM announced on Wednesday that it has made some significant steps toward making it happen: it's been developing just such a digital brain, and it's releasing the very first toolkit that lets developers start programming for it.
"How do you decide your friend's face in a crowd?"
The project, known as SyNAPSE, has been in the works since 2008 in conjunction with DARPA and several other organizations. The hope is to create a "cognitive computer" that thinks more like humans do, by relating objects and ideas, rather than seeing data in if-this-then-that terms as a conventional system does. "How do you pick out your friend's face in a crowd?" IBM's lead cognitive computing researcher, Dharmendra Modha, asked in a conversation with The Verge. "If you try to encode this in if-then logic, it's not going to be possible."
At least, it wouldn't be possible in any simple way. A bundle of software algorithms may be able to figure it out, but only with a substantial amount of computing power, even if that cost seems negligible on modern hardware. By using IBM's architecture for a cognitive computer, a future machine could instead effectively just recognize a face, much as the human brain does.
Until now, coding apps for these cognitive computing chips, which are still only said to be capable of mimicking the intelligence of small insects, was a highly complex process. But with its new programming model, built around blocks called "corelets," IBM says that programmers will be able to streamline development by focusing just on what the system can do, rather than first having to figure out how the new hardware works from the ground up.
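The appeal of that model is composition: each corelet hides its internals and exposes only named inputs and outputs, so blocks can be wired together without understanding the hardware underneath. The following is a toy sketch of that idea only; the class and function names (`Corelet`, `compose`) and the example blocks are hypothetical illustrations, not IBM's actual API.

```python
# Illustrative sketch of composition-by-interface, in the spirit of corelets.
# All names here are hypothetical, not part of IBM's toolkit.

class Corelet:
    """A black box exposing only named inputs and outputs; internals hidden."""
    def __init__(self, name, inputs, outputs, fn):
        self.name = name
        self.inputs = list(inputs)    # signals this block consumes
        self.outputs = list(outputs)  # signals this block produces
        self._fn = fn                 # hidden internal behavior

    def process(self, signals):
        # Accept only declared inputs, emit only declared outputs.
        args = {key: signals[key] for key in self.inputs}
        result = self._fn(**args)
        return {key: result[key] for key in self.outputs}

def compose(name, first, second):
    """Wire first's outputs into second's inputs, yielding a new corelet
    whose internals are just as invisible as its parts'."""
    def chained(**signals):
        return second.process(first.process(signals))
    return Corelet(name, first.inputs, second.outputs, chained)

# A toy "edge detector" feeding a toy "scorer": the composed block can be
# used without knowing how either stage works inside.
edges = Corelet("edges", ["pixels"], ["edge_map"],
                lambda pixels: {"edge_map": [abs(b - a)
                                             for a, b in zip(pixels, pixels[1:])]})
scorer = Corelet("scorer", ["edge_map"], ["score"],
                 lambda edge_map: {"score": sum(edge_map)})
detector = compose("detector", edges, scorer)

print(detector.process({"pixels": [1, 4, 2, 2]}))  # → {'score': 5}
```

The design choice this mimics is the one IBM describes: a developer building on `detector` sees only its inputs and outputs, never the wiring of the two stages inside it.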
From simple chips to robotic jellyfish
IBM has thought up a number of potential devices that could eventually be built with its cognitive chips. One possibility is a robotic tumbleweed, which might roll through a Fukushima-like disaster zone and survey the environment. Another idea is a robotic jellyfish, which would float in the ocean and monitor the wind, water, and temperature to determine when there's risk of a tsunami forming. While traditional systems may be capable of similar tasks, these new devices should in theory be more efficient, because they would look at the data as a whole rather than connecting the dots one at a time.
But IBM's far-out ideas are still just that: far out. Modha thinks that it's all about 10 years away, but he says that he could be wrong. "Headlights can only show you the road up until a certain point," he said. "There are twists and turns. We need to work with them to make it happen."
Carl Franzen contributed to this report.