For most of their history, computers have obeyed human commands with an uncritical, unquestioning diligence. It seemed like an enduring truth of computing that if you fed a machine erroneous information, you would get errors in return. We even coined an aphorism for it — garbage in, garbage out — but somewhere over the past few years, that truth has subtly lapsed.
Just look at your phone’s autocorrect. You feed that thing garbage every single day and it produces reasonably coherent expressions in response. Predictive texting keeps growing in sophistication, with the latest varieties using neighboring words to more accurately estimate the ham-fisted typist’s intended meaning. It’s no overstatement to say that without autocorrect, touchscreen keyboards would never have stood a chance against physical keys — people are just much more precise with the tactile nubs of a real keyboard.
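The idea of neighboring words steering a correction can be sketched in a few lines. This is a minimal toy, not how any real keyboard works: the vocabulary, the bigram counts, and the scoring weight below are all invented for illustration. It just combines a typo's edit distance with how often a candidate word follows the previous word, so the same garbled input can be corrected differently depending on context.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Invented counts of how often each word follows the previous one.
BIGRAMS = {
    ("good", "morning"): 50,
    ("this", "morning"): 30,
    ("this", "meeting"): 40,
}
VOCAB = ["morning", "meeting"]  # hypothetical vocabulary

def correct(previous_word: str, typed: str) -> str:
    """Pick the word that balances typo distance against how
    plausible the candidate is after `previous_word`."""
    def score(candidate: str) -> float:
        distance = edit_distance(typed, candidate)
        frequency = BIGRAMS.get((previous_word, candidate), 0)
        return distance - 0.1 * frequency  # lower is better
    return min(VOCAB, key=score)
```

With this toy model, the garbled input `"meening"` resolves to "morning" after "good" but to "meeting" after "this": the surrounding words, not the keystrokes alone, decide the output.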
It’s the evolution of machine intelligence that has made modern phones possible, and it’s the thing that underpins the most exciting developments for the future. One way to recognize artificial intelligence is by the erosion of that old truism: garbage in no longer guarantees garbage out. Machine learning algorithms are being developed to make cars smart enough to drive themselves; that will require AI capable of taking bad information, such as a partially obscured pedestrian, and making sense of it using further inputs and data.
The great change of the modern age is that we can now feed computers flawed information and they can filter out the impurities and errors, at least to a degree. You still can’t mash your fist against a touchscreen and expect Shakespearean prose to pour out the other end. But the thrilling thing is that, with contextual awareness constantly growing, maybe one day that’s exactly how we’ll communicate: computers will be smart enough to predict what people want to say, and users will simply select the most apt articulation of their message. That is already happening with Google’s Inbox and its Android messaging app, Allo, both of which exhibit a disquieting degree of accuracy in their guesses.
As we move forward, the amount of direct information we need to provide to a computer to get it to do something will diminish to almost nothing. There’s no great magic to any of this; it’s just a matter of computers having so many inputs of data, both from ambient sources and from the historic learning they’ve accumulated over time, that any garbage we might throw their way is no longer enough to trip them up.