Growing up, my mom would take me and my siblings to the library once a week. When I graduated past Curious George and Boxcar Children books, I began to haunt the magazine racks. You could only browse current issues, but you could check out any back issues, and I began to run through months of Macworld, Wired, and Popular Science at a time. I let the tech writing wash over me, and formulated endless ideas of what the future would look like, but there was one techno-vision that stuck out to me most of all: MIT Media Lab's wearable computer group.
Presented to me was a beautiful future, already existing in microcosm. Students, I was now aware, were roaming real hallways somewhere in Massachusetts, wearing real head-mounted displays, connected to real backpacks that carried portable computers. It didn't really matter what any of them were actually doing with their wearable computers; what mattered was that they had them. Wouldn't a glorious future flow naturally from there?
In the decade or so that's passed since then, the wearable computer has continually ridden the crest of a "5 years out" wave, which in consumer electronics means "we have no idea how to do this, but maybe someone will figure it out by then." By comparison, "10 years out" means never, so I was always hopeful. Unfortunately, references to wearable computing have slowly faded since 1999-ish.
Perhaps it was the death of relentless optimism and monetization-be-damned ambition after the dot-com bust. Maybe it was just the simple reality of immature technology — head-mounted displays have always seemed a few ounces too heavy, or a few hundred dollars too expensive. Or maybe it was the 21st century's growing definition of a "computer" as something that's constantly connected to the internet. A live, high-bandwidth internet connection has been very difficult to make portable, much less for 16 hours at a time.
And, of course, there was the rise of the smartphone. The smartphone fills almost every potential application for head-mounted displays, offering glanceable information that's as convenient to pull out as your billfold. As obtrusive as the smartphone feels to many people, it's a far cry from the vision of technology that sci-fi has been offering since the 80s. Why implant a chip in your head, or wear expensive computer goggles and VR gloves constantly, when a tiny little slab of carrier-subsidized technology can solve everything for you?
And yet, here comes Google, which has decided it's time for Project Glass.
Maybe we are in another bubble, and when we finally face reality, Sergey and Larry will have to hide their toys away from the shareholders and get back to optimizing AdSense. But in the meantime, I think it's worth looking at how we got here.
History
My history lesson is presented in two parts:
1. This awesome video of Terminator heads-up displays
Here's how to spot an icon of wearable computing: Google a name, and if you find more photos of the man wearing a computer than without one, you've found a winner. My absolute heroes in this regard are Thad Starner and Steve Mann, who have roughly a half century of wearing between them.
While much of Starner's research has been on the practical, immediate applications of wearable computers, Mann always trended more philosophical, and his ideas have aged better. For instance, the MIT definition of a wearable computer is indeed a little smartphone-redundant, but Mann offered a vision of something very different and, I think, something better.
"Traditional computing paradigms are based on the notion that computing is the primary task," says Mann. "Wearable computing, however, is based on the idea that computing is NOT the primary task."
Mann offered a few "operational modes" for this sort of wearable: augmentation, constancy, and mediation. He also insisted that, above all, the device be personal.
“The computer should serve to augment the intellect, or augment the senses.”
- Mann
Augmentation
Taking speed, trajectory, and physics calculations out of the subconscious and into the conscious makes us something slightly more than human
One of the first applications of a "wearable computer" was beating roulette. Nerds have been bringing computers into casinos since as early as the 60s, culminating in the 80s with serious wins at roulette. Thomas Bass, who wrote the definitive piece on wearables for Wired in 1998, smuggled a shoe computer to a table, gaining a 44% advantage over the house.
Predicting roulette is using a computer for something only a computer could do. It's taking in the same visuals as a human, but then running complicated calculations to predict the deceleration of the wheel, the movement of the ball, and the eventual landing spot. "The problem is similar to landing a spaceship on the Moon," writes Bass, "except all the calculations have to be done within the few seconds between the launch of the game and the croupier's call to place your bets."
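Just to make the flavor of that calculation concrete, here's a toy sketch in Python. It is my own simplification, not the Eudaemonic group's actual algorithm: assume the ball decelerates at a constant rate, estimate how far it and the wheel travel before the ball falls off the rim, and name the pocket it lands nearest. Every parameter here is illustrative.

```python
import math

# Toy model only: assumes constant deceleration for the ball and a constant
# angular velocity for the wheel. The real casino systems were far messier.
def predict_pocket(ball_omega, ball_decel, drop_omega,
                   wheel_omega, ball_angle, wheel_angle, n_pockets=38):
    # Time until the ball slows to the speed at which it falls off the rim.
    t_drop = (ball_omega - drop_omega) / ball_decel
    # Angle the ball covers while decelerating (w*t - 0.5*a*t^2).
    ball_travel = ball_omega * t_drop - 0.5 * ball_decel * t_drop**2
    # Angle the wheel covers in the same time (it spins the opposite way).
    wheel_travel = wheel_omega * t_drop
    # Ball position relative to the wheel head at the moment it drops.
    relative = (ball_angle + ball_travel) - (wheel_angle - wheel_travel)
    return int((relative % (2 * math.pi)) / (2 * math.pi) * n_pockets)

# Example: ball at 5 rad/s slowing by 0.3 rad/s^2, dropping off the rim at
# 2 rad/s, wheel turning at 1 rad/s the other way.
print(predict_pocket(5.0, 0.3, 2.0, 1.0, 0.0, 0.0))
```

A real system also has to estimate those rates from a few noisy timing clicks made under the table, which is where most of the hard work went.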
This is "Terminator vision" come to life. Arnold's computer-aided lens helps him size up potential enemies, compute trajectories, read lips, and in general perceive the world around him more fully than a regular human can. It's just a pity about that Virtual Boy-style display he has to deal with.
These powers are extrapolated even further in Vernor Vinge's 2006 sci-fi novel, Rainbows End. A kid with a wearable downloads an engineering program to scan a machine and extrapolate its function, all in a matter of seconds. He's even able to offer suggestions for tweaking the machine's operation with his newfound computer-aided intuition.
In the 90s, Thad Starner developed a wearable system to translate American Sign Language into English phrases. Machine lip reading was possible as far back as 1967; HAL's similar feat in the 1968 film 2001: A Space Odyssey was considered sci-fi, but it was in fact yesterday's news. Other wearables can use sound or vibrations to help guide the blind through complicated terrain, and simple informational overlays of points of interest for clueless tourists are practically routine at this point.
All this harkens back to the origins of augmented reality. A couple of Boeing researchers created heads-up displays for engineers to overlay schematics on top of their work, reducing the distraction of double-checking a blueprint every few minutes. Despite the concept's age, the paradigm still has an advantage over the smartphone: something that floats in front of you is part of your senses; something you have to check and re-check is just a computer.
“The computer runs continuously, and is 'always ready' to interact with the user.”
- Mann
Constancy
Everybody talks about the interruptions that smartphones present, but aside from the occasional call for self-control, little seems to be done about them. Every few weeks there's a new app on your phone begging to litter your notification tray.
It might surprise you to learn that much of the wearables research in the 90s was about making your computer less obtrusive, not more. And the solution seems counter-intuitive: make the computer more present in your life, to the point where it fades away and becomes a part of you. The shift from "user" to "cyborg" lowers the barriers of operation, as the interface disappears completely.
Richard DeVaul, a major researcher in low-impact wearable UI, actually works at Google X, presumably on Project Glass. In the early 2000s he developed the "Forget-Me-Not" memory glasses, with "subliminal cueing" to prompt a user's memory using quick-flashing LEDs. He also worked on more traditional screen overlays like context-aware shopping lists and appointment reminders.
"I can improve your performance on a memory recall task by a factor of about 63% without distracting you," claimed DeVaul, "in fact without you being aware that I'm doing anything at all." That's a bold promise, but if it's true I hope DeVaul brought a little bit of that subliminal magic to his Google work.
Meanwhile, Thad Starner took his own approach to always-with-you memory augmentation. He and a colleague built a hyped-up Emacs setup, called the "Remembrance Agent," to constantly search his documents and emails as he took notes with a one-handed Twiddler keyboard. He could pull up information related to whatever he was doing in the real world, letting him resume conversations years later and act as a sounding board for other deep thinkers. He actually got fast enough with his wearable that he could start speaking a sentence before he knew where it would go, adding in facts from the Remembrance Agent as fast as he could supply them from his own meat brain.
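The original Remembrance Agent ran inside Emacs; here's a rough sketch of the same idea in Python, my own simplification rather than Starner's code. It indexes a pile of old notes and, for whatever you're typing right now, surfaces the most similar ones using nothing fancier than bag-of-words cosine similarity.

```python
import math
import re
from collections import Counter

def _vectorize(text):
    # Crude bag-of-words vector: lowercase word counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class RemembranceAgent:
    """Continuously suggests old notes related to what you're typing now."""

    def __init__(self, notes):
        self.notes = [(n, _vectorize(n)) for n in notes]

    def suggest(self, current_text, top_k=3):
        query = _vectorize(current_text)
        ranked = sorted(self.notes, key=lambda nv: _cosine(query, nv[1]),
                        reverse=True)
        return [note for note, _ in ranked[:top_k]]

agent = RemembranceAgent([
    "Met Steve at CHI, talked about sign language recognition",
    "Roulette prediction needs better wheel timing data",
    "Twiddler practice: up to 60 wpm with chords",
])
print(agent.suggest("drafting email about the sign language demo", top_k=1))
```

The real agent watched your editing buffer continuously and ranked whole mail archives and papers, but the loop is the same: the context you're already producing is the query.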

Unfortunately, we can't all be MIT Ph.D. types with 70 words per minute per hand. Much less understand Emacs. But I think what Thad proves is that there's a wide gulf between "wait a minute, let me look that up" on a phone and constant information access on your eyeball.
But constancy isn't just about helping your present memory. At Microsoft Research, Gordon Bell and Jim Gemmell have been refining the practice of "lifelogging" for over a decade. Bell records everything he does, both virtual and IRL, and Gemmell has been building the software that helps make sense of it.
The idea is to improve long term recollection of events, but also to improve reflection on events:
"Instead of the rosy-colored concept of the time I spent with my daughter," says Gemmell, "I now have a record and can look at it and say, 'Oh jeez, could I have handled that better?'"
Devices like Nike+ and Fitbit let us reflect on our history of fitness, both over a day and over years of activity. Mix in cameras, sound recording, location data, a day's worth of internet communication, and manually entered metadata, and you could have a very interesting (or very disturbing) record of how you live your life.
More recently, Abigail Sellen and Steve Whittaker have actually argued against recording too much. Useful information is better than exhaustive information, because the term "digital memories" is a bit of a misnomer: what matters is which actual, brain-stored memories your digital records can spark. A computer that's always with you knows enough to serve you only what you need, only when you need it. "Synergy not substitution," as Sellen and Whittaker put it.
For now, Thad Starner's notes on a conversation are better in the moment than Gordon Bell's two-hour playback of it. A "constant" computer doesn't require you to zone out and "compute"; instead, by virtue of its intimacy, it can work with the minimum amount of input and offer the minimum amount of output. A close friend, instead of a stranger.
“In the same way ordinary clothing prevents others from seeing our naked bodies, the wearable computer may, for example, serve as an intermediary for interacting with untrusted systems.”
- Mann
Mediation
You could even block out someone's face, taking the concept of "ignoring" to a whole new level
Wearable computers can help you engage the world with augmented senses and a perfect memory, but they can also help you perceive a world entirely of your own choosing. Steve Mann calls this "diminished reality": altering your visual and audio perception of what's around you. The line between augmented and diminished realities is very thin: do you see the roulette wheel with physics calculations overlaid, or do you only see the physics? When you're in Times Square, do your video goggles offer purchasing options for each of the ads, or do they block out the ads entirely? Does your car HUD overlay directions and road contours on your windshield, or does it block out distractions and replace the cars around you with Mario Kart contestants?
These aren't just theoretical possibilities. Programs can already "delete" on-camera objects in real time, and if you'd like, you can experience the world as a video game with yourself as an avatar. Steve Mann would often simply flip his vision sideways, just to keep things fresh, and when he rode his bike he'd set one "eye" forward and the other behind him, which sounds horribly dangerous.
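If you're curious how crude the "delete an object" trick can be at its simplest, here's a sketch using OpenCV (the opencv-python package is assumed). It's my own minimal example, not any of the research systems mentioned here: mask a fixed rectangle in each webcam frame and let inpainting fill it with surrounding texture. A real diminished-reality system would track the ad or face instead of using hard-coded coordinates.

```python
import cv2
import numpy as np

def diminish(frame, box):
    """Crudely 'delete' a rectangular region by inpainting over it
    with surrounding pixels. box = (x, y, width, height)."""
    x, y, w, h = box
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255              # mark the region to erase
    return cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

# Example: blank out a hypothetical billboard region in a webcam stream.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("diminished reality", diminish(frame, (100, 100, 200, 150)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```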
Your wearable can also be a "mediator" between you and others, making the visualize-them-in-their-underwear speaking technique a reality. Vernor Vinge offers endless examples of this sort of mediation in Rainbows End: in a world where everybody is "wearing," a mediated reality can be shared, meaning each party can define how they appear to the other, and a third party can define the scenery, with the real and virtual becoming increasingly hard to differentiate.
For Steve Mann, mediation wasn't just about altering his own perception, but about pushing back against an overbearing society. His early experiments included what he called "sousveillance," a form of counter-surveillance where he'd walk into a store and live stream his experience (over unused TV frequencies, relayed to a web server). He wasn't subtle about it either, wanting everybody in the store to know that, for their own protection, he was offering an alternative to the store's security cameras all around them. The Occupy Wall Street protesters using their phones to record and broadcast their own activities and the activities of the authorities through Ustream might not have been thinking of Mann, but he beat them to the punch by a couple of decades.

Photo by Sam Ogden

“This technology, if you give it a sinister twist, could bring totalitarian control beyond anything Orwell imagined. I want smart clothing — owned, operated, and controlled by individual wearers.”
- Mann
Personal

"If you wish, your wearable computer could whisper in your ear..."
Implicit in all of Mann's ideas about the wearable computer is that it be a very personal object, and that when your knowledge and perceptions of reality are shared, they're shared from person to person. By default, the computer has to be private, because it knows too much.
The system Gordon Bell has been honing at Microsoft, for instance, has been capturing nearly every moment of his waking life — and your dad just got a Fitbit for Christmas — but the purpose isn't voyeurism.
"What we're doing is not really aimed at putting your whole life on Facebook or MySpace or wherever," Bell says. "This is a memory aid and a record aid, something you utilize at a personal level."
When a computer is truly personal — personal in the way your underwear is personal, not the way your Compaq desktop was personal — it has the opportunity to understand your mood, emotions, health, and social interactions.
At MIT they use the word "affective" to describe this. Nicholas Negroponte explained the need in a 1996 Wired editorial:
Isn't it irritating to talk to someone as his attention drifts off? Yet all computer programs ignore such matters. They babble on as if the users were in a magical state of attentiveness.
In his Wired piece, Bass spends time with Jennifer Healey, another Media Lab graduate student working in the Affective Computing Group. She lived life with electrodes on her face to track when she was smiling and frowning, rings to measure the conductivity of her skin (sweat), and a heart rate monitor. A Palm Pilot tracked the data, and an earpiece could let her know how she was doing.
"Your wearable computer could whisper in your ear," writes Negroponte, "perhaps after playing for a few too many hours with a few too many kids, 'Patience, the birthday party is almost over.'"
Richard DeVaul's Forget-Me-Not glasses could let him know when he had spent too long staring at a computer. MIT's work on a Social Emotional Sensing Toolkit turns these perceptions outward, allowing people with Autism Spectrum Disorders to understand the nonverbal cues of people they talk with.
And again, nobody wants this data on Facebook. There was a concept in the 90s of an "innernet," in addition to the intranet and internet. In 1995, Tom Zimmerman was working on a "Body Net" that used the conductivity of your skin to pass data between different sensors on your body, like the ones Jennifer Healey wore. Since the transmission requires physical contact, it's less likely to get hijacked, though at this point Bluetooth might make more sense. But the cloud? I doubt it belongs in the conversation.

Mann and the 90s

It's easy to look at the wearable computing proponents of the 90s and chuckle. Fanny packs stuffed with wires, backpacks with hacked-up laptops, and those hilariously impractical head mounted displays. How could anyone take Mann seriously when he had a CRT obscuring half his face?
But while the technology seems quaint, I don't think the philosophies are, even if they were the product of a different time.
In the 90s, a band was a "sellout" if it sold a song for a commercial; now that's a success. Fighting the "mainstream" and "corporations" was still a vital movement, even if its 1970s punk-charged heyday was over. The Occupy Wall Street protests look anemic in comparison. It's now an unquestioned goal for most people in Silicon Valley to build the Next Big Startup and flip it for a billion dollars, but those priorities "sucked" to a counter-culture Gen Xer. There was a time when you built an open source operating system or browser in your free time to stick it to big-bad Microsoft; now you're more likely to be a Google employee.
Maybe we're less introspective now, or better entertained. Privacy advocates, Facebook-quitters, and open source holdouts cry from the rooftops, but they're reactionaries, not champions. Much of what passes for innovation these days is enclosed inside a very small space: a better way to check in, or upload a photo, or manage your friend list. It doesn't question the 90s-maligned "status quo," but instead perpetuates it, and "fight the power" doesn't sound as fun when it just means joining another Facebook group to protest the latest Facebook redesign.
The wearable computer was meant by its early builders to be a different kind of computer, not just a computer in a different place. A computer interrupts you, but a wearable watches your mood; a computer needs your active attention, but a wearable is the one paying attention; a computer needs direct input, but a wearable takes care of its own perceptions. When you're wearing a computer, you can be fully human, and then some, but when you're using a computer, you have to reduce yourself to a set of instructions, subject to the machine's interpretation.
It's more than a question of use models; a machine carries an implicit philosophy, shaped by the intentions of its creator. Thomas Bass described the limits of translator wearables designed for NATO troops going into Bosnia: "For 'I love you,' it says, 'I do not know.' For 'You are beautiful,' it says, 'Put the pieces together and tighten them.'"
Steve Mann wanted the very nature of wearable computing to run counter to anything that could be used by the military, or as a work "uniform," or as anything controlled by a centralized entity. He called it an "existential" computer. Instead of perpetuating consolidated power and making you a literal pawn in someone else's game — a dot on HQ's map, with a gun (or a wrench) in your hand and a to-do list in your eyeball — the computer would empower the individual.
Blocked advertisements, counter-surveillance, and peer-shared realities could "reclaim" the public space, and by their nature they'd refute a monoculture that hadn't yet been rescued by Tumblr and YouTube. It was a new way forward.
"If you wish, your wearable computer could whisper in your ear..."

Google's approach
My worry is that Google is taking 21st century smartphone thinking and choking out Steve Mann's skepticism and Thad Starner's practicality. Even with Richard DeVaul on Google's payroll, I don't see the fingerprints of his prior work on what we've been shown of Project Glass. He was working on mnemonics, but all I see is Google+.
Of course, until Google showed up, these efforts were mostly academic. The hardware companies made their money selling to the military and to corporations, while Starner's and Mann's hacked-together creations were explorations into possibilities, not prototype products. In fact, their failure to get wearables into the consumer mainstream seems directly related to their lack of interest in building anything a corporation could make profitable, or a consumer would find affordable.
A wearable device that's completely reliant on the internet misses many of the rich opportunities of wearables
It makes sense that Google would have to tweak the formula to make a buck, but did it go too far? I think Google thinks of "augmentation" as the internet, not as something that can be performed by a lone computer with only your wearable's self-gleaned perception of its surroundings to go on. Google is drawing a line in the sand, proclaiming that the internet is our only hope for technological betterment. Instead of Project Glass mediating and augmenting your connection with reality, it mediates and augments your connection to Google services.
"The compelling use case for us is the sharing experience," Google X lead Sebastian Thrun told Charlie Rose in his debut interview about Project Glass. In fact, his one use of his wearable Glass prototype during his chat was to take a picture of Charlie Rose and upload it to Google+. Halfway through the conversation, his eyepiece glowing blue, Thrun reached down and turned off Glass entirely.


It's not like Google is the only one thinking in an "if we could just connect this activity to the internet" sort of way. Text-to-speech is performed in the cloud by both Google and Apple; Instagram doesn't put filters on your local photo collection, just your uploads; and Facebook doesn't even have a conception of a friend group without an internet connection. Nike's "Fuel" promises continual enhancements in the way the cloud crunches your data, not in any sort of tweaks to your hardware sensors.
But Google wants to take this highly specific concept of what the future can offer and put it on the bridge of your nose, so it's Google's responsibility to finally think long and hard about what that means. If you've read You Are Not A Gadget, or browsed the End Times section of your local bookstore lately, you know that many of the early paradigms of computing, formed in the 70s and 80s, have stuck with us to the present day, and are bemoaned by an endless parade of authors. Google has a chance to do some things from scratch; will it try to right the wrongs?
Unfortunately, I'm not sure Google is on the right track yet. In the video of what Project Glass "could" offer (which you might take as a best-case scenario, or just one of many paths), I think Glass misses out on what's happening right in front of it, because it's too busy checking what's happening in the cloud, or sharing what's happening to the cloud.
Wrap-up

Photo by Gary Meek
Look, I'll confess, I'm far from objective when it comes to wearables: when I first heard Google was going to make a wearable computer, I flipped out, ready to empty my bank account at the nearest Best Buy. I found myself instantly evangelizing the concept to confused doubters, and before I even watched Google's video I had 100 "must-have" features in mind for the final product.
I think we've been missing out on what wearables could offer for too long. Maybe we were afraid of how we'd look, or how we'd use them — or how they'd use us. It can be hard to separate the idea of wearable computers from kneejerk sci-fi or dorky computer labs. A wearable computer isn't going to hack into your brain, and it's not meant to separate you from society. You might become a cyborg, but you don't have to be The Borg.
Once upon a time, in MIT's storied hallways, young men and women envisioned a future where the computer was subtler. Where it "got out of your way," as Google puts it. The idea was that a computer would no longer call you away from the real world and into its hunched ergonomics; instead it would make you a better part of that real world, head up and smarter than ever.
Please Google, I'm begging you: don't just put Google+ in my face.
All images courtesy of Steve Mann and Google
I want my eventual, inevitable wearable to be a lot more like Terminator, a lot less like a pretty web browser