Meet the godfather of wearables

It all started with beavers. When Alex Pentland was three years into his undergraduate degree at the University of Michigan, in 1973, he worked part-time as a computer programmer for NASA’s Environmental Research Institute. One of his first tasks — part of a larger environmental-monitoring project — was to develop a method for counting Canadian beavers from outer space. There was just one problem: existing satellites were crude, and beavers are small. “What beavers do is they create ponds,” he recalls of his eventual solution, “and you can count the number of beavers by the number of ponds. You’re watching the lifestyle, and you get an indirect measure.”

The beavers were soon accounted for, but Pentland’s fascination with the underlying methodology had taken root. Would it be possible, the 21-year-old wondered, to use the same approach to understand people and societies, using sensors to unravel complex social behavior? And in so doing, could we find a way to improve our collective intelligence — to create, in a sense, a world more suited to human needs, where cities and businesses alike were designed using objective data to maximize our happiness and productivity?

Pentland would spend the next four decades exploring those very questions, finding ways to observe people and their patterns from a computer rather than outer space. You didn’t need to be physically or emotionally close to someone, Pentland reasoned, to grasp the essence of their thoughts, actions, or motivations. “If you think about people across the room talking, you can tell a lot,” he points out. “He’s attracted to her, she’s pissed at him. If you could shadow someone for a day you’d know a huge amount about them and you’d never have to listen to a word.”

Being more fully immersed, as a matter of fact — joining the proverbial beavers on their romps — could actually hurt more than help: your newly formed prejudices would sway what the objective sensory data was telling you. All you really needed, Pentland concluded as he stared at those satellite screens, was a sensor that moved with each individual and took in her environment, from her physiological signs to her vocal signals to the sights and sounds that surrounded her throughout the day.

By the 21st century, Pentland would be known as one of the most important leaders in wearable technology, having spearheaded or inspired the development of everything from Google Glass to fitness trackers. But it was in that initial, beaver-inspired intuition that the seed for wearable technology as we know it today was firmly planted.

Sandy (no one seems to call him Alex) meets me in his office at MIT’s Human Dynamics Laboratory, a group he founded some three decades ago and has led ever since. The nickname, he explains, was a way to differentiate him from Alex, Sr., his father. And it suits him, at least today, with his outfit a perfect match for the moniker: wet graying hair (it’s been raining), a pewter turtleneck, sneakers. He looks younger than his 62 years, taking time out each afternoon for a bout of exercise, and, on the weekends when he’s lucky, going for a hike or a ski trip with his wife and two sons. Their interests, he’s proud to say, correspond closely with his own. His lounging demeanor — back nestled into the corner of a couch, legs outstretched, one arm grazing the sofa’s back — is more reminiscent of an affable uncle than the pioneering scientist informally known as the godfather of wearable technology.

It’s a title he nearly missed out on: in 1973, shortly before his beaver-inspired breakthrough, Pentland dropped out of college and worked as a truck driver — a brief fork in his career path driven by frustrations with what he perceived as overly rigid degree requirements. He returned to finish his undergraduate degree, then wound up pursuing a PhD in MIT’s psychology department on a lark: his girlfriend was moving to Boston, so he applied to two schools (MIT and Harvard) in the area. Rather than craft an original submission, he photocopied his college application forms from the University of Michigan. Naturally, MIT said yes.

The technology to accomplish Pentland’s vision remained pure science fiction

Pentland was an odd arrival on campus in 1976. Where his colleagues mostly came from the worlds of computer science and technology, he brought with him an interest in human psychology. Artificial intelligence and computational modeling were the hot topics of the era, and, at the time, AI and psychology were represented by the same department. Pentland wanted to explore those intersections, but he had also studied social psychology — the way that people interact with and perceive one another. Late at night, he would work out his theories on the top floor of the AI lab, surrounded by some of the world’s first robots, the earliest Lisp machines (considered to be among the first single-user workstations), and laser printer prototypes: in short, technology that existed nowhere else and that most of the world could only dream about. So why couldn’t he do the same? Why not create something the world had never seen: computers that understood what it meant to be human? Pentland worked toward that goal throughout his PhD, and in 1986, after leaving MIT for a brief professorial stint at Stanford, he would return to the school and name his first lab in honor of that goal: Looking at People.

Today, wearable technology seems like a natural fit at MIT. But 30 years ago, it wasn’t an obvious direction for a newly minted researcher — computer scientists worked with computers; social scientists worked with people. "Applications like facial recognition or user interface just wasn’t something they did," Pentland says. "Wearable technology is inherently social, where computer technology isn’t. When the Lab opened, mainline computer science people thought anything including social aspects was pretty far afield."

Convincing them otherwise proved a hard sell. Research funding, for one, was hard to come by; from the start, normal academic channels were closed off — why fund such quixotic research when more serious pursuits were in the works? Pentland instead had to appeal to industry sources like FedEx and Ericsson — anyone whose business, in his estimation, could benefit from a wearables boost. Younger recruits "got it," but to higher-ups, his approach remained "flakey" at best, he recalls. What’s more, the technology to accomplish Pentland’s vision remained pure science fiction. You could use space satellites for beavers; for humans, you needed a device that would seamlessly weave into everyday life — "on you, in your glasses, on your clothes, in your exact position," Pentland says. But in the 1980s, not only was there no wireless; there was no internet. Computers were, in Pentland’s estimation, "the size of a pizza" — actually, he corrects himself, gesturing expansively with his hands, "even bigger than that," as though it were not just the pizza, but the oven, too.

Pentland might be credited as the godfather of wearable tech, but he does have a few predecessors: more than a decade before he arrived at MIT, mathematician Edward Thorp and information theorist Claude Shannon devised an intricate contraption with the admirable goal of cheating at the roulette table. The device, which was the size of a cigarette case and captured speed data on both the wheel and the ball, relied on two switches in the wearer’s shoes: one press turned on the computer, the other press initiated the timing. A musical tone would sound in the bettor’s ear to signal when the ball had three or four revolutions left — he would (naturally) be wearing a hearing-aid-like device, attached to the computer by wires camouflaged to match his skin and hair.

Though Thorp and Shannon’s invention was ingenious, it remained unwieldy, capable of performing only a single task. It would take much more to make wearable technology both widely functional and widely usable. And that "more" came from the first place dedicated exclusively to the creation of wearables: the Wearable Computing Project, inaugurated by Pentland upon returning to MIT in 1986, and then formally launched as its own entity in 1998 under the auspices of Pentland’s lab.

The first wearable prototypes that looked anything like those of today emerged from the lab in the early 1990s. And by 1998, Pentland’s "wearables closet" had grown, he recalls, to include "glasses with a private, full-resolution computer display; a health monitor in a watch that records my temperature, heart rate and blood pressure; a computer-in-a-belt with a wireless internet connection; a lapel pin that doubles as a camera and microphone; and a touchpad or keyboard literally sewn into a jacket."

"A lapel pin that doubles as a camera and microphone; and a touchpad or keyboard literally sewn into a jacket."

Those creations were devised by Pentland alongside a rotating roster of some 20 students in a lab that looked mostly as it does today: an open space with desks that seem almost an afterthought nestled among various scale models, walls that aren’t where they’re supposed to be, gadgets and hardware whose functions I grasp only hazily, if at all.

That, and students waltzing around with hardware on their heads and bodies: the earliest wearable prototypes, after all, weren’t quite as wearable as they’ve since become. Those fitness bracelets that monitor everything from your heart rate to your sleep cycle? They were once incorporated into watches as ugly as they were limited in functionality. The identity-badge-embedded sensors that Pentland and his students have used to change how businesses — from call centers to investment banks — organize their workspaces and run their meetings? These were initially possible only with a wearable vest. (An early prototype hangs in the lab. It looks much like a fisherman’s getup, except with wires that connect to intricate contraptions rather than tackle.) Even the smartphone — a type of wearable itself, Pentland points out, since we carry it and access it as frequently as we would a watch (he pats his own pocket, where his invariably resides) — was built from scratch here in the early 2000s, back when smartphones didn’t yet exist in the outside world.

And Google Glass? Suffice to say that its earliest developers, led by one of Pentland’s star students, Thad Starner, went by the nickname of Cyborgs: their head-mounted display covered over half their faces, with wires running alongside their bodies to a computer and a one-handed keyboard. They called their pet wearable Lizzy — a reference to the original nickname for the Model T Ford, "Tin Lizzy." Lizzy would eventually evolve into Google Glass, when Larry Page and Sergey Brin tapped Starner, in 2010, to help lead their new wearable project.

The Human Dynamics Lab now boasts some 50 graduate alumni. And Pentland himself is no longer a dark horse — he’s more of a campus darling. Everyone I talked to referred to Pentland in the superlative: a "ridiculous" adviser, according to Thad Starner (in the best sense of the word); not just a brilliant researcher but a brilliant mentor, according to former student Ben Waber. And Pentland’s energy — especially when it comes to students — is unflagging. "I don’t think he sleeps a ton," Waber confesses.

The dedication shows. Of Pentland’s former students, roughly half are tenured faculty at various institutions, and the rest are working in industry, either leading research groups or heading up their own companies. And the overwhelming majority of those companies were co-founded, sponsored, or advised by Pentland — each devoted to bringing a separate facet of wearable tech to the general public.

The latest Pentland project to garner significant attention is the sociometer. It’s a deceptively simple device, roughly the size of a deck of cards, equipped with an accelerometer to measure your movement, a microphone to capture your voice, Bluetooth to detect other sociometers nearby, and an infrared sensor to tell when you’re engaging those nearby people face-to-face. Pentland believes these capabilities can be used nearly everywhere: in medical settings, to determine if someone is depressed or sick; in businesses, where companies can gauge employee happiness and productivity; and in think tanks and entrepreneurial environments, where sociometric data can help badge-wearers maximize personal and team creativity and innovation. Pentland insists that, to mitigate privacy concerns, most sociometers record only voice and speech patterns instead of actual words. He’s been developing the device for close to 15 years, and has already had people wear it for weeks at a time. One study revealed that the sociometer helps discern when someone is bluffing at poker roughly 70 percent of the time; another found that a wearer can determine who will win a negotiation within the first five minutes with 87 percent accuracy; yet another concluded that one can accurately predict the success of a speed date before the participants themselves do.

Pentland has now amassed over 100 metrics that often tell you more about a person than their actual words

Over the last decade, sociometric data has advanced far beyond these initial demonstrations. Using data collected across dozens of human studies, both in the lab and in the field, Pentland has now amassed over 100 metrics that often tell you more about a person than their actual words. In voice and posture, you can read the signs of depression and happiness, engagement and boredom. In the frequency and nature of an interaction, you can decode signals of job satisfaction and productivity. You can tell when a group is likely to innovate and when it's apt to become mired in inertia. You can, it seems, even predict the onset of Parkinson’s.

Pentland’s newest spin-off, Sociometric Solutions, is now developing the sociometer even further. As of 2013, the device has been used by dozens of research groups and companies, including members of the Fortune 1000. Last year, MIT alum Waber (who’s leading the venture) and Pentland conducted a study alongside a research group at Cornell University to see if they could push the sociometer’s capabilities to the next level. By analyzing nothing more than the tone of someone’s voice, the team accurately predicted the level of cortisol in their saliva — indicating how stressed they were at that moment, and how stressed they were likely to be in the near future. "From an evolutionary point of view, it makes a huge amount of sense," Pentland says. "You can imagine back in the day, when you were about to go hunt a mammoth, it would be good to know who was feeling good and who was feeling sick and who was enthusiastic and who wasn’t. So, we developed these signals, not language but something older than language." And back come the beavers. "It’s like watching beavers from outer space, like Jane Goodall watching gorillas. You observe from a distance."

At last, Pentland can not only count his beavers from space, but predict where they will go, how they will interact, what might happen to them in the future — and how all of those outcomes can, in turn, be improved. In 1998, he predicted that "[wearables] can extend one’s senses, improve memory, aid the wearer’s social life and even help him or her stay calm and collected." With the sociometer, he envisions that they’ll do even more than that: the wearables of the near future could improve collective intelligence, the way society functions on the broadest level.

But a future where wearable technology yields objective data that improves our lives and our society isn’t a guarantee. Just as Pentland’s work on wearables was met with distrust among his MIT colleagues in its earliest days, wearables continue to be met with skepticism, much of it deserved, today. Businesses ranging from a Seattle cafe to a Vegas strip club are banning devices like Google Glass entirely. In April, yet another hapless Glass wearer — a 20-year-old journalist from Business Insider walking in San Francisco’s Mission district — had the device ripped from his face and smashed on the asphalt.

In part, such unease can be explained by an underlying distrust of technology intruding on human interaction. We look at our phones more than at each other, check our Twitter feeds rather than engage with the people in front of us. Aren’t Google Glass and the sociometer just steps in that same direction, devices that limit our sociability and harm our intelligence? After all, it’s easy for me to miss what you’re saying if I’m pulling up a reference from your last sentence or distracted by an alarm I’d set earlier. And yes, there is the fear of memory erosion — the so-called Google effect. If you can constantly access everything, why remember anything?

According to Pentland, though, those arguments ignore much of what these devices enable. Tethered mobility might mean more, not less, social grace. You remember everything about your last conversation rather than feeling your way through slipping facts. Rather than grasping to recall someone’s name or where you’ve met, you have those memories at your fingertips. "You can have a much better social life," Pentland points out. "I remember your name, your kids’ names. We’re all happier." Wearables can also ease the multitasking problem that arrived with the smartphone. Rather than pulling out a phone and interrupting your conversation, you don’t even need to avert your gaze. "You can actually pay attention to what you’re doing," Starner, of Google Glass, says. "It doesn’t eliminate multitasking, but it makes it much safer."

The wearables of the near future could improve collective intelligence, the way society functions on the broadest level

As for the memory effect, we’ve always had transactive memory, Pentland points out, offloading certain bits of information onto family or colleagues rather than having to remember it all ourselves. That ability frees us to engage in other mental tasks, and the facts that do emerge are more likely to be accurate. As Starner and I speak, he seamlessly uses Glass to check a date for one of his studies, accurately reading back a fact that might otherwise have been hazy. (Starner unwittingly illustrates the opposite when I mention his mentor’s early beaver project. "I thought it was ducks," he muses. "Beavers makes more sense, I guess.")

But the problem of privacy remains — and, as Pentland is first to point out, it’s a serious one. "Some of the scenarios are downright scary," he says. "It doesn’t seem at all science-fiction-like to me. It’s very possible." Any technology that can capture and transmit your surroundings mid-stride can be abused. We’re more attuned than ever to the data we leave online — digital bread crumbs, Pentland calls them — but the data we give to a sociometer or a device like Google Glass is potentially far more damaging. Pentland sees the benefits of his technology as limitless; but so too are the possibilities for abuse.

Glass opponents are already worried about images being captured unwittingly or confidential information being revealed. But the dilemma goes far deeper: someone with malicious intent could harness reams of the high-level signals we emit — real-time behavioral patterns, like pulse rate or voice or simple movement patterns — which might be more troubling than anyone having access to specifics. Animated only a second earlier, when he’d been pointing in excitement to the brand-new Google Glass on one of his office shelves, Pentland grows serious. He knits his fingers together in front of his chest and leans forward on the couch, as if to underscore the importance of what he’s about to impart. His eyebrows rise. His voice falls and slows. "The thing is, I can read most of your life from your metadata," Pentland says. "And what’s worse, I can read your metadata from the people you interact with. I don’t have to see you at all." He continues, "People are upset about privacy, but in one sense they are insufficiently upset because they don’t really understand what’s at risk. They are looking only at the short term." And to him, there is only one viable answer to these potential risks: "You’re going to control your own data." He sees the future as one where individuals make active sharing decisions, knowing precisely when, how, and by whom their data will be used. "That’s the most important thing, control of the data," he reflects. "It has to be done correctly. Otherwise you end up with something like the Stasi."

"It doesn't seem at all science-fiction-like to me. It's very possible."

But the capabilities Pentland has described are enough to make one worry. Even if we think we’re controlling our data, how can we ever know? If our books can disappear off our Kindles in a matter of seconds, if we can swiftly find ourselves locked out of personal email accounts, if we can wake up to discover that the NSA has been able to listen to our calls for years, how can we ever trust that the kind of data collected by Pentland’s wearables — the sociometer in particular — will be kept safe?

Pentland sighs. It’s a valid concern. And yes, I should worry — a lot. But with close supervision, he believes, and a strict set of guidelines going forward, we can avoid many of the worst-case scenarios. We can establish rules before someone breaks them, be proactive instead of reactive — Pentland currently works with the World Economic Forum to advise some of the world’s most important political and business leaders on such regulations. His vision of the future: individuals as data freelancers, who can plug into or out of the network at will, and can share or not, as they choose. "It doesn’t have to be frightening," he says. "It can be empowering."



Photography by Scott Brauer
Maria Konnikova is a contributing writer for The New Yorker online