The premise of facial recognition app NameTag could have come from any number of science fiction stories. Start a conversation at a cocktail party. Look deep into the eyes of a stranger from behind a pair of glasses, and take a picture. Flicking your eyes away, check his Facebook account. His hobbies. His criminal record. Start a conversation about your shared love of David Lynch, or escape to the bar. The internet will tell you what to do. "With NameTag, your photo shares you," reads the site. "Don’t be a stranger."

Face recognition technology has been under development since the 1960s, and its use has expanded in the past decade, accelerated by the September 11th terrorist attacks. Even before the attacks, security staff at the 2001 Super Bowl drew public attention when they scanned visitors’ faces to find known criminals (Time magazine dubbed it the "Snooper Bowl"). These were top-down, often covert forms of surveillance based specifically on law-enforcement databases.


Google Glass face recognition offers something different. Arriving after the rise of social networks and smartphones, it’s the logical culmination of Facebook and Google’s "real name" and single-login policies: not only should you have the same robustly informative identity across the web, you should take that identity back into the real world with you. And, at least for now, people hate it. Articles that didn’t outright call NameTag "creepy" still noted that it "seems to cross some pretty serious privacy boundaries," even when just drawing from publicly available information. Google itself has banned face recognition in official Glass apps while it works out a privacy policy. Ultimately, designer Kevin Alan Tussy delayed the app’s release at the behest of Senator Al Franken (D-MN), who cited the need for better rules around facial recognition software. But the decision hasn’t allayed fears about ubiquitous facial recognition.

"You should be able to walk into the bar and be safe."

"If you’re in a bar, you should be able to walk into the bar and be safe, not have to be immediately identified by someone who could be targeting you," says Eric Schiffer. Schiffer’s company, Reputation Management Consultants, recently launched a service called Anti-Glass with a simple promise: pay a monthly fee of up to $300, and people won’t be able to connect your face with your online presence. It’s an extension of his company’s reputation management services, and while he doesn’t go into details, he promises that "in 90 percent of cases," people using Glass-based face recognition apps won’t be able to identify a subscriber.

It’s hard to say how well Anti-Glass actually works, let alone how many people could justify the cost. But Schiffer is only one of the people working on methods to thwart the technology as a matter of principle or profit. Researchers in Japan have taken a more direct route, fitting a pair of goggles with near-infrared LEDs that confuse cameras. Stop the Cyborgs, a group that opposes not only Glass but big data and ubiquitous recording in general, offers posters promoting "Glass free zones" where the headsets are banned. A tool called TagMeNot generates QR codes that can be worn or placed as stickers. If read, they express the subject’s wish to have the image kept offline or to opt out of face recognition. NameTag offers an opt-out system, but to use it, you’ll first have to create an account tied to a valid social media profile — its privacy features, including a mandatory "information exchange" that tells subjects who has searched for them, all seem predicated on the idea that huge numbers of people will sign up.

Marc Rotenberg, director of the Electronic Privacy Information Center (EPIC), condemns the idea that "privacy survivalism" is the only option. "I don’t think people should have to go around wearing sunglasses or mustaches and masks in order to protect their identity," he says. He says features built in by manufacturers, like the Glass "recording" light that can be disabled by tinkerers, are "also not very practical." Instead, EPIC, along with other companies and organizations, hopes new facial recognition rules can clarify the issue. In February, the National Telecommunications and Information Administration — part of the US Department of Commerce — kicked off a series of meetings to develop a voluntary privacy standard, trying to create something that companies will agree to and privacy advocates will support. But the process won’t be fast. Participants will draft guidelines through June, then reconvene in September to look at feedback, with no final date set for implementation.

A promotional rendering of NameTag, running on a mobile phone. (NameTag)

While the NTIA is still making its rules, questions about social networks and face recognition are not lost on the government. In 2012, several months after Google announced Project Glass, the FTC published a set of best practices for face recognition, urging social networks to "prevent unintended secondary use" of images by making it impossible to scrape them in bulk. It also warned against using facial recognition to identify people a user didn’t already know, unless they had specifically chosen to be identifiable. "Once a person has been identified to a stranger, he or she cannot be un-identified after the fact," said the agency. "A consumer’s face is a persistent identifier that cannot be changed in the way that a consumer could get a new credit card number or delete a tracking cookie."

The developers of face recognition tools, however, argue that the right to be unidentifiable doesn’t trump the real benefits the technology can provide. NameTag’s Tussy describes his product as a way to turn the world back into a small town, and Gifford Hesketh of FaceFirst — a company that develops facial recognition tools for businesses and law enforcement — uses the same metaphor. Before mass urbanization, the default model was places where "everybody knows each other and there’s probably very little privacy," he says. "You drive down the street and somebody knows where you are at any given point in time." New communications technology could be bringing that world back, for better and worse.

As things like NameTag are meant to show, facial recognition can serve good ends. While some have talked about the potential for stalkers, Tussy points to tools that identify violent criminals or sex offenders (another app, "CreepFace," is under development specifically for this purpose). The ideal outcome for NameTag is a less awkward version of the Bump business card exchange, a way for people who have already created profiles to share contact information, a resume, or just random trivia about themselves. You could still spin a dystopian scenario out of that, but it would at least be a consensual dystopia.

"We have a very positive mission here."

Hesketh bristles at the idea that FaceFirst is cavalier about the ethics of facial recognition. "We started our company because our goal is actually to improve people’s lives and make the world a better place," he says. "We have a very positive mission here." FaceFirst offers a tool that alerts landlords if a blacklisted person gets into a property, as well as one that can spot people a store has previously identified as shoplifters. By creating a reliable way to prevent future theft without involving the police, "it actually becomes a form of progressive law enforcement," he says. "When they stop these people now, they don’t have them arrested for a petty misdemeanor crime."

Hesketh, mind you, doesn’t think something like Glass can necessarily pin names to passersby with any level of consistency, at least not right now. "In terms of laboratory-controlled accuracy, the recognition, the matching algorithms are extremely accurate," says Hesketh — they’re better than humans at focusing on subtle details that are hard to change with plastic surgery. But that accuracy is dependent on having high-quality, well-lit photographs, particularly for the database of face templates. And the more a database grows, the more complicated finding a match becomes. He cites the Boston Marathon bombing manhunt, where facial recognition couldn’t identify the perpetrators from surveillance footage, as an example of how things can go wrong in real life.

Senator Al Franken (D-MN) has pushed for higher privacy standards on Google Glass. (Flickr / John Taylor)

There are ways around the current limits, and companies like Google and Facebook are working hard to find them. NameTag promises that with a good shot, you’ll find a match "a very high percentage of the time." But successfully creepshotting from afar in real time, the sort of thing Schiffer warns of, seems far from a certainty.

In some ways, both Glass and Anti-Glass remain in the domain of highly politicized science fiction more than fact. Glass is entering its second year in a limited beta-testing period, and Google’s ban means users aren’t going to stumble across facial recognition apps in its store. If you really want to figure out who someone is, you’re probably better off finding some identifying detail or taking a picture and trying to run it through facial recognition software later. What Glass promises (or threatens) is a seamless connection that could erase the barriers between our various selves: the professional on LinkedIn, the comedian on Twitter, the flesh-and-blood person meeting a stranger for the first time.

Hesketh sees many complaints about privacy and social networking as disingenuous or naive. "When you submit your information to Facebook or to Google+ or something like that, you should just expect that other people are going to see it," he says. "But I’m an old computer and computer security guy. So I learned long ago that networks are insecure." Privacy scares about Facebook, after all, rarely prompt people to quit it. But the slow rollout of Glass means that for now, most of us aren’t in its network. We’re just under its microscope, and we’re growing increasingly aware of what it might see.