    This tech generates realistic avatars from a single selfie

    Veterans from DreamWorks and Lucasfilm have their eyes on VR

Today’s high-end virtual reality rigs are really good at tracking the movement of your body, allowing you to play and socialize in a way that feels natural. But so far the big players in VR hardware haven’t come up with a way to capture your face with the same kind of fidelity. Avatar creation in VR has been handled the way it is in video games: the user builds a custom look by dragging and dropping features from a menu.

Today, a startup called Loom is introducing a system that it claims can generate strikingly realistic avatars from a single still image. The tech uses a combination of computer vision, machine learning, and cutting-edge special effects techniques from the film industry. You can check out examples of avatars for Angelina Jolie, Will Smith, Rihanna, Kanye West, and Elon Musk. Each avatar was created from the single still image included in the video.

The company’s co-founder and CEO, Mahesh Ramasubramanian, is a veteran of DreamWorks Animation, where he worked on films like Madagascar 3, Home, and Shrek. Loom’s CTO, Kiran Bhat, helped build Lucasfilm's facial performance capture system and was the R&D lead for facial effects on The Avengers and Pirates of the Caribbean.

"The key to building believable digital characters is to extract the perceptually salient features from a human face in 3D: for instance, Mark Ruffalo's version of the Hulk in The Avengers," said Bhat in a press release. "The new suite of computational algorithms built by Loom will democratize the process of building believable 3D avatars for everyone, a process that was previously expensive and exclusive to Hollywood actors benefiting from a studio infrastructure."

There is a lot this system still can’t do well: hair, teeth, and fine lines in the face, to name a few. All the avatars shown have the same range of expressions, and certain emotions produce odd, artificial tics in the examples shown. Still, it’s a nice jumping-off point: a way to bring a highly customized avatar into a virtual setting with only a small amount of work (one selfie) required of the user. You can check out my avatar here. And no, you don’t get to decide if they add hair.

And Loom is far from the only startup working on this kind of tech. ItSeez3D announced a very similar technology last week. And a team from Pinscreen and the University of Southern California showed off a version powered by a deep neural net. Neither one, however, demonstrated a way to make the avatars move in a convincing way, a feature where Loom’s expertise in visual effects may give it a big advantage.

There is likely serious money to be made by the team that can make this tech work well. Facebook is betting that virtual reality will be the next big computing platform, and it wants to make social interaction in VR feel as lifelike and intimate as it does in the real world. Loom uses similar language to describe its efforts. “The magic is in bringing the avatars to life and making an emotional connection,” said Ramasubramanian. “Using Loom's facial musculature rigs powered by robust image analysis software, our partners can create personalized 3D animated experiences with the same visual fidelity seen in feature films, all from a single image.”