
A Meta prototype lets you build virtual worlds by describing them

‘Let’s go to the beach’

Meta is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them, and CEO Mark Zuckerberg showed off a prototype at a live event today. The proof of concept, called Builder Bot, could eventually draw more people into Meta’s Horizon “metaverse” virtual reality experiences. It could also advance creative AI tech that powers machine-generated art.

In a prerecorded demo video, Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like “let’s go to the beach,” which prompts the bot to create a cartoonish 3D landscape of sand and water around him. (Zuckerberg describes this as “all AI-generated.”) Later commands range from broad requests, like creating an island, to extremely specific ones, like adding altocumulus clouds and, in a joke poking fun at himself, a model of a hydrofoil. They also include playing audio like “tropical music,” which Zuckerberg suggests is coming from a boombox that Builder Bot created, although it could also have been general background sound. The video doesn’t specify whether Builder Bot draws on a limited library of human-created models or whether the AI plays a role in generating the designs.

AI text-to-art tools are increasingly accessible, but mostly 2D

Several AI projects have demonstrated image generation based on text descriptions, including OpenAI’s DALL-E, Nvidia’s GauGAN2, and VQGAN+CLIP, as well as more accessible applications like Dream by Wombo. But these well-known projects involve creating 2D images (sometimes very surreal ones) without interactive components, although some researchers are working on 3D object generation.

As described by Meta and shown in the demo, Builder Bot appears to be using voice input to add 3D objects that users can walk around, and Meta is aiming for more ambitious interactions. “You’ll be able to create nuanced worlds to explore and share experiences with others with just your voice,” Zuckerberg promised during the event keynote. Meta made several other AI announcements during the event, including plans for a universal language translator, a new version of a conversational AI system, and an initiative to build new translation models for languages without large written data sets.

Zuckerberg acknowledged that sophisticated interactivity, including the kinds of usable virtual objects many VR users take for granted, poses major challenges. AI generation can pose unique moderation problems if users ask for offensive content or the AI’s training reproduces human biases and stereotypes about the world. And we don’t know the limits of the current system. So for now, you shouldn’t expect to see Builder Bot pop up in Meta’s social VR platform — but you can get a taste of Meta’s plans for its AI future.

Update 12:50PM ET: Added details about later event announcements from Meta.