A popular app for 3D artists just received an accessible way to experiment with generative AI: Stability AI has released Stability for Blender, an official Stable Diffusion plug-in that brings a suite of generative AI tools to the free 3D creation software. Third-party plug-ins offer similar functionality, but Stability AI’s own implementation will likely be more polished, and the company is promising regular updates.
The add-on lets Blender artists create images from text descriptions directly within the software, just like the Stable Diffusion text-to-image generator. You can also generate images from existing renders, letting you experiment with different styles for a project without completely remodeling the scene you’re working on. Textures can similarly be generated from text prompts alongside reference images, and there’s also a function for creating animations from existing renders. The results for the latter are… questionable, even in Stability’s own examples, but it’s fun to play around with crudely transforming your projects into video.
Stability for Blender is completely free and doesn’t require any additional software or even a dedicated GPU to run. Provided you have the latest version of Blender installed, all you need to get Stable Diffusion running inside it is an internet connection and a Stability API key (which you can get directly from Stability AI). Installing the plug-in is relatively straightforward, and Stability has provided several tutorials walking through its various features.
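The reason no dedicated GPU is needed is that the heavy lifting happens on Stability’s servers: the plug-in sends your prompt and API key to Stability’s hosted API and gets an image back. As a rough illustration, here is a minimal sketch of what that kind of REST request looks like. The endpoint path, engine name, and field names here are assumptions based on Stability AI’s public REST API, not code taken from the plug-in itself.

```python
import json
import os

# Assumed values based on Stability AI's public REST API; the plug-in's
# actual internals may differ.
API_HOST = "https://api.stability.ai"
ENGINE = "stable-diffusion-v1-5"  # engine name is illustrative


def build_text_to_image_request(prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for a text-to-image call."""
    return {
        "url": f"{API_HOST}/v1/generation/{ENGINE}/text-to-image",
        "headers": {
            # The API key from your Stability account authenticates the call.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {"text_prompts": [{"text": prompt}]},
    }


if __name__ == "__main__":
    # Read the key from the environment rather than hard-coding it.
    key = os.environ.get("STABILITY_API_KEY", "<your-key>")
    request = build_text_to_image_request("a low-poly fox on a hill", key)
    print(json.dumps(request["body"]))
```

Sending this request (with a library such as `requests`) would return generated image data, which is essentially what the add-on surfaces inside Blender’s interface.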
Just to be clear, though: Stability for Blender doesn’t generate 3D models, just 2D images you can use in various ways, such as reference material. (3D generative AI is in its infancy, but there are models that attempt it, like Google’s DreamFusion, Nvidia’s GET3D, and OpenAI’s Point-E.) Overall, this is a neat, easy-to-use way for inexperienced artists to experiment with generative AI inside their 3D workflow.