Adobe Photoshop is getting a new generative AI tool that allows users to quickly extend images and add or remove objects using text prompts. The feature is called Generative Fill, and it makes Photoshop one of the first Creative Cloud applications to use Adobe’s AI image generator Firefly, which was released as a web-only beta in March. Generative Fill is launching today in beta, but Adobe says it will see a full release in Photoshop later this year.
As a regular Photoshop tool, Generative Fill works within individual layers in a Photoshop image file. If you use it to expand the borders of an image (also known as outpainting) or generate new objects, it’ll provide you with three options to choose from. When outpainting, you can leave the prompt blank and the system will try to expand the image on its own, but it works better if you give it some direction. Think of it as similar to Photoshop’s existing Content-Aware Fill feature, but with more control for the user.
I haven’t been able to try Generative Fill myself, but I did get a live demonstration. It’s simultaneously impressive and far from perfect. Some of the objects it generated, like cars and puddles, didn’t look like a natural part of the image, but I was surprised by how well it handled backgrounds and filled in blank spaces. In some examples, it even managed to carry over features from the photograph being edited, such as mimicking light sources and “reflecting” existing parts of an image in generated water.
Such feats won’t be a huge surprise for creators familiar with AI image generation tools, but, as ever, it’s the integration of this technology into mainstream apps like Photoshop that brings it to a much wider audience.
Apart from the functionality, another important element of Firefly is its training data. Adobe claims that the model is only trained on content the company has the right to use — such as Adobe Stock images, openly licensed content, and content without any copyright restrictions. In theory, this means anything created using the Generative Fill feature should be safer for commercial use than output from AI models that are less transparent about their training data. This will likely be a consolation to creatives and agencies who have been wary of using AI tools for fear of potential legal repercussions.
Generative Fill also supports Content Credentials, a “nutrition label” system that attaches attribution data to images before they’re shared online, informing viewers if the content was created or edited using AI. You can check the Content Credentials of an image by inspecting it via verify.contentauthenticity.org, where you’ll find an overview of how it was made and edited.
“By integrating Firefly directly into workflows as a creative co-pilot, Adobe is accelerating ideation, exploration and production for all of our customers,” said Ashley Still, senior vice president, Digital Media at Adobe. “Generative Fill combines the speed and ease of generative AI with the power and precision of Photoshop, empowering customers to bring their visions to life at the speed of their imaginations.”
Generative Fill isn’t available in the full release of Photoshop just yet, but you can try it out today by downloading the desktop beta app or as a module within the Firefly beta app. Adobe says we can expect a full release in the public Photoshop app in “the second half of 2023.”
Adobe has been injecting AI-powered tools into its products for some time now. At Adobe Max last year, the company rolled out new Photoshop features, like higher-quality object selections, powered by Sensei, another of Adobe’s AI models. Firefly is already being used in Adobe Illustrator to recolor vector-based images, and Adobe also says it plans to integrate Firefly with Adobe Express, a cloud-based design platform rivaling services like Canva, though there’s still no confirmation on when that will be released.