Photoshop’s AI assistant is now live
Adobe's rolling out more generative AI-powered tools today. Its Photoshop AI assistant is now available in public beta, and there are new editing tools in Firefly.
You’ll find the AI assistant in the web and mobile versions of Photoshop only, not the fully fledged desktop version, and you’ll need to join beta testing if you haven’t already.
How Photoshop’s AI assistant works
You can choose to have Photoshop’s AI Assistant apply edits automatically or guide you step by step so you can learn the process. And in the Photoshop mobile app, you can use your voice, simply telling the app what edits you want to see, making it even faster.
You can also draw over images, then use Firefly or a third-party AI model to interpret your scribbles and turn them into new objects in the image, much like Microsoft Paint's scribble-to-image tool. Adobe's calling this AI Markup.
Also from today, the Firefly Image Editor includes Generative Fill, Remove, Expand, Upscale, and Remove Background tools. Plus, some Photoshop-branded tools are now available in Microsoft 365 Copilot, following the addition of Photoshop to ChatGPT in December.
Is Photoshop’s AI assistant useful?
Suggested use cases proposed in Adobe’s blog post include ‘turning a vacation photo into an epic memory’ by removing unwanted people or objects, and ‘making your portrait pop’ by getting ‘step-by-step instructions on how to add a soft glow, boost colors, refine lighting or transform the background with simple prompts’.
The Photoshop AI assistant was one of Adobe’s AI projects that I was more excited about. It seemed to me to be more in line with what generative AI should be about: simplifying and speeding up workflows and helping creatives use their tools more efficiently rather than generating artificial imagery.
Alas, the execution concerns me a little and has me wondering whether it's at least partly intended as a way to encourage people to burn through more generative credits – the pricing of which keeps changing.
For now, the assistant is aimed at hobbyists rather than professional creatives. In Adobe’s demos, many of the edits that the AI assistant proposes involve using generative AI in some way: why not add confetti to this photo of people celebrating? Why not add some flowers with Nano Banana? How about changing the colour of the subject’s dress and adding some extra details in the background?
It seems Adobe is keen to have us believe the era of realism is over and to make it seem impossible to conceive of a workflow that doesn’t involve using generative AI.
It also follows the introduction of third-party AI image generators in Photoshop, as well as third-party tools like Topaz AI Upscaling in Lightroom. That move already had some creatives thinking that Adobe's vision for its legacy apps is to turn them into UI wrappers for multiple AI models. The danger is that Adobe could end up relying on third-party tools rather than innovating on its own.
As for the impact on our pockets, Adobe played a massive role in the shift from software as a one-time purchase to monthly subscriptions. Generative credits feel like a way to take that even further, towards pay-per-use.
Until April 9, paid subscribers to Photoshop on web and mobile get unlimited generations when using AI Assistant irrespective of the number of generative credits that come with their plan. Those using the free version of Photoshop on web and mobile get 20 free generations to start with.