Firefly, Adobe’s family of generative AI tools, is out of beta testing and ready for commercial use. That means all you creative types now have the green light to use it to create imagery in Photoshop, to try out wacky text effects on the Firefly website, to recolor images in Illustrator and to spruce up posters and videos made with Adobe Express. I’ve been doing some of that myself.
And now we know how much Adobe’s artificial intelligence technology costs to use. Adobe includes credits to use Firefly in varying amounts depending on which Creative Cloud subscription plan you’re paying for, but it’s raising subscription prices in November.
If you have the full Creative Cloud subscription, which gets you access to all of Adobe’s software for $55 per month, you can produce up to 1,000 creations a month. If you have a single-app subscription, for example to use Photoshop or Premiere Pro at $21 per month, it’s 500 creations a month. Subscriptions to Adobe Express, an all-purpose mobile app costing $10 per month, come with 250 uses of Firefly.
“We don’t want anyone conserving [credits] or creating from a place of scarcity or feeling like they’re rationing,” said Deepa Subramaniam, vice president of marketing for Adobe’s Creative Cloud subscription.
But take note. Adobe will raise its subscription prices about 9% to 10% in November, citing the addition of Firefly and other AI features along with new tools and apps. For example, the all-apps annual subscription increases from $55 to $60 per month, and a single-app subscription increases from $21 to $23 per month.
In my experience with Firefly so far, it’s generated some very cool effects — but I’ve also seen its limitations. It’s a cloud-based service, so there’s reason to expect Adobe will make good on promises of improvements as it retrains Firefly for better results.
UBS analyst Karl Keirstead estimated in a report Thursday that Adobe will generate $400 million to $500 million in new revenue from the price increase in the company’s next fiscal year. He had expected Adobe to charge for a standalone Firefly subscription, though, not to have it folded into the overall Creative Cloud prices. “We … wonder if this says anything about Adobe’s confidence in a more direct Firefly monetization approach,” he said in the report.
Generative AI’s impressive abilities to mimic human output burst into the public consciousness in 2022 with the arrival of OpenAI’s ChatGPT, a text-based chatbot. Generative AI tools trained on large swaths of data make plenty of mistakes, but Adobe’s customers could prove more forgiving since many of them are exploring ideas. Generative AI is better with flights of fancy than literal truth.
Customers with paid subscription plans will be able to continue using Firefly if they blow through their monthly allowance, but it’ll be slower, Subramaniam said. Those on free tiers get a taste of the technology with 25 uses per month. Those who expect to exceed their caps can pay $5 per month for an extra 100 Firefly usage credits starting in November.
Adobe pays stock art contributors for AI training imagery
Also notable about Adobe’s approach: It’s paying Adobe Stock contributors whose imagery was used in training Firefly. Adobe will pay out a “meaningful” bonus annually, Subramaniam said. The payout is based chiefly on how often a contributor’s images were licensed by customers, but also on the total number of images they’ve had accepted into the stock image licensing business.
“This is an opportunity to provide a new revenue stream for our contributors,” Subramaniam said.
Previously, Firefly was available only in beta versions of the software, and Adobe forbade its use in commercial projects. To sidestep copyright problems that may deter commercial customers from using AI, Adobe trained its Firefly AI on images from its own corpus of Adobe Stock imagery and public domain images.
Firefly is coming to Adobe’s Premiere Pro video editing tool later this year, too.
I’ve tried out Adobe Firefly AI
In my testing, Firefly often was able to capably blend imagery with existing scenes, either inserting elements with the generative fill tool or widening an image with generative expand. It sometimes can match a scene’s lighting and perspective, a difficult feat, and even create plausible reflections. It’s particularly adept at reproducing busy environments like foliage.
But it also often produces distortions or weird problems – for example, an elephant with a second trunk where its tail should be. Often you’ll have to reject a lot of Firefly duds and try different prompts to get useful results, and so far at least, it doesn’t look likely that Midjourney fans will abandon that rival tool for generating AI imagery.
You often can get better results by breaking generation down into multiple steps. For example, in the parachuting hippopotamus image above, I first prompted Photoshop to generate a hippo against a blue sky, then expanded the image to give it more sky, then added the parachute.
Images labeled as AI-generated
Plenty of people are alarmed by “deepfake” AI copies of real people and impressed with realistic AI images like the Pope blinged out in a puffy jacket. To help combat the problems, Adobe is using a technology called content credentials that it helped develop to improve transparency.
Images created using Adobe’s tools will be labeled as AI-generated using content credentials, Subramaniam said.
“That’s really how we’ll bring some trust and some transparency into the process to demystify all of this,” she said.
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.