OK, let’s be honest. Using generative AI to create large chunks of text is certainly interesting, but there’s something particularly compelling about entirely new images being generated from a simple text prompt.
Not surprisingly, then, GenAI-powered graphics tools have been a source of fascination for the last year or so. What's interesting now is that just as early experiments with text-based GenAI tools are quickly evolving from interesting exercises into essential real-world applications, so too is the world of generative graphics.
At this year's Adobe Summit, Adobe (NASDAQ:ADBE), the graphics and imaging leader, is evolving its initial set of Firefly image creation tools into a suite of business-focused applications. From content production and localization tools to custom models and integrated AI assistants, the new offerings are intended to give businesses several different ways to integrate GenAI graphics into their environments.
One of the biggest announcements from the event is the debut of a new application Adobe is calling GenStudio. What’s interesting about it is that GenStudio pulls together two sets of capabilities that Adobe has been developing over the years – its widely recognized image editing and creation tools and its ad campaign management and reporting tools – into a single application.
GenStudio can leverage the image creation and editing capabilities of an enhanced version of the Firefly tools and then track how effectively the assets it creates perform against the metrics an organization sets up.
On the content creation side, one of the things Adobe has added is the ability for companies to easily customize the tool by training it with as few as roughly 20 of their own images or existing graphics.
These custom models will let companies create new content and other marketing material with their signature elements and unique graphic style, which could prove to be a huge time saver for existing staff. As part of the new Firefly Services, Adobe has also created about 20 APIs that organizations can tap into to leverage these new capabilities in their own workflows.
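For teams thinking about wiring these services into an existing content pipeline, the integration pattern is typically an authenticated REST call from whatever tooling they already run. The sketch below is purely illustrative: the endpoint, parameter names, and response shape are hypothetical placeholders, not Adobe's documented Firefly Services API. It simply shows the generic pattern of requesting an on-brand image from a text-to-image service that has been pointed at a custom, brand-trained model.

```python
# Illustrative sketch only -- the endpoint, parameters, and response shape
# below are hypothetical placeholders, NOT Adobe's documented Firefly
# Services API. It shows the generic "authenticated REST call" pattern a
# brand team might use to request generated imagery from such a service.
import os

import requests

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = os.environ["IMAGE_API_KEY"]    # credential supplied by the vendor


def generate_on_brand_image(prompt: str, custom_model_id: str) -> bytes:
    """Request an image from a (hypothetical) text-to-image service,
    pointing it at a custom model trained on the brand's own assets."""
    response = requests.post(
        f"{API_BASE}/images/generate",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "prompt": prompt,            # e.g. "summer sale banner, product on beach"
            "model": custom_model_id,    # ID of the brand-trained custom model
            "size": {"width": 1024, "height": 1024},
        },
        timeout=60,
    )
    response.raise_for_status()
    # Hypothetical response shape: a list of generated outputs with URLs.
    image_url = response.json()["outputs"][0]["url"]
    return requests.get(image_url, timeout=60).content


if __name__ == "__main__":
    png_bytes = generate_on_brand_image(
        prompt="hero image of our flagship sneaker on a city rooftop at dusk",
        custom_model_id="brand-style-v1",
    )
    with open("hero.png", "wb") as f:
        f.write(png_bytes)
```

The point of a sketch like this is less the specific calls than the workflow it implies: once generation sits behind an API and a brand-specific model, producing dozens of campaign variants becomes a scripted batch job rather than a manual design task.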
These custom models also help overcome a potential issue that has kept many organizations from using GenAI-powered imaging tools: copyright. In fact, a study by TECHnalysis Research showed that among the 70% of respondents who hadn't done much with GenAI, copyright-related concerns were cited as the key deterrent.
Many of the other web-based GenAI tools essentially ignore brand copyrights and make it easy for other individuals (and, theoretically, companies) to illegally use copyrighted logos, images, and other materials.
Adobe, however, has been a strong advocate for honoring copyrighted material within its Firefly tools and even helped start the Content Authenticity Initiative (CAI) to ensure that copyrighted materials weren't being used to train image-based GenAI models. (This is a problem that recent tests with most of the other major GenAI image tools, such as Midjourney, DALL-E, and Stable Diffusion, have clearly shown.)
If you’re the company behind a particular brand, however, then you’re obviously going to want to use that brand and its related materials as you create new content. That’s exactly what the new Firefly custom models enable.
In addition, Adobe has new customization and personalization tools that simplify tasks like localizing content for a particular market via language translation, background replacement, and other similar techniques. The tool can also essentially automate the process of adjusting content to the different requirements of various platforms.
Another new capability Adobe is introducing to its suite of image creation and editing tools is integrated AI-based help in the form of Adobe Experience Platform AI Assistants. So, instead of having to figure out exactly how to do something in a sophisticated tool like Adobe Photoshop, or how to speed up the creation process in something like Adobe Express – which is designed to make it easy for even non-graphics people to create impressive-looking designs – you can simply tell these assistants what you want done and they will do it. The proof will be in real-world testing of these capabilities, but from a theoretical perspective, these AI assistants are a very important step forward.
On the content management and tracking side of things, GenStudio also incorporates new tools to see how well GenAI-generated content is performing in the markets or regions it targets.
This is a critical capability, because while it's great to make creating customized, on-brand content easier, if that content doesn't perform effectively in the real world, then the effort is all for naught. GenStudio also integrates asset management features, workflow and project management capabilities, reporting tools, and more.
Finally, on the Firefly side of things, Adobe also unveiled an intriguing new capability called Structure Reference. Previously, Firefly could use a reference image to essentially learn and "inherit" a style that it applied when generating an image.

This new feature, however, lets you create something that's laid out similarly to an example image you provide. So, for example, if you have an image with an object lying on its side or a person in silhouette, you can make sure the generated graphic has those same basic structures.
It's a classic case of an image being worth a thousand words, because using only text-based prompts to get a similarly structured and laid-out image has typically been little more than an exercise in frustration.
As we've seen from other vendors, 2024 is proving to be the year when GenAI capabilities move from fantasy to real-world production. Adobe's new suite of offerings and enhanced features for Firefly and its new GenStudio application are yet another great example of this phenomenon and highlight that the world of generated graphics is also entering this more practical, productive phase.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.