AI in Adobe Projects: Practical Stories
Discover how AI transforms digital experiences in Adobe projects through real-world applications. Learn how AI enhances content creation, site validation, and project planning, driving efficiency and reducing costs. Dive into stories that showcase innovative uses of AI tools in Adobe Experience Manager, Adobe Commerce, and Edge Delivery Services.
Hi, my name is Mark McConnell, and I am a software developer at Ensemble.
During my time at Ensemble, I have worked with many different clients to deliver solutions across a variety of Adobe products, often working alongside some of those product teams. So it is nice to see some familiar faces here.
Today, in keeping with the theme of this year’s Developers Live, I will be presenting three stories, each illustrating how we have incorporated AI concepts and tooling into our own delivery practices in order to produce more robust and timely solutions.
Although each story and prototype is unique, it is my hope that through this talk, I can demonstrate how we have leveraged Adobe AI tooling and our own homegrown AI capabilities to meet our clients’ needs more efficiently, most often by cutting down on the associated costs of planning, testing, and development. Lately, much of our attention has been focused on the sports industry, as we have been looking to build out content supply chains alongside Adobe in order to service our potential clients’ needs.
For our first story today, I would like to cover one such prototype, which integrates some of Adobe’s own AI technologies with external services such as Frame.io to generate high-quality promotional material for sports organizations. The goal was to create assets comparable to those already produced by their marketing teams, while doing so in a fraction of the time and at far less expense. The application is first kicked off within Workfront, where content authors fill out an intake form which sends the data over to Fusion.
Within Fusion, depending on the intake form and the type of request, we can then call on the AI APIs needed to complete the job, afterwards pushing the asset over to Frame.io and, on approval, into AEM.
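To make that orchestration concrete, here is a minimal sketch of the routing a Fusion scenario performs, written as a small Python handler; the request types, handler stubs, and Frame.io call are illustrative assumptions rather than the prototype’s actual configuration.

```python
# Sketch of the intake-driven routing our Fusion scenario handles.
# Request types, handler names, and endpoint/payload shapes are illustrative.
import requests

FRAMEIO_TOKEN = "..."      # Frame.io developer token
FRAMEIO_PARENT_ID = "..."  # root asset of the review project in Frame.io

def generate_poster(form):  # Photoshop + Firefly composite (sketched later)
    return {"name": "poster.png", "url": "https://example.com/poster.png"}  # placeholder

def reframe_video(form):    # Reframe crop (sketched later)
    return {"name": "clip.mp4", "url": "https://example.com/clip.mp4"}      # placeholder

def animate_sky(form):      # multi-step Photoshop/Firefly/Reframe chain (sketched later)
    return {"name": "sky.mp4", "url": "https://example.com/sky.mp4"}        # placeholder

def handle_intake(form: dict) -> dict:
    """Route a Workfront intake form to the matching generation job,
    then send the finished asset to Frame.io for review."""
    handlers = {
        "poster": generate_poster,
        "video_crop": reframe_video,
        "sky_animation": animate_sky,
    }
    asset = handlers[form["request_type"]](form)

    # Push the rendered asset into the Frame.io review project; a separate
    # approval step later moves it into AEM Assets.
    review = requests.post(
        f"https://api.frame.io/v2/assets/{FRAMEIO_PARENT_ID}/children",
        headers={"Authorization": f"Bearer {FRAMEIO_TOKEN}"},
        json={"name": asset["name"], "type": "file", "source": {"url": asset["url"]}},
        timeout=30,
    )
    review.raise_for_status()
    return review.json()
```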
Content authors are then free to capitalize on the built-in integrations between Adobe Express and AEM Assets by making any last-mile edits or translated copies of the generated asset.
You can imagine how we could just as easily kick it over to GenStudio for immediate use as marketing material.
Although this is just a prototype, thanks to the ever-growing suite of AI tools available and their easy integration using Fusion, we can already do some pretty cool things.
Our first example here involves generating a promotional poster from a provided PSD template.
Using the Photoshop API, we make use of our approved assets, providing those which match the team, player, and branding of the request.
Photoshop then isolates the players from those original images, adding them as well as other text content onto the template.
Finally, we add in the background of the poster itself, generated using Firefly from our provided training assets.
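As a rough illustration of that poster job, the sketch below generates a Firefly background and then asks the Photoshop API to swap layers in the approved PSD template; the endpoint paths, layer names, payload fields, and response shapes are simplified assumptions rather than the exact Firefly Services contracts.

```python
# Sketch of the poster job: Firefly renders the background, then the
# Photoshop API composites it into the approved PSD template.
# Endpoint paths, layer names, and payload fields are simplified.
import requests

HEADERS = {"Authorization": "Bearer <IMS_TOKEN>", "x-api-key": "<CLIENT_ID>"}

def generate_background(prompt: str) -> str:
    """Text-to-image call to Firefly; returns a URL for the rendered background."""
    resp = requests.post(
        "https://firefly-api.adobe.io/v3/images/generate",  # assumed endpoint
        headers=HEADERS,
        json={"prompt": prompt, "size": {"width": 2048, "height": 2048}},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["outputs"][0]["image"]["url"]  # response shape may differ

def build_poster(template_psd_url: str, player_cutout_url: str,
                 background_url: str, output_url: str) -> dict:
    """Ask the Photoshop API to swap the background and player layers inside
    the PSD template (the layer names here are ours, not Adobe's)."""
    resp = requests.post(
        "https://image.adobe.io/pie/psdService/smartObject",  # assumed endpoint
        headers=HEADERS,
        json={
            "inputs": [{"href": template_psd_url, "storage": "external"}],
            "options": {"layers": [
                {"name": "background", "input": {"href": background_url, "storage": "external"}},
                {"name": "player", "input": {"href": player_cutout_url, "storage": "external"}},
            ]},
            "outputs": [{"href": output_url, "storage": "external", "type": "image/png"}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()  # an async job description to poll for completion
```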
With the correct data in place, you can imagine how big of a win this is for content authors, outputting marketing material for any team, player, or brand in only a few moments. The application also assists content authors with editing and preparing video for release across multiple platforms and devices.
Here we have an original 16 by 9 promotional video with a lot of movement in it.
You can see a golf cart moving across the frame, and Tiger Woods was moving across the frame just a moment earlier.
But of course, by leveraging the Reframe API, we can crop any video to specific dimensions while keeping the subject centered in frame, something that would otherwise be difficult without AI assistance, particularly when working with this kind of more dynamic sports footage. As you can see from the output, the aspect ratio has been changed and the video cropped without losing focus on what is important.
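Under the hood this is a single service call. Here is a minimal sketch of how such a Reframe request might be issued, assuming a generic asynchronous job API; the endpoint and payload fields are placeholders rather than the documented Firefly Services contract.

```python
# Sketch of a Reframe call that recrops a 16:9 source to 9:16 while the
# service keeps the subject centered. The endpoint and payload fields are
# placeholders; the real contract lives in the Firefly Services documentation.
import requests

REFRAME_ENDPOINT = "..."  # Reframe API endpoint from the Firefly Services docs
HEADERS = {"Authorization": "Bearer <IMS_TOKEN>", "x-api-key": "<CLIENT_ID>"}

def reframe_video(source_url: str, output_url: str, aspect_ratio: str = "9:16") -> dict:
    resp = requests.post(
        REFRAME_ENDPOINT,
        headers=HEADERS,
        json={
            "video": {"source": {"url": source_url}},
            "aspectRatios": [aspect_ratio],                   # target crop(s) to produce
            "output": {"destination": {"url": output_url}},
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # async job; poll its status until the render is ready
```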
Not every requested feature was quite so straightforward to implement, however. Our next task was to animate the sky within a static image while keeping the subject itself stationary. Because this is a common form of promotional asset, we again needed this to work regardless of the golfer, course, or weather conditions in the original image. No single AI tool was able to output exactly what we were looking for, and as you can see, the initial results were sometimes a bit off.
Here it looks like the golfer has traded out his feet for some wheels, so that is not what we are looking for. This is Fusion, though, so we can easily bring the Photoshop, Firefly, and Reframe APIs into this workflow, separating the process into multiple steps and playing to the strengths of each AI tool to get the correct result. Here you can see that entire workflow within Fusion. Again, the idea is to break the job into multiple steps while working towards our finished asset.
To get rid of the issues we had seen with the golfer being animated, we would like to first remove them from that image altogether.
In this segment of our workflow, we retrieve the asset and call upon the Photoshop API to generate a mask over our golfer. We then leverage the generative capabilities of Firefly to remove the golfer using that mask before filling in the now empty space.
On the left-hand side here, you can see that mask the Photoshop API has returned for us. On the right-hand side is that output from Firefly after using the Image Fill API to replace that masked area with a prompt.
The prompt itself asks Firefly to replace the area with matching scenery: natural background, green grass, blue skies, all while adding no new objects.
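For reference, the two calls in that segment might look roughly like the sketch below; the endpoint paths, payload fields, and response shapes are assumptions standing in for the actual Photoshop and Firefly Services contracts, and both calls are asynchronous jobs in practice.

```python
# Sketch of the "remove the golfer" segment: the Photoshop API returns a
# subject mask, and Firefly's fill endpoint paints scenery over the masked
# area. Endpoint paths and payloads are simplified assumptions.
import requests

HEADERS = {"Authorization": "Bearer <IMS_TOKEN>", "x-api-key": "<CLIENT_ID>"}

def mask_subject(image_url: str, mask_output_url: str) -> None:
    """Generate a mask covering the main subject (our golfer)."""
    requests.post(
        "https://image.adobe.io/sensei/mask",  # assumed Create Mask endpoint
        headers=HEADERS,
        json={"input": {"href": image_url, "storage": "external"},
              "output": {"href": mask_output_url, "storage": "external"}},
        timeout=120,
    ).raise_for_status()

def fill_masked_area(image_url: str, mask_url: str) -> str:
    """Use Firefly generative fill to replace the masked subject with scenery."""
    resp = requests.post(
        "https://firefly-api.adobe.io/v3/images/fill",  # assumed Generative Fill endpoint
        headers=HEADERS,
        json={
            "image": {"source": {"url": image_url}},
            "mask": {"source": {"url": mask_url}},
            "prompt": ("matching scenery, natural background, green grass, "
                       "blue skies, no new objects"),
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["outputs"][0]["image"]["url"]  # response shape may differ
```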
With our new asset in hand, we can again try to animate the background. This time, with no golfer in frame, we no longer need to worry about any of the issues we saw earlier. Here, we simply call on Firefly’s generative video API, asking it to bring our static image to life by animating the sky.
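A rough sketch of that image-to-video step follows, assuming an asynchronous job pattern; the endpoint, field names, and polling shape are placeholders rather than the documented Firefly contract.

```python
# Sketch of the image-to-video step: the golfer-free still goes to Firefly's
# video generation endpoint with a prompt to animate only the sky.
# Endpoint, field names, and job-polling shape are assumptions.
import time
import requests

HEADERS = {"Authorization": "Bearer <IMS_TOKEN>", "x-api-key": "<CLIENT_ID>"}
VIDEO_ENDPOINT = "..."  # Firefly generate-video endpoint from the Firefly Services docs

def animate_sky(still_url: str) -> str:
    job = requests.post(
        VIDEO_ENDPOINT,
        headers=HEADERS,
        json={
            "prompt": ("animate the sky: drifting clouds, gentle wind in the trees, "
                       "keep the ground and framing unchanged"),
            "image": {"source": {"url": still_url}},
        },
        timeout=60,
    )
    job.raise_for_status()
    status_url = job.json()["statusUrl"]  # assumed async job pattern
    while True:                           # poll until the render finishes
        status = requests.get(status_url, headers=HEADERS, timeout=30).json()
        if status.get("status") == "succeeded":
            return status["output"]["video"]["url"]
        if status.get("status") == "failed":
            raise RuntimeError("video generation failed")
        time.sleep(5)
```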
Here you can see the result. We get some nice dynamic clouds passing by overhead, the shadows they would cast onto the green, and even some sway to the trees to give it a nice, more windy effect. With the background video in hand, it is now time to add our golfer back. We again call on the Photoshop API, this time removing the background from the original image in order to isolate our golfer.
Passing this and our generated video into Reframe, we can merge the two, finally getting our desired output.
On the left-hand side here, you can see the golfer image the Photoshop API has returned for us. On the right-hand side is again that generated video, and finally, here are the two merged after calling on the Reframe API. We now have our animated background and static golfer, with none of the issues we had seen earlier.
So again, by stringing together multiple AI APIs with Fusion, we were able to overcome the limitations of any one tool while still implementing the more complex workflows asked for by our clients. We are not quite done yet, though; we still want to make this asset immediately available to content authors. Here, we run some validation checks on the asset before sending it over to Frame.io for final review.
Since completed and approved marketing assets are pushed to AEM, content authors can then take advantage of its native integration with Adobe Express and the suite of AI tools it offers, enabling them to translate our output asset into up to 46 different languages for distribution across all markets.
Or make any last-mile edits. The example below demonstrates how an author could swap out the original tee marker using the remove object function, before using the generate object function to put something else in its place. And while there are a number of other features within this prototype I could demonstrate, the general approach is always the same: leverage AI-powered APIs to automate content supply chains efficiently and in ways not otherwise possible, while orchestrating the entire process with Fusion to best handle each scenario.

Let’s now take a look at the next tool in today’s demo and see how we have invested in developing AI tooling to more efficiently validate sites migrated to Edge Delivery Services. What inspired this application’s development was an earlier site migration to EDS we had worked on, consisting of over 100,000 pages.
Given the enormous volume of content, manually QAing the site would have taken around 420 workdays.
This meant validating each individual page was not a possibility, and an incredibly small sample was drawn instead.
Another challenge was that while the site and its pages were composed of the same types of content, such as text, images, video, and iframes, the layout and design of most site features had been reworked to improve the overall look and feel. This prevented us from making the simpler one-to-one comparison that could otherwise be made without AI tooling.
But this also made us ask: with more and more clients switching to Edge Delivery Services, how could we accurately validate that all pages and their content had been correctly migrated? The solution, as you can probably guess, was AI. The application itself was built using Cursor to help develop the REST API backend, and Lovable to set up the front-facing dashboard used to manage comparisons and view the actual reports.
It was OpenAI specifically, however, that was able to overcome those challenges presented earlier, performing quick page-by-page comparisons and generating summaries in concise, plain text. The application is able to handle bulk comparison jobs by taking in a CSV file of URL pairs, each corresponding to a legacy page and its equivalent on the new site.
With this information, it gets to work, leveraging multiple OpenAI models to identify CSS selectors for targeted comparisons across site elements, perform the text comparison itself, and, of course, generate a summary for review should a difference be detected.
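Stripped of the dashboard and job management, the core comparison loop looks something like the sketch below; the CSV filename, prompt wording, and model choice are illustrative of the approach rather than the tool’s exact implementation.

```python
# Sketch of the bulk comparison loop: read (legacy_url, new_url) pairs from a
# CSV, pull the visible text of each page, and ask an OpenAI model whether the
# content still matches. Filenames, prompt, and model choice are illustrative.
import csv
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def page_text(url: str) -> str:
    """Fetch a page and flatten it to plain text for comparison."""
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

def compare(legacy_url: str, new_url: str) -> str:
    """Return a plain-language summary of content differences between two pages."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Compare the content of these two pages. Ignore layout and ordering "
                "changes; report any missing or altered text in plain language.\n\n"
                f"LEGACY:\n{page_text(legacy_url)}\n\nMIGRATED:\n{page_text(new_url)}"
            ),
        }],
    )
    return resp.choices[0].message.content

with open("url_pairs.csv", newline="") as f:
    for legacy_url, new_url in csv.reader(f):
        print(legacy_url, "->", compare(legacy_url, new_url))
```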
We did eventually get to put this application into practice when we were tasked with redesigning and migrating a 700-page site to Edge Delivery Services.
Assuming that a QA resource would have taken around two minutes per page, we estimated that this effort would have taken approximately three business days.
Using our AI-assisted validation tool, however, this took only 59 minutes to run and generated a clean report, allowing stakeholders to review and address any detected differences. And here you can see that dashboard’s homepage, showing some of the more targeted comparisons that were run on that same project.
Each of these were run off of a different CSV file and compared different pages from our site. Let’s take a look at the comparison of both sites’ FAQ pages first and open it up.
Up top, we show overall data for the comparison job across all pages, indicating how many passed, how many failed, and what the percentage difference was on average.
Down below, we show the results from each individual page comparison, providing the path, status, URL links, and a View Details button.
This allows you to drill down further to analyze specific differences across pages. So we’ll click into that next. This dashboard defaults to a screenshot view, where users are given a side-by-side comparison of the two pages as they appeared when captured.
Given the layout has shifted, however, it can still be difficult for a QA resource to spot the differences between the two when manually reviewing. Here you can see the old and new versions of the press release page being compared.
You’ll notice that among other changes, the related news feed has shifted from the top to the bottom of the page.
Again, by using GPT-4 Turbo, our application was aware of this shift and performed the content comparison with it in mind.
However, since content differences were detected and a manual review is required, we can still make this process as efficient as possible for our reviewers.
Using OpenAI’s GPT-4o model, we provide the reviewer with a more straightforward summary of the differences. As you can see from the generated summary above, we alert the user to the changes noted across site elements; in this case, that same related news feed from earlier.
Here it points out the heading change from "related press release" to simply "related," the fact that different related news articles are now featured in that feed, and that the publication dates on those articles were not originally displayed. The dashboard also, of course, includes a text view of more standard text-based differences noted across those corresponding site elements, allowing for quick review and correction by content authors. Again, by only alerting our QA team to page differences when they occur, and by facilitating that review process, we can greatly reduce the time spent validating a migrated Edge Delivery Services site.
This gain in QA efficiency compounds as we scale up the number of pages for review. Returning to that original 100,000-page project that had inspired this effort, we could now QA the entirety of this site in roughly six days, something that would have taken a team of 10, nearly 42 calendar days to complete. Beyond just validation, we wanted to look at other practical ways to harness the power of AI to handle Edge Delivery Services migrations more efficiently. The idea was to create another AI-assisted analysis tool, this time to help with project planning and early estimates. The goal was to crawl an existing site, collect screenshots, and determine components from those screenshots.
Afterwards, the analyzer would break these down into corresponding blocks, determine complexity, and lastly generate a summary report providing a breakdown of all blocks, page wireframes, and additional feature analyses. In the crawling phase, we work through the site starting from the user-provided link, using different methods to build out a list of all page URLs.
We then used Playwright to inspect each one and take full-page screenshots, afterwards uploading them to Google Drive.
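The capture step itself is only a few lines of Playwright; this sketch assumes the URL list has already been collected and omits the Google Drive upload.

```python
# Sketch of the crawl-and-capture step: Playwright loads each discovered URL
# and saves a full-page screenshot. The Google Drive upload is omitted here.
import os
from playwright.sync_api import sync_playwright

def capture(urls: list[str], out_dir: str = "screenshots") -> None:
    os.makedirs(out_dir, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1440, "height": 900})
        for i, url in enumerate(urls):
            page.goto(url, wait_until="networkidle")
            page.screenshot(path=f"{out_dir}/page_{i:04d}.png", full_page=True)
        browser.close()
```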
During the analysis phase, we leveraged Anthropic’s API and its underlying Claude Sonnet large language model. Here, we input the screenshots from earlier and prompted it to analyze all components, while restricting its response to an exact JSON format we can easily parse. With each image response, we provide more and more context to the API, appending it to each subsequent request so the model can better keep track.
Once all screenshots have been processed, we feed these and our collected information back in one more time for good measure and additional polish.
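A simplified sketch of that per-screenshot loop follows, assuming the Anthropic Python SDK; the JSON schema, prompt wording, and model alias are illustrative of the approach rather than the exact ones used.

```python
# Sketch of the per-screenshot analysis loop: each page image is sent to
# Claude with a prompt that constrains the reply to a fixed JSON shape, and
# the accumulated results are passed back as context on every request.
# The JSON schema, prompt wording, and model alias are illustrative.
import base64
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "Identify the components on this page and map each one to an Edge Delivery "
    "Services block. Respond ONLY with JSON of the form "
    '{"blocks": [{"name": "...", "elements": ["..."], "complexity": "low|medium|high"}]}'
)

def analyze(screenshot_paths: list[str]) -> list[dict]:
    results: list[dict] = []
    for path in screenshot_paths:
        with open(path, "rb") as img:
            image_b64 = base64.b64encode(img.read()).decode()
        context = f"Blocks found so far: {json.dumps(results)}\n\n" if results else ""
        reply = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=2000,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image",
                     "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                    {"type": "text", "text": context + PROMPT},
                ],
            }],
        )
        # Assumes the model honors the JSON-only instruction.
        results.append(json.loads(reply.content[0].text))
    return results
```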
Here, you can see the block analysis for one such report.
In the left-hand image, you can see after analyzing four different pages, our tool has mapped the site features present on each to corresponding EDS blocks, and the different child elements they are composed of. In the right-hand image, we can view the different block compositions of each page analyzed, seeing which would need to be included in order to recreate the original page.
There is also an option to drill down further and view the full wireframe for any of these pages. So, we will pull that up next.
In the first image, you can see that fully expanded wireframe for the homepage, which helps to illustrate the layout of the blocks themselves. In the second image, you can see the website feature analysis report, which highlights different technical requirements beyond what is immediately apparent from just taking a look at the blocks.
Under the user management category, you will notice the analyzer has picked up the welcome and account links in the top navigation, indicating that there will be some sort of account functionality to consider when estimating the overall cost of development. In generating this entire analysis, we cut down on the early-stage costs for EDS projects in several ways.
By automating much of the early investigative work a solution consultant would normally handle, by providing developers with a list of blocks, their constituent parts, and additional technical requirements, and by offering content authors wireframes from which they can reconstruct pages using those newly developed blocks.

Then, of course, this presentation would not be a proper homage to Love Actually if we did not tie the themes from each story, or in our case, each prototype, together nicely at the end. Whether we are using AI to generate high-quality marketing materials, validating every page of a new Edge Delivery Services site, or estimating the level of effort needed to create one, by incorporating existing AI tooling into the Adobe tech stack, we have been able to produce accurate solutions that would otherwise be impossible to implement, while again reducing the associated costs of planning, development, and testing.

Again, thank you for giving me the opportunity to speak. If you would like to get in touch with us for anything, please reach out. Thank you.
This session — Love Actually: Three Practical Stories of Using AI in Adobe Projects — features Mark McConnell (Ensemble) sharing three real-world applications: accelerating prototypes and Content Supply Chain with Generative AI Services, an AI-powered migration validator for AEM Edge Delivery Services, and an agentic approach to automated site evaluation and reporting. Recorded live from San Jose.
Special thanks to our sponsors Algolia and Ensemble for supporting Adobe Developers Live 2025.
Next Steps
- Continue the conversation on Experience League
- Discover upcoming events