AI-Powered Development for Adobe App Builder Extensions in Commerce
Discover how Adobe Commerce leverages AI to transform app development and streamline customizations. Learn about automated code analysis, intelligent migration reports, and natural-language prompts that generate JavaScript code quickly. This session highlights the innovations in Adobe Commerce’s tooling ecosystem, helping teams modernize legacy systems efficiently and reduce technical debt. Join the conversation and explore upcoming events to stay updated with the latest advancements.
So I have two pieces of good news. One, I am your last barrier until lunch. So after me, we’ll take a lunch break.
Second, I’m Matt Johnson. I’m excited to talk about the next evolution of Adobe Commerce development, specifically how we’re leveraging AI and our extensibility layers to innovate the development of App Builder applications and the way customizations are maintained in Commerce. So, a little history lesson: back in June 2025, Adobe released Adobe Commerce as a Cloud Service, a fully managed SaaS offering that takes the place of our older platform as a service. The reason we went with this model is that the platform as a service required infrastructure maintenance from customers: consistently updating the core application, making sure all of the security fixes were applied, things of that nature. What we’re looking at now is a fully cloud-native, multi-tenant service, which means you’re always up to date and always secure.
But one of the fundamental differences is that customizations previously made in process, inside the Commerce core application, are now all done out of process via App Builder, API Mesh, and the other out-of-process frameworks. This also brings an immutable database. Customers who ran their Commerce instances on their own infrastructure, on EC2 instances for example, had full access to the database and could add custom tables and entities; all of that now needs to be done out of process as well. With this, we also have significant integration into the entire Adobe Experience Cloud.
As part of Adobe Commerce as a Cloud Service, we have our Edge Delivery storefront powered by AEM, product visuals powered by AEM Assets, extremely robust catalog channels and policies, and of course App Builder and API Mesh. What this sets the stage for is a fundamentally different way of thinking about how to move customizations from point A to point B, from PHP in process to JavaScript out of process. Historically, all customizations have been done directly in the Commerce core in PHP. That makes regression testing difficult, and it creates resource contention between what’s going on in the core and what you could be doing out of process. The new API-first model changes everything: extensions and customizations are built as microservices and applications. And to put this in context, the typical Commerce customer has 134 customizations on their storefront.
When we first went through this journey, we had a hypothesis that some of our simpler customers, with maybe one storefront and one store view, would have significantly fewer customizations and be fairly simple. That hypothesis was wrong. You can see the numbers here: there is a delta between lower complexity and higher complexity, but it’s nowhere near the extent we originally anticipated. So with this, we’re looking at how we take every customer with 100-plus customizations and move them over in a streamlined fashion.
And just for reference, keep in mind that’s an average; we have some customers with 500 to 900 customizations. So a little ways back, we put out these concepts of starter kits.
So what are starter kits? They’re really innovation accelerators: blueprints for how to build out these out-of-process integrations. Whether you’re integrating with a backend system like an ERP, a PIM, or a CRM, or going straight to the checkout process, to tax, payment, or shipping, these are pre-scaffolded pieces. We have code snippets, APIs, all of the pieces needed to scaffold these projects together, and they’re built on Adobe best practices, so you’re not starting from scratch and stitching everything together without any guidance. So how does this all tie together with what we’re doing with AI and the MCPs? We’ve heard all of these terms today.
So the first thing I want to talk through are the model context protocols we’re enabling, and we really have two out of the gate. The goal here is to streamline your App Builder development. And while I’ve been talking a lot about migration from our old Commerce PaaS monolith to Adobe Commerce as a Cloud Service, I want to be clear that this is not only a migration use case; it’s also a net-new application development use case. So first off, our MCP for developer assistance. This connects to our Commerce repositories, APIs, documentation, and starter kits so you can build your own intelligent assistants to answer questions, generate code snippets, refactor code, and generate documentation as you go.
Then we have our MCP for App Builder workflows, which I’ll be showing a demo of shortly.
What this does is take natural-language prompting, a bit like what we’ve seen some of our friends showing earlier, and generate App Builder-compatible JavaScript code from those prompts.
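To make that concrete, here is a minimal sketch of the general shape of runtime action those prompts produce. The `main(params)` entry point, the `@adobe/aio-sdk` Core logger, and the error response shape follow the standard App Builder action template; the business logic placeholder is purely illustrative.

```javascript
// Minimal App Builder runtime action sketch (illustrative).
// The Core logger and the main(params) export follow the standard
// template that App Builder projects use for Adobe I/O Runtime actions.
const { Core } = require('@adobe/aio-sdk')

async function main (params) {
  const logger = Core.Logger('main', { level: params.LOG_LEVEL || 'info' })
  try {
    logger.info('Calling the main action')
    // ...business logic generated from the natural-language prompt would go here...
    return { statusCode: 200, body: { message: 'ok' } }
  } catch (error) {
    logger.error(error)
    // Standard App Builder error response shape
    return { error: { statusCode: 500, body: { error: 'server error' } } }
  }
}

exports.main = main
```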
All of this is based on Adobe-recommended patterns and on testing, security, and compliance frameworks. So how is this packaged together? The MCPs are packaged as a plugin.
And so this is designed as a bring-your-own-LLM model: connect your preferred LLM and use it to power your assistants.
Now, we’ve currently tested primarily with Cursor, with GitHub Copilot, with Gemini, and with the Claude models. We’ve been seeing a better degree of success with Cursor, but we’re continuing to work down the list. So let’s talk about how this can actually make the workflows smarter.
One of the most powerful aspects of these MCPs is the ability to recommend and leverage those starter kits, templates, and sample applications as the foundation for code generation. So instead of starting from scratch, the MCPs analyze your requirements and suggest the most relevant starter kit to build on top of. From there, you get your boilerplate and scaffolding all set up, ready for your customizations. Now, the MCPs follow four distinct protocol phases. And while not all of these are super exciting, they are very critical to developing the way that you want.
First, requirements analysis and clarification. This is making sure you have the proper use cases for what you’re trying to build. A very real example: in the Commerce world, we have extensions on our marketplace that plug into Commerce instances, and each extension ships a series of customizations. Let’s say an extension has 10 customizations packaged with it. What we see is that many of our customers purchase an extension and only need two of those 10 items. So if we’re rebuilding that functionality, let’s build what we care about: those two use cases, not all 10. It doesn’t need to be an apples-to-apples comparison; it needs to generate the code that does what you want it to do.

We then have architectural planning and user approval. Once we understand what needs to be built, how is it going to be built? What APIs are we going to use? How are we going to scaffold all of this?

We then go to phase three, the actual code generation and implementation on the App Builder side, with context-aware documentation along the way. One of the standout benefits here is how the MCPs streamline and improve the testing process: they automatically scaffold the test cases alongside the code generation. That allows for incremental validation along the way, so you don’t have to wait for full end-to-end tests to realize, aw bollocks, something’s up here, what do I do? You test iteratively through the process.
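To illustrate what that test scaffolding looks like in practice, here is a sketch of the kind of Jest unit test that might sit next to a generated action. Jest is what the App Builder templates use, but the action path, the required `data` parameter, and the expected status codes are assumptions for this example.

```javascript
// Sketch of scaffolded unit tests for a generated action (illustrative).
// Assumes the action exports main(params), returns statusCode 400 when the
// event payload is missing, and 200 when a well-formed payload is supplied.
const { main } = require('../actions/order-event-handler') // hypothetical path

describe('order-event-handler action', () => {
  test('returns 400 when the event payload is missing', async () => {
    const result = await main({ LOG_LEVEL: 'error' })
    expect(result.statusCode).toBe(400)
  })

  test('returns 200 for a well-formed event payload', async () => {
    const result = await main({ LOG_LEVEL: 'error', data: { entity_id: 1 } })
    expect(result.statusCode).toBe(200)
  })
})
```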
With this, your documentation is consistently updated along the way, and you’re removing any orphaned artifacts or things you no longer need. Now, before I get to a quick demo, what to avoid, because sometimes the best lessons are about what not to do.
I know I mentioned that some of those phases aren’t necessarily the most fun, but they are definitely very important. So number one, don’t skip the clarification phase. You want to make sure your use cases are aligned and accurate.
Don’t skip testing after each feature. Again, make sure you’re testing incrementally with these things.
We’ve seen some of the hallucinations, some of the funny output of all of these AIs through this. I think every single person here knows what that means.
We want to make sure we’re not adding needless complexity. Don’t just add code to see, OK, well, I wonder what this does; make sure you’ve actually found the root cause. At the same time, it’s easy to declare success with edge cases or sample data, so make sure you’re testing with production-like data along the way. And don’t forget the runtime cleanup, which the MCP really does guide you through.
And so now I’m going to go into a recorded demo, given that some of the generation actually takes about 10 minutes if we were to go straight through it. I’ll run through a quick demo of this MCP and then show you a brief sneak peek at another piece we’re working on.
So let’s go ahead and kick that off. We’ll begin this demo by exploring how a few AI tools can enhance developer productivity while building out-of-process Commerce extensions with App Builder.
This is the Commerce Integration Starter Kit, a good starting point for developing extensions. After we check out the code, we’ll open it up in Cursor, a well-known IDE for software development enhanced with AI.
To get started, we’ll have to enable Cursor with the tools it needs to understand how to work with Adobe Commerce.
Then we’ll give it a task: create an extension that sends an event to the Halo ERP system whenever an order is placed on the Commerce platform.
Once we provide this prompt, the AI asks us a few follow-up questions. Is this a SaaS or a PaaS Commerce backend? Should the event trigger before or after the order is placed? What kind of event do we want to send? Is the integration with the Halo ERP system one-directional or bi-directional? Are we dealing with a UI or a headless application? And is state management required between invocations? Once it collects all the necessary details, the AI proposes a plan for moving forward with code generation. We have to verify this plan, and in this case I’m comfortable with what the AI has given me, so I instruct it to move forward and start creating the code.

This phase takes longer. The AI needs to process a lot of information: all the context we’ve provided up to now, the existing code, the user prompt, and all the rules we’ve built inside Cursor to understand Adobe Commerce and Commerce extension development. As you can see, it also gives us a more detailed structure of how the code is written, architectural diagrams, resources around troubleshooting, testing, debugging, and monitoring, and the logical next steps. Whenever it writes code, it also creates docs, JSDoc and inline, and puts them into the appropriate sections. Once we have all of these details, let’s move forward with testing it locally.
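At the heart of what gets generated here is a runtime action that receives the order-placed event and forwards it to the ERP. The following is a simplified sketch of what such an action could look like; the ERP endpoint, credentials, and payload mapping are illustrative placeholders, not anything taken from the starter kit.

```javascript
// Illustrative sketch of an order-placed event handler that forwards to an ERP.
// ERP_ENDPOINT and ERP_API_KEY are assumed to be configured as action inputs.
const { Core } = require('@adobe/aio-sdk')
const fetch = require('node-fetch')

async function main (params) {
  const logger = Core.Logger('order-to-erp', { level: params.LOG_LEVEL || 'info' })
  try {
    // The Commerce event payload is assumed to arrive under params.data.
    const order = params.data
    if (!order || !order.increment_id) {
      return { statusCode: 400, body: { error: 'missing order details' } }
    }
    const response = await fetch(params.ERP_ENDPOINT, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${params.ERP_API_KEY}`
      },
      body: JSON.stringify({
        orderId: order.increment_id,
        total: order.grand_total,
        items: order.items
      })
    })
    logger.info(`ERP responded with status ${response.status}`)
    return { statusCode: 200, body: { forwarded: true } }
  } catch (error) {
    logger.error(error)
    return { error: { statusCode: 500, body: { error: 'server error' } } }
  }
}

exports.main = main
```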
At this point, we can use all of the MCP tools that come with this package. The MCP is a combination of all the tools necessary for development.
Here, instead of running the same command I would usually use as an application developer, I’d rather do it from the chat window. I write the same thing in natural language, run this project locally, and as you can see, the AI agent was able to pick up the necessary MCP tools to run the project locally, create a terminal, and do all of this without me having to deal with the mechanics of standing up a local development server.
You can also see that it can invoke the extension. It did show us how to invoke it from the browser, but for the sake of the demo let’s stick with Cursor, and I asked it to invoke the same action we just created. The first time it tried, it failed because it wasn’t sending the order details. The second time, it realized that was the reason, added the order details in the shape the extension expects, and reran the same test. Now that we’ve verified the extension actually works, let’s move forward with deployment. Normally you would run a few deployment commands in the terminal, but again, for the sake of the demo, let’s stick with Cursor. I ask it in natural language how to deploy, and I instruct it to ask me for confirmation before moving forward. I went through the plan it provided, it works for this case, so I instruct Cursor to continue with the deployment, but to make sure it only deploys the newly created runtime action and not every one of them.
We can see that this worked: the deployment was successful, and the deployed action URL is provided to us.

Hello. Now, we’re targeting availability of this for January of the upcoming year, but I’m sure a lot of folks here at Developers Live would like to get their hands on it, so I would like to shamelessly plug a lab at 3:10 today, hosted by the narrator of that demo, Ray Vanth, and my colleague, Steven, where you’ll use this tooling to create a storefront customization, specifically a ratings app. So please feel free to join that at 3:10 today.
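For reference, the invocation step shown in that demo, done programmatically rather than through the chat window, corresponds roughly to the sketch below; the package and action names, the credential environment variables, and the sample order payload are all illustrative.

```javascript
// Illustrative sketch: invoking a deployed runtime action directly with the
// openwhisk client instead of asking the IDE agent to do it.
const openwhisk = require('openwhisk')

async function invokeOrderAction () {
  const ow = openwhisk({
    apihost: 'adobeioruntime.net',
    api_key: process.env.AIO_RUNTIME_AUTH,        // workspace credentials (assumed env vars)
    namespace: process.env.AIO_RUNTIME_NAMESPACE
  })
  // Blocking invocation that returns the action result directly.
  const result = await ow.actions.invoke({
    name: 'my-extension/order-to-erp',            // package/action name, illustrative
    blocking: true,
    result: true,
    params: { data: { increment_id: '000000001', grand_total: 42.5, items: [] } }
  })
  console.log(result)
}

invokeOrderAction().catch(console.error)
```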
And before I leave you, I wanted to show one other sneak peek at a co-innovation we’re working on with AWS. Everything I’ve been talking about, from the natural-language translation of use cases to App Builder-compatible JavaScript code, is the right-hand side of the equation. What about the left-hand side: taking existing custom code and breaking it down into its constituent use cases? Any customer looking to migrate onto one of Adobe’s services, in this case Adobe Commerce, needs a plan of attack to understand how to even tackle the problem. So what we’re working on with AWS is the ability to take all of the customizations in an existing Commerce code base, break them down into their constituent use cases, and then provide a logical, recommended path for how to sequence a migration using all of these tools. We’ll play about a minute of a demo just to show you what that looks like.

Migrating an Adobe Commerce module from platform as a service to the cloud service typically requires weeks just for the initial assessment. But with Kiro, an agentic IDE by AWS, we can generate a comprehensive assessment report in just a few minutes. Here is the SSO module. Rather than manually hunting through scattered files and folders, trying to understand events, plugins, and dependencies, we let the AI do the work. We give the AI agent a clear mission: analyze the SSO module and generate a complete migration assessment.
The agent works autonomously, reviewing the code, understanding the business logic, identifying dependencies, and mapping patterns to cloud-native equivalents.
Here’s the result, a comprehensive migration assessment report displayed alongside the PHP code base.
The report captures all the critical details: the events, the plugin analysis, and the different extension points. It is mapped directly to the source code, identifies complexity levels, and recommends a phased migration approach. What would take two to four weeks is delivered in a few minutes.
This is the power of AWS agentic AI meeting Adobe Commerce: autonomous intelligence that doesn’t just assist your work, it performs it. So with that, thank you all. Feel free to go fill your bellies. We have everything on Experience League, so please follow, ask questions, and we’ll be happy to answer. Hope to see some of you at the 3:10 lab.
This session — AI-Powered Development for Adobe App Builder Extensions in Commerce — features Matt Johnson, Senior Product Manager for Commerce Cloud, showing how Adobe Commerce is building an AI-driven tooling ecosystem to help teams identify and modernize legacy customizations faster. See how automated code analysis, intelligent migration reports, and natural-language prompts generate App Builder–compatible JavaScript code in minutes to streamline modernization and reduce technical debt. Recorded live from San Jose.
Special thanks to our sponsors Algolia and Ensemble for supporting Adobe Developers Live 2025.
Next Steps
- Continue the conversation on Experience League
- Discover upcoming events