Custom shipping rates with AI tools - Implementation and testing

Learn how to implement and test AI-driven custom shipping rates. This tutorial guides you through environment setup, code cleanup, and extension deployment. You will also explore testing processes, including carrier registration, configuration updates, and storefront verification to ensure accurate shipping rate integration.

Who is this video for?

  • Technical and Solution Architects
  • Backend developers and engineers
  • Implementation engineers and technical consultants

Video content

The agent finalizes the implementation, cleans up unused code, and prepares the project for deployment.
Credentials and environment are configured to deploy and register the new shipping extension.
Storefront testing confirms the external shipping rates appear and function as expected.

Transcript

Hi, this is Russell with Adobe. This is a recorded demo from an Adobe engineer showing how the AI-powered agents, together with our starter kit, can create and execute a generated implementation plan. We're going to pick it up where he starts the implementation plan and shows the different skills that are used. By switching to a different skill, the agent creates the implementation plan. At this stage, we instruct the agent to continue with phase two. It begins using the architect skill, and at the same time it uses the MCP tool to access the RAG knowledge base and run several relevant queries. The main goal is to get a clear understanding of the requirements.

The AI agent asks multiple questions, and once it finishes, it produces an implementation document.

Part of this document includes creating the integration diagrams. Once the architectural phase is complete, the agent is ready to move on to implementation. Before starting implementation, we consider two approaches: a direct implementation, or generating a detailed implementation plan that would outline the steps and track all required tasks. Based on the complexity of the project, the agent recommends creating the plan. That seems like a good idea, so I accept the recommendation and select option A.

At this point, another skill comes into play: the developer skill.

It loads into context and begins generating the artifacts that the other skills identified and documented in the requirements. We can see the implementation plan it produced, and then it presents the details of what it’s going to implement and asks for confirmation.

The agent shows the plan again for confirmation, and once we approve, it proceeds with implementation. When it finishes, it reports that the implementation is complete and suggests the next steps. We configure an environment variable and move toward deployment. It also suggests performing a cleanup. Since we started from the checkout starter kit, there is some scaffolding and a few runtime actions that are useful examples, but there’s also code we don’t need for this project.

We ask the agent to clean up that unneeded scaffolding and proceed to the next phase. It scans the source code folder and presents a plan for what it will remove and again asks for confirmation. We approve, and the agent removes any unnecessary code.

It then presents a final report, confirms that everything required has been cleaned up, and outlines the next steps. The next logical step is to configure the project's configuration file and continue with deployment.

We enter the values that file needs and then ask the agent to check the entries we added. It suggests an extra action. We want to make sure we are deploying to the correct organization and workspace, so we ask the agent to verify where we are currently connected.
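As an example, the entries for a shipping integration typically include the external service's endpoint and credentials. The variable names below are purely illustrative, not the ones used in the demo:

```
# Hypothetical configuration entries (names illustrative)
SHIPPING_SERVICE_URL=https://example.com/api/rates
SHIPPING_SERVICE_API_KEY=<your-api-key>
```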

To do this, it uses an MCP tool that wraps the aio CLI, and it shows our active connection. The workspace is not the right one for deployment, so we ask the agent to switch to the correct workspace. It uses the MCP tool again to make the change. Now we're ready to synchronize the host credentials into our configuration file. The checkout starter kit provides a script for this.
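The workspace check in the demo happens through an MCP tool, but if you were running it yourself, the underlying Adobe I/O CLI commands would look roughly like this (the workspace name is an illustrative placeholder):

```shell
# Show the currently selected org, project, and workspace
aio console where

# Switch to the workspace you intend to deploy to
# ("Stage" is an example workspace name, not one from the demo)
aio console workspace select Stage
```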

After that is complete, we can register the shipping carrier so that our Adobe Commerce Store will recognize it. At this point, we are ready to deploy. The agent asks for permission, we approve, and it builds and deploys the application. It provides feedback on the endpoints, where the application is available, the runtime actions, and some testing guidance. There is one remaining step that the agent identifies, configuring the webhook. We ask the agent to set that up for us.
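Building and deploying an App Builder application is normally done with the Adobe I/O CLI; a minimal sketch of what the agent runs on our behalf (assuming a standard App Builder project) would be:

```shell
# Build the runtime actions and any UI assets
aio app build

# Deploy to the selected workspace; prints the action endpoints on success
aio app deploy
```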

Now that the application is deployed, we run a test. We can see that the webhook is configured, and we also manually configure and register the Admin UI SDK. After opening the application, we see the Admin UI SDK integration that was created.

There is a troubleshooting phase we can skip.

Earlier, the agent made a few mistakes when generating the parcel, so we captured logs from the developer console and asked the agent to fix them.

Eventually, the application is working properly. Once we save the configuration details, such as the service URL and API key, we can verify that everything works in checkout. When we open the storefront again and navigate to checkout, we should see the new shipping methods updating from the external system. If we scroll, we can see these are shipping methods provided by the mock API.
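To make the mock API concrete, here is a minimal sketch of an App Builder runtime action that returns fixed shipping rates in a webhook-style response. The field names (`operations`, `op`, `path`, `value`, and the rate attributes) are illustrative assumptions, not code from the demo; consult the checkout starter kit documentation for the exact payload shape:

```javascript
// Hypothetical runtime action returning mock shipping rates.
// All field names below are assumptions for illustration.
async function main(params) {
  // A real action would read the rate request from `params`
  // (destination address, cart items, etc.). Here we ignore it
  // and return two fixed mock methods.
  const mockRates = [
    { carrier_code: 'DEMO', method: 'standard', method_title: 'Demo Standard', price: 9.99 },
    { carrier_code: 'DEMO', method: 'express', method_title: 'Demo Express', price: 24.99 },
  ];

  return {
    statusCode: 200,
    body: {
      // Webhook responses are expressed as a list of operations;
      // each 'add' appends one mock rate to the result (shape assumed).
      operations: mockRates.map((rate) => ({
        op: 'add',
        path: 'result',
        value: rate,
      })),
    },
  };
}

exports.main = main;
```

In the demo, rates like these are what appear as the new shipping methods at checkout once the webhook is wired up.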

And that completes the code generation. That's it for this session on creating custom shipping rates with Adobe's AI tools.

There are a few more videos on this topic that you can find on Experience League, and I hope you come back to Experience League to learn more about Adobe Commerce, as well as all of the other Adobe products.
