Okay, then let's get started. Hello everybody, and welcome to this talk about AEM Content and Commerce and how you can use Adobe Developer App Builder to integrate it with your commerce solution. My name is Markus Haack; I work as a software engineer on the AEM Commerce team out of Germany, and you can reach me on my Twitter account. The menu for today, and what you can expect from this session: I will give you a very quick introduction to AEM Commerce and the Commerce Integration Framework, and then show how you can use App Builder to build a custom integration for the e-commerce solution of your choice. This can be an extension to Adobe's own Adobe Commerce, it can be a home-grown commerce service, or it can be an off-the-shelf product. So, a very quick overview of the AEM Commerce Integration Framework. We have AEM on the left side as the leading experience management solution, and your commerce solution on the right side of the slide. The blue piece in the middle which brings everything together is the Commerce Integration Framework, or CIF for short. CIF basically consists of three building blocks. The first is a set of so-called CIF Core Components — AEM components, server-side and client-side — which allow you to build out your experience and to design and develop your storefront application. The second building block is the CIF authoring tools, a set of management tools for your business users. We provide the product console and different kinds of pickers which you can use in the components to make the life of your business people easier. These allow them to put AEM components on a page and directly add product data to them, or to assign content — experience fragments and content fragments — to products. The third module of this integration pattern is the integration layer, and this is what this talk is about in detail. So how does the high-level architecture look?
Very important for the architecture, and for the entire integration actually, is GraphQL. GraphQL is the central point the integration is built around, and we standardize on the Adobe Commerce GraphQL schema, which means all the queries executed by either the components or the authoring tools will follow the Adobe Commerce GraphQL schema and will use objects out of that schema. For the most simple setup, the integration between AEM and Adobe Commerce via GraphQL is direct communication and direct integration; App Builder and the Adobe I/O Runtime layer are not used here. But they come into the picture quite quickly when we talk about extensibility patterns and use cases where you want to extend either the AEM side of things or the Adobe Commerce side of things. For both of these options, App Builder and I/O Runtime are the perfect solution. Equally important, I/O Runtime is also used for third-party integrations. So if you integrate your commerce solution — either off the shelf, for example partners building integrations for SAP Commerce or Salesforce Commerce, or your own home-grown commerce services — then I/O Runtime is the choice for your project, and on this platform we then build a logical integration doing all the data mapping between your endpoints, and so on. And this is what we're now looking into in detail. The first diagram shows the architecture for AEM connected directly to Adobe Commerce. On the AEM side it's a classic AEM Sites deployment, basically: you have AEM publish and author. This diagram is taken from AEM as a Cloud Service, so typically a CDN and Dispatcher in front — nothing special about it. With CIF you deploy the Core Components on author and publish, and you also have client-side components. They live directly on the website, within the browser, so they are written in JavaScript — in our case we're using React here.
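To make the "standardize on the schema" point concrete, here is a minimal sketch of the kind of query the components issue. The fields shown (`products`, `items`, `sku`, `name`, `total_count`) are a small real subset of the Adobe Commerce GraphQL schema; the search term is just an example.

```javascript
// A products search query following the Adobe Commerce GraphQL schema.
// The same query works unchanged whether it is answered by Adobe Commerce
// directly or by an I/O Runtime integration layer, because both sides
// agree on the schema.
const productsQuery = `
  query {
    products(search: "earrings", pageSize: 5) {
      items {
        sku
        name
      }
      total_count
    }
  }
`;

console.log(productsQuery.trim());
```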
Both of these components, server-side or client-side, need commerce data, and they get it via a GraphQL call to the Adobe Commerce solution. In this diagram it's Adobe Commerce hosted by Adobe, so we also provide a Fastly CDN which is transparent for all calls. This is a very simple setup. On all of these levels — on the commerce side and also on the AEM side — we have caching to optimize performance and reduce the number of calls: you have Fastly caching on the commerce side, you have the AEM Dispatcher and the CDN on the AEM side, and the CIF Core Components and the underlying client which actually handles the communication have built-in in-memory caching as well. Additionally, on the AEM author instance — not shown here in the diagram — we have the authoring tools, which also require commerce data; they basically follow the same data flow, with one exception: the authoring tools can also get preview data from the Adobe Commerce solution. For example, if you build a new experience for products which are not published yet, you can already see them on author and create your launch, while you cannot see this kind of data on publish and on the live site. If we now look into the integration using Adobe App Builder, this is how it looks, and the important piece which is new in the picture is I/O Runtime. The AEM side of things looks basically the same — the same set of components, the same tools, and so on — but the new piece is the I/O Runtime platform, which acts as the GraphQL server and GraphQL endpoint for your e-commerce solution. It's basically a wrapping layer on which you can deploy and build your integration. It's used for integrating with third-party commerce systems, as I said, home-grown or off the shelf, and it can also be used together with Adobe Commerce if you build extensions.
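The in-memory caching mentioned for the CIF client can be pictured with a tiny sketch. The TTL value and key scheme below are illustrative assumptions, not the actual CIF implementation.

```javascript
// Minimal sketch of a time-bounded in-memory cache of the kind used to
// avoid repeating identical GraphQL calls. `now` is injectable so the
// expiry behaviour is easy to demonstrate.
function createCache(ttlMs) {
  const entries = new Map();
  return {
    get(key, now = Date.now()) {
      const entry = entries.get(key);
      // Treat missing or stale entries as cache misses.
      if (!entry || now - entry.at > ttlMs) return undefined;
      return entry.value;
    },
    set(key, value, now = Date.now()) {
      entries.set(key, { value, at: now });
    },
  };
}

const cache = createCache(60000); // 60s TTL, an assumed value
cache.set('query:earrings', { total_count: 3 }, 0);
console.log(cache.get('query:earrings', 1000));   // fresh entry
console.log(cache.get('query:earrings', 120000)); // stale entry
```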
For example, if your inventory data comes out of a logistics system rather than the commerce solution, you can easily build an extension which reroutes the requests for inventory data to the logistics system; on the GraphQL layer this is all merged back into one result and sent to the client. The consumers of the GraphQL API — in this case the AEM components — will not even see that this data comes from different systems; for them it's totally transparent. So if we now do a little deep dive: how does the actual request flow look? First, the request comes in — the end user comes to your website and opens the page in the browser. This request goes to the CDN and AEM Dispatcher and, assuming it's not cached there, it hits AEM publish, which renders the page containing the AEM CIF Core Components. They need commerce data via GraphQL from the I/O layer. The data flow for the client-side components is basically very similar; such a component is rendered directly on the website, also needs commerce data, and goes — via some tunneling — directly to the I/O layer and the GraphQL endpoint as well. In both cases, the GraphQL endpoint gets the request and invokes a set of Runtime actions which implement GraphQL resolvers. They get the data from the commerce backend system — this happens in the integration area — transform the data, and send it back to the client. Additionally, as I said, if you have any extensions where you extend the GraphQL schema or replace existing portions of the schema with your own implementation — whether with Adobe Commerce or with your custom solution — such an extension can be built as well, and it follows a similar pattern to the integration: it's also a set of actions deployed on the Runtime platform. With that, let's now look at what's inside this box, and for this I'll give you a quick overview of the CIF GraphQL reference.
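The inventory-rerouting example can be sketched as follows. The data shapes, field names, and the `stock_status` values are illustrative assumptions standing in for the real backend APIs; the point is only that the merge happens before the client sees the result.

```javascript
// Hypothetical merge step: commerce fields and rerouted inventory fields
// are combined into one object, so the GraphQL consumer never sees that
// the data came from two different systems.
const commerceProducts = {
  'VA-01': { sku: 'VA-01', name: 'Silver Earrings', price: 49.0 },
};
const logisticsStock = {
  'VA-01': { quantity: 12, inStock: true },
};

function resolveProduct(sku) {
  const product = commerceProducts[sku];
  if (!product) return null;
  // Inventory comes from the logistics system, not the commerce backend.
  const stock = logisticsStock[sku] || { quantity: 0, inStock: false };
  return {
    sku: product.sku,
    name: product.name,
    price: product.price,
    stock_status: stock.inStock ? 'IN_STOCK' : 'OUT_OF_STOCK',
  };
}

console.log(resolveProduct('VA-01'));
```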
What is the CIF GraphQL reference? It's an App Builder project which we provide as a public GitHub repo. It's meant to be a starter and reference project showing you how to build an integration with your commerce solution using Adobe I/O Runtime and App Builder. It's a set of examples and best-practice code showing how — based on feedback, also from our partners — we think the integration can be built in an ideal way. It's implemented as a set of I/O functions written in JavaScript. You also get a dump of the Adobe Commerce schema with it, so you basically have everything you need to start building your integration. Important to know: it's not a full implementation of the entire schema, so it will not work with all our CIF Core Components out of the box, because they might need portions of the schema which are not covered by the reference implementation. The rationale behind this is that the same will very likely be the case for your project: if you build a project, you are very likely not going to implement the entire Adobe Commerce schema, because it covers use cases you are unlikely to need. For example, in the easiest case you just show off your products on the website — basically showing what's in your catalog. You need portions of the schema — the categories, the products, and queries for product search, for example — but you don't need to implement cart, my account, or other areas of the schema, because for this kind of use case they wouldn't even be needed. So it's very easy to implement and get quick results from this kind of pattern without first writing a lot of code to implement the full schema. Talking about the schema, we are talking about GraphQL, and we talk about resolvers. So how do this integration and the reference implementation actually look? The key piece here is the so-called dispatcher action — not to be confused with the AEM Dispatcher, for the AEM colleagues here.
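The "implement only what you need" idea can be sketched like this. The set of implemented root fields and the error shape are assumptions for illustration; only the field names come from the Adobe Commerce schema.

```javascript
// Sketch: the endpoint supports catalog queries but deliberately answers
// cart or account queries with a "not implemented" error, instead of
// covering the full Adobe Commerce schema up front.
const implementedRoots = new Set(['products', 'categoryList']);

function handleRootField(field) {
  if (!implementedRoots.has(field)) {
    return { errors: [{ message: `Field "${field}" is not implemented` }] };
  }
  // A real resolver would produce the catalog data here.
  return { data: {} };
}

console.log(handleRootField('products'));
console.log(handleRootField('cart'));
```

Starting with a partial surface like this and growing it field by field is what keeps the first working integration small.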
It just shares the name but has nothing to do with it. This action acts as the central endpoint for all incoming GraphQL requests. The dispatcher needs to implement so-called resolvers, which do the actual heavy lifting for GraphQL based on the schema: for each portion of the schema you want to implement, you need a resolver, and you have three options to implement these resolvers. The first option is to build the resolver directly into the dispatcher action, just as a separate JavaScript class and a set of functions. This is the easiest one, the method you will very likely use most, and it's also what we use most in our reference integration. The second option is to delegate the resolving to a separate I/O action. The difference here is that this action covers a portion, a subset, of the schema and is only responsible for that part; it runs as a separate Node.js process because it's invoked as an external function. The advantage of this pattern is that such a separate I/O action can also overlay local resolvers which are built into the dispatcher action. For example, if you build an integration for an off-the-shelf product and provide a lot of standard resolvers for the basic use cases as part of the packaged actions, your customers or users can still overlay portions of it without changing the core code, and build a separate I/O action for resolving certain parts of the schema. This can be extra parts of the schema where you extend it, or you can replace existing resolvers which are directly implemented, if you want to customize the resolver behavior. And the last option, which is actually not shown here in the diagram, is implementing a remote resolver. This type of resolver most likely makes sense if your commerce solution or the services in the backend already provide GraphQL APIs.
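The overlay behaviour of the second option can be pictured with a small sketch. The registry shape and resolver names are assumptions; the real reference project registers resolvers differently, but the precedence idea is the same.

```javascript
// Sketch of resolver overlaying: resolvers delegated to separate I/O
// actions can replace local resolvers built into the dispatcher,
// without touching the dispatcher's core code.
const localResolvers = {
  products: () => 'answered by the built-in products resolver',
  categoryList: () => 'answered by the built-in category resolver',
};

const actionResolvers = {
  // A customer project overlays only the part it wants to customize.
  products: () => 'answered by the customer\'s separate I/O action',
};

// Later registrations win for the same schema portion.
const effective = { ...localResolvers, ...actionResolvers };

console.log(effective.products());
console.log(effective.categoryList());
```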
Then the remote resolver is the easiest solution, because you can basically delegate all the resolving to your endpoint, and you just need to take care of mapping the data back to the Adobe Commerce schema. And that's basically it. The benefit of these resolver patterns is that they're very flexible: you can mix and match whatever is needed for your project when you build out the integration. If you implement these resolvers, you need to be aware of two very important data flows, which are actually important for each and every GraphQL implementation. The first data flow is introspection. Introspection is a special type of query which is done by most GraphQL clients anyway. If you use a local GraphQL tool and open the documentation of your GraphQL endpoint, it does an introspection query to basically ask the server: what is the full schema you support, and what can I do with it? It tells the client what the API actually looks like. This introspection query works as follows. An incoming request goes, as I said, to the dispatcher, and the dispatcher iterates through all the local and forwarding resolvers, asking each resolver what part of the schema it supports. The same introspection query is forwarded to all the registered remote resolvers, and they do a similar thing: they answer with a JSON object which contains the portions of the schema they support. This goes back to the dispatcher, which assembles the full schema supported by the implementation — in this case a category query, a products query, a cart query, and the mutation to create an empty cart. This is then aggregated and sent back to the client.
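The aggregation step of the introspection flow can be sketched as follows. The real CIF code works with GraphQL type objects; plain strings and a `supportedFields` method are assumptions that keep the sketch self-contained.

```javascript
// Sketch of introspection aggregation: each resolver reports the portion
// of the schema it supports, and the dispatcher assembles the union
// before answering (and caching) the introspection response.
const localResolvers = [
  { supportedFields: () => ['categoryList', 'products'] },
];
const remoteResolvers = [
  // A remote resolver answers the forwarded introspection query with
  // JSON describing its schema portion; simulated here.
  { supportedFields: () => ['cart', 'createEmptyCart'] },
];

function assembleSchema() {
  const fields = [];
  for (const resolver of [...localResolvers, ...remoteResolvers]) {
    fields.push(...resolver.supportedFields());
  }
  return fields.sort();
}

console.log(assembleSchema());
```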
It's also cached internally within the dispatcher action, so when a second schema query comes in, the dispatcher responds from the cache — typically the schema doesn't change often; you only change it, for example, when you do a redeployment, so it can easily be cached. Now the client has an understanding of what your schema looks like, so the next action will be a query to actually get data from your endpoint. The query execution flow looks like this. The incoming query again hits the dispatcher action. The dispatcher analyzes the query and finds out which resolvers must be called for it. These can be local resolvers or remote resolvers; if it's a remote resolver, it invokes the I/O function and calls the external action. Then, regardless of whether it's a remote resolver or a local one, they are responsible for the actual heavy lifting: they call the backend service — your commerce services — and typically get some JSON data back from that service or from a database (it might also be XML data, depending on your APIs). They are also responsible for the object mapping: whatever they get back from the proprietary commerce API, they have to map into the GraphQL objects of our schema — for example, mapping the product object from your backend system into a product object aligned with our schema. This all goes back to the dispatcher, which assembles the final query result and sends the JSON response back to the client. This concludes the two important data flows you need to know. But this is a lot of theory, so let's get our hands dirty and actually look at the code and at a demo. What do you actually need to get started? You basically need four things to start playing with the project and to start your own implementation.
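The object-mapping step at the end of the query flow can be sketched like this. The backend field names (`articleNumber`, `title`, `priceCents`) are invented; the target shape (`sku`, `name`, `price_range`) follows the Adobe Commerce schema, and the currency is an assumed example.

```javascript
// Hypothetical mapping step: a resolver converts the proprietary backend
// payload into an object aligned with the Adobe Commerce GraphQL schema
// before the dispatcher assembles the final result.
function mapToSchemaProduct(backendItem) {
  return {
    __typename: 'SimpleProduct',
    sku: backendItem.articleNumber,
    name: backendItem.title,
    price_range: {
      minimum_price: {
        final_price: {
          value: backendItem.priceCents / 100, // backend stores cents
          currency: 'USD',
        },
      },
    },
  };
}

const backendResponse = { articleNumber: 'VA-24', title: 'Gold Earrings', priceCents: 9900 };
console.log(mapToSchemaProduct(backendResponse));
```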
First you need an Adobe I/O project: you need an account, and you register a project in the Developer Console so you can actually use App Builder. Then you need the aio CLI tool, which lets you communicate from your command line, from your local environment, with I/O Runtime and deploy actual code. Then you go to our GitHub project, clone it, and run "aio app run" if you want to do local development, or "aio app deploy" if you want to deploy directly to I/O Runtime. And with this you're ready to play with it — and while we're talking about playing, let me switch to the demo so we can see how this looks in real life. I will show the entire flow, from AEM to the components to the GraphQL endpoint, and finally to the code. So let's start with AEM. This is my AEM connected to my I/O endpoint services, and I have my product console here. I can go to the product console and navigate my product catalog, which is loading now. I can see my categories and a bunch of products; I can also navigate into categories. I'm picking these earrings now, which I can also open in the console to see all the product details. All these attributes are coming live from my e-commerce solution — they're not stored in AEM, not cached or anything; these are just live calls to the commerce backend. I can now use the same product on my AEM website. In this AEM environment I deployed our Venia demo store, which is a simple B2C e-commerce store, fully functional, with everything you need to build and start your AEM project. My landing page is cleaned up and empty, so let me add a new component to the page. I'm adding a product teaser here, and for the product teaser I want to show a product, so let me find one. I have multiple options here; let me go to my product search, which is integrated with AEM, and search for my earrings.
Okay, that's the one I would like to use in my teaser, so I can just drag and drop it onto the page, and it shows up here. An alternative way would be the product picker, which is available via the dialog, so I can go there, open the picker, and also navigate my catalog — the result is really the same in the end. Now for the actually interesting part: what happens under the hood, what is going on between AEM and my I/O endpoint? For this I already deployed my I/O project — the same reference project I'm talking about — to the I/O Runtime workspace I work on. With this we also include a GraphiQL client, which makes it very easy to get started and play with the GraphQL endpoint. It is connected to my I/O actions and has the same product data; in my case I have no real e-commerce system, the actions are just reading data out of a Cosmos DB database. So I can do product queries here as well. I can do the same query I did in the UI and search for earrings, and I get back some data here as well. If I'm interested in one dedicated product — a query similar to what the product teaser does — I can, for example, use this one, which filters the products by a given SKU and returns a bit more metadata and description compared to the other query. So I can execute all these queries and get product data from my commerce solution. How does this actually work and what does it look like? You have seen the diagrams; now let's look into the I/O project. I checked out the project here — it's the CIF GraphQL reference, as I told you — and the layout looks like a typical App Builder project, with all the boilerplate a typical project comes with. The most important file is the config file, which I already have open, because this is the manifest file that describes my application. The two important parts here are the actions and the web directives; they point to the folders
which actually hold my code — I'll show you this in a second — and then there's the manifest section which defines all my Runtime actions. Here you will find the dispatcher action, the one I showed you in the diagram before; you will also find a dedicated cart resolver, same as in the diagram; you'll find the CIF schema, which is used by the introspection queries; and you'll find a few helpers, like an import function — as I said, this is connected to a database, so we have a simple importer where you can just provide a JSON file and import it to fill your database. The dispatcher action, as I said, implements all the resolvers, either locally or delegated to another action. Since we have the cart action here, which is already implemented, we need to tell the dispatcher about the remote schema: the dispatcher gets an input parameter with a list of referenced actions, in this case pointing to the cart action. They can be in the same project, like in my case, or in different packages in different projects — as long as they are deployed on your Runtime account in the same namespace, it will work — and with this you tell the dispatcher which remote actions are actually available. Talking about the dispatcher, let's look at the code. The dispatcher is actually not a complicated function. It has a little boilerplate initializing some logging and so on, and then it starts to register all the resolvers: it gets the metadata from the cache, or it builds up an object which has all the resolvers registered. It queries the configuration and asks each of the remote resolvers to provide its schema, and it basically does the same — if you scroll a little further down — for all the local resolvers. In this case the cart resolver is covered by the first loop, because it's offloaded to a separate action, and the local resolvers are registered below. Our endpoint supports, as I said, only portions of the schema — products, category, category list, and custom attribute metadata; that's basically all our reference implementation supports of the schema. So let's assume a products query comes in, like the one I did in my GraphiQL client. The products query will be resolved by the products resolver, and each of the resolvers returns an object — in this case a products object which conforms to the schema, very important to know. And here is something we also recommend for a custom implementation: to get the actual data from your e-commerce solution, we recommend using a loader pattern. The loader pattern is actually quite simple, because you can offload the actual data loading — it doesn't matter if it's from an endpoint, from a REST service, from a database, or from the file system — you can load the data asynchronously, and you can also easily parallelize the loading across multiple loaders. So the products resolver gets the products loader to retrieve the actual products. And what does the loader do? Also not very complicated — let's have a look. The loader registers, in its constructor, the loading function which does the hard work. The function gets a key; the key is the input used by this loader, and typically it's the search parameters. In our case, if we query for products, we support different search types — a full-text search, or a filter — and the key contains this information; it's actually an object providing this kind of data. We take the key and hand it over to an internal search-products function, and this is where all the work happens. When a query comes into the product function, we analyze the parameters; let's say in this case it's a search, so we do either a search or we filter a product by its product name, which is quite similar to a search. In this case we get the search parameter, the query, and we look up the products for the
search term — if I type my earrings query here, it goes down the first branch; we look up the products which match the earrings query and then return all the products we find in our database as a promise, which happens here. And that's basically it. If you integrate your custom e-commerce solution — let's say you do some REST calls to your commerce endpoints — you actually only need to change this portion of the code to get data from your backend instead of from my database, and then you're ready to build the integration. It's really not super complicated. With this I complete my demo and switch back to my slides. So, we have the App Builder project, and now you're actually ready to build your integration. In the App Builder project there's a little more which I didn't show you in the IDE: you also get tests pre-generated — we provide tests for the actions — and you get a set of GitHub workflows which allow you to deploy your code directly to the I/O project and to the server, to make your life easier and getting started easier. It's just a proposal; if you have other tools like Jenkins or CircleCI, you can use those as well. With this I'm coming to the end of my talk. What did you actually learn, and what is available for you to build the integration with your custom e-commerce solution? We have the App Builder project ready — it's all on GitHub. You get the CLI; it's a standard App Builder project; it includes the aio SDK; it includes the State SDK, which we use to connect to a database — if you don't need it, you can do your own communication with your commerce system — and it includes the GraphQL schema files which you can use to get started. You have everything ready at your fingertips to start the project. With this I invite you to check out the project on GitHub; we have a forum on Adobe Experience League to connect with us if you need help
or have questions while building your commerce integration. A few extra resources: as I said, everything is on GitHub; the documentation is available on the AEM as a Cloud Service commerce documentation pages; and the forum is available on Experience League. Thank you very much for joining this talk. Feel free to check out Experience League, check out our documentation, and connect with us on the forum. Thank you very much — and I guess we have one or two minutes left for some questions. Thank you very much.