AEM Rockstar Headless
Our presenters will ‘compete’ to be the Adobe Experience Manager Rock Star 2022 by presenting a solution to a pre-provided problem statement that each must solve. The audience is given the opportunity to ask questions and vote for who will be the next Rock Star!
Is it Josh? Is it Matthias? We don’t know yet. Drum roll. Matthias, congratulations. You look like a winner.
Welcome. Glad to be with you for the AEM Rockstar. Can you believe it? Already the eighth one. I’ll be with you this time just virtually. I’m looking forward to the next in-person meeting that we can have. Here you can see the gondola on top of the Basel office; we have a patio up here, and a couple of years ago we put a gondola up here, which is sort of a cool place to have a meeting or to welcome you to this session. Now, most of you likely know me. My name is Jean-Michel Pittet. I run AEM engineering and have been doing so for 20 years. For those that don’t, welcome to the team. Welcome to the partnership. I look forward to this session. Now, you just heard Michael and Cedric talk about the innovation and the massive customer growth and usage growth that we have seen in our headless functionality, which I think is really cool. And if you haven’t tried it, I encourage you: please try it. It’s the best of both worlds, the head-full and the headless. And we keep on innovating. We have some of the largest customers there. So that’s really cool. Now, over to the AEM Rockstar. This year, we have two types of candidates. On the one hand, from the customer nominations, that’s Palo Alto Networks, near and dear to my heart, given that I lived in Mountain View for a couple of years. And from the partners, we have Bounteous and EPAM. Woo hoo, applause. Looking forward to how it’s going to play out. And for that, I hand it over to our one and only, the spectacular Darren. Darren, it’s all yours.

So thank you, Jean-Michel, for doing that nice intro for us. Usually, what you see in our AEM Rockstar event is that we solicit ideas from the community. It could be any number of things. It could be a Screens thing, it could be headless, it could be head-full, it could be whatever. And it’s kind of hard to judge an apple against an orange. So for this headless edition with Developers Live, we came up with the headless challenge.
So everybody you’re going to see today is working off this paragraph you see on the screen here. So you, the audience, are basically the customer in this case. Go through and watch each of these presentations (there are three of them) and then use your own criteria. So thinking about that, what is the criteria? The reality is, since this is a virtual session, there really is no fixed criteria. But if you want to think about it, you can consider, for each of these presentations, what is the feasibility of their design against that big paragraph of requirements? And also presentation skills. Today’s Developers Live may have had some technical challenges early on, so don’t count that against us; it’s just people doing their best. Give bonus points for doing cool things. And my favorite, and it’s still true: if you have a family relation to any of the participants today, I’m sure they’ll love your vote. And if you don’t vote for them, that’s probably something you should address. So today, our participants come in three categories, as Jean-Michel said. Well, actually two categories, three different presenters, teams of presenters, today. We have Bounteous, EPAM, and Palo Alto Networks, and this is the order in which they’re going today. So in a moment, Steven will get ready and start off the event. The order of events today is that he’ll go through his presentation and hand it off to the next person, and so on and so forth. And at the end, you should stick around, because you get the chance to vote for who is the 2022 AEM Rockstar. Kaushal will take over at that point, we’ll put the URL up there, and we’ll give you a couple of tidbits about the next AEM Rockstar that’s coming a short period of time after that. So with that, I’d like to kick things off. And it’s go time, Steven. So whenever you are ready, I will stop sharing. I am ready. Thank you for the introduction, Darren.
My name is Steven Carter. I’m a Senior Technical Director at Bounteous. And I’d like to talk about using AEM for headless, and using it not just for content fragments but additionally for AEM Sites. That’s what I’m going to be talking about today. So first, I’m going to quickly review what we’re trying to accomplish, although Darren just shared that, so it’s probably pretty fresh in your mind; I’ll go quickly through that. Then, what are some things that add friction to accomplishing that today, and how can we reduce or eliminate that friction? And then I have some demos to go through. So first, what we’re trying to accomplish: the theoretical customer wants to expand their digital footprint. They want to emphasize reuse and developer flexibility. They want to retain the AEM authoring experience because their authors are already accustomed to it. And they want to expand their content to B2C commerce sites and mobile apps, plus point-of-sale devices. Additionally, we want to use AEM as the source of truth for this content, we want to include assets in the design, and we want to utilize GraphQL for the ultimate solution. Now, one thing I want to note is that this is not a new ask. This is the sort of ask that we’ve heard from customers especially over the last few years. So what’s adding friction to doing this today? Well, first, I think the paradigm that we’ve been using for AEM has been very server focused. We’ve been building a lot of our sites with HTL, and that doesn’t necessarily immediately translate into presenting your site as a SPA. Now, AEM today provides a lot of resources, libraries, documentation, and example projects, which allow a developer to create a SPA and utilize GraphQL for content fragments. And there are a lot of great things being done there. But we’re still focused on HTL; that’s the norm that I’ve seen.
So migrating a site that’s been written with HTL to a SPA requires rewriting the components, now that you have a SPA, so that you have a new rendering method. And then you also have to handle things like routing, caching, and script and CSS delivery. Not hard things to do, but if you have a large code base that’s been built over a few years in HTL, then there’s just that much more work that you need to do. So if you instead decide to go with a SPA at the beginning of your AEM journey, then you can deal with those paradigms there; it isn’t as much of a rework later on if you do want to change how you’re handling this. So when you’re using AEM for a SPA, there are, like I mentioned, a few hurdles you need to jump through. You want to make sure that your components have their content exported with the Sling Model Exporter; I’ll show an example of that if you haven’t seen it before. Your legacy components might not necessarily have been developed with the Sling Model Exporter sending their configuration across, so you want to make sure that all of the detail you need to render each component is being surfaced that way. And you also need to make sure that routing support is there; I’ll talk a little bit more about that as well. So AEM already has GraphQL delivery for content fragments, but as far as I’m aware, and I’ve done some digging, I don’t see that the site itself can be exported with GraphQL. So how can we reduce or eliminate some of these friction points that I’ve mentioned? For the most part, by using tools that have already been developed. First, a little background: I’ve been using AEM for seven years and React for the last five or six of those as well. It’s definitely something that I enjoy. I’ve also used Angular, and that’s a fine system, but I’m very happy with React. And I think there’s really nothing holding us back from rendering our components with React versus HTL.
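For reference, the JSON that a component exposes through the Sling Model Exporter at `<path>.model.json` looks something like this; the resource type and field names here are illustrative, not from the demo project:

```json
{
  ":type": "myproject/components/text",
  "text": "<p>Hello from AEM</p>",
  "richText": true
}
```

A page or container model additionally carries a `":items"` map of child components and an `":itemsOrder"` array giving their authored order, which is what the SPA libraries walk to render the tree.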
And then if you’re concerned about losing the server-side rendering you had with HTL, you can still enable server-side rendering with React, so that nothing really changes for the customer. That would improve your experience on first load and also allow you to fully support SEO. And you could turn server-side rendering on or off for different portions of your website if needed. And you can serve all of your content and configuration to your sites via GraphQL. So here’s the one big important point that I want to make: if you want AEM to be your single source of truth, then it should be your single source of truth, both in terms of the content that’s being delivered and in the way that you access that content. Obviously, everything is backed by the JCR. So my idea is, instead of having the model.json exporter for your SPA and using Sling Models directly with the extra syntactic sugar there, if you’re going to have multiple places consuming the content, they should all be consuming it the same way. That way, at every stage, you know that your content is correct. If you test it once, you know that it’s going to be the same content in the other places. So that is the concept I’m thinking of: single source of truth twice, both in content and in content delivery. Some other quick suggestions, which I’ll briefly touch on here. Right now, the AEM Model Manager libraries are pretty browser dependent, and it might be useful if those could be tweaked so that you could provide a module to them to say, I’m in a browser, handle the browser context, here’s the API for handling a browser; but here I’m in a mobile context using this library, maybe there’s a handler for that. And then also, you can develop custom React hooks, or the Angular equivalent, to request data for components given a single path. Here’s a path to a component; give me the data.
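That “give me the data for a path” idea could be wrapped in a small helper like this; `toModelUrl` and `fetchComponentModel` are illustrative names for this sketch, not part of the AEM SPA SDK:

```typescript
// Illustrative helpers, not an official AEM API.
// Map an AEM content path to its Sling Model Exporter endpoint.
function toModelUrl(contentPath: string): string {
  // Strip a trailing ".html" if present, then add the model.json selector.
  return contentPath.replace(/\.html$/, "") + ".model.json";
}

// Fetch the exported model for a single component or page path.
async function fetchComponentModel(contentPath: string): Promise<unknown> {
  const res = await fetch(toModelUrl(contentPath));
  if (!res.ok) {
    throw new Error(`Model request failed for ${contentPath}: ${res.status}`);
  }
  return res.json();
}
```

A custom React hook would then wrap `fetchComponentModel` in `useEffect`/`useState` so any component can subscribe to the data at a given path.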
So now you can start putting those things into Adobe Target. You can have personalization and very targeted pieces that can come right from your Sites content instead of needing to be a content fragment, so you get more of that author flow. Let me move on real quick. I’m going to go through three main demos. The first one: as I’m sure most people are aware, AEM already has single-page app support, and that is accomplished with the .model.json exporter. There’s some example code that you can find provided by Adobe, and their libraries are very good and handle a lot of this directly out of the box. It gives you an editor experience; I’ve created a simple thing here, and as you can see, this looks identical to how things operate when you’re using HTL, the core AEM experience. And that renders just fine, and you can do navigation and go to other pages. Everything is great. So, to talk about this diagram real quick: you’re getting your data from the JCR through .model.json for the components, assets are being served from AEM, and your SPA code is hosted by AEM. The next item introduces this optional idea of GraphQL. And I’ll get to something later on; I don’t think it needs to be a proxy, you could do this within AEM. But just for the purposes of this demo, I’ve created a small GraphQL proxy that basically consumes that .model.json and then exposes all the detail with GraphQL itself. So the data for your site and the assets are coming from AEM, and then you can externally host the SPA code, or even put it inside of AEM. But you can really separate that out; it doesn’t need to live within AEM. So you have this loosely coupled line here where your SPA code doesn’t need to be hosted there. It can have a release cycle that is independent of your AEM release cycle. And then that’s going to the browser. So that is actually right here. This is just a regular React site.
And I’m using GraphQL to grab all the data from the server. These should look pretty similar; if I flip between them, maybe a little bit of styling difference, but everything’s still supported here, as well as the responsive design. There we go, it’s supported in both of these; that’s just using the layout mode from within AEM. Here’s a quick view of the GraphQL. You can see here I’m requesting a component at this given path, and then I have some fragments here, because I don’t know exactly what type might be coming back from this path, so I have several types laid out here. And these are all strongly typed. So for the AEM components, I’ve got a few that are supported here; for text, I can see what fields are there. This gives you a strongly typed system to use on your development side. You know exactly what type each thing is, you know what fields are available, and you don’t have to request all of them. If you are just showing maybe a teaser for another component and you only need the text value, that’s all you need to grab; you don’t need to get all of it back over the wire. So I’ll just run this to show that it’s working. Running that, I’m getting the allowed components at that path and all the data that’s inside of there, and this is all with GraphQL. If I took off, let’s say, this components list and reran that, then you’ll see that I’m only getting the little bit of data here that I needed. I can even take even more of this data off. Oops. Or not. Live demos. All right. And then also, you can get the full page model back. Just FYI, I’m going to provide a link on the last slide, but I have also put out a very rough alpha example of creating this GraphQL proxy that calls the model.json and gets the data. You can see it’s pretty straightforward, and it’s easy to add additional component types to it if you want to be able to export your site with GraphQL. Obviously, you’d want to do a little bit more work on it.
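As a rough sketch of the kind of normalization such a proxy has to do: the exported page model lists children in a `":items"` map plus a `":itemsOrder"` array, while a GraphQL schema wants an ordered, typed list. A simplified version of that step (types and field handling reduced to the essentials) might look like:

```typescript
// Simplified normalization from an AEM .model.json container to an
// ordered component list. The interfaces are pared down for illustration.
interface AemModel {
  ":itemsOrder"?: string[];
  ":items"?: Record<string, Record<string, unknown>>;
}

interface ComponentNode {
  name: string; // node name in the JCR
  type: string; // Sling resource type, e.g. "myproject/components/text"
  properties: Record<string, unknown>;
}

function toComponentList(model: AemModel): ComponentNode[] {
  const items = model[":items"] ?? {};
  // ":itemsOrder" preserves the authored ordering; fall back to key order.
  const order = model[":itemsOrder"] ?? Object.keys(items);
  return order
    .filter((name) => name in items)
    .map((name) => {
      // Split the resource type out from the regular content properties.
      const { ":type": type, ...properties } = items[name];
      return { name, type: String(type ?? "unknown"), properties };
    });
}
```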
Like I said, it’s very alpha, written very quickly. And then the final thing that I wanted to go over was that once you’ve done that, you can actually grab the data from any system: point of sale, mobile app. You’re just getting data, and you can choose how you want to render it. So that’s what I’ve got here: a mobile app that’s running those same pages and has navigation along the bottom here. So you can also see that it doesn’t need to look the same: I’m taking that same nav component, but I’m rendering it along the bottom here. So you’re able to make these changes as needed for your data. And one last thing to demo here: let’s say I change something here.
Just add a quick little thing there. Reload my app. And I can see that these changes are on each of these sites right away.
All right, so that was a lot of stuff to demo there. What I’m thinking is an ideal future state, or what would be possible, is instead of the .model.json, let’s get that full type safety within AEM. I think that starting to use GraphQL directly within AEM for the site content as well will be important to being able to quickly scale out to all these different channels, as well as optionally turning on server-side rendering so that nothing really changes for your customers or your caching. You can still be caching HTML regularly, you don’t have to worry about the search engine optimization concerns, and that should be a great experience. So just to recap, my main goals are: we want to have one single source of truth in AEM for content delivery, both in terms of how your main site uses it and how your ancillary sites will consume it. If your main site is able to use it, then you know that all of your other sites will be able to use that same data. And secondly, I think that having GraphQL exporters for AEM Sites is going to be really important and impactful for the development flow, helping to reduce bugs with content types, and easing development by knowing what types are available, as well as enabling you to build your sites even external to AEM, which can be a great improvement for front-end styling workflows: without having to run all of AEM, you only need people who know React and enjoy working with React. And that concludes my presentation. Like I mentioned, here’s a link to that code. And I think we’re going to be taking questions maybe at the end in the Q&A, so if there are any questions for my presentation, I’ll get to them after this. This is not only applicable to a full SPA. You could do a portion of the site, or you can even take the content from an existing site that’s in HTL; as long as you have the .model.json exporters, you can render portions or pieces of that elsewhere as needed. All right. Thank you.
I’m going to pass it off to Samantha and Sandeep from EPAM now.

All right. Yeah, thank you. Wonderful presentation. Thanks, Steven. All right. So I’ll quickly go over the introduction. This is Sandeep Maheshwari speaking. I’m a developer and solution architect at EPAM Systems, a certified AEM and cloud architect with more than 17 years of IT experience, 13-plus in AEM and content management platforms, presently working at EPAM Systems based out of the United Kingdom. And in this presentation, I also have Samantha Pacheira, who is also a developer and solution architect working at EPAM Systems, with more than 14 years of experience in IT and also in AEM. OK, in terms of agenda, we want to cover the high-level solution overview. We’ll talk about the solution architecture, full stack and the backend. We’ll be talking about the different IoT use cases in between, and at the end of the session, we’ll do a couple of demos as well. This is the challenge scenario. At a high level, I am assuming we all know about it, so I’ll quickly skip this slide. So let’s talk about the solution overview from the highlights perspective. The solution which we are proposing is based on the MACH architecture: microservices, API-first, cloud-native, headless. For scalability, we selected cloud technologies; we selected serverless cloud functions to meet the scalability needs. For API-first, we decided to go with the GraphQL Federation approach for combining AEM and Magento responses. We designed the architecture to be channel agnostic, so it should support all the different channel types, including the traditional conventional devices and IoT devices. We decided to go with a headless client with asynchronous and reactive stream support. We also thought about the preview functionality, because this has been an issue with front-end applications that don’t support preview.
So we considered, if we decide to go with Next.js or any other front-end framework, how the preview functionality can work. For infrastructure as code, we decided on a serverless architecture, so the proposed architecture will use Terraform deployments. For caching, GraphQL persisted queries will definitely help in terms of improving the performance, and the Dispatcher is something we could consider for caching for greater performance. For integration, we have the Commerce Integration Framework with AEM, which is out of the box. In terms of technology stack, we have chosen Next.js as the presentation layer, along with the supporting client libraries, including a GraphQL client and an AEM headless client. For content management, we have chosen Adobe Experience Manager, that is, AEM as a Cloud Service, and we are using the SDK in the local environment. For cloud technology, we are going with AWS, but the cloud technology could be anything; this we will see in the demo as well. For the e-commerce system, we are going with Adobe’s Magento, because this challenge was for a consumer electronics client, and we decided to go with Magento as the e-commerce platform. OK, so this is the full-stack proposed architecture. There are three different layers in this architecture: the client, the middle layer, which is the GraphQL API layer, and the backend systems. In the backend we have AEM, which is serving the static content, and Magento, which is serving the dynamic content, like the product data, to the client. So if a user is coming from different devices, for example the traditional devices or the IoT devices, the single point of entry is this Apollo Federation gateway, which sits behind the API gateway, with the Apollo server deployed on serverless Lambda functions.
And this federation gateway has integrations with the other Lambda applications as well, like the AEM content Apollo service and the e-commerce Apollo service; these are all different Lambdas. So how does the execution happen? Say there is a user coming from a traditional device through the client. For the client, we have decided to go with a Next.js app, and this Next.js app is deployed on a Lambda Node runtime on the AWS side. The Apollo client does all the interaction with the API gateway layer. Once the request comes to the API gateway, it passes the request to the different microservices. The entry point is the same for the IoT devices as well: when the user is using different IoT devices, the request comes to the same API gateway, and from there the request goes to the different microservices, like AEM and Magento. The Apollo Federation gateway’s responsibility is to combine the responses from the different systems and present them to the client. At this moment, we are using AWS as the cloud platform, but it could be anything, like GCP, Azure, or even Adobe; you can replace this pink box, which we are considering as the middle layer. We looked into different alternatives as well: in place of Apollo, Lambda, and the API gateway, if you have to build this graph layer, what are the different alternatives? We came up with AWS AppSync, which is out of the box, and also GraphQL schema stitching. But we found that the Apollo Federation gateway is the more structured way, and it actually fulfills all the requirements. The different clients could even individually call the different microservices, but in the current architecture we are deciding to go through the gateway. With this, the performance will be very good, because we don’t need to make individual calls; we just need to make a single call to get the response.
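The fan-out-and-merge job described for the gateway can be caricatured in a few lines. This toy skips schemas and entity resolution entirely, and the service names are invented; the real Apollo Federation gateway composes subgraph schemas via `@apollo/gateway`:

```typescript
// Toy illustration of what the federation layer achieves: one client call,
// several backend calls in parallel, one merged response.
type Fetcher = (query: string) => Promise<Record<string, unknown>>;

async function federatedQuery(
  subgraphs: Record<string, Fetcher>,
  query: string,
): Promise<Record<string, unknown>> {
  // Fan the (simplified) query out to every subgraph in parallel...
  const entries = await Promise.all(
    Object.entries(subgraphs).map(
      async ([name, fetcher]) => [name, await fetcher(query)] as const,
    ),
  );
  // ...then merge the partial results into a single response object.
  return entries.reduce<Record<string, unknown>>(
    (acc, [, partial]) => ({ ...acc, ...partial }),
    {},
  );
}
```

The point is that the client makes one call; the gateway owns the knowledge of which backend answers which part of the query.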
So at a high level, from the backend perspective, here is how this flow goes. We have an asset author. The asset author gets the images for the products from different channels; here we are showing the asset author getting the image from Adobe Creative Cloud. The asset author uploads the image to the AEM author. Once the image has been uploaded, a workflow will trigger and notify the product admin that the product image has been updated. The product admin takes the image and adds it in Magento against that particular product. In the meantime, the content fragment author also creates some additional content in the author environment, and once this content is activated, it goes to the publisher. The image reference is also updated in Magento, and all of this is finally served to the head. All right. So this is the Next.js app which we have built. It has two components. One is this hero image and the hero content, and we also have these product images and this product data, which is coming from Magento. The product images are coming from AEM, but the product data is all coming from Magento. So this page has the two sorts of integration, but all coming through a single API call. So what I’ll do is go to my AEM. This is the hero content that is coming through this content fragment, and I can verify it: I can go to this Rockstar homepage persisted query, run it, and see, OK, this content is what is coming into my headless application. So I’ll quickly change the image, click Save, and click Publish. Once it is done, I’ll go to my GraphQL explorer and validate it: yes, I see this change. And I also go to my publish persisted query, and I see it here, so this part is also updated. Then, once I go to my headless application and refresh, I see this hero image has been updated. The same goes for the product images; we see there are some products loading fine.
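For context on the persisted query being run here: in AEM, persisted GraphQL queries are executed over a GET URL of the shape `/graphql/execute.json/<config>/<query-name>`, which is what makes them cacheable at the Dispatcher/CDN. A minimal client helper, with placeholder host and names, might look like this:

```typescript
// Build the GET URL for an AEM persisted GraphQL query.
// The /graphql/execute.json/... path is AEM's endpoint shape; the host,
// config, and query names used below are placeholders for this demo.
function persistedQueryUrl(host: string, config: string, queryName: string): string {
  return `${host}/graphql/execute.json/${config}/${encodeURIComponent(queryName)}`;
}

// Execute the persisted query and return the parsed JSON response.
async function runPersistedQuery(host: string, config: string, queryName: string): Promise<unknown> {
  const res = await fetch(persistedQueryUrl(host, config, queryName));
  if (!res.ok) throw new Error(`Persisted query failed: ${res.status}`);
  return res.json();
}
```

Because these are plain GET requests, they cache well, which is the “persisted queries help performance” point from the architecture slide.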
But if we notice one of the products, let’s say, for example, the Cruise 12 Analog Watch, there is no product image loading. That is because we have not actually configured the image there. So the procedure is the same. I come to AEM, I go to this Watches section, and I just quickly go and upload a file. I already have this image, so I’ll go and upload it.
Quickly, I click Quick Publish. And what I need to do is just copy the reference of this image, and then I’ll go to Magento.
Yeah, so for this demo and for this challenge, we have added a new attribute, which is called AEMProductImage. This attribute I just added. So we’ll go to this Cruise 12 Analog Watch product, and if I go here, I see there is no image loading currently. So in this attribute, I just quickly add the image reference which was uploaded to AEM, and I click Save.
And if I come here, yeah, I see this image is now appearing on the product. So if I just quickly do Inspect, we can see this image is loading from the AEM publisher. And also, if I click through to the details page, this image is coming from the AEM publisher. And if we just quickly validate the hero image, it is also coming from AEM. As for how this whole thing is working: if I open this Apollo client and open this product query, the attribute which we have added, AEMProductImage, is the attribute we are asking for via GraphQL from Magento, and we are getting this image reference back from Magento. So that is the whole presentation part. For the GraphQL federation and gateway, I have already deployed it to the AWS platform. If I quickly share.
Yeah, so this gateway code has already been deployed, and we can see all the different services there. So I can just take the Apollo Federation endpoint; here, I already have the services loading, so I just write a query.
If I run this query, I get the data back from AEM. And in the same query, if I’m also looking for products, I rerun it.
In the same query, I get the data from AEM, and in the same query, I also have the data from Magento. So this is where all the magic is happening, on this federation and Apollo gateway layer. This is what we have seen: Next.js running as headless, getting the data from AEM, and also getting the data from the Magento e-commerce system. Images and content are all coming from AEM. On the right-hand side, we can see the product data: product prices and product information are all coming from Magento. Just for the recap: we have added a new attribute, AEM product image, in Magento, we have tied the image references in there, and this is how we are loading it. So yeah, I’m done with my part. I’ll hand it over to Samantha, my friend, and he’ll show a quick demo on IoT use cases. Over to you, Samantha.

Thank you, Sandeep. According to the given use case, the solution should be ready for future use cases across different channels, so let’s see how that is implemented. This is a normal content fragment, and here I have created different variations: for example, desktop, IoT, Adobe Screens, or whatever you’d like to have. The GraphQL endpoint, or federated endpoint, talks to this endpoint of AEM. To demo how this works, I have created an AI chatbot application, which executes a GraphQL query against both AEM and Magento, and this chatbot is integrated with an Android app. So let me bring up that Android app. I hope you can see my screen now. What you see on my screen is actually my Samsung Android phone, and I’m using screen mirroring, so whatever action I perform on my phone will be reflected immediately here. So let me open my Android app. This app is called AI Chatbot. Welcome. My name is Yantra. What can I do for you? I hope you are able to hear the voice which is coming out of my mobile phone. So let me ask the bot a question.
Show me Rolex. Please wait while I am checking.
It looks like Rolex presents the new generation of its Oyster Perpetual watches, which brings a new model to the range. So whatever text you are seeing, it’s coming from AEM, and it’s actually in a presentable format for the bot. And this is the query, which just asks for Rolex; here I’m just making a query, and this is the same text which you are getting. So now is the time to know the price of the Rolex. So let us ask this bot again. How much is the price of it? Sure. One moment, please.
Here are the details: $300. So let me go to the Magento admin, and I can log in here. So the Rolex which we’re talking about has the price of $300. So now let us change that value of $300. Let’s say, because of inflation, the price has become much higher. So let us make it 350. So now let us ask this question again.
How much is the price of it? Please wait while I am getting those details for you.
Here are the details: $350. All right, so here we can see this GraphQL query if I just want to make a call to Magento, and this is the Magento price; it is coming back as 350. Now I’d like to talk about how this chatbot is working. There is actually a JSON schema behind the scenes, and there are different significant keys in this JSON schema. For example, it creates a graph using the patterns we write here. So when I say, show me something, it matches the pattern using a graph algorithm; behind the scenes it creates a kind of graph data structure, with a head and a tail. So when I’m querying, it matches against this. For example, when I say, show me something, it goes to my AEM, which is localhost in this case, and when I say, how much is the price of it, it goes to this Magento endpoint. It is also context-aware: when I say, show me something, it stores a variable for the future conversation. For example, when I say Rolex, it sets the variable watch, and that is passed in the later conversation, like this keyword being passed as the watch for this endpoint.
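That pattern-plus-context behavior can be sketched as a tiny matcher. The rule shapes, variable names, and endpoint labels below are invented for illustration, not the bot’s actual JSON schema or graph algorithm:

```typescript
// Minimal sketch of pattern matching with conversation context:
// "show me X" stores X as the current watch and routes to the content
// endpoint; "price of it" reuses that context and routes to commerce.
interface BotRule {
  pattern: RegExp;
  endpoint: "aem" | "magento";
  capture?: string; // context variable name to store the match under
}

const rules: BotRule[] = [
  { pattern: /^show me (.+)$/i, endpoint: "aem", capture: "watch" },
  { pattern: /price of it/i, endpoint: "magento" },
];

function route(utterance: string, context: Record<string, string>) {
  for (const rule of rules) {
    const m = utterance.match(rule.pattern);
    if (!m) continue;
    // Remember the captured entity for later turns in the conversation.
    if (rule.capture && m[1]) context[rule.capture] = m[1];
    return { endpoint: rule.endpoint, context: { ...context } };
  }
  return null; // no rule matched
}
```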
And so you can also change the text for the bot.
Yeah, so that is it for this demo. Now I’d like to talk about another thing. I have written an asynchronous, reactive client for this particular use case, because network calls in an Android app cannot run on the main thread; they work on a background thread. For that, I have an asynchronous client, and reactive support as well. All the error handling in this client is done as per the GraphQL query. And also, as Sandeep was showing, the GraphQL federated API can be used; but if you do not have that option, if the federated GraphQL API is not possible, then you can also chain the series of calls using CompletableFuture. It’s nothing new, it’s normal; it’s just an option for you. And the whole code will be given to you via my GitHub link, so you can have a look and let me know your feedback.
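When no federated endpoint is available, that “chain the series of calls” fallback looks roughly like this in a TypeScript sketch (Java’s CompletableFuture chaining is analogous); the two fetcher parameters stand in for the AEM and Magento GraphQL calls, and the field names are invented:

```typescript
// Fallback when there is no federation layer: call the content system
// first, use part of its answer to query the commerce system, and merge
// the two partial results on the client.
type Json = Record<string, unknown>;

async function chainedLookup(
  fetchContent: (name: string) => Promise<Json>, // stands in for the AEM call
  fetchPrice: (sku: string) => Promise<Json>,    // stands in for the Magento call
  productName: string,
): Promise<Json> {
  const content = await fetchContent(productName);
  // Use a field from the first response to drive the second call.
  const sku = String(content.sku ?? productName);
  const price = await fetchPrice(sku);
  return { ...content, ...price };
}
```

The trade-off versus the gateway is clear from the shape of the code: two sequential round trips and client-side merging, instead of one call to a layer that does both.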
Okay, so these are the useful links: the GitHub client and the Android app, which you can also find on my GitHub profile. And with this, I would like to thank you with this quote: the best way to predict the future is to create it. So thank you, I'll take questions later if you have any. I'm done, so over to you, Deepak. Thank you.
Thanks for the great presentation, Sonant.
So greetings, everyone, and thanks for attending the session. My name is Deepak Hitaawat, and I work on the digital team at Palo Alto Networks. Many thanks to the entire Adobe team, Darren and Kaushal, for giving me this great opportunity to participate.
The topic for today's session is a headless challenge for a major electronics company, which I'm abbreviating as MEC for demo purposes.
The goal of this demo is to create a best-in-class, multi-channel digital experience with AEM.
Most of you are well aware of the delivery model shown on this slide, where both the developers and the content authors work in the same platform, AEM; this is known as the full-stack or head-full way. Since we want to expand the digital experience to multiple channels like single-page apps, IoT devices, et cetera, we will use AEM as a headless solution, where the backend, AEM, is decoupled from the client applications. Without any further delay, let me go through the demo, and then I will do a quick recap of the solution for the benefit of everyone here. The first step is to identify the content needed across the various channels: single-page apps, mobile apps, IoT devices, et cetera. I have identified that I will need information like the product information, the contact section, the header and footer, the articles for the MEC site, and the authors associated with those articles.
So let me show you one of those models. The schema for this model contains various fields: the title for the model, the unique path to identify it, and the images for the product. I can also add a description and features. We can add Boolean values, like whether the product is new or not, and we can specify categories using this dropdown option, or we can use the evergreen AEM tags.
And we can also associate other articles with this product using a fragment reference. In this way I create the schema for my model, and in a similar way I have created the schemas for all the models I need; I've shown you one model for the sake of time. The second step is to create the content fragments based on these models. I will create them in the EN folder first, and in the later part of the demo I will show how they are reused across locales. Here you can see all the content fragments I have created, in a clearly named folder that is easy for everyone to recognize. Let me show you one of them: using the product model I showed, I've created a content fragment where I've specified information like the title, image, description, and model, added tags for tagging, and specified the articles I want to associate with this fragment. So that's the second step: creating the content fragments. Now let me show you two of my channels. This is the single-page app, hosted locally, though we could host it on AWS, Azure, or wherever we want, and the second is the iOS app. Both of these channels use the data from the content fragments. Let me show you more of the single-page app. The header information, all the product detail information, the footer links, the contact section: all of this we are getting from the content fragments in a single network call. That single call gives me exactly the relevant information I need, as opposed to Content Services or the REST APIs, where we used to get the whole of the data.
This is very effective: I get only the data I need in a single call, which is very helpful for the developer community. For example, I can request the header information and the relevant information for all the products, and I can minimize the payload per each channel's needs. I also get the information for the footer, which includes the contact section. All of it comes in one network call, and it is used to render both the single-page app here and the iOS app I've shown.
So this is all powerful. How is it achieved? It's achieved using AEM GraphQL queries. We go to the AEM-powered GraphiQL console to build the query, so it's very easy; there's also a documentation explorer where we can inspect the models and schemas I've defined. I can see all the fields, and on that basis I can construct a query. Since I want a localized experience, and since I will want to reuse the content, I'm also taking the locale as a parameter. Here I am specifying the various fields I want, like images; I can also get the metadata, and I can get nested references. For example, the contact section was a different model, but I'm able to get that information too in a single query call. Once we construct the query and are satisfied with the results, we save it as a persisted query. The advantage of persisted queries is that they are GET requests, which can be cached to optimize performance. We can also add various headers, like TTL headers and others, to further improve the performance of our site. This data is then consumed by the AEM headless SDKs: in the single-page app I am using the AEM Headless Client for JavaScript, and in the iOS app I'm using the Swift libraries, where we directly get and consume this data; all of this is available in my code base. So now let me go back to the demo. I showed you earlier that I created all the content for the English site: the product information, the product details, and the other sections. Now I want the content for a localized site, for example the Spanish one. Currently you will see missing data, because we have only created content for EN. So the next step is to use the powerful assets translation project.
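Since a persisted query is resolved with a plain GET to a well-known path, a client can build the request URL like this sketch. The host, the "mec" project name, and the query name are illustrative assumptions; the `/graphql/execute.json` path segment is the standard AEM pattern mentioned later in the talk.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build the GET URL for an AEM persisted GraphQL query.
// Because it is a GET, the dispatcher and CDN can cache the response.
public class PersistedQueryUrl {
    static URI forQuery(String host, String project, String name, String locale) {
        // AEM exposes persisted queries under
        // /graphql/execute.json/<project>/<query>;variable=value
        String path = "/graphql/execute.json/" + project + "/" + name
                + ";locale=" + URLEncoder.encode(locale, StandardCharsets.UTF_8);
        return URI.create(host + path);
    }
}
```

A client SDK such as the AEM Headless Client for JavaScript builds an equivalent URL internally; constructing it by hand like this is what lets you call a persisted query from any channel with no SDK at all.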
We will add a new translation project where I specify all the target languages I need; you can see Spanish is in the list. Currently I'm using the Microsoft Translator connector, but we can use a custom provider as well. Then we create the language copy of the assets; for demo purposes, I'm showing it for all the content in EN.
Then we select the content that I need and basically start our translation job. It takes some time, showing as in progress and then in the approved state. Once it's approved, we can mark the job as complete.
So basically, once it has translated everything, let me show you the folder.
Now you can see all the translated data. It translates the metadata, the binaries, and also some of the fields. I can also specify custom fields I want translated: for example, I specified that the contact headings will be translated, and for the products I want the title and the description translated, so all of that information is translated as well. Once we're done with this, we simply publish all the content so that it's available on the publisher.
And then, if I do a reload of my page, I can see all the translated data.
So you can see all the translated data: in the contact section this title is translated, and all the product details are translated as well. This is a very powerful use case for reusing content across locales. And if we want to change the content for one locale and replicate it, we can also update the references for the language copy and add them to the existing translation project. So as part of this quick demo, I have recapped how, with a single network call, we can get all the data, how it is used across channels, and how we can reuse the content using the translation jobs provided by AEM. And if you want to support a legacy site, as we currently do, you can use components there that reference the content from the content fragments. A single content update will then be reflected across the legacy site as well as all the channels: the React app, the iOS app, and any other devices you want. Let me now go to the solution approach for the benefit of everyone.
So we will do the CORS config as needed for the channels and endpoints; you can download this code from the repo. Preferably use an AEM as a Cloud Service instance; it's also supported on 6.5.13 and above, but some things like the persisted query console might not be available there, so a cloud instance is better. The approach: we created the models, then from those we created the content fragments, and we expose them over HTTP using the GraphQL endpoints. We created persisted GraphQL queries so that they are very fast. Then we can either use the response from these queries directly, or use the AEM headless libraries: the JavaScript client for the React app, the Java libraries for the Android app, and the Swift libraries or the Apollo client for the iOS app. I've also shown that we can use assets translation for content reuse and for reaching global audiences. For more details, these are the steps I outlined. Two important things to note: we should allow the GraphQL execute.json endpoint in the dispatcher config, and it must also be configured to support CORS. We should also add a separate load balancer for the persisted queries so that we can control the multi-channel traffic, and we can do the needed CDN configs. As I already mentioned, we can use an appropriate headless client for the single-page app or the Android app, or we can use the persisted query responses directly. I am sure this solution demonstrates the scale, the power, and the robustness of AEM as a headless solution. Thank you again for your time; you can reach out to me over social media with any queries or suggestions, so that together we can enhance the headless community and make AEM a better place. Thank you. Wow, good job, Deepak. Thank you.
So to everybody who was in the session, thank you very much for spending time with us this morning, evening, or afternoon, wherever you are in the world. I would like to thank the EPAM team, Steven from Bounteous, and Deepak for taking the time. This doesn't come easy; they have to spend time to prepare for this, which may or may not be part of their job. So thank you very much, guys, for doing this for the AEM community. Now the voting link is up. We ask everybody to scan the QR code or go to the URL and cast your vote for the winner. The voting lines will be open for the rest of the day, and the winner will be announced at the end of the conference in our closing remarks. We will leave this up for five, four, three, two, one.
And then I would like to end the session with an announcement for the 2023 Rockstar, which is coming up very quickly: we will be in person at Summit 2023. Please follow Darren or myself on LinkedIn, Twitter, Facebook, or whatever social channels you have, and we will post more details on when submissions open and what we are expecting from the community. So thank you very much, and over to you.