AEM GEMs - Unlocking the Power of AEM Sites: Master the Content Management APIs

Ways of using AEM Sites are evolving rapidly, adding API-first patterns to traditional user interactions. Whether you’re looking to streamline your operations or enhance your automation, staying ahead of such trends is crucial. Join us for an in-depth session where we explore related cutting-edge updates in AEM Sites and how they can empower your content management strategy.

In this session, you'll discover:

  • Advanced OpenAPI Standards: Explore the latest OpenAPI implementations for seamless CRUD operations on AEM structured content.
  • Eventing and Webhooks: Learn how Adobe I/O’s eventing and webhook capabilities can automate processes based on content and state changes in AEM.
  • New REST APIs for Translation Automation: Get to know the new REST APIs that simplify and streamline your content translation workflows in AEM.

Presenters

  • Mathias Siegel, Principal Product Manager, Adobe
  • Catalina Dumitru, Software Development Engineer, Adobe
  • Lénárd Palkó, Senior Software Development Engineer, Adobe
  • Prashant Kumar Singh, Computer Scientist, Adobe
Transcript
All right. Recording has started. Matthias, feel free to start. Yep. Thanks, Corinne, and welcome, everyone, to this very exciting webinar today: mastering AEM content APIs, and especially mastering the new APIs, because we’ve been hard at work to modernize the AEM Sites APIs for content management and content delivery. The team today — Catalina, Lénárd, and Prashant — have some exciting content to show you how these new APIs work and how they can help you in your development workflows. My name is Matthias, I’m on the product management team for AEM Sites, and I’ll start with a quick overview of what we want to talk about today before handing over to Catalina. Just to point out: when you go to developer.adobe.com, you will find a very nice overview of the APIs available for AEM Sites — you see a screenshot on the right side. AEM has always had a very broad API footprint, but it’s really important to recognize that a lot of these APIs are being modernized, and we would really like to encourage you to try these new APIs out, even if you’re already using the existing APIs. They have great capabilities, and they have very nice playgrounds available on developer.adobe.com where you can see how they work. There’s also modernized eventing implemented for a number of these APIs, so there are a lot of benefits from a standpoint of API modernization. In terms of access, we are working on making all of these new, modern APIs available to you as developers as we develop them for internal use as well. A lot of these APIs were developed, for example, to support the new UIs that you see in AEM — the new Content Fragment admin UI and Content Fragment editor UI all require new APIs — and as we build these APIs, we want to expose them to you for developer workflows. You can use them for your own integration purposes with external systems, and not only integration with external systems: integration with the Adobe infrastructure is also something that we want to enable with these new APIs, for example via eventing with Adobe I/O, so that you can take advantage of the entire Adobe API ecosystem and processes in order to modernize your own developer workflows as well. For this webinar today, we have this agenda. The overview is what we’re doing right now. Then we’re going to talk about the new APIs for structured content management — that is what Catalina will cover. Then we’ll give you a sneak preview of the new open API for content delivery, a new REST API that we’re building on Edge Delivery Services for structured content delivery as an alternative to GraphQL, which we have today — there’s a new REST API coming, so you have some choice there from an API point of view. And then we’re going to talk about the new open APIs for content translation management, which Prashant will cover, before we close with a brief summary. So with no further ado, I’d like to hand it over to Catalina to take you through the new open APIs for content management.
Hello, everyone. I just popped up the camera to say hi; I will close it during the presentation. Thank you, Matthias, for introducing us. Now let me share the screen and start with the presentation.
Okay, so — the recording has already started. My name is Catalina Dumitru, and today I will present the open API for content management.
The agenda for today is the one on the slide. The first part will be about the AEM Sites APIs; we will present some key functions and endpoints we have. After that, we will have a deeper look into the most recent developments in this area — an in-depth exploration — and we will end with the eventing: a quick look into how the events work and how you can use them to benefit your business.
Now let’s start with the first section, the AEM Sites APIs. I want to take a moment and pause to highlight a few aspects. First, the content management APIs, or the headless APIs as we call them, offer a tool to build your business smoothly and easily. I’m saying that because with the content management APIs you can integrate content fragments into various aspects of your business — for example, you can generate external content and integrate it, via the content management APIs, into your AEM instances. More than that, if you combine the headless APIs with the eventing mechanism, you will unlock new levels of automation, and that will improve the performance of your system very well. And last but not least, I want to emphasize what Matthias has already mentioned: these content management APIs are a replacement for the content fragment support in the Assets HTTP API, and that content fragment support in the Assets HTTP API will probably be decommissioned next year.
With that, I want to go to the next slide. You will now see on the screen all the areas we cover in the content management APIs. Please take out your phone, scan the QR code, and see more in the documentation. We have more building blocks of the content management APIs, like fragment management, variations, versioning, and model management, but we also cover some advanced areas like the searching APIs, tagging, references, workflows, and so on.
In today’s section, we will only cover a few of them — the latest developments — and those are the model management APIs, batching, references, and the searching APIs. Again, the documentation is available via the QR code on the screen, so scan it and go to the official documentation.
For the in-depth exploration part, we will start with model management. I think all of us know that the management APIs offer you a way to manage the content — the fragments, the models, the content itself. But in the latest releases of AEM, we have introduced a new thing: the UI schema. The UI schema is a really cool feature that gives you more power over your content, because you can provide information about how the content is rendered, and we’ll see in the demo how it works. We have a read endpoint that gives you the possibility to read the current state of the UI schema, and also an update endpoint, which updates the UI schema.
Currently we support a couple of structures: the tab structure, category headlines, collapsible fields, and conditional fields. All of these details are captured in the documentation behind the QR code on the screen. Now let’s go to the demo.
For the demo today, we’ll start by creating the resources. We need a content fragment model; for today, it’s a presentation template. We’ll have the speaker name, which is a text field, the job title, the speaker company, the presentation title, duration, presentation date, categorization, and an enumeration for the categories: tech, marketing, and so on.
So let’s create our model. After the model is created, we will use this model in order to create the content fragment.
The content fragment populates actual content into the model we have created. We have the speaker name, which is my name, the job title, the company, the presentation title, duration, presentation link, categorization — all the details with actual data. Now let’s create the fragment. The fragment is also visible in the new admin UI — it’s this one, created today. And with that, our resources are ready, so now we can get the UI schema. By default, each newly created model has a default vertical layout schema. All the models have this vertical layout, in which all the fields we previously added are listed — the speaker name, company, and so on, one after another. This layout is also visible in the editor: if you look, all the fields are listed one after another. But let’s say we want more than that — we want to structure our content, say, organize it in tabs, and this would be really useful for most authors who want some structure in the content. We will create the first tab for the speaker information and list the speaker-related fields there. Then we have the session details tab, where we want to add the presentation title, duration, presentation date, and so on. We’ll also have a third tab for categorization, which holds the “is categorized” field and the category. Okay, now let’s update the UI schema: what was before a vertical layout will now be a categorization. Don’t forget to update the ETag — forgetting it can happen to everyone. So, with the right ETag, we have updated the UI schema.
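For readers following along in code, here is a minimal sketch of such a UI schema read-and-update round trip. The host, endpoint path, field names, and schema shape are illustrative assumptions for this sketch; the documentation behind the QR code is authoritative.

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}
MODEL_ID = "<model-id>"

# Assumed endpoint path for the UI schema of a content fragment model.
schema_url = f"{AEM_HOST}/adobe/sites/cf/models/{MODEL_ID}/ui-schema"

# 1. Read the current UI schema (by default a single vertical layout).
resp = requests.get(schema_url, headers=HEADERS)
resp.raise_for_status()
etag = resp.headers["ETag"]  # keep the ETag for the optimistic-locking update

# 2. Replace the vertical layout with a tabbed ("categorization") layout.
tabbed_schema = {
    "type": "Categorization",
    "elements": [
        {"type": "Category", "label": "Speaker information",
         "elements": [{"type": "Control", "scope": "#/properties/speakerName"},
                      {"type": "Control", "scope": "#/properties/speakerCompany"}]},
        {"type": "Category", "label": "Session details",
         "elements": [{"type": "Control", "scope": "#/properties/presentationTitle"},
                      {"type": "Control", "scope": "#/properties/duration"}]},
    ],
}
update = requests.put(schema_url, json=tabbed_schema,
                      headers={**HEADERS, "If-Match": etag})
update.raise_for_status()
```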
Let’s go to the editor and see how it looks. Now we have three tabs; all the information we had before is structured in a three-tab layout. Now let’s say we want even more — we want conditional fields. To have conditional fields, we will keep the tab structure we created before. You can see that the first category is still the speaker information, the second is the session details, and on the third one we have the “is categorized” field and the category. Here we have added a rule group: the effect of this rule is “show”, and based on the value of “is categorized” — yes or no — we will show or hide the last field.
Okay, let’s update the schema again and see how this will look in the editor.
You can now see that toggling the value of the “is categorized” field between yes and no will show or hide the last field.
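As a sketch of what the conditional-field part could look like in that same schema payload — with illustrative field names and a rule syntax assumed from what the demo shows (an effect of “show” driven by the value of the “is categorized” field):

```python
# Illustrative "show" rule: display the category control only when the
# isCategorized field has the value "yes"; otherwise it stays hidden.
conditional_category = {
    "type": "Control",
    "scope": "#/properties/category",
    "rule": {
        "effect": "SHOW",
        "condition": {
            "scope": "#/properties/isCategorized",
            "schema": {"const": "yes"},
        },
    },
}
# This element would sit inside the third (categorization) tab of the tabbed
# schema from the previous sketch, followed by the same PUT call with If-Match.
```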
Going back to the presentation, the next topic on today’s agenda is references. We all know that references are not a new area in content management, but what is new is the way we handle them. Over the last months, we have improved the way we retrieve references. We have created a new GET endpoint that is able to retrieve all the parent references; the improvements in this area are pagination and caching. More than that, we have a new endpoint that allows you to retrieve a reference tree — a flat list of hydrated references. The benefits here include a cycle-detection mechanism and the fact that it is a flat list that makes use of pointers: the fields point at references in the result. But enough with the talking — let’s see how this looks in practice; I think an example helps everyone understand the power of this endpoint. In our request, we provided the ID of the fragment, and here is the reference tree. You can see we have three fields that hold references — the first presentation, the second presentation, and the third presentation — and all of them reference something. It is also visible that the first presentation and the third one share a common reference: they reference the same resource, and that resource appears in the references property. The references property is a key-value object, the key being the UUID, the unique identifier of the reference, and each reference is hydrated only once. If we expand it, we see all the details about the referenced content fragment, and this happens only once even if it is referenced by multiple fields. The same goes for the second presentation: it references another resource, and that is again hydrated in the references property only once.
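As a sketch of how a client might consume such a reference-tree response — treating the references property as a flat map keyed by UUID, with each referenced fragment hydrated exactly once — here is one possible reading; the endpoint path and response field names are assumptions:

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}
FRAGMENT_ID = "<fragment-id>"

# Assumed endpoint path for the hydrated reference tree of a fragment.
tree = requests.get(
    f"{AEM_HOST}/adobe/sites/cf/fragments/{FRAGMENT_ID}/references/tree",
    headers=HEADERS,
).json()

# Assumed shape: fields point at references by id; "references" holds each
# referenced fragment only once, even when several fields point to it.
hydrated = tree.get("references", {})
for field_name, ref_id in tree.get("fields", {}).items():
    details = hydrated.get(ref_id, {})
    print(field_name, "->", details.get("title", ref_id))
```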
Next on our list is request batching. Batching requests is another new feature we have; it allows users to trigger multiple requests at once. This will for sure improve efficiency, because the requests are launched once and then executed sequentially and asynchronously. This means they run in the background, and the result shows up when you check the status. There are two endpoints: a POST for launching the batch request, and a GET for checking the status of the requests in the batch. In the batching request you can launch multiple requests, and we support HTTP verbs like POST, PUT, PATCH, and DELETE.
A practical example of a batching request is this one. Here we want to launch a batch request that contains three requests: the first deletes a content fragment, the second deletes another content fragment, and the third creates a content fragment. One really cool application of batching is bulk operations: with batching requests you can do bulk delete, bulk publish, and bulk modify, and you can do that in one single call.
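A minimal sketch of what launching and tracking such a batch could look like; the endpoint paths, request body shape, and response fields are assumptions made for this sketch:

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}

batch = {
    "requests": [
        {"method": "DELETE", "path": "/fragments/<fragment-id-1>"},
        {"method": "DELETE", "path": "/fragments/<fragment-id-2>"},
        {"method": "POST", "path": "/fragments",
         "body": {"title": "New fragment", "modelId": "<model-id>"}},
    ]
}

# POST launches the batch; the requests run sequentially in the background.
launch = requests.post(f"{AEM_HOST}/adobe/sites/cf/batch", json=batch, headers=HEADERS)
launch.raise_for_status()
batch_id = launch.json()["id"]  # assumed response field

# GET the status endpoint to see how far the batch has progressed.
status = requests.get(f"{AEM_HOST}/adobe/sites/cf/batch/{batch_id}", headers=HEADERS)
print(status.json())
```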
Okay, next on our list are the searching capabilities. All the resources we support — content fragments and content fragment models — can be filtered via the APIs. More than filtering, we also support sorting, and sorting can be done by predefined properties: you can sort by title, by created date, or by modified date, and of course you can order the results as you choose, in ascending or descending order. You can see more details about searching in the documentation behind the QR code on the screen; I’ll give you a second to scan the code, and after that we’ll go further with the presentation. For content fragment searching, the filtering criteria are basically split into base criteria — created, modified, created or modified by, and published — plus sub-criteria for them: after or before a certain date, or by a certain user.
On top of these, we have contextual filtering for content-specific searching, like the model ID, content fragment tags, status, locale, and so on. You can combine the filters as you want. For example, you can search for all the fragments that are built on a given model ID and were published by a certain user, or you can search for fragments tagged with a certain set of tags and modified after a certain date.
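A sketch of how such a combined filter-and-sort query could be expressed; the parameter names and the search endpoint are assumptions, and the documented search API behind the QR code is authoritative:

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}

# Illustrative search: fragments built on a given model, published by a given
# user, and modified after a given date, sorted by modification date.
query = {
    "filter": {
        "modelIds": ["<model-id>"],
        "publishedBy": "jane.doe@example.com",
        "modifiedAfter": "2024-01-01T00:00:00.000Z",
    },
    "sort": [{"on": "modified", "order": "DESC"}],
}
results = requests.post(f"{AEM_HOST}/adobe/sites/cf/fragments/search",
                        json=query, headers=HEADERS).json()
for item in results.get("items", []):
    print(item.get("title"), item.get("id"))
```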
We also have content fragment model searching. Here we have kept the base filtering criteria — created, modified, and so on — and we also have some filtering criteria that are model-specific, for example the configuration folder, the replication status, and again the status, the tags, and so on. I really think this is a powerful tool, and I invite you all to try these APIs — not only the ones shown in this presentation, but also the ones from the documentation. See what you can do with them, play around, and see how they work. The next topic on our list is the eventing.
AEM supports eventing. This means that every action you take on domain objects — on resources like content fragments and content fragment models — will generate an event. Every operation, be it created, deleted, modified, published, unpublished, or moved, will generate an event. But here comes the question: how do you consume these events, and what do you do with them? Well, we have the answer: you can consume them through Adobe I/O. In your Adobe Developer Console you can create a project like the one in the screenshot and register for the events you want. For example, in the screenshot we have subscribed to the content fragment variation change and content fragment unpublished events, and we have configured a webhook URL for consuming these events. The configured webhook acts as the channel, and all the events will be sent there.
To give a more concrete example, this is how it will look. When we get an event, we receive a message like this one: we have the type of the event — a content fragment variation event with the subtype “modified” — and in the payload we find all the data needed to link this event to a resource in our system.
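As an illustration of the consuming side, here is a tiny webhook receiver that extracts the linking data from such an event. The payload keys used here (a type plus a data object carrying the fragment path or id) are assumptions based on the example shown; adapt them to the actual event schema in the documentation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AemEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or b"{}")
        # Assumed envelope: an event type plus a data payload that links the
        # event back to a resource (fragment id or path) in the AEM instance.
        event_type = event.get("type", "unknown")
        data = event.get("data", {})
        resource = data.get("path") or data.get("id")
        print(f"received {event_type} for {resource}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AemEventHandler).serve_forever()
```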
Looking into the future, we will soon launch eventing for pages as well. It is already available for early adopters, and it will soon be generally available for everyone to use.
I will pause now for a second and let you scan the QR codes on the screen. One is for the eventing documentation and the other one on how to consume events.
After scanning, we’ll jump to the conclusions.
In the conclusions part, I want to highlight a few aspects. First, the headless APIs — the content management APIs — are a tool everyone can use to build their business in a smooth and maintainable way. Again, I encourage you to use them and play around, because the content fragment support in the Assets HTTP API will be decommissioned in 2025, so don’t wait too long: use them and switch to the headless open APIs.
On the other hand, we have AEM eventing. We have seen the power of eventing; combining the headless APIs with it, you can build a powerful integration, automate your system by acting in real time upon certain events, scale your activity by handling large volumes of events, and also integrate with other systems. So try these powerful tools — the headless APIs and AEM eventing — and let us know what you think. That was all from me today. Thank you for your attention, and now I pass the mic to my colleague.
Hello everyone. Just jumping in as well to say hi, and I’ll close the video for the presentation and share my window.
Okay, so, my name is Lénárd Palkó, and I’m a software development engineer here at AEM Sites. Today I’m excited to give you a sneak peek at our new and shiny content delivery system.
Today we will cover the details of this new, optimized content delivery API, and we are also going to talk about when to use it versus the other existing delivery solutions that AEM already offers.
Now let’s talk about optimized content delivery, because who doesn’t want things faster, better, and more efficient, right? We’ve designed this system with an optimized structure for delivery, efficiency, and integration, and we also moved it closer to the edge, to the client. Leveraging this modern architecture, we built it to meet today’s high-performance needs, so you can expect faster response times and less bandwidth usage — basically, no more waiting for content. As for the AEM open API for content delivery, this is the core of our new delivery system. It operates as an HTTP REST API on AEM Edge Delivery Services, delivering structured content in JSON format, just like the management API does. It is lightning fast; it of course outperforms the AEM publisher, and it even boosts processing efficiency at the origin in case requests cannot be served from the cache. It offers seamless integration without any additional setup, and we’ve optimized caching, with active cache invalidation ensuring that your users always get the freshest content after it is published. The content structure — the JSON returned — is lean and optimized for ease of use on the delivery side. Furthermore, we have a standardized JSON schema for the fragment model fields, so everything is consistent and flexible, which makes development and the integration of these API responses much easier. We’ll see a little later how this looks in action. And as already mentioned, this solution is an ongoing innovation, designed to complement the existing delivery options that AEM already offers.
Now let’s see how an example request and response look from the delivery API. Here we can see a simple GET request for a fragment by fragment ID. As you can see, we have a streamlined content structure with easily referenced fragment fields: they can be accessed by field name, and each field contains only the value of the field, with no other metadata. So it is designed for delivering content. For references, it returns the ID of every reference, which makes it easy to use other API endpoints to get the information for these references if needed — but, as we will see, we provide the references in a hydrated response as well.
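A minimal sketch of fetching a fragment from the delivery endpoint and reading a field value; the host, path, and response layout are placeholders and assumptions, since the API is still being rolled out and the published documentation is authoritative:

```python
import requests

DELIVERY_HOST = "https://publish-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder
FRAGMENT_ID = "<fragment-id>"

resp = requests.get(
    f"{DELIVERY_HOST}/adobe/sites/cf/fragments/{FRAGMENT_ID}",  # assumed path
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
fragment = resp.json()

# Assumed shape: fields keyed by name, each holding only its value; referenced
# fragments appear as ids that can be fetched separately or hydrated on request.
print(fragment.get("fields", {}).get("speakerName"))
```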
As for the model field schema, we see here how the standardized JSON schema looks for your content fragment model fields. This endpoint basically allows you to take a peek into the blueprint of your content: it contains each field and the type of each field, so you know what to expect from your content fragments. And given that it’s a standardized format, it can be used in various implementations as well. Since this API follows the OpenAPI schema structure, it offers documentation similar to that of the management API — maybe some of you are already familiar with it. Since this endpoint is only in the experimental phase, we don’t yet have a public-facing API ready, but it will be offered to the early adopters that join our program. As you can see, it has the standard structure of an OpenAPI documentation: you will see the parameters that the API accepts, the response structure for each endpoint, and an example of what you can receive from that API. In our case we have several categories: the fragment delivery APIs, for getting the list of fragments and a specific fragment’s content; the model category, where you can get the model on which a specific fragment was built; and the list of content fragment models.
You can also get the details of a specific content fragment model — here we can see the JSON schema response for the content fragment model fields — and the list of content fragments created from that specific model. Of course, we have support for fragment variations: getting the list of variations and also a specific variation. And last but not least, we have the references APIs, where you can get the references of a specific fragment. This is aligned with the management API, and it offers the references for a specific fragment both as a reference tree and as a flat list. In addition to the child references that you can get for a fragment, you will also be able to get the parent references of a content fragment.
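Because the model fields are exposed as a standardized JSON Schema, a client can, for instance, validate data it is about to send against that schema. A sketch under the same assumptions (placeholder host, assumed endpoint path), using the third-party jsonschema library:

```python
import requests
from jsonschema import validate  # pip install jsonschema

DELIVERY_HOST = "https://publish-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder
MODEL_ID = "<model-id>"

# Assumed endpoint path returning the standardized JSON Schema of the model fields.
schema = requests.get(
    f"{DELIVERY_HOST}/adobe/sites/cf/models/{MODEL_ID}/schema",
    headers={"Accept": "application/json"},
).json()

candidate = {"speakerName": "Jane Doe", "duration": 30}
validate(instance=candidate, schema=schema)  # raises ValidationError on mismatch
```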
Okay. So let’s jump back to the presentation.
Okay. So now the big question: which approach should you use? Let’s dive into when to use this new delivery API and when other solutions offered by AEM may be more appropriate. There are a few key solutions to consider, depending, of course, on your needs. There is the delivery open API, coming early next year. It offers a REST data format, faster delivery times, and fine-grained cache control by default, and it is ideal when you need performance and quick delivery without any other hassles. On the other hand, if you need more advanced filtering and maybe a customizable data format, then GraphQL is your secret weapon. It has been available since 2021, and it can offer basically the same information — or even more in some cases — than the delivery API. As for caching, it has the small downside of not being cacheable by default, but you can achieve caching with GraphQL as well by using the well-known persisted queries.
For content management needs, the content management open API, which my colleague presented previously, is your best choice. It supports CRUD operations on resources and content eventing, and it can also offer you possibilities to retrieve content. But even though it is powerful, it is not optimized for delivery, so you will wait longer until you receive the response. And finally, as also mentioned, we have the legacy Content Fragments Assets HTTP API, available since 2018. It still serves basic content delivery, but it will probably be deprecated in 2025, so you can already plan your migration to these newly available solutions. To wrap things up, the new delivery API is basically all about speed and efficiency: it streamlines the content transfer and reduces the payloads between servers, it uses the edge-optimized architecture, and with CDN integration and active cache invalidation it ensures that your content is fresh, accurate, and delivered faster than you can say “AEM open API for content delivery”. With some of our early adopters, we saw improvements on some endpoints of even 100x, so it really offers a performance boost. For those still using the Content Fragments Assets HTTP API, again, we encourage you to migrate to the new delivery API to have a future-proof system and, of course, to improve your performance. And lastly, we are open to more early adopters: if you join the club, you’ll get direct engineering support and early access to all the cool new features that we implement. Plus, you’ll get to provide feedback before the full release in 2025. So if you’ve ever wanted to influence how things work, now is your chance. If you have any questions, don’t hesitate to fire them away in the chat, and we’ll be more than happy to answer them. Thank you very much, and I am now giving the mic to my colleague Prashant.
Thank you, Lénárd. Let me share the screen. Yeah — is my screen visible to everyone? Yes. Thank you. So hello everyone, I am Prashant, a software development engineer in the AEM Sites translation team. Today I’ll walk you through the AEM open API for translation management. First, I’ll introduce the translation process and the activities for which we have developed APIs, followed by a use case demonstrating the integration of our proposed solution. Next, I’ll share an overview of the API and explore what you can achieve with it. After that, I will demo the key functionalities of the API. Finally, we’ll take a quick look at the translation events. The translation process enables the translation of page and asset content to create and maintain multilingual websites. It provides a comprehensive platform to integrate third-party translation services with AEM and execute the translation process. The translation process mainly includes two categories of activities. The first is one-time configuration: an administrative setup that involves integration of translation service providers via a translation integration config, identification of content for translation via rules configuration, and preparation of the content for translation via language root structure creation. The next category is the day-to-day activities. These include creation of a translation project, adding the content for translation to the project’s jobs, and finally managing the translation project and jobs.
These activities traditionally require user effort on a regular basis. The new API allows for the automation of these day-to-day activities.
Now let me show you a use case demonstrating the integration of the proposed solution to achieve automated, frequent translation. In this use case, when the source content is updated, an update event is generated and sent to I/O Events. This event is then consumed by the customer’s external webhook service. Please note that this event can also be consumed by the customer’s runtime action deployed on Adobe I/O. The webhook service collects the updated source content paths and filters out the paths for which translation is required, based on the customer’s business needs. The service then uses the translation API to send the content for translation into different languages on a scheduled basis. This is how we can achieve automated, frequent translation.
Now I’ll share an overview of the new API and highlight the key functionalities you can achieve with it.
Using the project API, the user can now create a project and add the content for translation.
The user can also retrieve and update the properties of the project.
With this new API, it is also possible to add or delete multiple content paths in multiple jobs directly using the project API. We’ve also provided the API at the job level to offer flexibility, allowing users to perform similar operations at the job level as well.
We have provided the delete API to clean up unwanted projects and jobs. Now let me show you the documentation of the translation API.
In the Developer Console, on the Experience Manager APIs landing page, there is a translation tile which lists all the APIs available as of now.
You can also see the detailed description of all the APIs.
Let us move on to the demo. We’ll start with the Create Translation Project and Translate Content API. This API operates in asynchronous mode, so we will also look at how to track the status of the asynchronous execution using the status API. Let me move to the client. To expedite the process, I have already added the auth token in the API client; the creation of the auth token is consistent and the same across all solutions.
In the API payload, it is mandatory to provide the title of the project, the translation integration config (which is used to select the translation service), the translation method, the source language of the content, and the list of destination languages for which the user wants to create the translation jobs.
The user has the option to add the list of content that needs to be translated. The user can also set child page processing to true, which means that, for all the listed content, the API will process the child pages and add them into the jobs.
Additionally, the user has the option to set the start translation option to true, which means that once the content is added successfully in all the jobs, the start translation operation will be executed in all the jobs. Let me initiate the request and have a look at the response.
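For orientation, here is a sketch of what such a create-and-translate call could look like from a client. The endpoint path, payload field names, and enum values below are illustrative assumptions; the translation API documentation in the Developer Console is authoritative.

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}

payload = {
    "title": "Product pages translation",
    "tifConfig": "/conf/my-site/translation",        # assumed translation config reference
    "translationMethod": "MACHINE_TRANSLATION",      # assumed enum value
    "sourceLanguage": "en",
    "targetLanguages": ["de", "fr", "ja"],
    "contentPaths": ["/content/my-site/en/products"],
    "processChildPages": True,   # also add child pages of the listed content
    "startTranslation": True,    # start the jobs once the content is added
}

resp = requests.post(f"{AEM_HOST}/adobe/sites/translation/projects",  # assumed path
                     json=payload, headers=HEADERS)
print(resp.status_code)               # 202: accepted for asynchronous execution
async_job_id = resp.json()["id"]      # assumed response field used for status polling
```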
As you can see, the response status code is 202, which means the request has been accepted and submitted for async execution. Currently the request execution is in queued state.
This is the async job ID, which is used to track the progress of the async execution. Let me copy this ID and add it to the GET URL of the status API.
Now you can see that the response status code is 200, and currently the request is in active state, which means the async processing is still in progress. You can also see that the response payload contains the different executed stages and their information. This helps developers track the different execution stages of the async processing when troubleshooting.
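A small polling sketch against that status endpoint, using the states seen in the demo (queued, active, succeeded); the path and response fields are assumptions:

```python
import time
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}
ASYNC_JOB_ID = "<async-job-id>"  # returned by the create/translate request

status_url = f"{AEM_HOST}/adobe/sites/translation/status/{ASYNC_JOB_ID}"  # assumed path

while True:
    status = requests.get(status_url, headers=HEADERS).json()
    stages = [stage.get("name") for stage in status.get("stages", [])]
    print(status.get("state"), stages)            # e.g. QUEUED / ACTIVE / SUCCEEDED
    if status.get("state") in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)  # execution runs in the background; poll periodically
```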
In our case, for this request, the first stage is job creation: you can see that a couple of jobs have been created for the destination languages that were set in the project. Then you can see the next stage, content addition, with the list of content that has been processed and added into the jobs. Let me initiate the request again and see the progress of this async execution. Now you can see that the status is succeeded, which means the async execution is completed.
You can also see that a couple of stages have been added, such as start translation, which states that the start operation has been executed successfully in all the jobs.
You might have noticed that the response status has been converted to 303, which means the response header contains the location path of the resource created for this request — the translation project path. Let me copy the ID and have a look at the translation project properties using the Get Project API.
Now you can see all the properties of the translation project, which include the source language and the list of destination languages. You can also see the translation method, the translation provider, and all the automation flags. You can also see a subset of the properties of the translation jobs; if users want, they can use the Get Job API to get the full set of properties of a translation job. Here you can see all the metadata along with the content list that has been added into the job, its specific status, as well as the type of the content.
Now let us move on to the next API, which is update translation project properties. Using this API, the user can update the project properties of an existing translation project. These are the properties that the user can update. Let us explore this API directly from the client itself.
In this case, what I have done is add a couple more languages to the destination list compared to the previous list. Let me initiate the request and have a look at the response: the status is 200, and you can see that a couple of languages have been added to the destination languages.
Now, moving to an important functionality: update content in multiple jobs. Using the new API, the user can add or delete multiple content paths in multiple jobs of an existing project. By default, the API will process the content for all the destination languages of the project.
If there is already a draft job for a specific destination language, the API will use that draft job and add the content into it; otherwise, it will create a new job and add the content into it.
Similar to the previous API, this API also has the option to start the translation once the addition and deletion of the content is completed. Let us explore this API from the client itself.
In this request, I want to add the source path; by default it would be processed for all the destination languages. However, the user has the option to process this content only for a subset of languages using the language mask property, in which I have currently set just a couple of languages compared to the full set of destination languages.
So when I initiate this request, it will process this content only for those particular languages.
For this request, I have set child page processing to false, and I have set start translation to true.
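A sketch of that update-content request with the options just described; the path and field names are again illustrative assumptions, and the language mask restricts processing to a subset of the project's destination languages:

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}
PROJECT_ID = "<translation-project-id>"

payload = {
    "add": ["/content/my-site/en/new-page"],  # content paths to add to the jobs
    "delete": [],                             # content paths to remove, if any
    "languages": ["de", "fr"],                # assumed language-mask field: subset only
    "processChildPages": False,
    "startTranslation": True,
}
resp = requests.post(
    f"{AEM_HOST}/adobe/sites/translation/projects/{PROJECT_ID}/content",  # assumed path
    json=payload, headers=HEADERS)
print(resp.status_code)  # 202: accepted; progress is tracked via the status API
```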
Let us have a look at the progress of this async execution.
You can see that the status is succeeded, with 303 as the response status code, and you can see the different stages that have been executed. The first is job creation, because there was no draft job available to add the content into; then the content addition stage has been executed; and finally, the start operation has been executed in both jobs.
You can also see that from the Get Project API: a couple of jobs have been created, and they are currently in approved state.
Now, moving to the other important API: bulk execution of jobs. The user can execute any operation on multiple jobs of a given project. This is the list of operations that a user can perform using this API.
Let us explore this from the client. As you can see, all the jobs are in approved state. What I want is to execute the complete operation on all the jobs of the given project, and this API helps me do that. Let us see what has happened.
You can see the translation job execution stage has been processed, and the complete command has been executed successfully for all the jobs. You can see that from the Get Project API as well.
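And a sketch of the bulk job execution call used in the demo, running the "complete" operation across all jobs of the project; operation name, path, and payload shape are assumptions:

```python
import requests

AEM_HOST = "https://author-pXXXX-eYYYY.adobeaemcloud.com"  # placeholder author host
HEADERS = {"Authorization": "Bearer <access-token>"}
PROJECT_ID = "<translation-project-id>"

payload = {"operation": "COMPLETE"}  # assumed enum; other operations listed in the docs
resp = requests.post(
    f"{AEM_HOST}/adobe/sites/translation/projects/{PROJECT_ID}/jobs/execute",  # assumed
    json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.status_code)
```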
Alright. Now let us have a quick look at the translation events.
Translation events provide information about the changes and the progress in translation job processing. You can process these events according to your project needs, as they are lightweight, scalable, and secure; and most importantly, they help achieve out-of-process extensibility.
This means you can avoid custom code directly in AEM, leading to better scalability and robustness. The types of events available as of now are the events when a job is created or updated, and the events when a content update starts or completes in a job. Let me show you the documentation: inside the translation API documentation, we have a translation events tag which lists all the events available as of now. Each event payload contains the job path, the job status, and the destination language, plus a few key metadata fields such as the project path, source language, and connector name, as well as the translation method.
Additionally, for the content update event, you can see the count of translation objects processed in that particular request.
Coming to how we can consume these translation events: they are available out of the box by default and are offered through Adobe I/O, so users can directly subscribe to them from the Adobe Developer Console.
Inside the events tab, there is a card which lists all the translation events available as of now, and customers can directly register these events for their instance. That’s it from my side. Thank you. I will hand the mic back to Matthias.
All right. Thank you everyone. Let me reshare my screen. This is Matthias again.
Here we go, and we’ll get to the summary. We hope that you found this content useful, showing this broad spectrum of really new and modern APIs that we are building in AEM Sites, specifically for structured content management on Cloud Service. There were a number of very good questions about these APIs also being available on Managed Services and on-premise: they are currently not, but we’re investigating that. We do have a desire to make them work on 6.5, but it always depends on prioritization — it’s a maybe for next year; currently it’s Cloud Service only. There were also some questions about the links, in case the QR codes wouldn’t work: just check out the one link here that’s in the slides, and it’s going to be in the handouts as well. It’s really the one entry point to the documentation on developer.adobe.com — or if you just go to developer.adobe.com, you’ll see it right away there: the new Sites APIs. And very importantly, I want to encourage you again to please use these APIs, try them out. Since they’re open APIs, they’re very nicely documented with code samples and also playgrounds, so you can see how the APIs work and try to make them work in your code as well. We understand the sentiment, if you use older APIs, of not fixing what isn’t broken. But a number of these older APIs will become deprecated over time — and to clarify that term: when things get deprecated, they will still work, but they just won’t receive any more enhancements.
But we definitely encourage you to use these newer APIs, since you will get benefits from them, as you’ve seen today — there’s a lot of good new stuff in these APIs, so please do try them out. Also, for any of the newer APIs that are still in early adopter mode, please do reach out directly to us. You can get our email addresses through the organizers of this GEMs session and through the community. Reach out to us — we’re interested in working with you directly on, for example, the new content fragment delivery API on Edge Delivery Services, the REST API that Lénárd shared. We’re very interested in working with you; general availability is still some time out, so there is still time to work together if you’re interested in direct interaction with product and engineering on a use case that you can also bring live. We actually have customers who are already live in production with these early adopter capabilities, so there’s absolutely no reason to hold back from enjoying the value of these new capabilities, whether they’re early adopter or generally available. I hope you found this helpful. We do have a few minutes left for Q&A; happy to chat more here, either in the chat pod or, if you want, unmute yourself and ask some questions. We did try to answer every question in the chat pod, but we can only give one answer per question there, so if there are requests for follow-ups, please do unmute and we can talk for another couple of minutes. With that, Goran, I think I’ll hand it back to you for closing, and we’ll see how we can moderate a discussion if we still have time. — Yeah, thank you very much, Matthias, and also to Catalina, Lénárd, and Prashant for your presentations, and likewise to Christian for answering this many questions in the Q&A pod.
So we’re almost at the end of this webinar. I would like to quickly share the link to the ending poll. Your feedback is very valuable, and you can also let us know if you’re interested in being an early adopter — you will be able to post your email address there.
You can also rate this session and suggest topics for future sessions. All right, maybe we’ll take one or two questions, since we still have a couple of minutes left.
Just a second, that’s a long one. “I have a question regarding the bulk update API: does it automatically stagger the jobs to avoid any adverse impact on the environment, or is this something users are encouraged to manage themselves? There’s also another specific aspect to that: given that traffic fluctuation can affect the effective update capacity, a fixed value might not always be suitable. Could you please elaborate on the capacity planning aspect of the bulk update API and how it handles various loads?” There was a lot there — you might want to read it yourself in the Q&A pod. — Okay, so for the bulk update operations, the requests from the payload will be executed sequentially; if one of them breaks, that one will fail, but they are executed sequentially in the background. They’re basically separate requests that are launched once.
I’m not sure if that answered the question. Let me read it again.
Just to let you know, I have posted the Communities thread link in the general chat. There you will find the recording, the slides, and the Q&A within the next few days, in case we end before being able to answer all questions. So please be sure to check out the Q&A — at the latest by Friday, hopefully by Friday — and then we will have the Q&A included.
And also a friendly reminder that our Developers Live event is going to happen on November 12th; I’m reposting the registration link. There are tickets available for early birds, and virtual participation is free of charge. This will be an in-person event in San Jose, so please check it out if you are interested and want, or are able, to join.
Okay, I’m going to close this session. Thanks to the team for your presentations and for answering the questions, and thanks a lot to the audience for joining, for your attention, and for your, as always, very valuable questions and feedback. Please don’t forget to complete our ending poll for feedback and to indicate if you’re interested in the early adopters program. With that, thanks to everyone, and have a great day or evening. Bye bye.
Thank you. Bye bye.

Have a question, maybe a comment? Join the discussion in the Experience League Communities!

Key takeaways

  • Introduction of Modernized APIs: New APIs for AEM content management and delivery have been introduced to enhance development workflows and integration capabilities.

  • Structured Content Management: The new APIs support structured content management, including features like UI schema for better content rendering, improved reference handling, and batch request capabilities.

  • Optimized Content Delivery: A new content delivery API offers faster response times, is optimized for edge delivery, and supports a lean JSON structure for efficient content transfer.

  • Automated Translation Processes: The AEM open API for translation management automates translation workflows, integrates with third-party services, and supports bulk operations.

  • Eventing Mechanism: The APIs include an eventing mechanism that generates events for actions on resources, which can be consumed via Adobe IO for real-time automation and integration.

  • Encouragement to Migrate: Users are encouraged to migrate from legacy APIs to the new modernized APIs, as older APIs will be deprecated by 2025.

  • Early Adopter Program: An invitation to join the early adopter program for direct engineering support and early access to new features, allowing users to influence the development of these APIs.

  • Comprehensive Documentation: Extensive documentation and playgrounds are available on Adobe’s developer portal to help users explore and implement the new APIs.

  • Performance Improvements: Early adopters have reported significant performance improvements, with some endpoints showing up to 100x faster response times.

  • Future-Proofing: The new APIs are designed to future-proof content management and delivery systems, ensuring scalability and efficiency.

Stay in the know!

To receive notifications on our upcoming webinars, please register at Adobe’s AEM User Group.
