AEM Assets Microservices - Moving to AEM as a Cloud Service

Learn how AEM Assets as a Cloud Service’s Asset Compute microservices allow you to automatically and efficiently generate any rendition for your assets, replacing the role traditionally played by AEM Workflows.

Transcript
Hello everyone. My name is Amol Anand and I am a Principal Cloud Architect with the AEM Engineering Team at Adobe. Today, we will be talking about AEM Cloud Service Assets and asset microservices.
We will first cover an overview of the microservices architecture, talk about Asset Compute workers, how to configure them, and what post-processing workflows are. Then we will go over how asset ingestion works in AEM Cloud Service and the options available to bulk import assets into AEM Cloud Service. We will then end with some useful links. Let’s get started.

One of the big changes in AEM Cloud Service is its externalized datastore, which uses Azure Blob Storage. Assets that are uploaded to or downloaded from AEM no longer stream through AEM’s own JVM; instead they are delivered directly to and from the blob storage. This frees up AEM to do other tasks and allows AEM to scale to handle large volumes of data. Similarly, asset rendition generation, which took up valuable processing power in previous versions of AEM, is also externalized into what is known as the Adobe Asset Compute Service. The right-hand box on the diagram, called Asset Microservices, represents the services that are driven by Adobe Asset Compute, which is built on top of Adobe I/O Runtime. This is a set of microservices available to AEM to perform asset binary transformations driven by Adobe’s internal services or by third-party services. The service workers transform the binary and upload the new asset rendition back to the datastore directly. This allows large volumes of assets to be uploaded and processed in AEM at scale. This approach basically replaces the old workflow steps we used to add to the DAM Update Asset workflow to create renditions.

Now let’s look at how the Adobe Asset Compute Service works. First, AEM reaches out to the Asset Compute Service to process a job; typically, this is to create a rendition of an asset. Next, the Adobe Asset Compute Service picks the right worker: either one of the out-of-the-box ones, a configured one, or a custom worker if one was configured. Third, the worker gets the binary from the cloud datastore directly, uses any Adobe or third-party services, and creates a new rendition. The new rendition then gets uploaded back to the cloud datastore directly, and an asynchronous event gets triggered back to the Adobe I/O journal associated with this AEM instance. AEM periodically polls the journal for events and then consumes the data to finish the process. It is important to note that custom workers are essentially individual actions within an Adobe Firefly application.

Now let’s look at how the plumbing works. The Asset Compute SDK is a shared library that provides data access, eventing, error handling, and monitoring. This is the backbone of Asset Compute. The externalized datastore provides pre-signed URLs for the source asset and the rendition to the worker so that it can read and write the binaries. The custom worker Firefly app needs to be in the same organization as the AEM Cloud Service instance, and it usually has a manifest YAML annotation of require-adobe-auth set to true. The SDK also triggers the asynchronous event automatically when a rendition is created or if processing failed for whatever reason.
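To make the plumbing concrete, below is a minimal sketch of a custom worker following the callback shape documented for the `@adobe/asset-compute-sdk` library. The pass-through “transformation” is a placeholder only, not one of the workers shown later in the demo.

```javascript
'use strict';

// A pass-through worker: the Asset Compute SDK downloads the source binary
// via its pre-signed URL before calling this function, then uploads whatever
// is written to rendition.path back to the cloud datastore and emits the
// asynchronous I/O event that AEM consumes.
const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

exports.main = worker(async (source, rendition) => {
  // source.path            -> local path of the downloaded source binary
  // rendition.path         -> local path where this worker must write its output
  // rendition.instructions -> parameters passed from the processing profile
  const original = await fs.readFile(source.path);

  // Placeholder transformation: a real worker would call Adobe or third-party
  // services here (for example, to invert colors or apply a sepia filter).
  await fs.writeFile(rendition.path, original);
});
```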
There are three different options available for rendition creation using Asset Compute. The first is the out-of-the-box default renditions that are used by the Assets UI, plus a large preview web rendition. The second is configuration driven, where users can configure custom file formats and resolutions for different images. The third, and the most custom, option is to use a Firefly app to create custom workers that can be used when the out-of-the-box or configuration options are not enough. You can call other Adobe services, third-party APIs, et cetera from your custom worker.

Now let’s look at how this shows up in an actual AEM instance. Here’s an asset I have in my AEM Cloud Service instance, and let’s look at the out-of-the-box renditions that get created without any configuration. We have three thumbnails of different sizes meant for the Assets UI and a large 1280 x 720 preview rendition. To create new renditions for an asset in AEM Cloud Service, we use processing profiles, so let’s look at how we can define one. I’m going to Tools, Assets, Processing Profiles. I have two processing profiles here, and I’m going to edit the one that says Custom Asset Compute Processing Demo. As you can see, I have three different renditions defined here. The first one is based purely on configuration: it’s 1000 x 1000 in width and height, the extension is PNG, I’ve given it a quality, and then you can include certain MIME types. We want this to run for images but not for applications and videos, for example. So that is the configuration-based rendition. Then I have two renditions that are based on custom Asset Compute workers built in an Adobe Firefly app. The first one is called Inverted Colors, and the endpoint is basically the URL that you get when you publish your Firefly app; you can just point it to that. The second one is a sepia filter. Again, these two have an extension of PNG, we’ve pointed them to the specific action URLs, and we’re including them only for the image MIME type. So these are the three different renditions we’ve created. You can add as many renditions as you need to a processing profile, both custom ones and default configuration-based ones. One thing to remember is that processing profiles are applied to folders, so let’s select a folder here and go to its properties.
If I go to Asset Processing, I can see that I’ve selected this processing profile, the Custom Asset Compute Processing Demo. Now let’s go back to the image; right now it only has the out-of-the-box renditions. An easy way to rerun that processing profile is to hit Reprocess Assets and pick the right processing profile, so I’m going to reprocess these. This would be similar to uploading the images directly into a folder that already had that processing profile applied: AEM would pick it up, run the processing profile, and create any renditions defined in it. Let me refresh my page. Now I see all the different renditions show up here: the configuration-based 1000 x 1000, the old thumbnails, the new sepia filter rendition and the inverted colors rendition (the two custom ones), and all the other ones that already existed. So it becomes easy to configure your various renditions without having to create DAM Update Asset workflow steps to generate multiple renditions, as we used to do back in the day. It is much easier to use processing profiles and configure all the different renditions that you need, including custom ones built with Adobe Firefly.
So now let’s look at the different considerations that you need to keep in mind as you use custom Asset Compute workers. Only one processing profile can be applied to a folder; to generate more renditions, you can add more rendition definitions to an existing processing profile. Generating a new asset is not supported through this mechanism; only generating renditions is supported. You could potentially generate a rendition and then swap renditions as a post-processing step if you really needed to, but the goal here is to create as many different renditions as you need for all the different use cases that you have. Currently, the file size limit for out-of-the-box metadata extraction is approximately 10 gigabytes. As mentioned before, the Firefly app and AEM need to be in the same organization for this to work. The last consideration is that you cannot edit the standard metadata that comes with the asset using custom applications; you can only modify custom metadata. So you can process a binary, call any third-party APIs in your custom Asset Compute worker, get some information back, and save it as custom metadata on that asset, but you cannot change the asset’s existing standard metadata.
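To illustrate the last consideration, one way a custom worker can surface custom metadata is by emitting an XMP rendition that AEM then merges into the asset’s metadata. The sketch below assumes the same `@adobe/asset-compute-sdk` worker shape as above; the `wknd` namespace, property name, and derived value are purely illustrative, and how the XMP is applied depends on how that rendition is configured in the processing profile, so treat this as a starting point rather than the exact mechanism.

```javascript
'use strict';

const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

exports.main = worker(async (source, rendition) => {
  // Derive some value from the binary; here simply its size on disk.
  const stats = await fs.stat(source.path);

  // The "wknd" namespace and property name are made up for this sketch.
  const xmp = `<?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:wknd="https://example.com/schema/wknd"
                     wknd:sourceSizeBytes="${stats.size}"/>
  </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>`;

  // The worker's output is the XMP document itself; AEM can then merge it
  // into the asset's custom metadata (standard metadata stays untouched).
  await fs.writeFile(rendition.path, xmp, 'utf-8');
});
```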
I’m not going to go through each step of how to build a custom Asset Compute worker, but we have documentation and a lot of examples available on GitHub, and these links will be provided to you as well. Instead, what I would like to do is focus on the common issues that you might face. To test an Asset Compute worker locally, you need to install Docker, Node.js, and some other tools first. Make sure your aio CLI version is 7.1.0 to follow the steps listed in the previous slide. Since Asset Compute uses cloud storage to read and write asset binaries, and the local AEM Cloud Service SDK does not include cloud storage by default, this causes a problem: you need to bring your own Azure or S3 credentials and configure them in the .env file of the Firefly app, in addition to any private key that you got from step four in the previous slide.
The most common issue we see is when the Firefly app is not in the same org as the AEM instance. So always make sure to check whether your aio CLI is set to the right organization, project, and environment while testing.
When you download the JSON from the Adobe I/O console, rename that file to console.json so that the local aio tooling can easily figure out which project you want to use. Authentication issues that you might run into typically have to do with either a misconfigured private key or permissions on that key, so that the process cannot read the private key.

These are the steps for adding a custom worker to a processing profile. It’s relatively straightforward, and what I showed you already was basically a configured processing profile with the endpoints pointing to custom Asset Compute workers. So it’s fairly straightforward to keep adding additional renditions based on either custom or configuration-based options, as I showed in the demo earlier.

One of the key things to keep in mind while using Asset Compute workers is that it is an asynchronous process, so the old workflow launchers don’t necessarily work the same way anymore. You can configure any workflow to be kicked off after renditions or metadata are ready. This approach basically replaces the workflow launcher approach we used in previous versions of AEM: a post-processing workflow, when configured, takes the place of those launchers. There’s an easy OSGi configuration that allows us to associate different post-processing workflows with content paths based on a regex. An example is shown here with a link to the documentation, and you can have multiple configurations, with different content path regexes pointing to the different workflow models that you’d like to run for each regex.
As mentioned at the beginning of the session, the externalization of the datastore has changed how assets are ingested into AEM. We no longer need to use AEM’s JVM to stream binaries and waste precious resources. Instead, a simple upload request is made to AEM, and the asset is uploaded directly to the cloud storage using a pre-signed URL. The same approach is used for downloading or viewing renditions as well.
There are three common ways to ingest assets in bulk into AEM Cloud Service. The first one is the out-of-the-box Bulk Import tool, the second is the HTTP upload APIs, and the third is a Node.js library called aem-upload. Let’s look at each one now.
The Bulk Import tool is available out of the box with AEM Cloud Service. It imports assets and metadata from external datastores like S3 and Azure Blob Storage, so these would be external to AEM. It is great for migrating content or assets into AEM, and for photo-shoot use cases where constant ingestion from other systems needs to happen continuously.
The second option is using the Assets upload HTTP APIs. These are the standard HTTP APIs available, and the upload is a three-step process: initiating the upload, where AEM gives you back pre-signed URLs to use; uploading the binary directly to the datastore using those pre-signed URLs; and then telling AEM that you’ve completed the upload. One consideration here is that for large files or multiple files, it may be necessary to split the binary upload into multiple requests, and each part should respect the min and max part size values. This is why a wrapper library like aem-upload, which takes care of these lower-level details, might be easier to use.
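As a rough illustration of that three-step exchange, the sketch below assumes Node.js 18+ (for the global `fetch`), a bearer token in an environment variable, and hypothetical host, folder, and file values; the endpoint and field names follow the documented direct binary upload protocol, but verify them against the Assets HTTP API documentation before relying on them.

```javascript
'use strict';

// Sketch of the three-step direct binary upload exchange.
const fs = require('fs').promises;

const AEM_HOST = 'https://author-p1234-e5678.adobeaemcloud.com'; // assumption
const FOLDER = '/content/dam/my-folder';                         // assumption
const TOKEN = process.env.AEM_ACCESS_TOKEN;                      // assumption

async function uploadAsset(localPath, fileName) {
  const binary = await fs.readFile(localPath);

  // 1. Initiate the upload; AEM responds with pre-signed upload URIs,
  //    an upload token, and a completion URI.
  const initRes = await fetch(`${AEM_HOST}${FOLDER}.initiateUpload.json`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${TOKEN}` },
    body: new URLSearchParams({ fileName, fileSize: String(binary.length) }),
  });
  const init = await initRes.json();
  const file = init.files[0];

  // 2. PUT the binary directly to cloud storage using the pre-signed URIs.
  //    A real client splits the binary across file.uploadURIs, respecting
  //    file.minPartSize and file.maxPartSize; a single part is shown here.
  await fetch(file.uploadURIs[0], { method: 'PUT', body: binary });

  // 3. Tell AEM the upload is complete so it creates the asset and kicks off
  //    asset microservices processing. completeURI is a path on the AEM host.
  await fetch(`${AEM_HOST}${init.completeURI}`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${TOKEN}` },
    body: new URLSearchParams({
      fileName,
      mimeType: file.mimeType,
      uploadToken: file.uploadToken,
    }),
  });
}

uploadAsset('./photo.jpg', 'photo.jpg').catch(console.error);
```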
AEM Upload (the aem-upload library) is a Node.js library that is really simple and easy to use and makes it easy for external applications or processes to upload assets to AEM. There’s also a command-line tool available if Node.js is not suitable for the third-party system. It handles all the inner workings of making the three calls, splitting parts, and other complexities; a minimal usage sketch follows at the end of this transcript.

Here are some useful links to learn more about each aspect of what was discussed in this session. I highly recommend visiting them to learn more about the Asset Compute and asset ingestion options. And that’s the end of the session. I hope this was useful for everyone to understand how assets and microservices work in AEM Cloud Service and all the different considerations associated with them. Thank you.
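For reference, here is a minimal sketch using the `@adobe/aem-upload` library mentioned in the transcript. The host URL and file values are assumptions, and authentication is omitted; consult the library’s README for the exact options API.

```javascript
'use strict';

const fs = require('fs');
const { DirectBinaryUpload, DirectBinaryUploadOptions } = require('@adobe/aem-upload');

const options = new DirectBinaryUploadOptions()
  // Target folder in AEM; the library performs the initiate/complete calls for you.
  .withUrl('https://author-p1234-e5678.adobeaemcloud.com/content/dam/my-folder')
  .withUploadFiles([
    {
      fileName: 'photo.jpg',                     // name the asset gets in AEM
      filePath: './photo.jpg',                   // local file to read
      fileSize: fs.statSync('./photo.jpg').size, // size in bytes
    },
  ]);
// Note: credentials/headers for the AEM instance must also be supplied via the
// options API; see the library's README for the supported mechanism.

new DirectBinaryUpload()
  .uploadFiles(options)
  .then((result) => console.log('Upload finished', result))
  .catch((err) => console.error('Upload failed', err));
```

The library performs the initiate, binary-part, and complete calls for you and splits large files into parts automatically, which is the main reason to prefer it over calling the HTTP APIs directly.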

Workflow Migration Tool

Asset Workflow Migration Tool

As part of refactoring your code base, use the Asset Workflow Migration tool to migrate existing workflows to use the Asset Compute microservices in AEM as a Cloud Service.

Key activities

  • Use the Adobe I/O Workflow Migrator tool to migrate asset processing workflows to use the Asset Compute microservices.
  • Set up a local development environment and deploy the updated workflows. Manual adjustment may be needed for complex workflows.
  • Continue to iterate in a local development environment using the AEM SDK until the updated workflows reach feature parity.
  • Deploy the updated code base to an AEM as a Cloud Service development environment and continue to validate.

Hands-on exercise

Apply your knowledge by trying out what you learned with this hands-on exercise.

Prior to trying the hands-on exercise, make sure you’ve watched and understood the video above, and the following materials:

Also, make sure you have completed the previous hands-on exercise:

Hands-on exercise GitHub repository

Hands-on with uploading assets

Explore how to define and assign AEM Assets Processing Profiles to folders and upload assets to AEM using the `aem-upload` npm CLI module.

Try out assets management
