Hello everyone. My name is Amol Anand and I am a Principal Cloud Architect with the AEM Engineering team at Adobe. Today we will be talking about AEM Cloud Service assets and microservices. We will first cover an overview of the microservices architecture, talk about Asset Compute workers, how to configure them, and what post-processing workflows are. Then we will go over how asset ingestion works in AEM Cloud Service and the options available to bulk import assets into AEM Cloud Service. We will then end with some useful links. Let's get started.

One of the big changes in AEM Cloud Service is the externalized data store, which uses Azure Blob Storage. Assets that are uploaded to or downloaded from AEM no longer stream through AEM's own JVM; instead they are delivered directly to and from Blob Storage. This frees up AEM to do other tasks and allows AEM to scale to handle large volumes of data. Similarly, asset rendition generation, which took up valuable processing power in previous versions of AEM, is also externalized into what is known as the Adobe Asset Compute Service. The right-hand box on the diagram, called Asset Microservices, is the set of services driven by Adobe Asset Compute, which is built on top of Adobe I/O Runtime. These microservices are available to AEM to perform static binary transformations driven by Adobe's internal services or by third-party services. The service workers transform the binary and upload the new asset renditions back to the data store directly. This allows large volumes of assets to be uploaded and processed in AEM at scale, and it essentially replaces the old workflow steps we used to add to the DAM Update Asset workflow to create renditions.

Now let's look at how the Adobe Asset Compute Service works. First, AEM reaches out to the Asset Compute Service to process a job, typically to create a rendition of an asset. Next, the Asset Compute Service picks the right worker: one of the out-of-the-box ones, a configured one, or a custom worker if one was configured. Third, the worker gets the binary from the cloud data store directly, uses any Adobe or third-party services, and creates the new rendition. The new rendition is then uploaded back to the cloud data store directly, and an asynchronous event is triggered to the Adobe I/O journal associated with this AEM instance. AEM periodically polls the journal for events and consumes the data to finish the process. It is important to note that a custom worker is essentially an individual action within an Adobe Firefly application.

Now let's look at how the plumbing works. The Asset Compute SDK is a shared library that provides data access, eventing, error handling, and monitoring; it is the backbone of Asset Compute. The externalized data store provides the source asset, the rendition, and pre-signed URLs to the worker so that it can read and write the binaries. The custom worker Firefly app needs to be in the same organization as the AEM Cloud Service instance, and it usually has a manifest (YAML) annotation of require-adobe-auth set to true. The SDK also triggers the asynchronous event automatically when a rendition is created or if the process failed for whatever reason.
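To make this concrete, here is a minimal sketch of what a custom worker action built with the Asset Compute SDK looks like; the pass-through transform is only a placeholder for real rendition logic:

```javascript
'use strict';

const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

// exports.main is the Adobe I/O Runtime action. The worker() wrapper from the
// Asset Compute SDK downloads the source binary via a pre-signed URL, uploads
// the finished rendition back to cloud storage, and fires the asynchronous
// event that AEM later reads from its journal.
exports.main = worker(async (source, rendition) => {
    // source.path is a local copy of the original binary.
    const data = await fs.readFile(source.path);

    // A real worker would transform the binary here, for example by calling
    // an Adobe or third-party service. This sketch simply copies it through.
    await fs.writeFile(rendition.path, data);
});
```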
There are three different options available for rendition creation using Asset Compute. The first is the out-of-the-box default renditions that are used by the Assets UI, plus a large preview web rendition. The second is configuration driven, where users can configure custom file formats and resolutions for different images. The third, and the most custom option, is to use a Firefly app to create custom workers for when the out-of-the-box or configuration options are not enough; you can call other Adobe services, third-party APIs, and so on from your custom worker.

Now let's look at how this shows up in an actual AEM instance. Here's an asset I have in my AEM Cloud Service instance, and these are the out-of-the-box renditions that get created without any configuration: three thumbnails of different sizes meant for the Assets UI and a large 1280 by 720 web preview rendition.

To create new renditions for an asset in AEM Cloud Service, we use processing profiles. Let's look at how we can define one. Going to Tools, Assets, Processing Profiles, I have two processing profiles here, and I'm going to edit the one called custom asset compute processing demo. As you can see, it has three different renditions defined. The first one is purely configuration based: 1000 by 1000 width and height, PNG extension, a quality setting, and a set of included MIME types, because we want this to run for images but not for applications or videos, for example. Then there are two renditions based on custom Asset Compute workers built in an Adobe Firefly app. The first one is called inverted colors, and its endpoint is the URL you get when you publish your Firefly app; you simply point the rendition at that. The second one is a sepia filter. Both of these also have a PNG extension, point to the specific action URL, and are included only for the image MIME type. Those are the three renditions we have created. You can add new ones, both custom and default configuration-based renditions, so add as many renditions as you need to the processing profile.

One thing to remember is how processing profiles are applied: you apply them to a folder. If we look at the WKND site folder and open its properties, under Asset Processing I can see that I've selected this processing profile, the custom asset compute processing demo. Now let's go back to the image, which currently has only the out-of-the-box renditions. An easy way to rerun the processing profile is to hit Reprocess Assets and pick the right processing profile. This is equivalent to uploading the images directly into a folder that already has the processing profile applied: AEM would pick it up, run the profile, and create all the renditions defined in it. After refreshing the page, I can see all the different renditions show up: the configuration-based 1000 by 1000 rendition, the old thumbnails, the new sepia filter and inverted colors renditions from the two custom workers, and all the other ones that already existed.

So it becomes easy to configure your various renditions without having to create DAM Update Asset workflow steps to generate multiple renditions, as we used to do back in the day. It is much simpler to use processing profiles and configure all the different renditions that you need, including custom ones built with Adobe Firefly.
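For context, the values configured on a rendition in the processing profile (the extension, quality, width and height, and any custom parameters) are passed to a custom worker as rendition instructions. A rough sketch, assuming the standard instruction fields documented for Asset Compute:

```javascript
'use strict';

const { worker } = require('@adobe/asset-compute-sdk');
const fs = require('fs').promises;

exports.main = worker(async (source, rendition) => {
    // Fields defined on the rendition in the processing profile arrive here,
    // e.g. fmt (target extension), width, height, quality, plus custom ones.
    const { fmt, width, quality } = rendition.instructions;
    console.log(`requested rendition: fmt=${fmt}, width=${width}, quality=${quality}`);

    // An actual worker (like the sepia or inverted-colors ones in the demo)
    // would apply its transformation using these values; this sketch just
    // copies the source binary to the rendition path unchanged.
    await fs.writeFile(rendition.path, await fs.readFile(source.path));
});
```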
Now let's look at the different considerations you need to keep in mind as you use custom Asset Compute workers. Only one processing profile can be applied to a folder; to generate more renditions, add more rendition definitions to an existing processing profile. Generating a new asset is not supported through this mechanism, only generating renditions is. You could potentially generate a rendition and then swap renditions in a post-processing step if you really needed to, but the goal here is to create as many different renditions as you need for all your use cases. Currently, the file size limit for out-of-the-box metadata extraction is approximately 10 gigabytes. As mentioned before, the Firefly app and AEM need to be in the same organization for this to work. The last consideration is that you cannot edit the standard metadata that comes with the asset using custom applications; you can only modify custom metadata. So you can process a binary, call any third-party APIs in your custom Asset Compute worker, and save the information you get back as custom metadata on the asset, but you cannot change the existing metadata.

I'm not going to go through each step of building a custom Asset Compute worker here; we have documentation and a lot of examples available on GitHub, and those links will be provided to you as well. Instead, I would like to focus on the common issues you might face. To test an Asset Compute worker locally, you need to install Docker, Node, and some other tools first, and make sure your AIO CLI version is 7.1.0 to follow the steps listed in the previous slide. Asset Compute uses cloud storage to read and write asset binaries, and the local AEM Cloud Service SDK does not have cloud storage by default, which causes a problem: you need to bring your own Azure or S3 credentials and configure them in the .env file of your Firefly app, in addition to the private key you got from step four in the previous slide (a sketch of that .env file follows below). The most common issue we see is the Firefly app not being in the same org as the AEM instance, so always check whether your AIO CLI is pointed at the right organization, project, and environment while testing. When you download the JSON from the Adobe I/O Console, rename that file to console.json so the local AIO tooling can easily figure out which project you want to test. Authentication issues typically come down to either a misconfigured private key or permissions on that key, so the process cannot read the private key for whatever reason.

These are the steps for adding a custom worker to a processing profile. It's relatively straightforward, and what I showed you earlier was exactly that: a configured processing profile with endpoints pointing to custom Asset Compute workers. So it's fairly easy to keep adding additional renditions based on either the custom or the configuration-based options, as I showed in the demo.
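Here is the .env sketch referenced above for local testing. The variable names follow Adobe's Asset Compute samples and the values are placeholders, so verify them against the current documentation:

```
# Cloud storage for local testing: provide Azure Blob Storage credentials...
AZURE_STORAGE_ACCOUNT=my-storage-account
AZURE_STORAGE_KEY=my-storage-key
AZURE_STORAGE_CONTAINER_NAME=asset-compute-testing

# ...or S3 credentials instead
# S3_BUCKET=my-bucket
# AWS_ACCESS_KEY_ID=my-access-key
# AWS_SECRET_ACCESS_KEY=my-secret
# AWS_REGION=us-east-1

# Path to the private key downloaded from the Adobe I/O Console project
ASSET_COMPUTE_PRIVATE_KEY_FILE_PATH=/path/to/private.key
```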
One of the key things to keep in mind while using Asset Compute workers is that processing is asynchronous, so the old workflow launchers don't work the same way anymore. Instead, you can configure any workflow to be kicked off after renditions or metadata are ready; a configured post-processing workflow essentially replaces how we used workflow launchers in previous versions of AEM. There is a simple OSGi configuration that lets us associate different post-processing workflows with content paths using regular expressions. An example is shown here with a link to the documentation, and you can have multiple configurations, with different content path regexes pointing to the different workflow models you'd like to run.

As mentioned at the beginning of this session, the externalization of the data store has changed how assets are ingested into AEM. We no longer need to use AEM's JVM to stream binaries and waste precious resources. Instead, a simple upload request is made to AEM, and the asset is uploaded directly to cloud storage using a pre-signed URL. The same approach is used for downloading or viewing renditions as well.

There are three common ways to ingest assets in bulk into AEM Cloud Service: the out-of-the-box Bulk Import tool, the HTTP upload APIs, and a Node.js library called AEM Upload. Let's look at each one. The Bulk Import tool ships out of the box with AEM Cloud Service. It imports assets and metadata from data stores that are external to AEM, such as S3 and Azure Blob Storage. It is great for migrating assets into AEM and for photo-shoot use cases where ingestion from other systems needs to happen continuously.

The second option is the asset upload HTTP APIs. These are standard HTTP APIs, and uploading is a three-step process: initiate the upload, and AEM gives you back a pre-signed URL to use; upload the binary directly to the data store using that pre-signed URL; and then tell AEM that you have completed the upload. One consideration here is that for large files or multiple files, you need to split the binary uploads into multiple requests, and each part should respect the minimum and maximum part size values. This is why a wrapper library like AEM Upload, which takes care of these lower-level details, can be easier to use.

AEM Upload is a Node.js library that is simple to use and makes it easy for external applications or processes to upload assets to AEM. There is also a command-line tool available if Node.js is not suitable for the third-party system. It handles all the inner workings of making the three calls, splitting parts, and the other complexities.

Here are some useful links to learn more about each aspect of what was discussed in this session. I highly recommend visiting them to learn more about Asset Compute and the asset ingestion options. And that's the end of the session. I hope this was useful for everyone to understand how assets and microservices work in AEM Cloud Service and all the different considerations associated with that. Thank you.
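For reference, a minimal sketch of uploading a single file with the @adobe/aem-upload library discussed above. The target URL and file details are placeholders, and authentication still needs to be configured as described in the library's README:

```javascript
'use strict';

const fs = require('fs');
const DirectBinary = require('@adobe/aem-upload');

// Placeholder target folder in AEM (it must already exist) and local file.
const targetUrl = 'https://author-environment.adobeaemcloud.com/content/dam/my-folder';
const filePath = '/path/to/image.jpg';

const options = new DirectBinary.DirectBinaryUploadOptions()
    .withUrl(targetUrl)
    .withUploadFiles([{
        fileName: 'image.jpg',
        fileSize: fs.statSync(filePath).size,
        filePath
    }]);

// The library performs the initiate, binary-part upload, and complete calls,
// including splitting the binary into parts that respect the min/max sizes.
new DirectBinary.DirectBinaryUpload()
    .uploadFiles(options)
    .then(() => console.log('upload complete'))
    .catch((err) => console.error('upload failed', err));
```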