Best practices for component script development and deployment in Experience Manager as a Cloud Service

This session describes the latest best practices that Adobe Experience Manager developers can follow in order to have more predictable application deployments. Introduced as an Apache Sling feature in 2019 and used in AEM as a Cloud Service since 2020, precompiled bundled scripts provide developers with two major improvements over the classic way of deploying Adobe Experience Manager components:

1. scripts can be versioned and have explicit dependency chains, like any Java API
2. script compilation can now happen during the application's build process, allowing potential errors (e.g. missing dependencies, wrong API usages) to be discovered quickly

We will focus on how developers can set up their projects to provide their scripts as precompiled bundles and use local Adobe Experience Manager Sling feature analysers to verify that the API requirements are satisfied, helping them catch potential errors early.

Continue the conversation in Experience League Communities.

Transcript
Hello and welcome to today's talk about best practices for component script development and deployment in AEM as a Cloud Service. I'm Karl Pauls, a senior computer scientist working for the Adobe Basel office, but remotely from Berlin. I'm a long-time Apache Software Foundation member, mostly working on Apache Sling and Apache Felix, which is also the direction I'm coming from originally, from the OSGi and Java development world. At Adobe I work on Adobe Experience Manager (AEM) in general, and specifically nowadays on AEM as a Cloud Service. With me today is Radu Cotescu. He's also a senior computer scientist working for the Adobe Basel office, and he's well known for his involvement in the development of HTL and, in general, Apache Sling scripting and AEM scripting; he's the expert to ask on that. And he's also obviously working on AEM and AEM as a Cloud Service.

So that's the topic of today: we want to talk a bit about script development and deployment in AEM as a Cloud Service. And as you could see in the chat already, yes, for now at least this is targeting things you can do for the cloud service specifically. In that regard, whether you have already tried it, are already developing for the cloud service, or are planning to in the future: quite a few things have changed with the cloud service, here and there, in what's available. The main change, obviously, is that you don't have to run your own instances anymore and you don't really control the deployments; that's managed for you in the cloud. That's nice, but it also means some things have changed. Script development, however, hasn't changed that much, and that can be a bit of a problem, because now you don't get to control your deployments that closely anymore. Basically, all you can do is change your Git repository, and then Cloud Manager will come and run the pipeline over it. That means it builds everything, prepares everything, and then rolls out your scripts, and only then can you see whether they actually worked.

In other words, it would be highly desirable to know a couple of things as soon as possible, and not only by running them in an actual AEM instance. You can still get an SDK, which is still an AEM instance where you can use your IDE to develop your scripts and things like that. But for the final deployment, it would be much nicer if we knew, for example, that the scripts are well formed. That's obviously the starting point: that they compile and don't contain any errors. But also that the script-level dependencies are satisfied. What does that mean? It means that when we inherit from a resource type or delegate to something, this is actually something that exists, so that we don't end up not finding a script that we thought would be there. And then, last but not least, we also want to know whether our Java-level dependencies are well formed: that the packages we reference, for example for models or OSGi services, are correct, and that they are exported to us or somebody provides them.

When it comes to AEM as a Cloud Service specifically, we also started to be more explicit about what our public API is. Not all packages are actually exported to you at runtime just because they are provided by the base platform; there's a difference between the public API area and the non-public one.
So you want to make sure that your scripts only use the part they are allowed to use. That means that, by and large, script development and deployment has to become a lot more like what you're used to from bundles, from Java code that lives in OSGi bundles, rather than what you normally did with scripts.

So let's look at the example here first, just to re-emphasize what you typically have when you look at an HTL script. In the first place, you might have a resource super type from which it inherits. In this case, let's say we want to inherit from the basic page, so wcm/foundation/components/basicpage, version 1. So we have some inheritance given via the sling:resourceSuperType. Then we also, let's say, depend on some Java API, the page model here, via data-sly-use. And last but not least, we also do some delegation, so that we can delegate to a head.html, which we get via the inheritance from the basic page. All of that is already embedded in the script, or in the metadata we have for the script.

However, when we look at traditional packaging and deployment, what we would do is put it into our content package, the application content package or whatever you call it, under the resource type we want it to serve, my/components/page in this case. So you would create a content package with /apps/my/components/page/page.html, it would get installed, and then when the first request comes in for it, the script would first of course be compiled. If there's a script error at that level, it would blow up at that point. Then it would go out and, at runtime, try to find a bundle that exports the Java API we're using, and also try to satisfy the inheritance and the delegation to the basic page and the head.html. We can imagine in this case that something in /libs provides the basic page and it has a head.html, so it works. But if nobody provided that, it would fail at the latest possible moment, when you trigger the first request to it, which is, as I outlined previously, not that good if you have to wait for a pipeline to deploy all of this only to find out at that point that it isn't working correctly.

So what we did more recently, Radu and I for the most part, is develop something in Sling called bundled scripts. It is available in AEM as a Cloud Service as well, and those bundled scripts can also be precompiled bundled scripts; it's a small difference, but it's not important at this point. The main idea is that you put your scripts into OSGi bundles and you also precompile them while you do that, so you end up having the scripts precompiled to bytecode classes in the bundle. That has several benefits. First of all, we precompiled the classes, so we already know that they compile and that they are there. But then, because we make them bundles, we can give them correct Import-Package headers. So at runtime, when you deploy your bundle and it gets resolved, the OSGi framework resolver can look at it and say: okay, this one is importing, in our case, the page model package, and nobody is providing it, so I will refuse to resolve the bundle. So we already know a little earlier that our bundle might have a problem.
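For reference, here is a minimal sketch of the kind of page.html this discussion revolves around. The model class and paths are placeholders chosen for illustration, not the actual code shown on the slides, and the sling:resourceSuperType itself lives in the component's .content.xml rather than in the script:

```html
<!-- /apps/my/components/page/page.html (illustrative sketch, placeholder names) -->
<!-- Java-level dependency: an HTL use-object coming from an exported Java package -->
<sly data-sly-use.page="com.example.models.PageModel"></sly>
<html>
  <head>
    <!-- script-level dependency: delegate to head.html, which this component only
         gets through its resource super type (the basic page) -->
    <sly data-sly-include="head.html"></sly>
  </head>
  <body>
    <h1>${page.title}</h1>
  </body>
</html>
```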
And last but not least, there's also a generalization of the Import-Package and Export-Package headers available in OSGi, which is called requirements and capabilities. Again, it's a generalization, so it's the same idea, but it's not fixed that packages are the target: you can make up your own thing that you require, and then somebody else can provide it. It doesn't have to be packages; it sits one level above that. And we use that to map script dependencies onto requirements and capabilities. In other words, for the component we looked at, we would take the resource super type and turn it into a requirement, and somebody else has to provide a capability for that resource super type as a resource type, basically. The OSGi resolver, when it tries to resolve the bundle, can then go out, just like with Import-Package and Export-Package, and verify whether all the dependencies of this bundle are satisfied. That's nice because, again, we don't have to first hit a specific script to see whether it will work; we can, for the most part, take three levels of uncertainty out of it just by making it a bundle.

However, that would still mean we only get this information when the bundle gets resolved, and that's a little late again, because it would have to happen in a running instance. We would really like to have that information even earlier, outside of the pipeline, outside of a running instance, preferably when we build. And that's possible with this mechanism as well. At the last developer conference, at the beginning of the year, we actually introduced the AEM Analyser Maven plugin. The AEM Analyser Maven plugin is an open source project that we put on GitHub, and it's a Maven plugin you can add to your project. If you created your AEM as a Cloud Service project this year from the archetype, you will already have it hooked up; otherwise, you can easily add it after the fact. It really just encapsulates a couple of checks, or analysers, that either check best practices or simply check that the deployment is well formed. These are, in large part at least, the same checks that we run in the Cloud Manager pipeline for the cloud service, but you get to run them locally, and you don't need a running AEM or anything like that; it just happens as part of your build. And when those analysers pass, you already know that, for quite a few aspects of your application, it is well formed and will deploy in the cloud.

Bundled scripts fit into that very nicely, because the analysers already check the import and export headers of bundles. So you will know whether you have an import for something that is not provided at runtime in AEM as a Cloud Service; that part will be an error. Script-level dependencies, the requirements and capabilities, are also checked. By default those will be warnings; it's not looking super nice here in this case, but it gets the job done. The warning tells us that we have a script with a requirement, in other words a resource super type, pointing in this case to a non-existing resource that nobody is providing. So if we got this warning, we would know we likely made a mistake with the resource super type: either we forgot to also deploy it, or it doesn't exist at all, and we have to go back and fix that. It's possible to turn this into an error instead of a warning if you want to, but by default that level is a warning. So it's really nice to have that hooked up and checked, specifically when you do bundled scripts, because then you get all those checks ahead of time, without AEM.
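To make that mapping concrete, here is a hand-written sketch (in bnd-style notation, where `#` starts a comment) of the kind of headers such a precompiled scripts bundle ends up carrying. The sling.servlet capability namespace is what the Sling servlet resolver uses for scripts; the concrete package names, versions and resource types below are placeholders, not output copied from the session:

```
# Java-level dependency: resolved by the OSGi framework like any other import
Import-Package: com.example.models;version="[1.0,2)"

# Script-level capability: this bundle provides the scripts for my/components/page
Provide-Capability: sling.servlet;sling.servlet.resourceTypes="my/components/page"

# Script-level requirement: somebody must provide the basic page resource type
# (requirements like this may be generated as optional, hence warnings, not errors)
Require-Capability: sling.servlet;\
  filter:="(sling.servlet.resourceTypes=wcm/foundation/components/basicpage/v1/basicpage)"
```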
And now, how can you do that? How can you turn your development over to bundled scripts and test them? That is something that Radu is going to show you next. I guess I have to switch and… Oh. Thank you for that, Karl. Let me switch to my presenter window now.

OK. Hello, everybody. Thanks for joining our session one more time, and thank you, Karl, for the introduction. Let me take over to show you the nitty-gritty details and also, obviously, showcase our work in a tiny demo.

OK, so let's have a look first at how the project structure has changed, or whether it has really changed at all. You're familiar with the fact that you usually deploy your component scripts in the ui.apps content package, together with client libraries, dialog definitions, edit configurations and all of the other files you need to define your application. Traditionally, you would pack that using the FileVault plugin into a content package, so you would end up with a zip that you have to deploy. We thought it would be nice not to force developers to change their workflow, so we decided to attach a secondary artifact to the same project; nothing changes that much for you. And this happens in a relatively simple way: you get a profile that's activated by default if you use the project archetype, version 31, and that's going to set up the whole project structure.

You still have the same familiar HTL compilation, which before was performed only to validate the syntax of your HTL scripts and check any direct Java API usage, but without delving into versions or imports and things like that; we were just checking whether the API is available right now on the classpath. Then there is another plugin that gets executed to extract the capabilities that Karl mentioned, and that happens by looking at the file-system structure of your project and determining what kind of capabilities you provide. At the same time, we inspect the docview XML files, the familiar .content.xml files, to figure out whether you're using any resource super types. In addition to that, there's also a metadata property that we introduced, the Sling required resource types property, where you can add additional resource types that you depend on, usually the ones you delegate to but don't inherit from. After all this is prepared, we use the bnd Maven plugin to generate a bundle for you. So your ui.apps content package project is going to produce the content package zip and now also a jar, a bundle, that uses the precompiled-scripts classifier. Everything is assembled into your "all" content package and placed in the correct deployment folders.

Let's have a slightly more detailed look at what changes there are in the plugins, if any. You're already familiar with the HTL Maven plugin; its configuration stays mostly the same, and if you have a very sharp eye, you might notice that there is one option missing, the one that defined a package prefix under which your HTL classes were generated. On AEM as a Cloud Service, if we want to work with precompiled scripts, it's very important to keep the package name that's derived automatically from the path in your project. So if your script was under apps/my-project/components, that's going to become the package of your generated classes. At runtime that's very important, because there is a bundle, the Apache Sling servlet resolver, that maps your precompiled bundled scripts to actual servlets, and in order to find the executable that's behind a servlet, we need this mapping.
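As a sketch of the docview metadata being described here, a component's .content.xml could look roughly like the following. The paths and the extra resource types are placeholders, and the exact spelling of the property for additional required resource types is my assumption based on how the speakers describe it; check the Sling scriptingbundle-maven-plugin documentation for the authoritative name:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- /apps/my/components/page/.content.xml (illustrative sketch) -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:cq="http://www.day.com/jcr/cq/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
          jcr:primaryType="cq:Component"
          jcr:title="Page"
          sling:resourceSuperType="wcm/foundation/components/basicpage/v1/basicpage"
          sling:requiredResourceTypes="[my/components/header,my/components/footer]"/>
```

The sling:resourceSuperType becomes a requirement automatically, while the additional required resource types are, as described above, build-time metadata only.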
If we go forward now: I told you something about the bnd Maven plugin, which is going to generate the bundle for you. The bnd Maven plugin can also work with additional plugins developed for it, and that's what we see here in this example. If we look at the bnd configuration, we see that we generate a Bundle-Name header for our project, which is basically the project name plus a precompiled scripts suffix, and a Bundle-SymbolicName, which is the name of your project plus the precompiled-scripts classifier. And then, in this plugin declaration, we enable the bundled scripts scanner plugin. This one comes from the scriptingbundle-maven-plugin from Sling, and it's the one responsible for extracting the metadata for your components: the provided capabilities, but also the required ones. We scan the jcr_root folder where your scripts live in the content package, and for now we're only interested in the HTL files, denoted by the .html extension, and the docview files, the .content.xml files I mentioned before, from which we have to extract any kind of dependencies you might be using.

In addition to that, you see a required capability, and this one is mandatory: only bundles that require this sling.scripting capability, in the specified version, are wired at runtime to the Apache Sling servlet resolver bundle and have their precompiled scripts hooked up to a servlet that provides the rendering. We opted for this mechanism because it creates a cleaner dependency chain between the bundles, and because we're kind of OSGi nerds, I guess, more Karl than me. But it's a nice mechanism that allows the servlet resolver to wire only the bundles that declare this requirement instead of scanning all the bundles available in the system.

Obviously, we need to create the jar, and what's tricky here is that the Maven jar plugin does need a classifier configuration; otherwise it would override the main artifact of your build. To keep things simple, we decided to go with the precompiled-scripts classifier, and we read the manifest generated by bnd to create the jar. Last but not least, if you want to deploy this bundle to a running instance during your local development flow, we provide additional configuration for the Sling Maven plugin that picks up your jar and tries to deploy it to the configured instance.

As I hinted before, version 31 of the AEM project archetype, which was released last week, already brings support for working with precompiled bundled scripts. So if you were starting a new project based on the project archetype, with the command that we provide here, your project would already be correctly set up for working with this new feature. The precompiled scripts option is set to no by default; we wanted to make it an opt-in option for now. So when you define the properties for the app you'd like to create with the archetype, it's very important to set this precompiled scripts option to yes.
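For reference, an archetype invocation along the lines of what's being described could look like this. It's a sketch trimmed to the relevant option: the archetype coordinates are those of the public AEM project archetype, but the app-specific properties are placeholders, and the exact property list depends on the archetype version, so check the archetype's README before using it:

```bash
mvn -B archetype:generate \
  -D archetypeGroupId=com.adobe.aem \
  -D archetypeArtifactId=aem-project-archetype \
  -D archetypeVersion=31 \
  -D appTitle="My Site" \
  -D appId="mysite" \
  -D groupId="com.mysite" \
  -D precompiledScripts=y
```

The last property is the opt-in switch mentioned above; leaving it at its default keeps the classic, repository-deployed scripts setup.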
Local development, right? That's very important, because we still do most of our development work for AEM as a Cloud Service locally first, and only then deploy to our development instances. I mentioned that there's a profile that's activated by default; this profile is also called precompiled scripts, and it's active by default. Implicitly, when you run the build on your generated project, it also deploys the bundle with the precompiled bundled scripts. However, at runtime, if you have a precompiled bundled script for a certain resource type and a script in the repository that maps to the exact same resource type and selector combination, the bundled script takes precedence. And obviously, if you're still working on a local instance while doing development, sometimes you might want to edit the script directly on the instance, with whatever method you want, either CRXDE Lite or WebDAV, it doesn't really matter; in that case your changes would not be picked up. So if you've already installed this precompiled scripts bundle that gets generated for you, you can either stop the bundle or remove it from the instance. However, if you haven't done that and you would just like to avoid all this and keep your normal development workflow, you can skip installing this bundle with the option you see here in this command: you set the skip script precompilation property to true, the profile is automatically deactivated, and you can do development as you used to before.

So let's look at how this works in a tiny demo. Let me switch my screen sharing again; it's going to take a bit because I have multiple windows and screens here. So I'll share the screen again, and I guess this is the screen that I would like to share. Great. OK.

So I prepared four tiny examples here, each addressing one kind of missing dependency or missing Java API. Let's talk about hidden APIs first. That's something Karl mentioned: you might have access to a certain API in your local module, but at deploy time this API is behind a toggle, for example, or the API gateway doesn't allow you access to it, or for some other reason your bundle is not allowed to access it. Normally, if you couldn't deploy your scripts via a bundle, you would only figure out that you have an issue at runtime. To give you an example: we added the Sling scripting SPI bundle to our ui.apps package, and in one of the scripts from ui.apps we are already using the BundledRenderUnit API and we want to output the value of a constant. By the way, this is actually one of the APIs that helps us deploy precompiled bundled scripts.

So let's check what happens when we try to build the project. In the interest of time, I'm going to skip building the ui.apps content package, which has already been built before, and instead run mvn clean verify on the all content package, so that the analysers run, and see what's going to happen. As I said, that API is hidden: it's accessible to ui.apps when you build locally, but on a running instance you would not be able to get a reference to it. And now the analysers have started their work. The error in this case is relatively clear: we are importing the org.apache.sling.scripting.spi.bundle API, but we actually don't have access to it. That's one example of the missing dependencies that you can now catch at build time, which before was very time consuming; you would have had to write an integration test for it.
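If you want to reproduce this kind of local check, it's just the regular Maven build of the deployment module with the analysers bound to its verify phase. A sketch, assuming the archetype's standard module layout and that the other modules were installed to the local repository beforehand:

```bash
# build the whole project, which includes the analyser run in the "all" module
mvn clean install

# or, once everything else is installed locally, re-run only the "all" module
# to execute the analysers again
mvn clean verify -pl all
```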
This one is a more familiar example: a missing Java API. That's something that was available to you before as well. What I've done here is take the example Sling model that we are using, the one called HelloWorldModel, and I just removed the word Model, so this class doesn't exist on our classpath. Let's try to build the project to see what happens. In this case I'm building ui.apps, because we can catch this error at that level. Again, this is something that was also available before; it really depends on how you set up your project, but if you enable the HTL Maven plugin to generate Java classes for you, then the Maven compiler plugin will definitely figure out that you have a problem with your classpath.

And now let's go into the more interesting examples regarding script dependencies, so resource types. In this Hello World script, I'm trying to delegate to a non-existing resource, and even more important than that, I'm also trying to inherit from another non-existing resource. This is the property I mentioned before that is only used for metadata, so at runtime it has absolutely no impact: the Sling required resource types property only informs the scriptingbundle Maven plugin about your requirements. So, again in the interest of time, I'm not going to build the ui.apps project, which has already been built before; we're just going to run the analysers on the result and figure out what happens if the resource types we depend on are missing from the built application. Okay, so we already see the warnings for our resource types: the analysers found some missing dependencies, or missing capabilities. By default we only report warnings, so you see the non-existing resource and another non-existing resource.

However, you can be more restrictive than that. If you want to transform those warnings into errors, the only thing you have to do is take the missing requirements optional setting and set it to false. In this other project I've done exactly that, and what happens is that instead of warnings, the analysers now generate errors. The difference is that in the previous example the requirements were marked as optional, because the plugin that generates the requirements doesn't know where you're going to deploy your module; it doesn't have the whole view the way an analyser does. In the second example we said: okay, we definitely know where we're going to deploy, and we want all our requirements to be mandatory. Therefore we set missing requirements optional to false, and at build time we see the errors loud and clear. So our missing resource types have generated errors that the AEM Analyser plugin caught at build time.
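As a sketch of what that stricter setup could look like: the speakers describe a setting called missing requirements optional, which I'm assuming here is exposed as a `missingRequirementsOptional` parameter of the Sling scriptingbundle-maven-plugin's metadata goal. Depending on how your build is wired (standalone plugin versus the bnd plugin variant shown earlier), the exact placement differs, so treat this as illustrative and check the plugin documentation:

```xml
<!-- ui.apps/pom.xml (illustrative excerpt): turn missing script-level requirements
     into mandatory ones so the analysers report errors instead of warnings -->
<plugin>
  <groupId>org.apache.sling</groupId>
  <artifactId>scriptingbundle-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>metadata</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- assumed parameter name, based on the setting described in the session -->
    <missingRequirementsOptional>false</missingRequirementsOptional>
  </configuration>
</plugin>
```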
We have around a minute left, so we would be happy to answer any questions if you have some. The feature is documented, and I think you're also going to have access to the slides, and we can continue the conversation in the forum dedicated to our session as well. Yes, I think we had a question about how we take HTL templates and compile them into classes, but that's probably something that needs to be answered in the forum, because it's going to be more tricky. And then the other question that came up in the chat: this is optional, right? I mean, it's something you can choose to enable, but even if you do nothing, nothing should behave differently from what would happen normally with your scripts, except that things will fail earlier and in better ways with the analysers; it doesn't change anything else. I also have to answer Roy: no, the ui.apps build is not that slow, it's just that both Karl and I have talked too much, and I would have wasted probably 20 to 30 seconds per build on building ui.apps; that was the only reason I skipped it.

OK, let's continue our conversation in the forum that was created exactly for this session, and if you're interested in the next sessions, please join them and don't be late. OK, I think that was it. Good. Thanks a lot. Thanks for attending our session. Ask in the forums if you have questions. OK, bye. Bye-bye.
