Last update: 2024-01-25
Created for: Experienced

Learn about AEM Dispatcher for AEM as a Cloud Service, focusing on notable changes from Dispatcher for AEM 6, the Dispatcher conversion tool, and how to use the Dispatcher Tools SDK.


Hello everybody and welcome. My name is Bryan Stopp. I am a Senior Cloud Architect with Adobe, and today I will be reviewing the Dispatcher as it pertains to AEM as a Cloud Service.

I’ll be going over four topics today. The first is notable changes in the dispatcher. Then I will cover and demonstrate the tools that are available to update dispatcher configurations to meet AEM as a Cloud Service requirements. Next, I will review how to use the dispatcher Software Development Kit to validate a set of dispatcher configuration files, along with demoing different use cases. Finally, I’ll review and demonstrate how to use the SDK to perform troubleshooting steps.

First, notable changes in the dispatcher. The dispatcher in AEM as a Cloud Service now requires configurations to conform to a specific folder structure. The details of this organization can be found on the documentation site. The structure follows industry best practices, which makes it easier to validate, deploy, and troubleshoot. In addition to adhering to this organization, certain files are immutable: they cannot be modified by development teams. The list of these files can be found on the documentation site, and the content of the immutable files is provided in the SDK. Reasonable defaults for the configurations that can be modified by development teams are made available through the AEM Archetype as well as the SDK. Both of these contain a full single-tenant configuration.

One other thing to note about the dispatcher in AEM as a Cloud Service is that there is a limited set of modules and directives available for teams to use. The available modules are listed on the documentation pages for the dispatcher. If you’re working on the configuration on the Windows operating system, you should be aware that the use of symlinks is a standard practice for the configuration. These symlinks need to be Linux-supported references; a typical Windows shortcut will not link correctly when deployed to AEM as a Cloud Service. Because changes and improvements roll out to programs automatically, it is important to keep up to date. The AEM Archetype is regularly updated with the latest baseline files. For existing projects, teams can use these files or the files provided with the SDK.

I’ve mentioned the dispatcher Software Development Kit quite a bit already, and we’ll dive into its features in a minute. But let’s start with what it is. The dispatcher Software Development Kit is a tool that is bundled with the AEM SDK, which is downloadable from the Software Distribution Portal.
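As a sketch of that folder organization and the symlink requirement, here is how part of the structure might be created from a Linux or macOS shell. The folder names follow the documented organization, but the `wknd.vhost` file name is illustrative:

```shell
# Create a subset of the documented AEM as a Cloud Service dispatcher layout.
mkdir -p dispatcher/src/conf.d/available_vhosts \
         dispatcher/src/conf.d/enabled_vhosts \
         dispatcher/src/conf.d/rewrites \
         dispatcher/src/conf.dispatcher.d/available_farms \
         dispatcher/src/conf.dispatcher.d/enabled_farms

# A placeholder virtual host file (the name "wknd" is illustrative).
printf '<VirtualHost *:80>\n</VirtualHost>\n' \
  > dispatcher/src/conf.d/available_vhosts/wknd.vhost

# The enabled entry is a RELATIVE Linux symlink into available_vhosts;
# a Windows shortcut (.lnk) here would not resolve when deployed.
ln -s ../available_vhosts/wknd.vhost \
      dispatcher/src/conf.d/enabled_vhosts/wknd.vhost

readlink dispatcher/src/conf.d/enabled_vhosts/wknd.vhost   # -> ../available_vhosts/wknd.vhost
```

On Windows, creating the equivalent link requires a real symlink (for example via `git config core.symlinks true` or `mklink`), not a shortcut file.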
The dispatcher SDK contains a full basic dispatcher configuration as well as developer tools: one to validate that configurations adhere to AEM as a Cloud Service requirements, and another to run a configuration in a Docker image akin to what is in the AEM Cloud Service environment. This helps with debugging any problems that may arise with the configuration. I will demonstrate both of these tools later in this video. First, let’s talk about a different, standalone tool: the Dispatcher Conversion Tool. The Dispatcher Conversion Tool is one feature of the Repository Modernizer tool suite. This tool is intended to help customers convert a non-AEM as a Cloud Service format into a configuration that will function in cloud service. Its features include the following. It removes files that are not allowed, such as author or non-publish virtual hosts. It reorganizes files based on the recommended structure for AEM as a Cloud Service: virtual hosts and farms are placed in their respective available folders, with a symlink to those files from the respective enabled directory. It also supports defining or creating Apache environment variables in the configuration files; these are populated into the new vars file accordingly. Finally, of note, it separates a farm file into sub-directive files. This allows for ease of maintenance and is a best practice.
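As a sketch of that last point, a farm split into sub-directive files pulls each section in with the dispatcher’s `$include` directive. The farm and file names below are illustrative, not the exact output of the tool:

```
# conf.dispatcher.d/available_farms/wknd.farm -- illustrative names
/wknd {
    /clientheaders {
        $include "../clientheaders/default_clientheaders.any"
    }
    /filter {
        $include "../filters/filters.any"
    }
    /cache {
        /rules {
            $include "../cache/rules.any"
        }
    }
}
```

Keeping filters, client headers, and cache rules in their own `.any` files lets multiple farms share them and keeps diffs small.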

When it’s complete, the conversion tool will provide a full report of the updates it made. So let’s take a look at how to set up and use the tool. First, you must have the dispatcher SDK on your system: download the AEM SDK and install the dispatcher SDK from that artifact. The process for doing this is on the documentation site. Next, install the Adobe I/O command-line interface (CLI) and the cloud service migration plugin. This plugin is the method by which we distribute the dispatcher conversion tool to our customers. Again, how to install this plugin is described on the dispatcher documentation site. Next, a conversion configuration file is needed. This is used by the plugin to specify where to perform the conversion, where to find the SDK, where to find the immutable files, and where to find your baseline files. Finally, of course, you run the plugin. This can be done in either Adobe Managed Services (AMS) mode or on-prem mode. Most of the AMS configurations out there follow a standard structure, and therefore we have a different conversion rule set for those configurations, hence the different options. Let’s take a look at conversion with a quick demo.

Here, I’m going to show you how to run the AEM Dispatcher Conversion Tool against an existing configuration. What I have on my screen is an existing Adobe Managed Services configuration for an application, the weekend tutorial. As you can see here in the dispatcher folder, it already has the configurations that were used to deploy to the managed services environment. What we’re going to do instead is configure the conversion tool to convert them so they can be deployed against cloud service. I already have Docker, the SDK, and the plugin installed. What I did was create a migration configuration file and place it here in the folder. As you can see, I have the location of the SDK and its source files, which is also this folder right here.
And you can see that the source for my conversion is here in this particular project’s dispatcher module. These files will be used as the baseline; they will be merged with the dispatcher SDK’s immutable file structure and output into the target folder. From there, we’ll be able to copy the files into the dispatcher folder for deployment to the cloud service. Let’s go ahead and clean out that target folder right now, just to make sure there’s nothing there, because the AIO CLI plugin reads the files from the dispatcher folder and outputs them into the target folder. So let’s take a look at that run. I’m just going to switch over to the command-line interface, and then we’re going to run the command to actually convert the configurations. It’s going to be AIO, and I’ll copy that from over here.
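The migration configuration file the demo refers to might look roughly like this. The key names below are a hypothetical sketch of the shape of the plugin’s YAML configuration; verify the exact schema against the cloud service migration plugin documentation before using it:

```yaml
# config.yaml -- hypothetical sketch; confirm key names against the plugin docs
dispatcherConverter:
    # where the converted output should be written
    pathToRepo: "./target/dispatcher"
    # the unpacked dispatcher SDK's baseline (immutable) files
    sdkSrc: "./dispatcher-sdk/src"
    ams:
        # the existing Adobe Managed Services dispatcher configuration
        cfg: "./dispatcher/src"
```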

So it’ll be “aio aem-migration:dispatcher-conversion.” It’s going to use the files located locally in that AIO folder for its configuration rules. Now, you may see this error on the screen, because it can’t figure out exactly which configuration it wants to use, so I need to specify that. As you can see in my configuration, I’m using managed services, so I need to specify the type on the command line. To do that, we add -t=ams. Once we do that, it will start running.

And as you can see, the conversion is complete, and it tells us where the output is and where the log is. Let’s take a look at that log file now. We’ve flipped over to the target folder, and here’s the resulting log. I’m not going to go through this whole log file, but you can see that it tells you exactly what it did. And then, if we look at the dispatcher output folder, you can see that it converted all the files into the new structure. We can actually compare them here in the user interface of my development environment: the original managed services configuration had author, flush, and different kinds of virtual hosts. Specifically, we want to look at the AEM publish and weekend publish virtual hosts.

And if we look here and scroll down: weekend publish was a link to the old weekend publish. If we come down here to the output, you can see that in the available virtual hosts there is just the weekend publish virtual host, and in the enabled virtual hosts it is the weekend publish vhost, which links to the available vhost.

We can also take a look at those differences in my GitHub Desktop once we copy the files over, so let’s do that now. I’m going to delete the existing source folder for my source project and remove all of those files, and then copy the source folder from the target output folder into the dispatcher folder here. Yes, I want to copy those files, and now they’re all in here. I’m going to go ahead and cancel that so that we can see the differences in the GitHub Desktop output. You can see it deleted a number of files. We can also look at what it changed, modified, or added. Here’s the new available virtual host link, and you can see that it outputs all of the different configuration rules as found in the original files. What we now have is a configuration that is deployable to AEM as a Cloud Service, and it conforms to that organization.

Now that I’ve shown how to convert your legacy dispatcher configurations, let’s look at how to validate them with the SDK. As I previously mentioned, the dispatcher SDK is bundled within the AEM SDK; the documentation pages show how to install this software. The SDK contains a full working dispatcher configuration for you to use. It also contains the validator tool and a Docker script and image for local development, testing, and troubleshooting. This does mean that you must have Docker installed to use these features. The validator tool will run and validate a few prerequisite rules of AEM as a Cloud Service; however, without Docker, it cannot execute the runtime checks to validate that the files will operate in the Apache HTTP context.

Let’s look at the validation steps a bit more closely. There are three phases to validation. First, it validates the configuration; which validation rules it applies depends on which mode is in use, legacy or flexible. We’ll talk about those more in a minute. Then it validates that the configurations are functional. It does this by deploying the files to a local Docker image and starting Apache, which validates that the files are syntactically correct within Apache. Finally, it checks that your implementation has not overridden any immutable files.
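Conceptually, the third phase amounts to diffing each immutable file in the project against the SDK baseline. This is only a sketch of the idea, not the validator’s actual implementation, and the file name and content are illustrative:

```shell
# Sketch of the phase-3 immutability concept: compare an immutable file in the
# project against the SDK baseline. Paths and content here are illustrative.
mkdir -p sdk/src/conf.d project/src/conf.d
echo 'ServerTokens Prod' > sdk/src/conf.d/security.conf   # pretend SDK baseline file
cp sdk/src/conf.d/security.conf project/src/conf.d/       # project leaves it unchanged

if diff -q sdk/src/conf.d/security.conf project/src/conf.d/security.conf >/dev/null; then
  echo "security.conf: unchanged"
else
  echo "security.conf: MODIFIED (the real check would fail the build)"
fi
```

If a team had edited the project copy, the diff would report a difference, which is the condition the real validator flags as an immutability violation.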

I’ll demo each of those modes now. First, a look at legacy mode.

Legacy mode is the behavior of the validation tool prior to the July 2021 release.

In this mode, it only supported a very strict file and organizational structure. For example, regardless of the number of virtual hosts, only one rewrite rule file was permitted, and this file had to be named rewrite.rules. Additionally, each farm had to use specifically named sub-files; regardless of how many farms were used in an implementation, all of them used the same sub-files. Let’s take a look at how a validation would raise an error, and how you can fix that error, through a demo.

Okay, so let’s take a look at validating a legacy-organized file structure for AEM as a Cloud Service dispatcher settings. What you can see on my screen here is a legacy project. It has all the necessary files to function correctly in AEM as a Cloud Service, but we need to validate that it will actually pass the rules within Cloud Manager to be deployed. I’m going to show you that in a second, but first, let’s take a look at the files. There’s the default vhost, and there’s the default vhost link. There’s the rewrite rules file, and then the rewrite rule file that includes the other, out-of-the-box defaults. As you can see, we have the normal out-of-the-box farms, cache, and default rule sets.
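As a sketch of the legacy naming constraint, every virtual host pulls in the single, fixed-name rewrite file. The surrounding directives below are illustrative, not the exact SDK file:

```
# conf.d/available_vhosts/default.vhost (legacy mode) -- illustrative sketch
<VirtualHost *:80>
    ServerName publish
    <IfModule mod_rewrite.c>
        RewriteEngine on
        # In legacy mode this include MUST point at a file named rewrite.rules;
        # any other name fails phase-1 validation.
        Include conf.d/rewrites/rewrite.rules
    </IfModule>
</VirtualHost>
```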

I want to first force a failure. What I’m going to do is update the virtual host configuration to reference a rewrite file that is not allowed; I’ll call it a “not allowed” rewrite rules file. That file doesn’t even exist, but let’s go ahead and create it, so that we can see that even though it does exist, it’s still not allowed in legacy mode.


Now, I don’t want to add that file to source control, thank you. We’re going to go over to the command line, and I’m going to run the validation. I’m sitting in my dispatcher SDK folder; as you can see, here’s the dispatcher SDK unpacked, and the legacy files are right here in /legacy/src. So I’m going to run the dispatcher validation tool against …/legacy/src. And it immediately fails with a “not allowed” error: the file does not match any known file. In legacy mode, certain files must be named specifically and referenced from the virtual hosts; you can find that information and those rules in the dispatcher documentation.

I’m going to go ahead and fix that now, so you can see what it looks like when it works. Let’s go back to the configuration. I’m not going to bother deleting that file, it’s not necessary to remove it, but I do want to change the default vhost file back to referencing rewrite.rules. Okay, we’re now back where we were. We rerun the command, and as you can see, it immediately says no issues were found and very quickly goes through the rest of the checks.

Let’s scroll back and look at the actual checks and messages on the screen. Here it says dispatcher validation phase one: it tells you which version of the validator you’re using, and that no issues were found. This means that none of the files had any issues from a reference standpoint, a configuration rule standpoint, or an organization standpoint, so phase one is complete. Phase two actually runs those files within the context of Apache inside the Docker image. As you can see, it adds all the files, creates and configures the Docker environment, deploys those files to it, and then starts Apache.
It also outputs the full dispatcher configuration to a single file, as it would be parsed and processed by Apache when actually executing requests. This allows you to see the entire dispatcher configuration in one context, rather than having to look at individual files within the organization of the code structure. So you can see here all the rules, all the configurations, and all the filters. We’ll scroll past all of these and come down to the end of phase two: the phase two syntax check is okay. That means Apache was able to parse, process, and start successfully with the dispatcher module enabled. Phase three is the immutability check. This makes sure that you have not changed any files that are not allowed to be changed by development teams. The list of the files is shown here on the screen, and it tells you individually which ones it’s checking and whether you’ve changed any. When it comes to the end, it says no immutable file has been changed; therefore the check is successful, and phase three has finished. That validates the legacy format of the Apache and dispatcher configuration files.

Now that we’ve looked at legacy mode, let’s look at flexible mode. This is the best practice and is recommended for configuring your AEM as a Cloud Service dispatcher setup. First, flexible mode allows for arbitrary virtual host definitions, and each is allowed to reference its own rewrite rules files. Also, each farm may reference its own sub-files for filters, cache rules, et cetera. This allows for a highly flexible approach to configuring the environment as you see fit. Flexible mode is enabled by the inclusion of a very specific file in the dispatcher module of the source repository. As indicated on the image, it is the opt-in use-sources-directly file.
This file does not need to have any content; it simply needs to exist. Now let’s look at an example of running validation in flexible mode. So let’s look at how to validate a flexible setup for the dispatcher within AEM as a Cloud Service. As you can see on my screen, I have a workspace with a flexible dispatcher setup. You can see that it has the opt-in use-sources-directly file that flags it as a flexible configuration. You can see the flexibility in it: I have multiple virtual hosts here, multiple rewrite rule files here, and multiple available farms and enabled farms. Let’s take a look at some of the nuances and differences that are allowed by a flexible mode configuration. If I look at the weekday virtual host, you can see that I’m using a weekday rewrite rule file, and if I look at the weekend virtual host, you can see I’m using the weekend rewrite rule file. Let’s take a look at those files and see what the differences are.

So you can see on my screen that the difference between these two files is not much, but it is significant. Each rewrite rule file allows its virtual host to use a different proxy pass, so that shortened URLs resolve to different locations within the AEM content hierarchy. I’ll show you how this works later, during the troubleshooting section of this video.
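A pair of per-vhost rewrite files along these lines is what makes the short URLs resolve to different content roots. The content paths below are hypothetical, chosen only to illustrate the pattern:

```
# conf.d/rewrites/weekend_rewrite.rules -- hypothetical content paths
RewriteRule ^/$            /content/weekend/us/en.html     [PT,L]
RewriteRule ^/(.*)\.html$  /content/weekend/us/en/$1.html  [PT,L]

# conf.d/rewrites/weekday_rewrite.rules -- same shape, different content root
RewriteRule ^/$            /content/weekday/us/en.html     [PT,L]
RewriteRule ^/(.*)\.html$  /content/weekday/us/en/$1.html  [PT,L]
```

Because each vhost includes its own file, a request for `/adventure.html` on one domain and the same path on the other domain can land in entirely different content trees.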

Let’s take a look at validating this setup. It’s actually really hard to fail a validation in flexible mode, because of how lenient it is about the number and configuration of files and settings. So what I’m going to do is update this default virtual host to reference a rewrite file that does not exist, so you can see what a failure looks like. We run ./bin/validate.sh …/flexible/src, and as you can see, it immediately fails with a missing include file, because it cannot find that include; that would cause a start-up failure within Apache if it were allowed to go through that process. Let’s go ahead and change that back, and run it through a full validation. We validate again, and as you can see, it deploys the files and passes all the syntax validation and all of the immutability checks. I’m not going to scroll through the entire history of that again, as we just saw it in the previous validation.

And that’s everything there is to do for validating a flexible setup for the AEM dispatcher in cloud service. Next, we’ll take a look at how to troubleshoot problems with the dispatcher configurations.

Validation is a great way to ensure that what is intended to be deployed will not cause a build failure. But what about those scenarios where your configuration is syntactically valid, but it’s not functioning as intended? For that, you need to be able to troubleshoot the configuration, which can be done with the SDK Docker image. To get started using the SDK to troubleshoot, you need to make sure Docker is installed. Then you unpack and install the SDK; likely both of these steps are already done if you’ve been validating your configurations locally. In order to troubleshoot the configuration, you also need a running publish instance. If it isn’t running, then the requests that hit the dispatcher will not have anywhere to go. Once all that is ready, you simply run the script; I’ll show you how to do that in the demo.

So let’s wrap up the video with a demonstration of how to use the Docker image provided within the dispatcher SDK for troubleshooting your Apache and/or dispatcher configurations for cloud service. I’m going to reuse the flexible mode configurations I was showing earlier in the validation section of this video, to demonstrate how to configure multiple virtual hosts and troubleshoot them through the log files. Let’s get started. I’m going to start my Docker container with bin/docker_run, using my flexible mode configurations, which are here. Then I’m going to tell it where to find my publish instance: docker.for.mac.localhost:4503. My publish instances are running on my local machine, and that domain name will resolve to the IP address of my laptop so the dispatcher running inside the container can reach the publish instance. Last but not least, I want to listen on port 8080. So I’ll go ahead and start that.
As you can see, it goes through the validation phase real fast, and then it builds the image. Once it has built the image, it starts Apache and tails the Apache error log files. You can see there are a couple of dispatcher warnings there. I’m going to clear my screen so that you can see the output when I actually access pages through the dispatcher and Apache.

First, though, let’s take a look at what I’ve already got running in my publish instance at localhost:4503. I have the weekend tutorial running in my publish instance; let’s take a look. I also have a different website called the weekday tutorial, which is really just a copy of the weekend tutorial with a different banner in the initial carousel here and a different title here, so that I can see it’s obviously something different when I go to validate things. As you can see, we still have no content logged here, because we haven’t actually run anything through the dispatcher. Let’s go ahead and do that now. If you recall, I set up my dispatcher to use proxy passes so I can use short URLs and verify my rewrite rules are working. We’re going to go to weekend.localhost:8080, which should resolve to the weekend website via my virtual hosts inside the Docker container. Go ahead and hit enter, and there’s the content. We’ll do the same with the weekday site; the only change I’m making is the domain name, weekday.localhost. As you can see, that resolves the weekday version of the same pages. Those were resolved by virtue of the virtual host configurations here inside my Apache configuration, which match weekend.localhost and weekday.localhost. Because of that, each resolves its rewrite rules individually, and then resolves its farms and all the caching rules accordingly. Now, if we go back to our log files, we can see all of those requests being logged. You can go through and see whether there was a cache miss, and see different details about the requests that were made. Additionally, the Docker container and image expose the dispatcher’s cache as a mounted folder on your host machine.
It’s actually in a sub-folder of wherever the dispatcher SDK is installed locally. In this case, here’s where I have it in my SDK, and you can see this cache folder. These cached files are the pages that I just requested from the container. I can clear the page a little bit and refresh the weekend tutorial; going back to the log file, you can see that this page, the weekend tutorial, was a cache hit. Now I’m going to delete that file from the cache file system. We’ll go ahead and delete that; yes, and I want to delete these two as well. Now, if I request this page again, clear out some of the logging, make the request, and scroll back up, it is now a cache miss. That is the same cache location on your host machine, so you can see what is and is not being cached during your troubleshooting.

Now, what if you need to change the dispatcher logging settings? There are two ways to do that: you can either set them on the command line, or you can set them in the dispatcher configuration files themselves. I’m going to use the dispatcher configuration files to show how to do that. I’ll stop the container, clear this out, go back to this batch of files, and go to the variables here: the dispatcher global variables, specifically the dispatcher debug level. I want the dispatcher log at debug level, so I’m going to restart my container. Once I’ve restarted my container, I should see a lot more dispatcher logging; as you can see: dispatcher log, dispatcher debug, debug, debug. This allows you to quickly and easily identify issues with your dispatcher configuration that can only be seen in a debugging scenario.
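The change described above amounts to editing the log-level variable in the global vars file. A minimal sketch, assuming variable names along the lines of the default AEM as a Cloud Service configuration; confirm the exact names against your SDK’s files:

```
# conf.d/variables/global.vars -- sketch; verify variable names in your SDK
Define DISP_LOG_LEVEL debug
Define REWRITE_LOG_LEVEL warn
```

Raising `DISP_LOG_LEVEL` to `debug` (or `trace`, as shown next) only affects the local container; the deployed environments keep their own settings.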

Actually, in this case, I want to show you how to use trace logging, so you can see how to determine, using the dispatcher SDK, which rule is failing or blocking a request from going through to the publish tier. I’ll clear our screen out here, set the log level to trace, and start the container.

And once that’s started, I’m going to close these tabs, because they generate a few asynchronous requests. Once I’ve done that, I’m going to make a request for the system console: weekend.localhost:8080/system/console.

And as you can see, it’s not found. If we go back to our terminal, we can scroll up to the beginning of this log and see the cache action is none: a filter rejects the request for the system console. But the question is, which filter is rejecting it? If we scroll down, it actually tells you which filter rejected it, right here: filter rule entry 0001 blocked the GET to the system console. So if you’re running into issues where you are getting 404s in your cloud service environment and you don’t know which filter rule is blocking the request, you can use your dispatcher SDK to determine that by turning trace logging on locally, without having to update your production or stage environments.
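A deny-by-default filter file along these lines is what produces that rejection. The rule numbers and patterns below are illustrative, not the exact out-of-the-box file:

```
# conf.dispatcher.d/filters/filters.any -- illustrative sketch
/0001 { /type "deny"  /url "*" }    # deny everything by default
/0002 { /type "allow" /path "/content/*" /extension '(css|js|png|jpe?g|html)' }
# /system/console never matches an allow rule, so rule /0001 rejects it,
# which is exactly what the trace log reported above.
```

Reading the trace log and the filter file side by side is usually the fastest way to find which rule number needs a corresponding allow entry.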

And that concludes this video. Here’s a list of references that you can use to get further details on these topics. Thank you for your time.

Dispatcher Converter


As part of refactoring your code base, use the AEM Dispatcher Converter to refactor existing on-premise or Adobe Managed Services Dispatcher configurations to AEM as a Cloud Service compatible Dispatcher configuration.

Key activities

Hands-on exercise

Apply your knowledge by trying out what you learned with this hands-on exercise.

Prior to trying the hands-on exercise, make sure you’ve watched and understood the video above and the following materials:

Also, make sure you have completed the previous hands-on exercise:

Hands-on exercise GitHub repository
Hands-on with Dispatcher Tools

Explore using the AEM SDK's Dispatcher Tools to validate Dispatcher configurations as well as running AEM Dispatcher locally using Docker.

Try out Dispatcher Tools
