Hello everybody and welcome. My name is Brian Stopp. I am a Senior Cloud Architect with Adobe, and today I will be reviewing the dispatcher as it pertains to AEM as a cloud service. I’ll be going over four topics today. The first is notable changes in the dispatcher. Then I will cover and demonstrate tools that are available to update dispatcher configurations to meet AEM as a cloud service requirements. Next, I will review how to use the dispatcher software development kit to validate a set of dispatcher configuration files, along with demoing different use cases. Finally, I’ll review and demonstrate how to use the SDK to perform troubleshooting steps. First, notable changes in the dispatcher. The dispatcher in AEM as a cloud service now requires configurations to conform to a specific folder structure. The details of this organization can be found on the documentation site. The structure follows industry best practices, and thus makes it easier to validate, deploy, and troubleshoot. In addition to adhering to this organization, certain files are immutable: they cannot be modified by development teams. The list of these files can be found on the documentation site, while the contents of the immutable files are provided in the SDK. Reasonable defaults for those configurations which can be modified by development teams are made available through the AEM archetype as well as the SDK. Both of these contain a full single-tenant configuration. One other thing to note about the dispatcher in AEM as a cloud service is that there is a limited number of modules and directives available for teams to use. The modules that are available are listed on the documentation pages for the dispatcher. If you’re working on the configuration on a Windows operating system, you should be aware that the use of symlinks is a standard practice for the configuration. These symlinks need to be Linux-supported references.
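To make that concrete, here is a minimal sketch of creating a Linux-style relative symlink for an enabled vhost. The folder and file names are illustrative of the documented layout, not taken from a real project:

```shell
# Lay out an illustrative available/enabled pair
mkdir -p dispatcher/src/conf.d/available_vhosts dispatcher/src/conf.d/enabled_vhosts
touch dispatcher/src/conf.d/available_vhosts/default.vhost

# Link with a relative Linux path -- a Windows .lnk shortcut will NOT work here
ln -s ../available_vhosts/default.vhost dispatcher/src/conf.d/enabled_vhosts/default.vhost

# Confirm the link target is the relative path we expect
readlink dispatcher/src/conf.d/enabled_vhosts/default.vhost
```

On Windows, producing real symlinks in a checkout typically depends on Git's `core.symlinks` support and sufficient OS privileges, so it is worth verifying that the checkout produced actual links rather than plain text files.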
A typical Windows shortcut will not link correctly when deployed to AEM as a cloud service. Because changes and improvements roll out to programs automatically, it is important to keep up to date. The AEM archetype is regularly updated with the latest baseline files. For existing projects, teams can use these files or the files provided with the SDK. I’ve mentioned the dispatcher software development kit quite a bit already, and I will dive more into its features in a minute. But I should start with what it is. The dispatcher software development kit is a tool that is bundled with the AEM SDK, which is downloadable from the software distribution portal. The dispatcher SDK contains a full basic dispatcher configuration, as well as developer tools: one to validate that configurations adhere to AEM as a cloud service requirements, and another to run a configuration in a Docker image akin to what is in the AEM cloud service environment. This helps with debugging any problems that may arise with the configuration. I will demonstrate both of these tools later in this video. First, let’s talk about a different standalone tool, the dispatcher conversion tool. The dispatcher conversion tool is one feature of the repository modernizer tool suite. This tool is intended to help customers convert from a non-AEM as a cloud service format to a configuration that will function in cloud service. Among its features, it removes files that are not allowed, such as author or non-publish virtual hosts. It reorganizes files based on the recommended structure of AEM as a cloud service: available virtual hosts and farms go in the available folder, with a symlink to those files from the respective enabled directory. It also supports defining or creating Apache environment variables in the configuration file; it will populate these into the new vars file accordingly. Finally, of note is that it separates a farm file into sub-directive files.
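The reorganized structure the tool produces looks roughly like this. This is a sketch of the documented layout; the exact set of files depends on the project:

```
dispatcher/src
├── conf.d
│   ├── available_vhosts/      # every vhost definition lives here
│   ├── enabled_vhosts/        # symlinks into available_vhosts
│   ├── rewrites/              # rewrite rule files
│   └── variables/             # *.vars files with Apache Define variables
└── conf.dispatcher.d
    ├── available_farms/       # every farm definition
    ├── enabled_farms/         # symlinks into available_farms
    ├── cache/                 # cache rule sub-files split out of the farm
    ├── clientheaders/
    ├── filters/
    └── virtualhosts/
```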
This allows for ease of maintenance and is a best practice. When it’s complete, the conversion tool will provide a full report of the updates it made. So let’s take a look at how to set up and use the tool. First, you must have the dispatcher SDK on your system. Download the AEM SDK and install the dispatcher SDK from that artifact. The process for doing this is on the documentation site. Next, install the Adobe I/O command-line interface (the aio CLI) and the cloud service migration plugin. This plugin is the method by which we distribute our dispatcher conversion tool to our customers. Again, how to install this plugin is covered on the dispatcher documentation site. Next, a conversion configuration file is needed. This is used by the plugin to specify where to perform the conversion, where to find the SDK, where to find the immutable files, and where to find your baseline files. Finally, of course, you run the plugin. This can be done in either AMS mode or on-prem mode. Most of the AMS configurations out there follow a standard structure, and therefore we have a different conversion rule set for those configurations, hence the different options. Let’s take a look at conversion with a quick demo. So here I’m going to show you how to run the AEM dispatcher conversion tool against an existing configuration. What I have on my screen here is an existing Adobe Managed Services configuration for an application, the WKND tutorial. So as you can see here in the dispatcher folder, it already has the configurations that were usually used to deploy to the managed services environment. What we’re going to do instead is use the conversion tool to convert them so they will run and can be deployed against cloud service. I already have Docker, the SDK, and the plugin installed. So what I did was I created a migration configuration file and I placed it here in the folder.
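As a rough illustration, such a migration configuration file might look like the following. Note that this is a hypothetical sketch: the actual file name and key names are defined by the cloud service migration plugin’s documentation, not by this sketch.

```yaml
# Hypothetical converter configuration -- consult the plugin docs for real keys
dispatcherConverter:
  sdkSrc: "/path/to/dispatcher-sdk/src"    # SDK baseline and immutable files
  ams:
    cfg: "/path/to/project/dispatcher"     # existing AMS configuration to convert
```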
So as you can see here, I have the location of the SDK and its source files, which is also this folder right here. And you can see that the source of my configuration is here in this particular project’s source. So this dispatcher module and its subsequent files will be used as the baseline, and they will be merged with the dispatcher SDK’s immutable file structure and output into the target folder. From there, we’ll be able to copy the files into the dispatcher folder for deployment to cloud service. Let’s go ahead and clean out that target folder right now just to make sure there’s nothing there, because the aio CLI plugin actually reads the files from the dispatcher folder and outputs them into the target folder. So let’s take a look at that run. We’re just going to switch over here to the command-line interface and then we’re going to run the command to actually convert the configurations. So it’s going to be aio aem-migration dispatcher-converter; I’ll copy that from over here. It’s going to use the files locally in that folder as the configuration rules. Now, you may see this error on the screen, because it can’t figure out exactly which configuration it wants to use. So I need to specify that: as you can see in my configuration, I’m using managed services, so I need to specify the type on the command line. And to do that, we add -t ams. Once we do that, it will start running. And as you can see, the conversion is complete, and it says here’s where the output is and here’s where the log is. Let’s go take a look at that log file now. So we switch over to the target folder, and here’s the result.log. As you can see — and I’m not going to go through this whole log file — it tells you exactly what it’s doing. And if we look at the dispatcher output folder, you can see that it converted all the files into the new structure.
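The demo steps above amount to something like the following. The command name and flag are as I understand the plugin, and the log path is illustrative; verify both against the migration plugin’s documentation:

```shell
# Clean any previous conversion output
rm -rf ./target

# Run the converter in AMS mode (-t selects the configuration type)
aio aem-migration:dispatcher-converter -t ams

# Review the report of what was removed, moved, and rewritten
less ./target/dispatcher/result.log
```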
We can actually compare them here in the user interface of my development environment. You can say, okay, the original managed services configuration had author, flush, and different kinds of virtual hosts. Specifically, we want to look at the aem publish and weekend publish virtual hosts. And if we look here and scroll down — okay, weekend publish was a link to the weekend publish vhost. If we come down here to the output, you can see that in the available virtual hosts there is just the weekend publish virtual host, and in the enabled virtual hosts it is the weekend publish vhost, which links to the available vhost. We can also take a look at those differences in my GitHub desktop once we change and copy the files over; let’s do that now. So I’m going to delete the existing source folder for my source project — remove all of those — and I’m going to copy the source folder from the target output folder into the dispatcher folder here. So yes, I want to copy those files, and now they’re all in here. I’m going to go ahead and cancel that so that we can see the differences in my GitHub desktop output. As you can see, it deleted a number of files; we can also look at what it changed, modified, or added. So here’s the new available virtual host link, and you can see that it actually outputs all of the different configuration rules as found in the original files. And now what we have is a configuration which is deployable to cloud service, and it conforms to that organization. Now that I’ve shown how to convert your legacy dispatcher configurations, let’s look at how to validate them with the SDK. As I previously mentioned, the dispatcher SDK is bundled within the AEM SDK. The documentation pages show how to install this software. The SDK contains a full working dispatcher configuration for you to use. It also contains the validator tool and a scripted Docker image for local development, testing, and troubleshooting.
This does mean that you must have Docker installed to use these features. The validator tool will run and validate a few prerequisite rules of AEM as a cloud service. However, without Docker, it cannot execute the runtime checks to validate that the files will operate in the Apache HTTP context. Let’s look at the validation steps a bit more closely. There are three phases to validation. First, it validates the configuration. Which validation rules it applies depends on which mode it is in, legacy or flexible; we’ll talk about those more in a minute. Then it validates that the configurations are functional. It does this by deploying the files to a local Docker image and starting Apache. This validates that the files are syntactically correct within Apache. Finally, it checks that your implementation has not overridden any immutable files. I’ll demo each of those modes now. First, a review of legacy. Legacy mode is the capability of the validation tool before the release of July 2021. In this mode, it only supported a very strict file and organizational structure. For example, regardless of the number of virtual hosts, only one rewrite rule file was permitted, and this file had to be named rewrite.rules. Additionally, each farm had to use specifically named sub-files. Again, regardless of how many farms were used on any implementation, all of them used the same sub-files. Let’s take a look at how a validation would raise an error and how you can fix that error, through a demo. Okay, so let’s take a look at validating a legacy-organized file structure for AEM as a cloud service dispatcher settings. What you can see on my screen here is a legacy project. It has all the necessary files to function correctly in AEM as a cloud service, but we need to validate that it will actually pass the rules within cloud manager to be deployed. I’m going to show you that here in a second. But first, let’s take a look at the files.
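For context, a vhost in legacy mode pulls in the single, fixed-name rewrite file, along the lines of this sketch (the include path here is illustrative):

```apacheconf
<IfModule mod_rewrite.c>
    RewriteEngine on
    # Legacy mode permits exactly one rewrite file, and it must be rewrite.rules
    Include conf.d/rewrites/rewrite.rules
</IfModule>
```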
So there’s the default vhost, and there’s the default vhost link, there’s the rewrite rules file, and then the rest of the default out-of-the-box files. As you can see, we have the normal out-of-the-box farms, and cache and default rule sets. I want to first force a failure. So what I’m going to do is update the virtual host configuration to reference a rewrite rules file that is not allowed. Once I do that, that file doesn’t even exist — but let’s go ahead and create it, so that we can see that even though it does exist, it’s still not allowed in legacy mode. Okay. Now, I don’t want to add that to Git. So we’re going to go over here to the command line, and I’m going to go ahead and run the validation. I’m sitting in my dispatcher SDK folder. As you can see, here’s the dispatcher SDK unpacked, and the legacy files are right here in the legacy source folder. What I’m going to do is run the dispatcher validation tool against the legacy source. And immediately it fails, saying the not-allowed rewrite rules file does not match any known file. In the legacy configuration, or legacy mode, certain files must be named and referenced from the virtual host in a specific way, and you can find that information and those rules in the dispatcher documentation. I’m going to go ahead and fix that now, so you can see what it looks like when it actually works. So let’s go back to this configuration. I’m not going to bother deleting that file — it’s not necessary to remove it — but I do want to go ahead and change the default vhost file back to referencing the rewrite.rules file. Okay, we’re now back to here, and we’re going to rerun this command. As you can see, it immediately says there are no issues found and very quickly goes through the rest of the checks. Let’s go ahead and scroll back and look at the actual checks on the screen and the messages. So here, in phase one, it validates the dispatcher configuration.
It’s going to tell you which version of the validator you’re using and that no issues were found. This means that none of the files had any issues from a reference standpoint, a configuration rules standpoint, or an organization standpoint. So phase one is complete. Phase two is actually running those files within the context of Apache inside the Docker image. As you can see, it says, okay, I’m going to add all the files here; it creates and configures the Docker environment, deploys those files to the Docker environment, and then starts Apache. It also outputs the full dispatcher.any configuration that would be parsed and processed by Apache when actually executing requests. This allows you to see the entire dispatcher configuration in one context rather than having to look at individual files within the organization of the code structure. So you can see here all the rules and all the configurations and all the filters. We’re going to scroll past all of these and come down to the end of phase two: phase two, syntax OK. That means Apache was able to parse, process, and start successfully with the dispatcher module enabled. Phase three is the immutability check. This is to make sure that you have not changed any files that are not allowed to be changed by development teams. The list of the files is shown here on the screen, and it tells you which ones it’s checking individually and whether or not you’ve changed any. And when it comes to the end, it says no immutable files have been changed; therefore the check is successful and phase three has finished. This validates the legacy format of the Apache and dispatcher configuration files. In the next demo, I’ll be showing you how to validate the flexible mode, but let’s talk about that first. Now that we’ve looked at legacy mode, let’s look at flexible mode. This is the best practice and the recommended way of configuring your AEM as a cloud service dispatcher setup.
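From the command line, those three phases all run from the single validation script shipped in the dispatcher SDK, roughly like this (output summarized in comments):

```shell
# Run from the unpacked dispatcher SDK folder against your configuration source
./bin/validate.sh ./src
# Phase 1: the validator checks file references, rules, and organization
# Phase 2: the files are deployed to the Docker image and Apache is started,
#          proving they are syntactically valid (requires Docker)
# Phase 3: your files are diffed against the SDK's immutable baseline
```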
First, the flexible mode allows for arbitrary virtual host configuration definitions, and each is allowed to reference its own rewrite rules file. Also, each farm may reference its own sub-files for filters, cache rules, etc. This allows for a highly flexible approach to configuring the environment as you see fit. Flexible mode is enabled through the inclusion of a very specific file in the dispatcher module of the source repository, as indicated on the image: specifically, the opt-in USE_SOURCES_DIRECTLY file. This file does not need to have any content; it simply needs to exist. Now let’s look at an example of running validation in flexible mode. So let’s now look at how to validate a flexible setup for the dispatcher within AEM as a cloud service. As you can see on my screen, I have a workspace that has a flexible dispatcher setup. You can see that it has the opt-in USE_SOURCES_DIRECTLY file that’s used to flag it as a flexible configuration. You can see the flexibility in it: I have multiple virtual hosts here, multiple rewrite rule files here, and multiple available farms and enabled farms. Let’s take a look at some of the nuances and differences that are allowed by a flexible mode configuration. If I look at the weekday virtual host, you can see that I’m using a weekday rewrite rule file. If you look at the weekend virtual host, you can see I’m using the weekend rewrite rule file. Let’s take a look at those files and see what the differences are. You can see on my screen that the difference between these two files is not much, but it is significant. The rewrite rule files allow each virtual host to use a different proxy pass so that shortened URLs resolve to different locations within the AEM content hierarchy. I’ll show you how this works later during the troubleshooting phase of this video. Let’s take a look at validating this setup.
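Since the marker file’s content is ignored, opting a project into flexible mode is essentially a one-liner. The `dispatcher/src` prefix below is illustrative; the opt-in path itself is the one named above:

```shell
# Create the empty marker file that switches validation to flexible mode
mkdir -p dispatcher/src/opt-in
touch dispatcher/src/opt-in/USE_SOURCES_DIRECTLY

# The file exists but is empty -- that is all flexible mode requires
ls -l dispatcher/src/opt-in/USE_SOURCES_DIRECTLY
```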
It’s actually really hard to fail a validation in flexible mode because of how lenient it is with the number and configuration of files and settings. So what I’m going to do is update this default virtual host to reference a rewrite file that does not actually exist, so we can see what it looks like from a failure standpoint. I’ll run bin/validate.sh against the flexible source folder. And as you can see, it immediately fails with a missing include, because it cannot find that included file. That would cause a startup failure within Apache if it were actually allowed to go through that process. So let’s go ahead now and change that back and run it through a full validation. We’re going to validate again, and as you can see, it goes through deploying the files, and it passed all the syntax validation and all of the immutability checks. I’m not going to scroll through the entire history of that again, as we just saw it in the previous validation. And that’s everything there is to do for validating a flexible setup for the dispatcher in cloud service. Next, we’ll take a look at how to troubleshoot problems with the dispatcher configurations. Validation is a great way to ensure what is intended to be deployed will not cause a build failure. But what about those scenarios where your configuration is syntactically valid, but it’s not functioning as intended? For that, you need to be able to troubleshoot the configuration. That can be done with the SDK Docker image. To get started using the SDK to troubleshoot, you need to make sure Docker is installed. Then you unpack and install the SDK. Likely both of these steps are already done if you’ve been validating your configurations locally. In order to troubleshoot the configuration, you need a running publish instance. If it isn’t running, then the requests that hit the dispatcher will not have anywhere to go.
Once all that is ready, you simply run the script; I’ll show you how to do that in the demo. So let’s wrap up the video with a demonstration of how to use the Docker image provided within the dispatcher SDK for troubleshooting your Apache and/or dispatcher configurations for cloud service. I’m going to be using the flexible mode configurations I was demoing earlier in the validation section of this video, to demonstrate how to configure multiple virtual hosts and troubleshoot them in the log files. So let’s get started. I’m going to start my Docker container. So I’m going to run bin/docker_run.sh, and I’m going to use my flexible mode configurations, which are here. Then I’m going to tell it where to find my publish instance: docker.for.mac.localhost:4503. My publish instances are running on my local machine, and that domain name will be resolved to the IP address of my laptop in order to find the publish instance from within the container where the dispatcher will be running. Last but not least, I want to listen on port 8080. Let’s go ahead and start that. As you can see, it goes through the validation phase real fast, and then it builds the image. Once it’s built the image, it goes ahead and starts Apache and tails the Apache error log files. You can see there are a couple of dispatcher warnings there. We’re going to go ahead and clear out my screen so that you can see the output when I actually access the pages through the dispatcher and Apache. First, though, before we do that, let’s take a look at what I’ve already got running in my publish instance. So on my localhost:4503, I have the weekend tutorial running in my publish instance. Let’s take a look.
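Assembled in one place, the start command from this demo looks like this. The `docker.for.mac.localhost` alias is the macOS one spoken in the demo; on other platforms the name for reaching the host machine from inside Docker differs:

```shell
# Arguments: config source, publish host:port as seen from inside the
# container, and the local port Apache should listen on
./bin/docker_run.sh ./src docker.for.mac.localhost:4503 8080
```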
I also have a different website called the weekday tutorial, which is really just a copy of the weekend tutorial with a different banner in the initial carousel here and a different title here, so that I can see it’s obviously something different when I go to validate things. Now, as you can see, we still have no content logged here, because we haven’t actually run anything through the dispatcher. Let’s go ahead and do that now. If you recall, I set up my dispatcher to use proxy passes so I can use short URLs to verify my rewrite rules are working. We’re going to do that now. So if we go to weekend.localhost:8080, this should resolve to the weekend website via the virtual host inside the Docker container. Go ahead and hit enter on that one, and there’s the content. We’ll go ahead and do that with the weekday as well, and the only change I’m going to make here is the domain name: weekday.localhost. As you can see, that resolves to the weekday version of the same pages. Those were resolved by virtue of the virtual host configurations inside my Apache configuration: my weekend.localhost and my weekday.localhost. Because of that, it resolves my rewrite rules individually, and then it also resolves my farms and all of the caching rules accordingly. Now, if we go back to our log files, we can actually see all of those requests being logged out to the log file here. You can go through and see whether or not there was a cache miss, and see different details about the requests that were made. Additionally, the Docker container exposes the dispatcher’s cache as a mounted folder on your host machine. It’s in a subfolder of wherever the dispatcher SDK is located. So in this case — here is where I have it, located here in my SDK — and you can see this cache folder.
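The per-vhost short-URL behavior described here can be sketched as two nearly identical rewrite files that differ only in the content root. The content paths below are illustrative, not the project’s real paths:

```apacheconf
# weekend rewrite file -- the short root URL resolves under the weekend site
RewriteRule ^/$ /content/weekend/us/en.html [PT,L]

# weekday rewrite file -- same rule, different content root:
#   RewriteRule ^/$ /content/weekday/us/en.html [PT,L]
```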
This cache folder holds the pages that I just requested from the container. I can go ahead and clear the screen a little bit and refresh the weekend tutorial. We’ll go back to the log file, and you can see that this is a cache hit — that this page, the weekend tutorial, was a cache hit. Now I’m going to go ahead and delete that file off of the cache file system. So we’ll go ahead and delete that — yes, and I want to delete these two as well. Now, if I request this page again — let’s clear out some of the logging first — and we scroll back up, we can see it is now a cache miss. That is the same cache location on your host machine, so you can see what is and is not being cached during your troubleshooting phases. Now, what if you need to change the dispatcher logging setting? Well, there are two ways to do that. You can either set it on the command line, or you can set it in the dispatcher configuration files themselves. I’m going to use the dispatcher configuration files to show how to do that. So I’m going to stop the container — there we go — and clear this out. I’m going to go back to my dispatcher files, and I’m going to go to the variables here. So in the dispatcher global variables file, I set the dispatcher log level to debug, because I now want to debug via the dispatcher log files. Then I’m going to restart my container, and once I restart it, I should see a lot more dispatcher logging. As you can see: dispatcher log, debug, debug, debug. This allows you to quickly and easily identify issues with your dispatcher configuration that can only be seen in a debugging scenario. Actually, in this case I’m going to show you how to use the trace logging, so you can see how to determine, using the dispatcher SDK, which rule is failing a request or blocking a request from going through. So the log level is now set to trace, and we’re going to clear our screen out here.
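The two ways of raising the log level mentioned above can be sketched like this. The DISP_LOG_LEVEL name is the one I understand the SDK’s run script to honor; confirm it against your SDK version’s documentation:

```shell
# Option 1: in the configuration files -- e.g. in conf.d/variables/global.vars:
#   Define DISP_LOG_LEVEL debug

# Option 2: on the command line, for a single local run only:
DISP_LOG_LEVEL=debug ./bin/docker_run.sh ./src docker.for.mac.localhost:4503 8080
```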
We’re going to start the container, and once that’s started, I’m going to go ahead and do a few returns. I’m going to close these tabs, because they do generate a few asynchronous requests. Once I’ve done that, I’m going to make a request for the system console: weekend.localhost:8080/system/console. As you can see, it’s not found. If we go back to our terminal, we can scroll up to the beginning of this log and see: cache action none — a filter rejects /system/console. But the question is, which filter is rejecting it? If we scroll down, it actually tells you which filter rejected it, right here: filter rule entry 001 blocked the GET to /system/console. So if you’re running into issues where you’re getting 404s in your cloud service environment and you don’t know which filter rule is blocking the request, you can use your dispatcher SDK to determine that, by turning trace logging on locally without having to update your production or stage environments. And that concludes this video. Here’s a list of references that you can use to get further details on the topics. Thank you for your time.