Cloud Manager

Learn about Cloud Manager for AEM as a Cloud Service, and how it differs from Cloud Manager for AEM on Adobe Managed Services (AMS).

Hello and welcome. My name is Bryan Stopp, and I am a Senior Cloud Architect with Adobe. In this video, I will be reviewing Cloud Manager as it pertains to AEM as a Cloud Service. I’ll be going over four topics today. First, Cloud Manager programs, their types, and their differences. Then I’ll review pipelines: what you need to know if you’re moving from Adobe Managed Services, and how the different types of pipelines differ. In this section, I will also go over the different pipelines available within a program. Next, I will cover AEM as a Cloud Service automatic update features, highlighting those that are most important. Finally, I will review automation features and tools available with Cloud Manager, including a demo of an Adobe I/O CLI Cloud Manager plugin capability. Let’s get started.

First, programs: production versus sandbox. The first distinctive feature of a production program is that the stage and prod environments are enrolled in automatic updates. The dev environment is not, as this allows teams to manage updates at their discretion. I will cover more details about what automatic update enrollment means later in this video. Next, the type of program that you can create depends on what is licensed. Based on what is available, different options are presented to administrators during program creation. They may select from Sites, Assets, Commerce, Forms, or Screens, depending on whether or not these have been licensed and purchased. Another nuanced detail about production programs is that in order to access the Developer Console, you must be in a developer role. Assigning this can be done by org administrators for the appropriate team members. Finally, production programs give users the ability to administer SSL certs, domain names, and IP restrictions via a self-service feature. No product support is required on this front.
The image to the right on the slide here shows the context menu for environment management. The choices are limited because this is a production program, and therefore there are more restrictions as compared to a sandbox program. Keep this in mind as I move on to sandbox programs: as you can see in this image of a sandbox environment management context menu, there are more operations available to users. I’ll go through those nuances here. While not presented as a functional difference, it is important to understand that the use cases for a sandbox program are very different from those of a production one. Sandbox programs are intended to be used as a method for teams to prototype AEM projects or deploy proofs of concept to ensure design compatibility. Additionally, they are used by teams to test and validate code and code migration plans as part of the process of moving to AEM as a Cloud Service. The first functional difference of note is that sandbox programs are not enrolled in automatic updates. Dev, stage, and prod must all be updated through user action. This is reflected here in the screen capture: the environment context menu has an option to update the environment. To update it, you simply select the option shown here. Then a deployment pipeline must be activated for the changes to take effect. There need not be any changes in the code that is to be deployed; the pipeline simply needs to run. Another difference is the ability for users with the appropriate permissions to delete the stage and production environments. This action is always done in tandem for these environments, so if stage is deleted, production will be as well. This operation is not possible on a production program without contacting Adobe support. Once the environments have been deleted in the sandbox, users may create new ones at their discretion. Another distinctive feature of sandbox programs is that they all contain Sites and Assets.
Customers may add Forms, Commerce, or Screens product capabilities as add-ons.
I mentioned that with production programs, users must be in the developer role to access the Developer Console. This is not the case with sandbox programs: any user in any role has access to the interface; no special role or permission is required. Sandbox program environments are hibernated after a period of inactivity. In order for access or pipeline deployments to operate again, they must be dehibernated. This can be done through the user interface; the menu option I have highlighted here shows how to accomplish this. It is important to note that you should ensure the environment is in a running state before starting any pipeline, otherwise the pipeline will fail to deploy.
Finally, as mentioned previously, sandbox programs do not support the additional self-service features of SSL cert, domain name, and IP restriction management. Now that I’ve covered programs, it’s time to review AEM as a Cloud Service pipelines. Before moving on to the distinctive features of pipelines, I want to cover some differences. If you are moving from Adobe Managed Services to AEM as a Cloud Service, it is important to understand these details as you go through a migration effort. First, the code quality rules for Adobe Managed Services are different from those for AEM as a Cloud Service. There is some overlap in these rules, but there are several specific to AEM as a Cloud Service that are deemed critical and will fail a pipeline if not met. If you’d like to do further analysis, each set of checks can be downloaded in CSV format from the respective documentation pages for comparison.
The next major difference between these pipelines is the actual output artifact generated as a result of execution. The output of an AMS build is an AEM package, or zip artifact. While a similar artifact is created during one step of an AEM as a Cloud Service pipeline execution, the final output is actually a system image, which is deployed during the final phase to the target environment.
These features, which are available in Cloud Manager for Adobe Managed Services, do not have an equivalent in AEM as a Cloud Service. There is no dispatcher cache flush configuration for the environment in AEM as a Cloud Service; the containers are created anew during the deployment, and therefore there is no pre-existing cache to invalidate or clear. Finally, as there are no CSE roles in AEM as a Cloud Service, there is no option for adding a CSE to approve any deployment. Let’s look at some things that are new in AEM as a Cloud Service pipelines. First is the automatic update feature. I’ll review this shortly.
The other pipeline features available in AEM as a Cloud Service are testing steps.
First, AEM as a Cloud Service will run a suite of product tests against the stage environment after deployment to verify product functionality.
Custom functional and UI tests, which are provided via the pipeline source repository, are executed against the stage environment as well. These tests are provided by the development team to validate that no regressions have occurred and that any new capability being deployed is functional before it is deployed to production. Finally, another feature available only in AEM as a Cloud Service is the experience audit report. This report provides insight on configured pages regarding accessibility, usability, and user experience.

Diving into pipelines, first I’ll look at production. There is only one production pipeline per program, as there is only one stage and prod environment per program. The production pipeline consists of a specific set of steps; you can see them here on the slide: validation, build, stage deployment, testing, and finally production deployment. Production pipelines have several configurable options. You can pick which Git repository, and branch therein, to use as the source. You can also choose to automatically start the pipeline whenever a commit to that branch occurs. Other configurable features include what action to take when an approval is required, such as when the code quality review or the stage deployment has completed and is awaiting review and approval before production deployment. Both of these can stop the pipeline until a user has approved the step, or you can configure them to automatically advance should you want to. Finally, the audit configuration allows for specifying which pages should be included during the experience audit step, which I discussed previously.

There are two types of non-production pipelines: deployment and code quality. These pipelines are a subset of the steps you will find in the production pipeline. First, the deployment pipeline. A non-production deployment pipeline is used to deploy a branch of a repository to a target development environment.
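The Git repository and branch settings described above can also be adjusted outside the UI. The following is a rough sketch, assuming the Adobe I/O CLI with the Cloud Manager plugin is installed and authenticated; the pipeline ID is hypothetical, and the exact command and flag names should be verified against your installed plugin version with `aio cloudmanager --help`.

```shell
# Point a pipeline (hypothetical ID 1234) at a different source branch.
aio cloudmanager:pipeline:update 1234 --branch release/candidate

# Confirm the change by listing the program's pipelines.
aio cloudmanager:list-pipelines
```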
Each program may have one non-production deployment pipeline configured per development environment. A non-production deployment pipeline includes the same sequence of steps as a production pipeline up to and including the first deployment step, which is made to the target development environment. This pipeline has the same set of configurable options as a production pipeline for those steps that are included. Thus, a user can configure the Git details, such as automatic builds and which branch to use, as well as the code quality override flag settings.

Code quality pipelines are used by teams to gather metrics without performing a full deployment. The number of code quality pipelines is not limited by the number of environments. These pipelines are a subset of the steps of the non-production deployment pipeline, right up until the build image step. While the build image step is still performed, the image is not actually deployed anywhere. The pipeline also supports the same configuration options for the Git repository and initiation details such as automatic building. The major configuration difference to note here is that code quality pipelines cannot have the quality metric gate set; instead, this will always be a cancel operation. That is to say, it will always fail the pipeline if there are any concerns raised by the code quality evaluation.

Now that I’ve covered pipelines, I’ll review the AEM as a Cloud Service automatic update features. AEM as a Cloud Service is delivered to our customers through a continuous integration and continuous delivery (CI/CD) process. Releases are made any time a bug fix or security update is ready. Additionally, when incremental updates pass validation, new releases are pushed to our customers. Incremental updates may be new features or functionality that is not necessarily ready for customer use, but is stable and available, and should be included in the product.
Finally, once those incremental updates are ready for release and customer operations, the final push to customers is done to make them available. These product updates are automatically rolled out through the CI/CD process. They flow from the program’s stage environment to production. In the event a production environment update fails, a rollback will occur, which will also roll back the stage environment, keeping both systems on the same version. All of this occurs without any customer downtime. The CI/CD process includes numerous validation steps. These include a suite of regression tests performed on the base AEM as a Cloud Service product. The release process also includes testing with the program’s custom code, that is to say, the customer’s code base. The latter is accomplished using the same production deployment pipeline that is available to customers through their normal operational procedures. Thus, the validation step includes the product functional tests as well as the customer’s custom functional and UI tests. If any of these fail, then the automatic update will be blocked.
Finally, let’s review some automation features, at the end of which I’ll do a quick demo of some of the tools available to development teams. I’ve already mentioned this, but it is worth repeating: each pipeline can be configured to automatically start when a Git commit occurs. If a pipeline is already running and a push event is raised, another build will be queued and will start immediately after the current one is complete. Multiple subsequent push events will override the previous one, such that only one build will be started, using the most recent commit that activated the build request. Cloud Manager was designed API-first. Because of this, most of the high-use operational tasks can be done via the API.
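Because Cloud Manager is API-first, a pipeline execution can also be started from a script rather than a Git push. This is a minimal sketch, assuming the Adobe I/O CLI Cloud Manager plugin is installed and authenticated; the pipeline ID is hypothetical, and the command name should be checked against `aio cloudmanager --help` for your plugin version.

```shell
# Start a new execution of pipeline 1234. If an execution is already
# running, the API will reject the request rather than queue a second one.
aio cloudmanager:pipeline:create-execution 1234
```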
To that end, a Postman workspace is available to allow development teams to get set up for making API calls quickly. Additionally, if you are familiar with the Adobe I/O command line interface, there is a Cloud Manager plugin which can be used to make those same API calls via command line arguments. The most frequent use cases are: approving and canceling a pipeline execution; configuring build or environment variables; clearing the pipeline’s local Maven repository to ensure that the next build uses newly downloaded artifacts from Maven Central or other artifact repositories; and last, but most importantly, tailing a log file. I’m actually going to go ahead and demo tailing a log file right now. In this demo, I will show you how to tail a log file using the Adobe I/O CLI for Cloud Manager. I have activated a page and I’m going to monitor the dispatcher log for the activation event. To do so, I simply type in the command with the environment ID, the service (dispatcher), and the log file name that I want to monitor.
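The tail command used in the demo looks roughly like the following sketch. It assumes the Adobe I/O CLI Cloud Manager plugin is installed and authenticated; the environment ID is a placeholder, and the valid service/log-name pairs for a given environment can be discovered first.

```shell
# Discover which service/log-name combinations can be tailed on this environment.
aio cloudmanager:environment:list-available-log-options 56789

# Tail the dispatcher httpd access log; press Ctrl-C to stop streaming.
aio cloudmanager:environment:tail-log 56789 dispatcher httpdaccess
```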
So I’m going to wait for that event to come through and once it does, we’ll be done.
As you can see, the dispatcher invalidation event has occurred, based on my activation of a page. On your screen is a list of references that you can use to get further details on the topics covered herein. Thank you for your time.

Hands-on exercise

Apply your knowledge by trying out what you learned with this hands-on exercise.

Prior to trying the hands-on exercise, make sure you’ve watched and understood the video above, and the following materials:

Also, make sure you have completed the previous hands-on exercise:

Hands-on exercise GitHub repository

Hands-on with Cloud Manager

Explore triggering a Cloud Manager pipeline using the AIO CLI Cloud Manager plug-in.

Try out Cloud Manager