Go-Live

In this part of the journey, you will learn how to plan and perform the migration once both the code and the content are ready to be moved to AEM as a Cloud Service. You will also learn the best practices to follow and the known limitations to take into account when performing the migration.

The Story So Far

In the previous phases of the journey:

  • You learned how to get started with the move to AEM as a Cloud Service in the Getting Started page.
  • You determined whether your deployment is ready to be moved to the cloud by reading the Readiness phase.
  • You familiarized yourself with the tools and process through which you can make your code and content cloud ready in the Implementation phase.

Objective

This document will help you understand how to perform the migration to AEM as a Cloud Service once you are familiar with the previous steps of the journey. You will learn how to perform the initial production migration as well as the best practices to follow when migrating to AEM as a Cloud Service.

Initial Production Migration

Before you can perform the production migration, please follow the fitment and proof of migration steps outlined in the Content migration strategy and timeline section of the Implementation phase.

  • Initiate the migration from production, applying the experience you gained during the AEM as a Cloud Service stage migration performed on clones:

    • Author to Author
    • Publish to Publish
  • Validate the content ingested into both the AEM as a Cloud Service author and publish tiers.

  • Instruct the content authoring team not to move content on either the source or the destination until the ingestion is complete.

  • New content can be added, edited, or deleted, but avoid moving it. This applies to both source and destination.

  • Record the time taken for full extraction and ingestion to have an estimate for future top-up migration timelines.

  • Create a migration planner for both author and publish.

Incremental Top-Ups

After the initial migration from production, you must perform incremental top-ups to bring your content up to date on the cloud instance. It is therefore recommended that you follow these best practices:

  • Gather data on the amount of content created over a given period, for example one week, two weeks, or a month.
  • Plan top-ups so that content extraction and ingestion together take no more than 48 hours. This keeps each content top-up within a weekend timeframe.
  • Plan the number of top-ups required and use those estimates to plan around the Go-Live date.
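The top-up planning above boils down to simple arithmetic. The following is an illustrative sketch (not part of any AEM tooling), assuming you have measured the combined extraction and ingestion throughput during the initial migration:

```python
def estimate_topups(growth_gb_per_week: float,
                    throughput_gb_per_hour: float,
                    weeks_until_go_live: int,
                    max_window_hours: float = 48.0) -> dict:
    """Estimate the number of top-up migrations needed before Go-Live.

    All inputs are figures you measure yourself: the content growth per
    week on the source instance, and the extraction + ingestion
    throughput recorded during the initial production migration.
    """
    # Hours of migration work that one week of new content represents.
    hours_per_week = growth_gb_per_week / throughput_gb_per_hour
    # Longest stretch of accumulated content a single top-up can cover
    # without exceeding the recommended 48-hour window.
    weeks_per_topup = max(1, int(max_window_hours // hours_per_week))
    topups = -(-weeks_until_go_live // weeks_per_topup)  # ceiling division
    return {"weeks_per_topup": weeks_per_topup, "topups_needed": topups}
```

For example, at 200 GB of new content per week and a measured throughput of 10 GB per hour, each top-up can cover up to two weeks of content, so an eight-week runway calls for four top-ups.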

Identify Code and Content Freeze Timelines for the Migration

As mentioned previously, you will have to schedule a code and content freeze period. Use the following questions to help you plan the freeze period:

  • How long do I have to freeze content authoring activities?
  • For how long should I ask my delivery team to stop adding new features?

To answer the first question, consider the time it has taken to perform trial runs in non-production environments. To answer the second question, you need close collaboration between the team adding new features and the team refactoring the code. The goal should be to ensure that all code added to the existing deployment is also added, tested, and deployed to the cloud services branch. Generally speaking, this means the code freeze period can be shorter.

Additionally, you need to plan for a content freeze when the final content top-up is scheduled.

Best Practices

When planning or performing the migration, you should consider the following guidelines:

  • Migrate from Author to Author and Publish to Publish
  • Request a production clone that can be used to:
    • Capture repository statistics
    • Perform proof-of-migration activities
    • Prepare the migration plan
    • Identify content freeze requirements
    • Identify any upsizing needs on the production instance when performing the migration from production

Content Transfer Tool best practices

Make sure that when going live, you run the content migration against production instead of a clone. A good approach is to use AzCopy for the initial migration and then run top-up extractions frequently (even daily) to extract smaller chunks and avoid any long-term load on the source AEM instance.

When performing the production migration you should avoid running the Content Transfer Tool from a clone because:

  • If content versions must be migrated during top-up migrations, executing the Content Transfer Tool from a clone does not migrate the versions. Even if the clone is recreated from the live author frequently, each time a clone is created, the checkpoints used by the Content Transfer Tool to calculate the deltas are reset.
  • Since a clone cannot be refreshed as a whole, the ACL Query package must be used to package and install content that is added or edited on production onto the clone. The problem with this approach is that content deleted on the source instance never disappears from the clone unless it is manually deleted from both source and clone. This introduces the possibility that content deleted on production is not deleted on the clone, and therefore not on AEM as a Cloud Service.

Optimizing the load on your AEM source while performing the content migration

Remember, the load on the AEM source will be greater during the extraction phase. You should be aware that:

  • The Content Transfer Tool is an external Java process that uses a JVM heap of 4 GB.
  • The non-AzCopy version downloads binaries, stores them in temporary space on the source AEM author (consuming disk I/O), and then uploads them into the Azure container (consuming network bandwidth).
  • AzCopy transfers blobs directly from the blob store to the Azure container, which saves disk I/O and network bandwidth. The AzCopy version still uses disk and network bandwidth to extract and upload the data from the segment store into the Azure container.
  • The Content Transfer Tool process is lighter on system resources during the ingestion phase, since it only streams ingestion logs; there is little load on the source instance as far as disk I/O and network bandwidth are concerned.

Known Limitations

Take into account that the entire ingestion fails if any of the following is found in the extracted migration set:

  • A JCR node with a name longer than 150 characters
  • A JCR node that is bigger than 16 MB
  • Any ingested user or group whose rep:authorizableId is already present on AEM as a Cloud Service
  • Any asset that was extracted and ingested and then moved to a different path, on either the source or the destination, before the next iteration of the migration
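The node-name limit can be checked before extraction. Below is a minimal pre-flight sketch, assuming you have exported a list of JCR paths from the source instance (for example via a JCR query or oak-run); it is illustrative only and not part of the Content Transfer Tool:

```python
JCR_NODE_NAME_LIMIT = 150  # characters, per the ingestion limitation above

def find_overlong_node_names(paths, limit=JCR_NODE_NAME_LIMIT):
    """Return (path, offending_segment) pairs for every repository path
    that contains a node name longer than `limit` characters.

    `paths` is assumed to be an iterable of JCR paths collected from the
    source instance before running the migration.
    """
    offenders = []
    for path in paths:
        for segment in path.strip("/").split("/"):
            if len(segment) > limit:
                offenders.append((path, segment))
                break  # one hit per path is enough to flag it
    return offenders
```

Running such a check on the migration set before extraction lets you rename or exclude offending nodes instead of discovering the failure after hours of ingestion.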

Asset Health

Unlike the limitations listed above, the following asset issues do not cause the ingestion to fail. However, it is highly recommended that you take the appropriate steps in these scenarios:

  • Any asset that has the original rendition missing
  • Any folder that has a missing jcr:content node

Both of the above items will be identified and reported in the Best Practice Analyzer report.
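The first check reduces to set membership on each asset's renditions. As a hypothetical sketch, assuming you have already collected the rendition node names under each asset's jcr:content/renditions node from the source instance (the Best Practice Analyzer remains the authoritative detection):

```python
def assets_missing_original(renditions_by_asset):
    """Return the asset paths that lack an 'original' rendition.

    `renditions_by_asset` is assumed to be a mapping of asset path to
    the list of rendition node names collected from the source
    repository. Illustrative only; use the Best Practice Analyzer
    report for the authoritative list.
    """
    return [path
            for path, renditions in renditions_by_asset.items()
            if "original" not in renditions]
```

Assets flagged this way should have their original rendition regenerated or be excluded before the next extraction.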

Go-Live Checklist

Please review the list of activities presented below to ensure that you can perform a smooth and successful migration:

  • Schedule a code and content freeze period. See also Identify Code and Content Freeze Timelines for the Migration.
  • Perform the final content top-up
  • Complete testing iterations
  • Run performance and security tests
  • Cut-Over and perform the migration on the production instance

You can refer back to this list at any time if you need to recalibrate your tasks while performing the migration.

What’s Next

Once you understand how to perform the migration to AEM as a Cloud Service, you can check the Post-Go-Live page to keep your instance running smoothly.
