Learn how Content Transfer Tool helps you migrate content to AEM as a Cloud Service from AEM 6.3+.
Hi, my name is Kiran Murugulla. I work with the AEM Cloud and Customer Solutions engineering team as a Senior Cloud Architect.
In this session, we are going to learn about different methods of migrating content into AEM as a Cloud Service, either from on-premise installations or from Adobe Managed Services. We are also going to learn how to choose the right method, and touch upon best practices and guidelines to plan and execute a successful content migration. As we see here, there are three different methods. First, the Content Transfer Tool, for which I'll provide an overview of the tool, the prerequisites for using it, and how to plan the migration using CTT, and end that section with a short demo. Second, the Bulk Import Service. The Bulk Import Service is an out-of-the-box AEM as a Cloud Service feature that is available to all Cloud Service customers; it facilitates pulling assets from external cloud storage, such as AWS or Azure.
Third, Package Manager. There are some specific things to keep in mind when using Package Manager with AEM as a Cloud Service, which I will touch upon in that section. Let's dive in.

Content Transfer Tool, or CTT for short. This tool is developed and managed by the Adobe engineering team. It is the primary tool for migrating content from AEM to AEM as a Cloud Service, and it is being used successfully on multiple migration projects. The tool is distributed as a standard AEM package through the Software Distribution portal. The package contains two components, a front end and a backend. The front end, which I will demo later, consists of Touch UI based user interfaces to create connections to the target AEM as a Cloud Service instance and to create migration sets. Each migration set contains one or more content paths from which the content has to be extracted, options to include or exclude versions, and the ability to perform either initial or top-up migrations. The CTT user interface also contains actions to stop, pause, and monitor the migration process.

The CTT migration process itself is divided into two major steps: extraction and ingestion. During the extraction phase, the content and the referenced blobs stored in the blob store are extracted and written to a temporary space on disk on the source system. From there, the content and data are uploaded into a cloud storage area, which becomes the source for the next step, ingestion. This intermediate cloud layer, or staging layer, is an Azure blob store as of the recording of this session.
During the ingestion process, the content that is available in the Azure staging container is ingested into the target AEM as a Cloud Service instance and then indexed, so that it is available for consumption by content authors if the ingestion is happening into an author instance, or available for delivery if it is ingested into a Cloud Service publish instance. Please note that CTT is compatible with AEM 6.3 and onwards on the source system. So if the source system is on an AEM version lower than 6.3, such as 6.2, 6.1, or 6.0, then the repository must first be upgraded to 6.3 or later.
Please note that customizations and other related custom code are not required to be upgraded, as long as the CTT user interface loads and CTT actions can be performed.
Another thing to note: while the CTT package that is installed on the AEM source system has both the front end and the backend facilitating the extraction, there are companion libraries on the AEM as a Cloud Service side which facilitate the ingestion process. One of the primary features of the Content Transfer Tool is its ability to perform top-up migrations.

Now let's talk about how to approach, plan, and execute a successful content migration. There are four crucial steps for a successful content migration to occur. Number one is to identify whether the source repository statistics are in line with the supported limits for using the Content Transfer Tool, or even for storing that content in AEM as a Cloud Service. To do that, you need to gather information from the source system and review those numbers against the limits outlined in the CTT prerequisites public documentation.
If any of these limits go beyond the bounds that are published publicly, then it is recommended to create an Adobe support ticket before moving on with the migration. There are two ways this kind of information can be collected from the source AEM system. The recommended one is to collect a Best Practices Analyzer (BPA) report and import it into Cloud Acceleration Manager. Cloud Acceleration Manager also has a feature to estimate how long the CTT extraction and ingestion are going to take; it is not a perfect estimate, but it at least gives an indication. Again, those are indicative numbers. To gather the segment store size, index store size, and other such information, you can use standard Linux disk-usage commands (a sketch of these commands follows at the end of this planning discussion).

Once that information is gathered, reviewed, and determined to be within bounds, the next crucial step in the process is the proof of migration. You can think of the proof of migration as a proof of concept, but there are certain things that need to be considered carefully. Number one, try to migrate a production copy of the content, so that you are actually dealing with the content that is available in production; it is recommended to get a clone at this stage, just to try things out before the initial migration step. Also try to place the clone in the same network zone as production, so that you can simulate the network connectivity and any related constraints. Identify a good subset of content that can be migrated, and migrate all the users and groups with user mapping as well, to identify any issues. A good rule of thumb is to migrate at least 25% of the content, or at least one terabyte. The reason one terabyte is mentioned is to get an estimate of how long it takes to extract and ingest one terabyte; that number can be extrapolated at a later point to plan the initial migration and to put the estimates into the overall project plan. So overall, the intention behind the proof of migration is to identify any issues very early on, fix them, and be prepared for the initial migration. It also gives near-to-realistic estimates, so that you have a clear idea of how long the content migration itself is going to take within the overall project plan.

Once the proof of migration is all clear, move on to the initial migration. During the initial migration, the best practice is to always migrate from author to author and from publish to publish. This is primarily to make sure you are replicating the state of the content as-is from the source to the destination system. Also note that when the content is being ingested into AEM as a Cloud Service, the author instances are going to be down: they are scaled down, and they are scaled back up once the content ingestion completes. That is different on the AEM publish side, though; when the content ingestion is happening on publish, the publish instances are not going to be down. That is something to keep in mind.

Once the initial migration, which is going to be the crucial step, is complete, plan for the incremental top-ups, that is, the top-up migrations. For planning the top-up migrations, one of the crucial data points is how much content is effectively being added. Edits are fine, because most of the time they are going to be at a property level.
That is mostly textual data. But if a heavy number of assets is being added, then it is a good idea to measure how much content is added over a certain period of time, be it a week, two weeks, or a month. Based on the volume of assets or content being added, schedule the corresponding number of frequent incremental top-ups. The idea behind frequent incremental top-ups is to make sure the AEM as a Cloud Service target instance is catching up with the latest content that is being pumped into the live production system, so that finally, before going live, the remaining delta of content is very small.
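As mentioned above, the segment store, data store, and index sizes can be gathered with standard Linux disk-usage commands. Here is a minimal sketch, assuming a default TarMK installation under the crx-quickstart folder; the /opt/aem path is an assumption, so adjust it to your environment:

```shell
# Rough repository sizing on the source AEM instance.
du -sh /opt/aem/crx-quickstart/repository/segmentstore   # segment store (node data)
du -sh /opt/aem/crx-quickstart/repository/datastore      # blob store, if a file datastore is used
du -sh /opt/aem/crx-quickstart/repository/index          # index store
```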
There are certain things to keep in mind and be aware of. Number one, from a process standpoint, make sure the proof of migration is planned in very early in the migration project timeline. When an extraction is started, CTT spins up its own Java process, and this Java process is owned by the same user who owns the AEM process. That CTT Java process will take up to four gigabytes of RAM. So if CTT is planned to run on a live production system, this is something that has to be taken into account; if there is a requirement to add those four gigabytes and upsize the servers, this is the time to do it, before starting CTT. Apart from using the four gigabytes of heap, CTT also uses two other infrastructure elements: disk I/O and disk usage, and network bandwidth.
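Before starting an extraction on a live system, it is worth confirming the memory headroom for that extra Java process. A minimal sketch using standard Linux commands; nothing here is CTT-specific:

```shell
# Overall memory headroom; the CTT extraction process needs up to 4 GB of extra heap.
free -h

# The user that owns the AEM Java process; the CTT extraction process
# will run under the same user.
ps -eo user,pid,rss,cmd | grep "[j]ava"
```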
When CTT extracts the content from the blob store and segment store, it temporarily writes it into a temp space relative to the crx-quickstart folder, which is the AEM installation folder. It then uses network bandwidth to upload the extracted content into an Azure container in the cloud. This particular Azure container is properly secured: no other customer can access it; only this particular source system can access it, because access is protected by a secret key.
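Because the temp space lives under crx-quickstart, it is also worth checking free disk space on that volume before starting an extraction. A minimal sketch; the install path is again an assumption:

```shell
# Free space on the volume holding the AEM install; the extraction writes
# its temporary content under crx-quickstart before uploading to Azure.
df -h /opt/aem/crx-quickstart
```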
Because CTT is using the network, connectivity has to be established; if there are any firewall allowlists or similar connectivity that has to be opened up, this is the time to plan for that as well. So, all in all, CTT uses additional Java heap space of up to four gigabytes, it uses disk space, and it uses network bandwidth on the source system. Those are the things that have to be considered when you are using CTT.

With that, let's jump into the Content Transfer Tool demo so that I can show you how it looks, its interface, and other aspects of it. As a first step, let's see how to download the Content Transfer Tool from the Software Distribution portal. Once you log in to the Software Distribution portal, navigate into AEM as a Cloud Service; under Software Type, choosing Tooling is the easiest way to locate it, and you will find the latest version of the Content Transfer Tool available there. Once you download the package, you can go into Package Manager on your source system. For demonstration purposes, I am using a 6.4 instance; I have uploaded the Content Transfer Tool package here and installed it. As mentioned before, the package installs the tool, and you can access it by going into AEM and then Operations; under Operations, you will have Content Migration, and under Content Migration, there is a Content Transfer section.

Once you are here, let's try to create a content migration set so that I can demonstrate what is involved. Give a name for your migration set; this is the name for the entire migration set. Then provide your Cloud Service URL, primarily the Cloud Service author URL. Another note here: whether you are doing an author-to-author or a publish-to-publish migration, always use the same author URL. Then click Open access token; it will show the access token of the cloud service instance. Here you can toggle whether you intend to include versions or not, and you can also enable mapping of users and groups to IMS, along with the associated ACLs.
The way user mapping works is this: let's take a specific example. Say that on your source system, under /content/dam, you have a JPEG asset, and the ACLs on that JPEG are granted to Joe Smith, who is part of the dam-administrators group. When CTT migrates the content, it will take the content and Joe Smith, because he is assigned an ACL, and that ACL is inherited from the dam-administrators group; it will take the group and the user and put them into the cloud service. But because AEM as a Cloud Service is connected to and provisioned through IMS, we have to make sure the identity of that user is available in IMS and mapped from IMS to AEM. That is user mapping at a high level; more information is available in the public documentation.

Now, this is the crucial step where you select what you are going to extract. I am configuring it to extract content assets: I'm picking We.Retail under English, and I'm going to pick the Activities assets. Now I'm going to save this; once you save, this is what you are going to see. There are some actions you can perform here; for example, you can go and click Extract. Let's see what it does in the backend. When I click Extract, there is an option to overwrite the staging container during extraction: it means that if the contents of this path are already available in the Azure container, they will be overwritten. This is the option to turn off during a top-up migration, so that it won't overwrite the existing content in the staging container. Then I click Extract, and once the extraction is running, it will show as Running.

Now let's quickly go and check whether we have a new Java process. Here, if you look under cloud-migration, there is a new extraction folder that has been created. If you go into that extraction folder, this is where the temporary content is written to disk, and the output.log there is nothing but the extraction log. You can watch the log contents either from here or from the CTT user interface: click on Logs and then on the extraction log, which shows the extraction progress. And when we grep for the Java process, we can see the CTT extraction's own Java process. We can also look at the ingestion log the same way when the ingestion is happening; I am not going to ingest right now, because it would take a little bit of time.

Overall, that is how you download the CTT package, install the latest version on your source system, create a migration set, and initiate an extraction. If you are an administrator or system admin who has SSH access to the source, you can go into crx-quickstart and monitor the disk utilization that way, and also watch the output.log; or, if you are watching the logs from the touch interface, you can just go and read the log there.
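To recap the shell-side monitoring shown in the demo, here is a minimal sketch, run from the AEM installation directory on the source system; the extraction subfolder name varies per run, so it is shown as a placeholder:

```shell
# The extraction working area created under the AEM install folder.
ls crx-quickstart/cloud-migration/

# Follow the extraction log; this is the same content shown under Logs in the CTT UI.
tail -f crx-quickstart/cloud-migration/<extraction-folder>/output.log

# Confirm the separate CTT Java process while an extraction is running.
ps aux | grep "[j]ava"
```

I hope that is useful. Thanks for watching.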