Ingest, map, and transform Adobe Analytics data

In this video, we demonstrate how to use data prep features for Analytics data, such as mapping Analytics variables to new custom fields and performing transformations and calculations. These activities are done in the Source Connections workflow for Analytics in Experience Platform.

In this video, I’ll explain how you can ingest data from Adobe Analytics into Adobe Experience Platform and enable that data for Platform’s Real-Time Customer Profile. Data ingestion is a fundamental step to getting value from Experience Platform, such as building robust Real-Time Customer Profiles and using them to provide meaningful experiences. If you’re using Adobe Analytics, you already know it’s a powerful analytical engine that helps you learn more about your customers, see how your digital properties are performing, and identify areas of improvement. The data connector lets you easily tap into this data for use in the Real-Time Customer Profile in the least amount of time compared to other methods. These are the areas I’ll cover in this video: ingesting analytics data using a standard or custom schema; why you’d want to use a custom schema, which requires a bit more setup; the data prep functions available in the custom schema workflow; how to enable this data for Real-Time Customer Profile; and how to monitor your analytics data flow to make sure there are no errors or gaps in the data coming through. Now, the Analytics source connector isn’t the only way to get your analytics data into Platform, but it is the fastest method, requiring the least amount of effort from you or your resources. You may have use cases that require edge or streaming segmentation based on analytics attributes; that topic is out of scope for this video. I’ll show you a mapping feature in the user interface as well as calculations you can apply to your data. This is all done within the Platform user interface.

Now let’s look at some architecture. Analytics collects data from various digital channels using multiple data centers around the world. Once the data is collected, you can use processing or VISTA rules to modify the incoming data for better reporting.
Once this lightweight processing happens, the data is ingested into Platform and is ready for consumption by the Real-Time Customer Profile and associated services, like segmentation and activation, within a couple of minutes. A caveat to this is if you use Analytics as a reporting source for Target activities, also known as A4T; in that case, it will add another 15 minutes to data availability. The same data is micro-batched into the data lake with a latency of about 15 to 30 minutes for consumption by things like Query Service and Intelligent Services, as well as Customer Journey Analytics.

To set up the Adobe Analytics source connector, log into Experience Platform, navigate to Sources, and open the Sources catalog. Under Adobe applications, look for Adobe Analytics. Select the Adobe Analytics source connector to add data. In the Analytics source add data step, you can choose the source data from a classification file or a report suite. For this video, I’ll select the report suite data option. Now select the report suite you want to ingest data from in the list below. I’m going to select Hotel Reservations. Then, at the top, select Next. At this point, I have two choices: default schema and custom schema. First, what is a schema? It’s a set of rules that validates the structure and format of data, and it’s used by Platform to ensure the consistency and quality of incoming data. Now I’m going to show you the Hotel Reservation schema, just so you have some exposure to it. When I expand the analytics structure and then select custom dimensions, notice the objects underneath. You should be familiar with things like eVars, listProps, and props, just to name a few. What we’re looking at right now is the default analytics field group.
Conceptually, it’s important to understand that selecting the analytics default schema in the source connector workflow will automatically map your report suite data to the default schema without any additional effort on your end. You don’t need to create a new schema before invoking the workflow using the default schema. Everything comes over as is. The descriptors used for the analytics variables in the selected report suite will be copied over to Platform after initial setup. I’ll be showing you the more flexible option in this video, which is based on the custom schema. Before we get into that, I’ll explain the difference between the default schema workflow and the custom schema workflow. Here you see at the top that everything comes over as is using the default analytics schema. However, you can add additional field groups to the schema that you define. Okay, so what’s a field group? A field group is a building block of a schema that organizes a group of fields into an object that can be reused in multiple schemas of the same class. At the bottom, there are data prep features you can leverage as you map your standard attributes to new custom attributes. When you use a custom schema in the analytics data connector workflow, you need to plan and create your schema in advance. Let me show you what I mean. Here, I have the schema that I showed you earlier. There are two field groups that are part of the schema: the Adobe Analytics ExperienceEvent template, which I showed you earlier, and a second field group called Reservation Details. This is a user-defined field group added to the schema. Here, we see fields that have descriptive names like transaction, cancellation, and confirmationNumber. I’m going to be mapping some analytics variables to a couple of these new attributes in the source connector workflow. Here are the main use cases for a custom analytics schema.
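As a sketch of what that advance planning produces, a user-defined field group like Reservation Details might look roughly like the fragment below in XDM JSON Schema terms. The field names (transaction, cancellation, confirmationNumber) come from the video, but the `_tenant` namespace, the field types, and the overall JSON shape are simplified, illustrative assumptions rather than a verbatim export:

```json
{
  "title": "Reservation Details",
  "meta:intendedToExtend": [
    "https://ns.adobe.com/xdm/context/experienceevent"
  ],
  "definitions": {
    "reservationDetails": {
      "properties": {
        "_tenant": {
          "type": "object",
          "properties": {
            "transaction": {
              "type": "object",
              "properties": {
                "transactionID": { "type": "string" }
              }
            },
            "cancellation": { "type": "string" },
            "confirmationNumber": { "type": "string" }
          }
        }
      }
    }
  }
}
```

In a real organization, custom fields live under a tenant-specific namespace rather than the placeholder `_tenant` shown here.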
First, it’s more user friendly to leverage semantic or descriptive attribute names in things like the segmentation service and Customer Journey Analytics instead of referencing native analytics eVars like eVar1, for example. Second, if you want a more standardized way of referencing the same data that might be captured differently across report suites, using custom attributes is the way to go, as you see illustrated in this table. Third, you may have data in Analytics that is stored in a pipe-delimited format, or maybe you want to join two values together in a single attribute. The custom schema workflow will help you accomplish both of these goals. Last, let’s say you want more flexibility for defining identities for your analytics data beyond the ECID, the identifier generated by the Experience Cloud ID Service. You can do that by setting up a new attribute in your custom field group and marking it as an identity field. Okay, let’s put it all together again before we go back to the UI. To recap, use the custom schema and its associated workflow in the source connector setup when you have use cases that extend the schema beyond the default analytics field group; otherwise, use the default setup, which has fewer steps and a lower level of effort. Also, if you have multiple report suites to bring into Platform, you’ll need to set up a data connector workflow for each one. Next, I’m going to select the Hotel Reservation schema from the list. The map standard field section gives you details about the default mapping that occurs from your report suite to the Analytics ExperienceEvent field group in the schema. If there were any descriptor conflicts when mapping your report suite to a preexisting schema, you’d see them here. I’ll show you that now for demonstration purposes. You can see the conflict in the descriptor names: custom insight on the left versus new insight on the right. At this point, you could either accept the mapping discrepancy or create new mappings using the Custom tab.
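To make the third use case above concrete, here is a minimal Python sketch of the two transformations a calculated field can accomplish: splitting a pipe-delimited value apart, and joining two values into one attribute. The sample values ("deluxe|AAA", a room type and rate code) are made-up examples, not fields from the report suite in the video; in the actual workflow, you would express these with Data Prep's string functions in the mapper rather than Python.

```python
def split_pipe_delimited(value: str) -> list[str]:
    """Split a pipe-delimited Analytics value into its parts,
    e.g. "deluxe|AAA" -> ["deluxe", "AAA"]."""
    return value.split("|")


def join_values(first: str, second: str, separator: str = "|") -> str:
    """Join two values into a single attribute,
    e.g. ("deluxe", "AAA") -> "deluxe|AAA"."""
    return f"{first}{separator}{second}"


# Hypothetical eVar captured as "roomType|rateCode"
room_type, rate_code = split_pipe_delimited("deluxe|AAA")
print(room_type, rate_code)        # deluxe AAA

# Hypothetical combination of two separate values
print(join_values("deluxe", "AAA"))  # deluxe|AAA
```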
I’m going to select Hotel Reservations. Let’s look at some mapper functions. They allow pass-through mapping as well as calculated field mapping, which performs data manipulation operations on string, numeric, datetime, and array data types. First, let’s work with pass-through mapping. I’m going to select Add new mapping under the Custom tab. In the source field, which is coming from my report suite, I’m going to select eVar5. My report suite doesn’t have a label or descriptor for this, but let’s say it contains the confirmation number. I want to map this to the confirmationNumber semantic field created in the Reservation Details custom field group that’s part of the schema. Next, let’s work with a calculated field. I’m going to select Add calculated field. Once this opens, I get a friendly editor to work with. It contains functions, fields, and operators on the left, a text editor and preview section in the middle, and, as we’ll see, a contextual help reference on the right. I can search for a specific function using the search box, which I’ll do now. I’ll type in trim, and I’m going to add it to the editor by clicking on the plus sign. Now I’ll type in lower and add that to the editor as well. Next, I’m going to select the field link to search for the analytics field I want to work with, and I’m going to search for eVar2. Let’s say eVar2 contains a transaction ID. My goal is to trim any spaces and ensure the value is all lowercase. Last, I’m going to rearrange the formula so that the syntax is correct. Now I’ll click on Save in the upper right corner. This adds the calculated field to the mapping screen. I’m going to pop open the schema from the target field section by clicking on the icon on the right, and I’m going to select transactionID in the transaction object. Next, I’ll click on Select at the bottom. After all of the mappings are addressed, click on Next in the upper right corner.
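The calculated field built in this step nests the trim and lower functions so the value is stripped of surrounding whitespace and then lowercased. Here is a quick Python sketch of the same normalization; the sample eVar2 value is a made-up example:

```python
def normalize_transaction_id(raw: str) -> str:
    """Mirror the calculated field's trim-then-lowercase logic:
    strip surrounding whitespace, then lowercase the value."""
    return raw.strip().lower()


print(normalize_transaction_id("  TXN-98765  "))  # txn-98765
```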
On the data flow detail page, I’ll provide a name. I can also select any of these alerts about ingestion status for this data flow. Now I’m going to click on Next in the upper right corner. This lets me review all the information about this data flow, and when it looks good, I click the Finish button. Okay, so I just created that data flow, and I want to show you some other validations and configurations you can do once your analytics data has started to ingest. I’ll use a different analytics data flow to show you this. Next, I’ll select the third data flow item from the top. When I do this, some properties display in the right panel. One of these is the dataset name used for ingesting this report suite data. I’m going to click on it, and this opens the dataset activity page. Under the dataset activity, there’s a quick summary of ingested batches and failed batches during a specific time window. I can scroll to see ingested batch IDs. Each batch represents actual data ingested. Once everything looks good, enable the dataset for Real-Time Customer Profile. To do this, there are two objects that require configuration for Profile: the schema and the dataset. On the properties for the dataset, I’ll open the schema in a new tab. In the properties for the schema, I can see it’s already enabled for Profile. It’s important to note that once a schema is enabled for Profile, it can’t be disabled or deleted. Moreover, fields can’t be removed from the schema after this point, but you can add new fields. So keep this in mind if you’re working in your production environment, and if you’re unsure, just use a sandbox until you’re ready. Now, I’m going to go back to the dataset tab. The Profile flag needs to be set here as well in order for this data to be sent to the Profile store. Remember, the Profile store is what’s leveraged by the Real-Time Customer Profile. You’d simply click the toggle and then select Enable.
However, I’m going to cancel out of this since I’m just demonstrating. Okay, you should now know how to ingest data from Adobe Analytics into Platform using the source connector. You should also feel comfortable using a custom schema as well as the mapper and data prep features. Also, you should be able to enable this data for the Real-Time Customer Profile. Good luck.

For more information, please see the Adobe Analytics source connector documentation and the Data prep functions documentation.