Migrate to EDDL/WebSDK - May 2023 APAC Adobe Analytics Skill Exchange Grow Track
Why migrate to EDDL/WebSDK and what should be considered before starting the migration
Transcript
Hi everyone, good day. I’m excited to be here at the Experience Makers Skill Exchange. Today I will talk about the journey to next-generation website tracking. Before I start, let me give a brief introduction of myself. I’m Neil Lau, the Director of Martech Lead Asia for Manulife. I have over 25 years of experience in web and digital foundations and over 10 years in digital marketing and digital analytics for insurance at a regional level. I’m currently an Adobe Analytics Champion and also an Adobe Community Advisor. In my personal time, I love to learn, to run, to read, and to play games. The first one, learning, is the most important, and that’s why we are here today: to learn from each other and exchange some knowledge. Today I will talk about the common way of website tracking and its problems, then two new tools, the event-driven data layer (EDDL) and the Adobe Experience Platform Web SDK, and how they can address those problems, and then what should be considered if we decide to migrate to these new tools for website tracking.

So, the common way of website tracking nowadays. When we talk about website tracking, we basically have two things: the event and the data. The event is the user behavior on the website, and we use Adobe Launch to capture different types of events. There are three common event types. The first is the DOM event, bound to an HTML element with a specific CSS selector, so that when the user clicks on a button or another element, we can trigger rules in Adobe Launch for tracking. Or we can have direct call rules, where the front-end development team calls the _satellite.track function and we trigger rules in Adobe Launch. The last type is the data element changed event: we can create a rule in Adobe Launch that monitors the content of a data element, so whenever that content changes, the rule is triggered as well.
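The direct call approach above can be sketched in a few lines. Note this is a minimal illustration, not production code: `_satellite` is normally provided by the Adobe Launch library on the page and is stubbed here only so the snippet is self-contained, and the rule identifier and payload field names are hypothetical.

```javascript
// Stub of the Launch global, for illustration only; on a real page,
// _satellite is created by the Adobe Launch library itself.
const _satellite = {
  track: function (identifier, detail) {
    // Launch would fire every rule whose Direct Call identifier matches.
    return { identifier: identifier, detail: detail };
  }
};

// The front-end team signals a trackable moment by name, passing any
// data for the tracking call as the optional second parameter.
const fired = _satellite.track('form-submitted', {
  formName: 'contact-us',   // hypothetical field names
  formStep: 'complete'
});
```

The second parameter is what lets the front end hand data straight to the rule, without the tracking team scraping it from the DOM.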
When a rule is triggered, we do the tracking, and of course we have to collect the data. There are also three common ways of collecting data. One is DOM scripting with a CSS selector, which lets us get almost any content available on the front-end web pages. Or we can use a data layer object, JavaScript variables, or browser storage such as cookies, session storage, etc. For these, we have to discuss with the front-end developers to know where they store the data, or have them store the data in some pre-defined location as required. Finally, the direct call rule can also take a second parameter, through which we can pass in any data we want to send to Adobe for the tracking.

So what are the problems? Usually when we talk about website tracking, there are two different teams: the front-end development team, who create the website and set up all its logic, and the website tracking implementation team, who configure the rules and data elements in Adobe Launch to do the tracking. Maybe on day one, when a new site goes live, there is clear documentation on how the CSS is used and what JavaScript variables are available. But from time to time there are updates to the website logic or presentation, and the CSS and JavaScript variables get updated too. If there is no communication between the front-end development team and the website tracking implementation team when these things change, the tracking will break: either the rule will not be triggered, or the rule triggers but no data can be collected, because everything it depended on has been updated. Another common issue is the DOM event preventDefault and stopPropagation methods. These methods basically interrupt events and abort them.
When those events are aborted, like a click event, they never reach Adobe Analytics, and in the end Adobe Analytics cannot capture that data. There may be legitimate front-end uses for preventDefault and stopPropagation, but they have the side effect of stopping the tracking from happening correctly. For the data element changed event, there is latency. Adobe Launch monitors data element changes on a one-second interval, which means Launch checks each data element once per second. You can imagine that if a data element changes quickly, either the event will not be triggered at all, or the event triggers but by the time the tracking happens the content of the data element has already changed again, and we capture incorrect data into Adobe Analytics. The existing way of tracking is also a bit difficult to debug. We may have difficulty simulating DOM events. Some, like click or page load, are super simple to simulate. But other events, like a custom event triggered when some back-end web service returns, are hard to simulate, so we cannot simulate the tracking in order to do debugging or verification. It is also difficult to observe the actual value of a data element at the time a rule triggered. We can use the _satellite.getVar function to look at the content of a data element at any time, but remember that this reflects only the moment we execute _satellite.getVar; by the time an Adobe Launch rule actually triggered, the value could have been different. Of course, we can use console.log to print out these data elements when the rules trigger, but that is quite a bit of work, not easy to manage, and leaves a lot of unnecessary code behind in the tracking. So here are the two new tools.
The first one is the event-driven data layer. The data layer is not a new thing; as I mentioned before, it is a JavaScript object on the page serving as a medium to pass data from the front end to Adobe Launch for the tracking. The event-driven data layer has two significant differences from the ordinary data layer. First, instead of an object, the event-driven data layer is a JavaScript array, in which each element is a transient data layer object. Second, each data layer object pushed into the EDDL has an event attribute to signal the tracking, so we can create Adobe Launch rules that listen to the different EDDL events and trigger the tracking. This is the combination of event and data together in the event-driven data layer, and it serves as a single protocol for the front-end development team and the website tracking implementation team both to pass data and to trigger tracking. Of course, for the EDDL to work well, we need a semantic structure and design, so it is easy to understand the context of each attribute in the event-driven data layer; for example, web.webPageDetails.name for the page name, or web.button.label for a button label. We also need a well-maintained document for the EDDL covering the data structure, what events are pushed, when they are pushed, and what tracking should be triggered by those events.

How does the event-driven data layer help resolve the problems? First, and most important, is the removal of the tracking’s dependency on CSS and JavaScript variables, which are specific to the front-end presentation and logic and may change from time to time for reasons unrelated to the tracking. Also, the tracking code, the data layer pushes, is clear and explicit in the front-end source code.
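The array-of-event-objects idea can be sketched like this. In a browser, the array would be `window.adobeDataLayer` (or whatever name your implementation uses), usually created by the data layer extension; a plain array is used here so the snippet runs standalone, and the event names and the `web.button.label` field are illustrative.

```javascript
// In a browser: window.adobeDataLayer = window.adobeDataLayer || [];
const adobeDataLayer = [];

// Each push is a transient data layer object. The `event` attribute
// signals Launch rules; the remaining keys carry the data for that event.
adobeDataLayer.push({
  event: 'pageView',                          // hypothetical event name
  web: { webPageDetails: { name: 'home' } }   // page name attribute
});

adobeDataLayer.push({
  event: 'buttonClick',
  web: { button: { label: 'Get a quote' } }   // custom attribute
});
```

Because these pushes are explicit statements in the front-end source, a developer can see at a glance which lines exist purely for tracking.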
So when developers look at the front-end source code, they are 100% sure that these lines of code are related to the tracking, that any change to these data layer pushes will impact the tracking, and that they will have to communicate with the website tracking implementation team. That is unlike a CSS class, which may just be designed for a button style or some formatting, with no clear indication that it is used for website tracking. Also, with the event-driven data layer, the events are purely for tracking. They are not DOM events, so they are not affected by the preventDefault and stopPropagation functions; they keep going, reach Adobe Launch, and trigger the tracking. And for an event push, or any other data push into the event-driven data layer, the tracking can happen in real time. This is different from the monitoring approach of the data element changed rules: a push into the event-driven data layer is listened to by the Adobe Launch library, so it triggers the rules immediately. Also, all the events and data are centralized and available for visual review. The website tracking team can go directly into the developer console and type in the EDDL object name to see what data has been pushed into the EDDL. We can also simulate the tracking by simply pushing the required data into the EDDL; the corresponding rules will be triggered, and we can check whether anything is wrong or whether the data is being sent to Adobe Analytics correctly.

Here is the other new choice: the Adobe Experience Platform Web SDK. It is a new JavaScript library and Launch extension replacing the ones for multiple solutions: Adobe Analytics, Adobe Target, Audience Manager, and the Experience Cloud Identity Service.
The Web SDK utilizes the AEP Edge Network, with one single server call to send and receive data for the multiple solutions mentioned above. So there are no more separate web beacons for Analytics, Target, etc. The obvious benefit is a smaller library, because everything is combined into one, and fewer server calls, which is a clear improvement for website performance. But in terms of the tracking, the two key benefits are access to AEP Assurance, which is now available for the Web SDK and no longer only for mobile app tracking, and the fact that the Web SDK uses the Experience Data Model, XDM, to package data and send it to the Adobe solutions.

How do these help, and why use the Web SDK together with the event-driven data layer? First is AEP Assurance. Nowadays, when we do any kind of tracking verification or debugging, we may use a browser extension or even look at the network traffic in the browser to see what data has been sent to Adobe. When you close the browser, all that information is gone. But AEP Assurance records all the sessions and retains them for 30 days, so we can go back and look at all the previous testing and tracking examples and see what happened. It helps us do the verification and rectification of the tracking. And it records not only the data sent, but also the data received by the destination solutions. Take Adobe Analytics: if you know about the Adobe Analytics data feed, which is its raw data, then in AEP Assurance you can actually see the data-feed-level raw data received by Adobe Analytics, the data that will be presented in Workspace and reports, and make sure of what data is going in. Another good thing about the Web SDK is the simplification of the data mapping.
Nowadays with Adobe Analytics, we have to spell out each eVar and event to be assigned in the extension configuration or in the rule configuration, and for those not available there, we have to use custom code to assign the data to Adobe Analytics so it can be captured. But with the Web SDK, there is a data mapping to Adobe Analytics dimensions and metrics if we use XDM as the EDDL structure. Since both the EDDL structure and the Web SDK’s XDM are just JSON, we can simply use XDM directly as the EDDL structure. Many Adobe Analytics standard dimensions and metrics are then automatically mapped to the corresponding dimension; for example, web.webPageDetails.name is automatically mapped to the page name dimension in Adobe Analytics, and we don’t need to do anything. Custom attributes in the EDDL can be easily mapped to Adobe Analytics as well, with some data mapping in the datastream. This reduces the number of data elements required in Adobe Launch: fewer things to maintain, easier tracking. And for future planning, it helps lay a good foundation for the adoption of the AEP native apps, the CDP, CJA, and AJO.

So what do we need to consider when doing the migration? First, it greatly depends on the existing tracking implementation. For example, if there is a data layer currently in use, you need to think about whether to map it or to change the data layer to use XDM. If you are not using a data layer at all, or only minimally, then you can simply design a new event-driven data layer aligned to XDM to simplify the migration. There is also the question of a phased approach versus a big bang approach.
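To make the XDM-as-EDDL idea above concrete, here is a sketch of an XDM-shaped payload. With the Web SDK, an object like this would typically be passed as the `xdm` option of a sendEvent command (e.g. `alloy("sendEvent", { xdm: xdmPayload })`); `alloy` is not defined here, and the eVar value is a made-up example.

```javascript
// Illustrative XDM-shaped payload. The standard web.webPageDetails.name
// path maps automatically to the Analytics page name dimension; the
// _experience.analytics block addresses eVar1 directly.
const xdmPayload = {
  eventType: 'web.webpagedetails.pageViews',
  web: {
    webPageDetails: {
      name: 'home'   // auto-mapped: no data element or custom code needed
    }
  },
  _experience: {
    analytics: {
      customDimensions: {
        eVars: { eVar1: 'spring-campaign' }  // hypothetical eVar value
      }
    }
  }
};
```

Because this is plain JSON, the same shape can be pushed into the event-driven data layer and forwarded to the Web SDK without per-variable assignment.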
Because we are talking about migrating existing tracking, there may be a lot of it, the website is currently live, and there may also be ongoing development changes to the website. With a phased approach, we have both the Adobe Analytics extension and the Web SDK extension, the old way of tracking and the new way, in the same Adobe Launch container, and we progressively change the tracking journey by journey. We can start with some basic page view tracking and some basic file download tracking, test it, and publish it to production. Then we can have another phase for other tracking, like the website search tracking, then migrate something else, like the form submission tracking on the website, test it, and push it to production. Eventually we migrate the entire website from the old way of tracking to the new event-driven data layer tracking. The other approach is a big bang change: we build all the new tracking in a new Adobe Launch container, and once it is finished, we swap out the Adobe Launch JavaScript on the front end to take in the new EDDL tracking.

As mentioned above, we have to consider the structure of this event-driven data layer, and in any case it must be flexible for future tracking requirements. There are two extremes, and three approaches in between them. We can go completely with the Adobe XDM, so that even for any custom dimension we directly use _experience.analytics.customDimensions.eVars.eVar1 in the event-driven data layer to store the data. Or we can have the Adobe XDM as the foundation for the standard metrics and dimensions, such as the page name, which we store in the EDDL as web.webPageDetails.name, and then have some custom dimensions and metrics, such as web.button.label, which are mapped into a custom eVar.
Or, at the other extreme, of course, we can use a completely custom-built semantic EDDL, where not only something like web.button.label but also the page name is custom; we could have web.pageName in our EDDL and then map these custom attributes to Adobe Analytics dimensions in the datastream. Theoretically, when we do the tracking migration, there should not be any impact on existing reports and downstream data consumers, for example anyone using Data Warehouse, etc. But again, it largely depends on the current implementation, and maybe during this migration you are going to make some changes to your tracking, so that some data will go into a more well-defined place in Adobe Analytics as you change the EDDL. Any such change should be reviewed and communicated to all the data consumers, whatever they are using. Last but not least, the most important thing for any migration project: communication is the key. We have to communicate with the front-end development team on the documentation, on any change to it and how to maintain it, and with the data consumers on any change in the reports and data.

So, the final key takeaways. The EDDL is a better approach for website tracking because it has less dependency on the front-end technology, and so fewer problems and errors. The AEP Web SDK is a natural companion to the EDDL, so when we change to the EDDL it is very natural to go for the Web SDK instead of individually assigning all the eVars and events in the Adobe Analytics extension again. The migration plan largely depends on the individual digital property, and a phased approach is generally better, because there will be a lot of ongoing changes to the website during a migration that may take months.
The review and design of the event-driven data layer are important, as they define how feasible and how viable it will be for us to go for the future AEP native apps. And the communication is critical, because any impact on reports and downstream data consumers, and the documentation of the EDDL structure and the data pushes, will have a big impact on whether we are able to collect the data correctly and have it used reliably by the data consumers in the future. So that is my sharing today. Thanks, and let’s have some Q&A. Thank you.

Wow, Neil, that was a robust presentation. Thank you for sharing. Thank you. So we have lots to cover, so let’s get started. I’m getting a lot of questions on the chat board. Let me pick the first one. There’s a question from the audience: is it okay to use a DCR when we do not have the ACDL in place? Will it affect any performance? Yeah, so first, I think for the ACDL, this is really asking about the data layer in Adobe Launch, but actually there are multiple implementations of the data layer on the market. The Adobe Client Data Layer, the ACDL, is one of those, and it is available in Adobe Launch, but if you use some other data layer, of course you can do so; you just need to do some comparison to understand which one best fits your purpose.
For me, of course, I do have my own preference: I use the ACDL, because it has a very unique feature in the data push, the eventInfo attribute, which can hold data for one single tracking call, one single beacon sent over to Adobe, without being left behind in the event-driven data layer. You know that whenever you push data into the event-driven data layer, when you later use the getState function you get the accumulated data from all the previous pushes. But sometimes the tracking is really for one single event. Say after the page view you have a file download, and then you have a form submission. If we use a simple data layer that keeps accumulating all data, then when you do the form submission tracking you end up sending the file download information together with the form submission to Adobe Analytics, which is not correct, I would say, because that last event is really just a submission; no file download happened there, that was a previous event. So my personal preference is really the ACDL, and it is available both in Adobe Launch and as a standalone JavaScript library you can download from the Adobe website. But again, like I said, it is all open as to which EDDL implementation you use, and if you find that some other EDDL implementation has features that are critical to you, then of course you can go for that. As for performance, when we compare EDDL implementations, the one major point of comparison is the library size, so you can start by comparing whether the libraries are bigger or smaller. For other performance considerations, like whether the data push trigger is real time or not, I believe all the EDDL implementations do the same thing. Awesome, awesome.
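The eventInfo behavior described above can be sketched with a simplified getState. This is a shallow-merge re-implementation for illustration only; the real ACDL merges deeply and exposes its computed state through its own API. The event names and field names are hypothetical.

```javascript
// Simplified sketch of ACDL-style state computation: every push is
// merged into the accumulated state EXCEPT the event name and the
// eventInfo payload, so per-event data does not leak into later events.
function getState(dataLayer) {
  return dataLayer.reduce(function (state, pushObj) {
    const { event, eventInfo, ...persisted } = pushObj;  // drop both keys
    return Object.assign({}, state, persisted);          // shallow merge
  }, {});
}

const dataLayer = [];
dataLayer.push({ event: 'pageView', page: { name: 'home' } });
dataLayer.push({ event: 'fileDownload', eventInfo: { fileName: 'brochure.pdf' } });

const state = getState(dataLayer);
// state.page.name is 'home', but the file name is gone from state,
// so a later form-submission beacon would not carry it by accident.
```

This is exactly why the file download data from the example does not contaminate a subsequent form-submission tracking call.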
We have a second question. How can you exclude a custom link server call from the bounce rate calculation in Adobe Analytics? Sorry, can you come again? So there’s a question from the audience: how can you exclude a custom link server call from the bounce rate calculation? Oh, okay, so this is not really about the EDDL, but yes, I can also answer that. I think you need to know how Adobe calculates the bounce rate out of the box. Adobe calculates it based on how many hits are sent to Adobe in a session, so when we do exit link tracking, that is basically a second hit sent into Adobe Analytics, and for the default bounce rate calculation, honestly speaking, it is no longer a bounce. My approach to handle this is, of course, not to try to remove the outbound link tracking; we still want that. But we will not simply use the out-of-the-box bounce rate calculation; instead, we create our own calculated metric in Adobe Analytics, based on hits per session or hits per visit, to determine whether it is a bounce or not. We base it on the page view: if there is one single page view, then it is a bounce, no matter how many file downloads, exit links, or other single-page activities happened. It very much needs to be aligned with your business decision or point of view on whether this is considered a bounce or not, so a calculated metric is the answer for this one. Oh wow, okay, that sounds interesting. We have another question around the ACDL, which we just spoke about: is it okay to have both the Adobe Digital Data Layer as well as the ACDL until we migrate completely? Will it create any kind of issues in the system?
That is an interesting approach, I would say. If we talk about the Adobe Analytics extension and the Web SDK extension, these two for sure will coexist for some time, and when we have migrated and moved from Adobe Analytics to the Web SDK, then we remove the Adobe Analytics extension. But if we are talking about the ACDL, that is the data layer extension alongside some other extension, and I think the key is who ultimately sends the data, triggers the data, to Adobe Analytics: the ACDL or the legacy extension. You need to manage that, maybe journey by journey. When you migrate the first batch of tracking, say the basic page views, from the old data layer to the ACDL, then of course you will no longer push that data to the old array but to the new one, and you trigger the event from the ACDL. With this approach, yes, you can have both the ACDL and some other data layer implementation on the site at the same time, migrate feature by feature, and eventually remove the old one. But you need to be careful when moving the data from the old data layer to the new one. Take the example I just mentioned about the page view and the file download: if the file download event has a dependency on some previous data pushed into the data layer, but you have moved that to the new data layer already while the file download tracking is still on the old data layer, then you can imagine that when the file download push happens and you use getState, you cannot find any of the previous information in that data layer. So I would say it is quite a tricky thing: you need to clearly understand what data are available in both data layers, manage which events should be listened to from each data layer, and decide which rules in Adobe Launch should be responsible for picking up
the data and sending it to Adobe. Okay, okay, I know we are around the clock, but one last question: what are some of the best practices to maintain a clean data layer in the ACDL if we do not want it to persist? How do you deal with that? Many times it is sticky in the computed state. Sorry, can you come again? So the audience is interested in understanding what some of the best practices are to maintain a clean data layer in the ACDL if we do not want it to persist. Well, I think really good documentation is one thing, and a semantic, understandable structure of the data layer is also important. If we have an understandable naming convention for all the attributes and events used in the data layer, then not only the developers but also the implementation and analytics teams, when they look at the data layer, can simply read through the structure names and know what kind of event is triggering and what data each of those attributes will hold. So a clear and specific naming convention for the data layer is something very important. Okay, thanks, Neil, for all these wonderful insights. You’re welcome. And maybe just one final note: I think Jen, in the previous conversation, also mentioned the Experience League Community, so welcome to join us there with any further questions; not only Jen and me, but a lot of our Community Advisors and Analytics Champions will help you over there. Rightly said. As Neil said, both Neil as well as Jen and many other Adobe champions are available on the Experience League Community; please feel free to leverage that to ask your questions after this session.