Data quality and adoption in Analytics
Companies have identified data as a key enabler for future business success. To deliver on this, a high level of data quality and the adoption of a mature data stack are needed. Learn how you can build data quality into every aspect of Adobe Analytics and create a shared culture around it.
Hello, and welcome to our Adobe Analytics webinar, Fueling the Successes of Tomorrow: Data Quality and Adoption. This webinar is part of the Learn from Your Peers Adobe DX Adoption series. The goal of these webinars is to provide practical guidance from peer experts on how to get the most from your Adobe Analytics instance. Through these sessions, you’ll learn how to take your usage of Adobe Analytics to the next level with art-of-the-possible use cases, best practices, and tips and tricks. My name is Alyssa McGrew. I’m a customer marketing manager here at Adobe focusing on Adobe Analytics. Today, you’ll be hearing from two of our amazing Adobe Analytics Champions, Frederik Werner and Sarah Owen. Before we jump into our presentation today, I want to cover a few pieces of housekeeping. This webinar is being recorded, and after the webinar, we’ll be sending out a copy of the recording, which will also include the slides. There are a few things on your screen I’d like to point out. First, the console that you’re seeing on your screen is completely customizable, so feel free to resize or minimize any of the widgets on your screen and make the video or slides smaller or larger based on your preference. Second, we’ve shared a number of resources related to today’s webinar. Most of these will be referenced during the presentation, so we’ve included the slide number next to the link for easy reference. You can find these in the Related Content panel on the top right of your screen. Third, if you have any questions for our presenters throughout our session, type your question in the Ask the Presenters box at the bottom center of your screen. We have several people behind the scenes who will help answer your questions or tee them up for the speakers during the live Q&A. We’ll do our best to get to all of your questions, but if we don’t, we’ll follow up in a discussion thread that will be posted on the Adobe Analytics community, also linked in the Related Content section. And last, but certainly not least, you’ll find a few other things: additional information about our speakers, and a survey, which you’ll find on the right side of your console. Please be sure to take that before you leave; that’s how we pick topics and presenters for our future sessions. And you can even give our speakers a bit of love throughout the session with the Reactions button, so feel free to try that out now.
Now, on to our agenda for today. We’re going to be discussing the importance of data quality and adopting a mature data stack. Specifically, our wonderful presenters are going to talk about building a data quality mindset, enabling high-quality data collection, and monitoring data quality in Analysis Workspace. As mentioned earlier, we also have time set aside for Q&A, but feel free to ask your questions throughout the session using that Ask the Presenters box. So with that, I will pass it over to Frederik and Sarah to introduce themselves. Yeah, thank you, Alyssa. And hello, everyone. I’m Frederik, and I’m covering the more boring side of today. But yeah, I’m Frederik. I’m one of this year’s Adobe Analytics Champions. And as you can see on the screen, I’m physically unable to shut up about Adobe Analytics all day. There are two weird call-outs at the bottom, which are going to make more sense later. I’m also a contributor to the Adobe Launch Core extension, which is going to be relevant later on. And if you want to read a blog about Adobe Analytics, why not check out fullstackanalyst.io, where I write about Adobe Analytics, Customer Journey Analytics, and all those good things. Then over to Sarah. Awesome. Hey, guys. I’m Sarah Owen, and I’m a senior analytics engineer at Search Discovery. I’ve had a lot of fun in the analytics industry for a number of years. Lately, I’ve had the honor of being an Adobe Analytics Champion, speaking at Summit, and co-leading an Adobe Analytics user group. And most importantly, I am an avid reader of the Full Stack Analyst blog. And you know, I hear that author is pretty cool.
Frederik, I’ll turn it back over to you. All right. Then let’s get started with the actual content. So to start things off for today, we’re going to begin with a little bit of background on why, instead of only considering data quality within your own analytics team, you should actually work on aligning your whole company on some shared standards and practices. Sarah found this great quote that really illustrates that point, because when we’re thinking of data quality, many of us are just thinking of some isolated activities or artifacts. For example, defining requirements before development, initial validation of implementations, trainings, and all those things, or even cleaning up after an incident happened and you lost a bunch of data. Those are the moments when we like to think about data quality, but in a more established and successful practice, it’s important to become aware of how our own actions and decisions each and every day can actually impact data quality. And once you are aware of your own actions, you will also want to share that mindset with the rest of your company. We’re going to take a look at how things look in the beginning, because especially if you start out with analytics as a company, it’s not very uncommon to find a lot of those tasks on the analyst’s table. They are the ones who are creating the setup in Adobe Analytics and Launch, and once they have access to the tag manager, they are also going to set up the property and the reports within Analytics and give people access and train them on the tools. But then they will also likely start collecting information from the website, and they are going to figure out some more or less clumsy CSS selectors to track button clicks, trying to get the information from the DOM, like figuring out what the name of that button is in the CSS, and funnel all of that into Analytics. And at the end of the day, it’s also them who then report back to the business about the performance, or even try to give some recommendations or educate stakeholders on their own business, which in my mind is a big red flag. Whenever analysts try to educate the people of the business about their business, I don’t like that one particularly much, but that’s just how businesses start out. And in a setup like this, it’s very hard for the analyst to ensure that data is of the highest quality, because not only is all the responsibility on their shoulders, between all those different tasks that you can see on the screen, but they are also very reliant on information from others. For example, about changes to the environment: if a CSS class changed on the page, or if a big marketing campaign is rolling out, they may have no idea that it’s actually happening. So in a more mature setup, you can see how the tasks are now way more differentiated, and also distributed between all the teams involved. It’s usually still the analyst who takes care of the setup in Adobe Analytics and also Launch, but they will then work with the site or the app developers to find some standardized interfaces that allow them to send that data on their own.
And in my ideal world, the reporting should then, especially with a system like Adobe Analytics that allows for some very nifty democratization of data, be in the hands of all the business stakeholders, because only they know about the latest changes on the page, about the latest content rolling out, about marketing campaigns. That’s all the business, and they should also be the ones who report on that, at least in my mind. So then, even for validation tasks, those mature teams can leave it to the developers or even business stakeholders, because it’s been made easy for them, because it’s actually built into the process.
And as a whole, it’s then important to keep everyone aligned on one core belief that you see at the top here: that data quality, at the end of the day, is the same as decision quality. So without the right data... I’m giving an amen for that one. I think that one needs to have an amen. Yeah, maybe we can see some reactions. I don’t know how we actually see that on this beautiful platform, but yeah, if you agree, let us know. At the end of the day, that’s what we’re fighting for, and that’s also what we should align our team and our company on, because without knowing what is happening in our experiences and in our business, how are we supposed to optimize any of that? For example, developers are usually already aware that their work has some direct impact on the quality of the user experience, because if they break the app, if they break the website, then the business will not be successful. That’s quite understandable to them. But if they break the data through their actions, they will also be responsible for the wrong decisions being taken. And once they’ve understood and internalized that, they will be very interested in learning how they can actually make sure that the data is up to spec and collected correctly, even as they are busy coding. And we’re going to take a look at how that can be built into the process. Then on the business side, business users usually have a very big need to validate what they’re doing in Analysis Workspace, because if they make any mistakes there, they may derive the wrong conclusions or even embarrass themselves in front of their colleagues. So once you have given them the security they need, they will still want to extend their skills and increase the depth of their insights. So there’s a lot of data quality happening on that business level as well. And then finally, as analysts, we want to know what is happening, especially when we keep an eye on all of our implementations, because we may have many of them. And it’s a big challenge to actually stay in the know about what is happening across all those websites, because we don’t want to play communication ping-pong or become a blocker for the business when we actually bring data quality onto our websites.
So what we’re going to cover today are some of the classical dimensions of data quality, as you can see right now: accuracy, precision, and also consistency. And we’re going to show you ways to systematically improve them through measures like automated testing, data monitoring, and also some Analysis Workspace and Adobe Analytics tips and tricks. So, on to the next slide and over to Sarah.
There are many, many people involved in data quality. So through the next few slides, we’re going to really hone in on how key pieces of data collected in your implementation and the documentation that you have and your workspaces can support your company’s data quality journey.
So as the saying goes, knowing is half the battle. And in our case, knowing which logic in Launch was executed in order to produce the data that was sent off in your analytics call is a great half to start with. To dynamically pick up the name of the Launch rule, you can use event.$rule.name. Now, sometimes multiple rules within Launch fire off for one analytics call, so you’ll want to take this into account by checking to see if the variable that holds your Launch rule name, prop19 in this example, is already set. If it’s already set, then you can just append your current Launch rule name to it. And you can even use event.$rule.name to dynamically populate the link name for your s.tl() calls.
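As a rough sketch of that pattern, written as custom code in a Launch rule action (prop19 is just the example slot from the slide, and the pipe delimiter is an arbitrary choice):

    // event.$rule.name holds the name of the currently executing Launch rule.
    if (s.prop19) {
      // Another rule already set the variable for this hit, so append instead of overwrite.
      s.prop19 = s.prop19 + '|' + event.$rule.name;
    } else {
      s.prop19 = event.$rule.name;
    }
    // The same value can serve as the link name in a link-tracking call:
    // s.tl(true, 'o', event.$rule.name);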
Before we start coding, though, let’s take a look at the latest Core extension. This is version 3.3.0. Because we got a little help from our friend Frederik: as he shared in his intro, he has contributed to the Core extension by adding this data element type of runtime environment. This new data element type will pick up the rule name, which we were just talking about, plus, as you see, a bunch of other great information for you. So now we can easily collect the rule name without any custom code by creating a data element, and then we can reference this data element either in a rule or in other data elements. So thank you so much, Frederik.
Now, okay, now for a little interaction here with y’all. I bet most of us have at least two monitors, maybe three. So what I’d like you to do is open up your website on the monitor that you don’t have this webinar in. So keep us up, but put your website on another monitor. All right, I’ll wait a minute for you guys to do that. And once you have it there, if you could open up developer tools. You can do that by right-clicking on the page and selecting Inspect. If you’re on a Mac, you’re just on your own.
But once developer tools are open, then please click on the Console tab. I’m asking you to do all this because I think most of us on this call are using Launch, so we should be able to run these two snippets of code that you see on the screen; they’ll get dropped into the chat area or the answer question panel so you can copy and paste them into your console. Having these two pieces of data, your environment and your build date, as part of your analytics call is gonna come in really handy when you have one of the analysts come over to your desk, point to some unexpected data, and say, what happened here? And like we talked about on the last slide, you can use Frederik’s new runtime environment data element to capture this data. So here you see both of those pieces of data: the environment and the build date.
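The exact snippets from the slide aren’t reproduced in the transcript, but on a typical Launch deployment the same two pieces of data are exposed on the _satellite.buildInfo object, so a sketch along those lines would be:

    // Paste into the browser console on a page running Launch:
    _satellite.buildInfo.environment; // e.g. "development", "staging", or "production"
    _satellite.buildInfo.buildDate;   // timestamp of when the library was built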
So other helpful pieces of information to have in your analytics call are the unique IDs and version numbers of other Adobe products you have running on your site. So here we see the Adobe Visitor ID, which probably all of us also have running on our site. In your console, if you type Visitor.version, that’s Visitor with a capital V, and hit Enter, you should get a result back. And then if you have Adobe Target or Adobe Audience Manager set up on your site, then adobe.target.VERSION or DIL.version will also bring back some results. Now, to get the Target activity name and activity ID, and the experience name and experience ID, it takes a little bit more code, but don’t worry, because there’s a really good article out on Experience League that walks you through the code that’s needed to pick up those four pieces of information. That will be dropped in the chat and also in that resource panel as well, so you guys can reference it.
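A quick console sketch of those version checks, assuming the standard library globals (each object only exists if the corresponding product is actually deployed on the page):

    Visitor.version;      // Experience Cloud ID (Visitor API) library version
    adobe.target.VERSION; // Adobe Target at.js version
    DIL.version;          // Adobe Audience Manager DIL version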
And if you have other products on your site, like a session replay product such as Decibel or Glassbox, or a voice-of-the-customer tool like Medallia or Qualtrics, you can grab those key pieces of information from them as well. So here you see on the screen how to grab the Decibel session ID. And Medallia has an extension that can help you out, so you can collect all the key pieces of information, including the feedback UUID; that’s the ID that’s served up once someone’s responded to your survey. Having these IDs in your analytics call will really help you quickly dive in: you can get the Decibel session ID from the analytics data you’re looking at, go over to Decibel, very quickly find the recording, and see firsthand what happened. Same thing with Medallia: you can look into your Adobe Analytics workspace, grab that feedback UUID, go over to Medallia, find that survey, and read all that great open-ended feedback that the customers are giving you. All right, so everybody refresh your screen and let’s look at your analytics call that’s going off to Adobe. Do you guys see any of the stuff that we’ve been talking about in your call? If so, that’s awesome, I’m so pumped for you. But if not, don’t worry about it, it’s totally okay, because you can just set aside some time later today or tomorrow morning and get a couple of these features into your dev environment. The next part of enabling high-quality data collection is actually documentation. There was a webinar about documentation a couple of weeks ago, and there is so much you can talk about with documentation, like where it lives and what tools you use to do it. But here, we’re just gonna really hone in on the easiest way to do it, which, as you see here, is a Google Sheet or an Excel file. Within that sheet, what you’d wanna list out are the different types of data that you would expect. So on all pages, you would have a page name, and then there are pieces of information that you’d find only on specific pages: on the product detail pages, in addition to the page name, you’d also pick up the product ID. And here you see the documentation also tells you where to find it in the data layer, and maybe gives some examples to help you out.
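As a hypothetical illustration of the kind of rows such a sheet might contain (the data layer paths below are made-up examples, not taken from the webinar):

    Variable  | Expected on          | Data layer path                              | Example
    pageName  | all pages            | digitalData.page.pageInfo.pageName           | "home"
    productID | product detail pages | digitalData.product[0].productInfo.productID | "SKU-12345"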
Now, this type of documentation is gonna become really useful for good old manual QA. With our multiple monitors that we all have, you’ll have your website up, you can have your debugger tool up, and now you can have your documentation up as well. So you can look at your debugger and verify against your documentation: am I seeing the page name that I expected? And if you’re not, then you can look at your data layer to say, well, is it in the data layer and just not in my analytics call? Or maybe it’s missing in both places. So this documentation will be very handy that way. It will also come in handy when you go to work with the QA team and get your data layer checks into their automated QA scripts. There are lots of options out there for automating your QA, and Frederik’s gonna go into that a little bit deeper, but this sheet will help out, and it will also help with ObservePoint. ObservePoint is something you and I can use, you don’t even have to go to the QA team, but if you have this sheet with all your information, it’s gonna be helpful when you are filling out those journeys and audits within ObservePoint, because ObservePoint can look not only at your analytics call, but also at your data layer and at other tags as well. So now, Frederik, I’ll turn it back over to you to go a little bit deeper into the automation. Yeah, thanks. And also, plus one on the piece about Decibel Insights, because we recently revamped our integration with Decibel Insights, and those pieces of code that you shared came in very handy for us; that’s exactly what we’re using to get all those Decibel sessions matched up with our Adobe Analytics sessions. So what we’re now going to talk about in a little bit more depth is how we can become a bigger part of the developers’ day-to-day life. Because when it comes to collaborating with them, automated testing is one great way to empower them to maintain quality in their day-to-day work. And that’s really what we’re talking about: it’s not just this one big event that happens once a quarter or once a year, it’s really about their day-to-day work, and how they can be sure that they didn’t break anything that’s of importance. And for that, chances are they are already using automated testing, like we can see on the screen. So as soon as they push some code into the repository, there’s going to be, as you can see, some interaction and accessibility testing, some user flow tests, all those great things. And you can create the same thing for Adobe Analytics, or any analytics solution, for your tag manager, for your data layer. That way you can actually become a part of their day-to-day life. So what you can do to work with them is what you can already see on the screen. Chances are they are going to use some automated testing tool like Selenium, for example; there are others out there. These are like bots that go over your page, click on every button, and check what’s happening there. And they can do things like read and write from the browser console and take action based on what they’re finding there.
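As a rough sketch of what such a bot-driven check might look like (selenium-webdriver for Node.js is just one option; the URL, data layer structure, and expected value below are all placeholders):

    // Hypothetical Selenium data layer check on a staging page.
    const { Builder } = require('selenium-webdriver');

    (async function checkDataLayer() {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://staging.example.com/product/123'); // placeholder URL
        // Read the page name out of the data layer, the way the bots described above would.
        const pageName = await driver.executeScript(
          'return window.digitalData && window.digitalData.page.pageInfo.pageName;'
        );
        if (pageName !== 'product detail') {
          throw new Error('Unexpected page name in data layer: ' + pageName);
        }
        console.log('Data layer check passed.');
      } finally {
        await driver.quit();
      }
    })();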
So if you manage to put some information for them into that browser console, maybe only on your staging environments, then you can actually make sure that whatever they are doing in their code, and whatever change they’re pushing live, is not affecting data quality and whatever you have implemented on the page. You can also, as you can see, simply validate what’s in your data layer, confirm that events are firing, or even confirm that Launch rules are firing using the _satellite monitors. I’m not the biggest fan of that very tight integration, because maybe you want to change how your Launch rules work at some point in the future, and then you need to work with them to reconfigure the testing. But in theory, all of that is possible, and you can do all of it. There are also going to be some links in the resources to the documentation on all of those great parts. When your devs then code that new feature, they will immediately see if that code has broken analytics. A little bit more on that later. But first, one of my personal crusades in the industry is to reduce custom code, because in my personal experience, that’s one of the worst things to inherit. Whenever you’re switching companies, whenever you’re new to an implementation or website, and you find lots of custom code, for me, that’s my personal nightmare. What many analysts or even implementation specialists will do on their website is this: they have some knowledge of JavaScript, so when they need to create a data element or conditions or even actions, they will think, I know how to do this in JavaScript, I’m just going to quickly code this up, and then it’s going to work. I just need to get this job done; I don’t care about who’s coming next and who needs to read all of that in two years’ time. So they will just put it on the page. What you see on your screen right now is a real-life example that is, unfortunately, still live on my main website. You can see a 167-line custom code condition that actually fires in all rules. And it’s very, very old. It’s attaching some information from the DOM to the event object so we can use it later on. It’s really elegant, but it’s not documented anywhere; it’s just in Launch, and the comments you can see in there are the only documentation we have. It recently broke on production in each and every rule, and we had to fix it really quickly. And yeah, that was quite the wild-goose chase to find the root cause of that problem. You want to rework things to not have that, and to get rid of it if it’s already in your implementation. So what I personally did then: I was considering building a custom extension, because that’s what we can do in Launch; we can build our own private extensions and have all those functionalities covered within our own extension, but then we would need to take care of it. And I was like, no; some of the stuff that you can see on the screen, every customer is doing it. Why should it be exclusive to us? Why would I need to maintain a Launch extension? That’s not something I’m familiar with. So instead of doing that, what I actually did was work with the Launch developer team. As you can also find in the resources, the Adobe Launch Core extension is open source on GitHub; everyone can contribute, like creating a pull request for a new feature.
And that way I was able both to get the features that I wanted and to make them available to everyone else. So there’s no need for custom code anymore.
So as Sarah pointed out before, what you really want to do is check whether the Core extension, or also other extensions, maybe already offer what you want to do. Maybe there’s a way that you haven’t thought of, and maybe you haven’t taken a look at the release notes for a while, but this is really how you can bulletproof your implementation. Because now I can say: those data elements, those functionalities, I don’t need to care about them anymore. Now they are Adobe’s problem; now they need to make sure it works in the future. I’m out. I have no code now, and I’m still the contributor of that, but I don’t need to make sure it keeps working. And that’s really cool. Another issue that implementers face is that IT, and also developers, usually have an interest in knowing what is happening on their page. They give us all that power to put any custom code live through Launch, and that’s a lot of power, and they trust us. But still, maybe the page broke recently and nobody really knew what was happening, and maybe there was something going on in Launch, but who can we ask to actually check that? One thing that Launch offers us is what they call custom callbacks, which are just webhooks that we can define through the API. With those, we can inform external systems about, for example, deployments; it can also be other things that happen within Launch. The examples on the screen are some of my favorite applications for them. The one on the left is from Microsoft Teams, where all deployments are posted into a dedicated channel to proactively inform everyone who may want or need to know about new code on the page. So instead of them having to go to a release notes page, instead of you having to reach out to them, they can just join that channel and see what’s happening in real time. That’s really, really helpful, because they would be able to see: well, nothing happened in Launch, nothing was released, so whatever happened on the page, it wasn’t the analytics team. And what you can see on the right is what we actually integrated with Jira: as soon as we mention a ticket number in our library title in Adobe Launch, that ticket will receive a comment, like on the right-hand side, about that deployment. You can see there’s information about the data elements, the rules, the extensions that have been changed, and all that helpful information. And once your stakeholders request that one marketing pixel or that one piece of information to be on the page, you can actually just inform them proactively, and they don’t need to ask anymore and don’t need to wait for you to reach out to them. They see: all right, it’s live, I can now validate and go on about my day. So those are really, really helpful to get everyone on the same page.
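As a rough sketch of the Teams side of this (not the presenters’ actual implementation; the callback payload shape and the webhook URL are placeholders):

    // Minimal Node.js receiver: accept a Launch (Reactor API) callback and
    // forward a summary to a Microsoft Teams incoming webhook.
    // Uses the global fetch available in Node 18+.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Placeholder for your channel's Teams incoming webhook URL.
    const TEAMS_WEBHOOK_URL = 'https://example.webhook.office.com/webhookb2/...';

    app.post('/launch-callback', async (req, res) => {
      // The payload shape depends on how the callback subscription is configured.
      const summary = JSON.stringify(req.body).slice(0, 500);
      await fetch(TEAMS_WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: 'Launch deployment: ' + summary })
      });
      res.sendStatus(200);
    });

    app.listen(3000);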
Another thing to look out for, and I was hinting at this before: some companies may consider reducing custom code through a custom Launch extension. Because every user of Adobe Launch can just start coding their own extension and extend the system with whatever functionality they want, this may look very tempting to the code-savvy team members; usually there are one or two team members who would really like to do that kind of thing. To them, it would be a lot of fun to put that into their own internal GitHub and deploy all those different versions. But you should really think twice before going down that route, because in the end you will have to support and maintain it for many years, maybe even after its creator has left your team. There are some alternatives on the screen right now that you can use instead, and if you want to read more, you can check out the post on my personal blog or on the Adobe Tech Blog. There are also some links on the, how’s it called again? The Resources tab, I think. We don’t see it, but I trust you can find those links somewhere.
So then, one legitimate use case for custom code, just because we don’t have any alternative to custom code for it today, is to create custom log debug messages for business stakeholders, developers, and all those automated testing tools. Using the example on the screen right now, you can use that _satellite.logger object to create custom messages that only your team knows how to read. For example, you can dump your whole data layer when you trigger that page view, to allow your business stakeholders to see what the page name is. You can allow your Selenium bots to see whatever was in that event payload. You can put a message into the browser console that only shows up when debugging is enabled. You can put whatever you like in there. And as mentioned before, this is a great opportunity to integrate with all those tools: you can recheck those messages and quickly see if everything is still in order.
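A minimal sketch of such a log message, assuming the standard _satellite.logger API and a digitalData-style data layer (both the object name and the message format are placeholders):

    // Inside a Launch rule's custom code action on the page view rule.
    // Messages like this only show up once Launch debugging is enabled,
    // e.g. via _satellite.setDebug(true) in the console.
    _satellite.logger.log('Page view payload: ' + JSON.stringify(window.digitalData));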
So then I think it’s time to leave the area of automated testing, as much as we love it, and take a look at what we can do in Adobe Analytics, all right? Yeah, so we’ve just gone through, like Frederik said, a wide variety of technical steps to take on your data quality journey. So now we’re gonna look at some ways within Workspace that we can monitor the data. So at a minimum, be sure your minimal viable metrics are getting data. All right, well, now part of me wants to stop and be very minimal and just go on to the next slide, but I should say a little bit more here.
So minimal viable metrics are like your handful of metrics that show how well your site or your app is doing at achieving its purpose.
Maybe another way to think about it is these minimal viable metrics, it’s what ties to the initiatives that you’re gonna be held accountable for probably like in your year end review.
So I know it can seem impossible. You’re probably sitting there and being like, sure, right, just a handful, are you kidding me? But I’ve even heard some people take it as far as saying, oh, let’s pick one metric. So I’m not quite to the one metric place in my life yet, so I’m with the handful. So we’ll go with a handful for this presentation.
And now, in addition to your minimal viable metrics, there probably are some more secondary metrics, maybe leading-indicator metrics. You can feel free to add those, but I would ask you to add them as a second table, so you have a distinction between your minimal viable metrics and your secondary ones. And once you have these tables created, it’s really easy for you to just hop in and keep an eye on them, or you can use them to create alerts. That way, when you eventually go to bed or decide to turn off your computer on the weekend, you can have Adobe monitoring them on your behalf.
Another thing in Workspace that I’ve found really handy is having a deployment workspace. With this deployment workspace, I’ve had a panel for each of my deployments. At the top of my panel, first and foremost, I just wanna make sure that there’s still data coming in after my deploy. You’ll see all of these reports within my panel are looking at hourly granularity, because I wanna see the hour I deployed and then watch my data afterwards to make sure that it is still coming in. So at the high level: are we still sending data? Trend your visits. The next thing is those minimal viable metrics that we just talked about, as well as some of those secondary ones. Once again, you wanna make sure you’re keeping an eye on these hourly after your deploy, even if your deploy had nothing to do with these metrics; that way, just in case they get touched accidentally, you’re keeping an eye on them.
Then last, but obviously not least, is what you actually deployed. You see here at the top of the panel, we had some descriptions of what was deployed; in this case, some internal search items were updated. So down below, we’re looking, once again hourly, at our internal searches performed and our search terms coming in. So we are keeping an eye on what we specifically deployed.
Awesome. Now, Frederik is gonna give us some more Workspace tips. I do, thank you. So yeah, one thing coming back to your tips from before: once we actually monitor all that information from Adobe Launch, for example the deployment date of our Launch library environment, and even some more internal details, what we can then do in Workspace is create a simple chart like this, where you’re actually able to see how long after a release you still see users with those older code versions. One view that I like is to normalize it all to 100%, so you can actually see all those little date ranges or time ranges. Depending on what your app or website is doing, if you have one of those single-page applications that never reload the page, some users will have older Launch libraries, potentially for weeks, if people don’t restart their computers. So what you see is something that we’re very used to from mobile apps, but through cached versions and all those different things, we can see it in the browser as well: old code versions are still live in people’s browsers. That means you will potentially see data in two different formats for a while, because there will still be people who are using that old format. And if you have some Classification Rule Builder rule that splits on the new format, and you wonder, why do I have unspecified in my classifications all of a sudden when this code went live days ago, you will be able to see all those users who are still on those old versions. There’s not a lot you can do about that, but knowing about it is very instrumental, also so you don’t set off any alarms with developers, because they will be like, hey, why does my data look that weird now? So, one additional great way to monitor data quality in Adobe Analytics is basically doing the inverse of what Sarah presented a second ago: just as you can do with your minimal viable metrics, you can also build a table for, for example, your unspecified items. What you can see on the screen right now is a table in Analysis Workspace that is using more than one dimension, because in this table I’ve just collected all the unspecified items from my four most important, minimal viable dimensions. With this simple table, I can then say: all right, what’s the trend on the unspecified values in those dimensions, is it going up, is it going down? Because unspecified can have meaning; it can be a very legitimate value, and you want to see what’s happening there. And as Sarah was explaining before, you can then just mark them all, right-click on them, and create an alert from them if you want to check, through some automated alerts, whether your development team broke anything overnight, because you’re going to be the first one to know.
And then on the next slide, we also have some great examples from my actual production data, where you can see the rules of our Adobe Launch implementation firing on the page as we’ve defined them. You can see those global page view and global custom link tracking rules; those are the ones that were broken by that condition I showed before, so there’s quite a big percentage of events affected by it. All of that can be seen through Adobe Analytics in Analysis Workspace by just tracking the name of the rule that has been firing. And you can see: is it going down, is it going up, how many visitors and visits have that rule firing as part of the session? You can see if there’s anything fishy in there, because you will just see what Launch is doing. And this is, in my mind, really closing a gap. I think Google Tag Manager can do that out of the box, where you can actually see if a rule is firing, which is super cool. And you can have something similar to that, at least for your defined Adobe Analytics events, which is cool.
One way to also increase data quality when working with your business team is what you can see on the screen right now: two different dimension descriptions for eVar1. At the top, eVar1 is described as the campaign marketing channel, which is correct; there’s nothing wrong with that, you can describe it that way. But at the bottom of the screen, you can see it described as: the channel of external marketing campaigns. You can use this dimension to learn what channel brings traffic to your site, break down by campaign or creative, and use with unique visitors or bounce rate. Unfortunately, we have some character limitations when we put in those descriptions, and I don’t know if we can see reactions to that, but if there’s a way to vote, you can vote on your favorite version. What that allows you to do is this: in Analysis Workspace, people can actually see those component descriptions as they analyze their data. So if you put it there, they will have a way to know, while they’re doing analysis: what can I use this dimension or metric with? Should I break this down in the way that I’m doing? You can also, in theory, link to your Confluence pages for your documentation and all that great stuff. Making that information available to them as they do their work is really helpful. They can then have confidence in whatever they are doing and not have to reach out to you to validate whether they can use that marketing channel dimension with bounce rate, because they can see it works right from the description.
So, last tip: you have to click that little icon to see that, but you already knew that. And the last advice, of course: whenever you build something that has value to your company, consider saving it as a template in Analysis Workspace, because in my personal experience, people are really shy about getting that first drag-and-drop item into Workspace. They’re totally confused about how to start it off. Once they actually see something in front of them, like an old report or a template, they will be able to go from there, look at the descriptions, look at all the hints that you put in front of them. Templates and all the things that you can put in front of your business users really help you to increase data quality for them, because you can actually become a bigger part of their life. So that’s all the tips we have, and now we have the summary left from Sarah. Yeah, so we’ve covered quite a bit today, and Frederik and I wanted to provide this recap slide of the topics that we have discussed. So this would be a really great time to take a screenshot. I do know the presentation and recording will be shared later, but if you take a screenshot, you’ll have it right away, because I think a lot of you hopefully can start putting some check marks in those boxes to say that you’ve done it or that you’re working on it. So that will help push your data quality journey down the road. Awesome. I think, Alyssa, Frederik and I will turn it back over to you.
Great, thank you, Fredrik and Sarah for sharing all of that amazing content. I know everyone has a lot of takeaways with the resources you shared, and I saw a lot of great reactions throughout the session. It looks like we have a couple of questions from our audience, so we can go ahead and take those now. But I wanna remind everyone, if you haven’t had a chance to submit your questions to our presenters, you can do that now by using the Ask a Presenter panel. There are lots of resources and great tips that were shared, so definitely take advantage of the live Q&A time. And as a quick reminder, if you have to leave early, please don’t forget to take our short survey. It’s only three questions, and as mentioned, it helps us select topics and speakers for our future sessions.
So with that, one of the first questions that came in from the audience: should we use instances for evaluating unspecified values? Oh yeah. Frederik, I think I remember seeing that in your screenshot when you had your unspecified occurrences. So, is it occurrences or instances or visits? Yeah. So for that use case, you should actually use occurrences over instances, because instances are always bound to that one dimension that you’re using. Otherwise, you would need to use all the instances metrics for all the dimensions that you add to the table, if you go with the approach that I was showing before.
If you want to do that, that’s fine, but I would recommend to use occurrences if you really want to have that one table with all the unspecifieds in it.
Okay, good call. Great. And we’re really paying attention to your screenshot, so that’s awesome.
Yes, lots of great engagement. And I know even lots of questions about the slides being shared, so great resources. We have another question from Lucy, who said: "Very interesting session, thank you both. Your tips are great for assuring the data quality on a new website that you’ve just set up. However, do you also have some tips for mature websites with a lot of legacy rules, many stakeholders, and different developers working on them? I’d say that clear data governance processes and responsibilities are needed in such a complex setup." Yeah. Absolutely. Yep. And I think a lot of the stuff that was shared today, you could put in place whether it’s an existing or a net new implementation. So I know, and Frederik’s been the same way, when we’ve come into a new role or a new position, these are things that I look for right away. So your minimal viable metrics: making sure that you have them set up and shared with your stakeholders, so they can keep an eye on them too and understand the importance of them. That way, if something does go wrong, you have their backing to help push or escalate getting something fixed. I think one of my most favorite ones that we’ve talked about is actually capturing the Launch rule name, especially when you’re new to an implementation that already exists, because that way you can quickly see: okay, which Launch rules are firing, which ones aren’t firing, or which ones are only firing a little bit, and do you even need them anymore? So once again, that kind of helps with some cleanup. So my suggestion for an implementation that you’ve inherited: the minimal viable metrics and the Launch rule names, those would be the two that I would tackle first. Frederik, what’s your take on it? Yeah, exactly what I was also going to recommend, because whenever you come into a new situation in that regard, what you want to do is start collecting metadata about what’s happening. The Launch rule name is really cool for that, together with looking at analytics logs, even seeing who’s actually using Adobe Analytics day in and day out: who’s logging in? Is it only the analytics team, or also those marketing and product teams? What stage of democratization are we on? Who should we actually optimize our implementation for, and who do we need to work with at whatever stage we are on? Everything that helps to get that quick overview, for example through the Launch rule name and any type of logs, is really helpful to just understand what’s happening, because every company is using this highly customizable tool differently, and there’s so much you can learn by just looking at what’s happening today. And for those pieces of information, you just connect already existing systems, like Jira, Teams, and Adobe Analytics or Adobe Launch; you don’t need to put anything new in place for that. You just connect the dots, and that will get you some great starting points for actually bringing those conversations to the business.
Great, thank you both. And speaking of Jira, we had a question: is there any documentation you recommend for how to get the Jira-Launch integration set up? Well, yes and no. The documentation that’s perfect for that is the Jira documentation and the Launch documentation; I think I’ve linked to the Launch documentation, at least, in the resources. But for that specific use case, I haven’t seen any documentation yet. Maybe I need to write about it now that you’re saying it. It’s really just handling a standard webhook and then doing the formatting and all that. Maybe that’s something worth writing about. I don’t know, Sarah, maybe you’re aware, but I don’t know if that’s publicly available anywhere.
No, yeah, I think I’ll just keep an eye on that Full Stack Analyst blog and see what happens there.
Maybe we can convince them there.
That’s awesome. Sounds like some new content tips coming out of the session, which is great. So another question that we had: what might be the pros and cons of dev managing Launch versus the analyst managing Launch? Look at that. I feel like we need cookies and donuts first to make everybody a little happy, get everybody in a good mood, before we start talking about who’s in charge of what. I think it’s actually a combination; it’s a good working relationship. Because yeah, Launch can bring your site down. As an analyst, you’re maybe not on the IT team, but you do have that capacity to push stuff out there that could be wonderful or could bring something to its knees. So I think it’s about having that relationship with your dev team, and even communicating when releases are happening from Launch, and maybe lining them up with their deployments. I know the glory of Launch is being able to deploy at any moment, so still be able to use that great feature, but make sure you’re talking to your dev team and your QA team; pull them in. I think that’s something that I really enjoyed being able to do: not only QA myself, doing that manual QA, but also tapping into, like Frederik talked about at the beginning of this webinar, the business owners and the stakeholders and the QA teams, tapping into everybody to help be a second set of eyes to validate everything’s looking good. Because there are a lot of browsers and a lot of devices, so there’s a lot that I myself can’t look at, but other people can help out. And then when we do push live, whether it’s the dev person pushing the button or the analyst or implementer pushing it in Launch, everybody’s aware of it, everybody knows what’s happening, and it’s a joint effort. That would be my take. Frederik, how would you address this? We think way too alike, because there’s not a lot to add there. But yeah, it really depends on what your process should look like, because you really want to have the trust of your development team. And especially in the beginning, it can be a good way to just say: hey, you’re in charge of the tool. In Launch you can configure who can actually put something live, and you can give them that privilege, so that only they are in control of what goes live, at least in the beginning. Because that’s how humans work; we just need to earn trust from them, right? It’s great if they trust us upfront, it’s great if they say, you can do whatever you like to our page, but if they don’t, then you can just play by the rules and say: hey, we’re going to let you know once we have something live. And then, for example, through the Teams webhook, you could actually implement an approve-or-deny button and completely manage that through Teams. There are ways to make it really horrible, really slow, and really cumbersome, and there are ways to make it really fluid and keep that shared, trusted relationship, which is what you likely want to work towards. Whatever helps: getting all the people in one room, agreeing on ways of working, and agreeing on what you can and can’t do. And for custom code, if people need approval from the development team, there are ways that you can actually make all that work. In the end, it’s a people question, not a technical question. You can do everything you want with a great permission system.
You can give everyone every permission, or only one person permission to deploy. There are many ways you can go about this, but trust is key. Mm hmm. Great answers. And I feel like that’s also great advice for just general relationship management. So hopefully that answers the question. We have a couple more questions that we can go through. One from David: what are the best practices for updating extensions in Launch, and how often would you recommend it? Frederik, I’ve been jumping in first for a couple of these; I will let you go first this time, I apologize. Yeah. There are usually, how to say, two occasions when you want to update something. One occasion for me to update my Core extension is, for example, when there are some new features in there: when I issued a new pull request and I actually get some exciting new data elements, I want to have them, I need to use them. There’s new functionality, bugs getting fixed, and all that, and that’s usually something you want to do as quickly as possible. But for all the maintenance, all the cleanup stuff, how I usually like to do it is once a quarter, I create a ticket for myself to go through each and every Launch property, because there are going to be many of them, and just say: all right, add all new extensions to that property, and then push it live together. Because there might be some dependencies, for example, from the Experience Cloud ID extension to the Analytics extension to the Core extension and all that. So how I usually like to do it is just update them all at once, because that feels like the safest way of updating all those dependencies. That doesn’t allow for dedicated testing of each and every extension, but coming back to trust, I trust Adobe on that. So yeah, I’m going to complain to them if things break; I feel comfortable doing it that way. How about you, Sarah? Yeah, I totally agree. And I think one thing to look at, and I think a prime example, is the Target extension. I think there are two of them now. Before you move to the newer version of the Target extension, really read through it to understand what is changing, because with that shift, there are a lot of behind-the-scenes changes that you’ll need to make in your code base. It’s not just like, hey, here’s the newest extension version and you can QA it and push it live; there are substantial coding changes to use that newer extension. Which is not a bad thing: if it matches your use case, and I think it had a lot to do with single-page applications, if that’s really what you needed, then do it, get onto that new extension, do the additional code changes to support it. But if not, I don’t think it’s a bad thing, in my opinion, to stay on that other version of the Target extension, because, once again, it’s meeting your need; you’re able to do what you need to do. So there’s no need to say, oh, I gotta be on the latest and greatest the very moment it comes out. And like Frederik said, read what’s in the release notes: why you would want to change to it, what the new capabilities are. Then you can stay with what’s working, but maybe put it into your roadmap to say: okay, I’m staying here with what’s working right now, but let’s figure out the game plan to maybe get to this newer version if it’s needed for our business. So yeah, great question.
That’s super cool. And just to add to that: props to Adobe, because they are really great at keeping those endpoints compatible with older versions of code. So usually there’s no rush in updating; you can still use some years-old extensions today, in the way they were released at that point in time. So, like Sarah said, really read through those release notes, see what changed and whether you need to change anything, and then decide. But don’t rush it, at least for Adobe’s own extensions; they do a great job of keeping those endpoints consistent. And just to add, I think with the Web SDK, because the Web SDK can also do Target requests, we actually have three Adobe Target extensions now, but that’s beside the point. Not to go into too much detail. Good call, good call.
Awesome. I think we can probably fit in a couple more. We have one question from Greg: are there any handy Launch extensions for app properties that help enable high-quality data collection? Well, I’m going to highlight Frederik’s runtime environment data element again; that’s in the Core extension. It does have a lot of great pieces of data in it. Oh man, I don’t have my screenshot in front of me, but I think for almost every single one of those, I have written custom code before to pick it up. But now that Frederik has it out there, I have updated to 3.3.0 so I can use it. I think that would be a great place to start for getting that data into your analytics call.
And then, yeah, with Medallia, I’m probably going to date myself, but I remember back in the day when you’d get the document from Medallia on how to grab some of those feedback UUIDs and the survey IDs and the survey names. But I think keeping an eye on the extensions that are out there as well is a really helpful way to start getting more of those different systems’ data into your Adobe Analytics, because that is actually one of my favorite things to do too. So, as I was saying earlier, minimal viable metrics and the Launch rule name in my analytics are things I look for right away. The other one that I think is just a game changer is getting as many other platform IDs into your analytics call as possible: whether it’s the ones that we talked about earlier, like Decibel, Glassbox, Medallia, Qualtrics, you can even do your Eloqua session IDs or your Salesforce session IDs. Any ID that you can get into your analytics is going to help you go into other systems and really help you understand what’s going on: when I look at this segment of data in Adobe Analytics and I’m seeing something wonky, I can start breaking it down by those other IDs that I have, and go into those other systems to get the whole story. So that would be my two cents. Frederik, what are you thinking? Yeah, along those lines, definitely. But I think Greg was also asking specifically for app properties. So for apps, what you especially want to keep an eye on is what was previously called Project Griffon. There’s been a fantastic dedicated session on it at one of the US Adobe Analytics user groups, and you can actually find the recording for that on YouTube. They looked into that tool, which lets you just take out your phone and start a debug session, and then, as you use the app, it will show you all the events that are happening. It will basically show you what the AEP debugger does for the web, but for mobile apps, which is super cool. But especially with mobile apps, I like to use the occasion of having to debug something or to validate something to also build that relationship with your developers, because they have ways, like in their developer console, to see, as they develop the app, which events go out, what’s in those events, what the payload is, and all those things. And actually on my blog, on the top right corner, I think there’s something like a Debug Log Parser or something like that, which will actually allow you to take those pieces of information that the debugger puts into the browser console and then parse them into a nice, functional table for export and later use. So I would always advocate for using that to build your relationship, because you don’t want to make your app developers dependent on you, right? They should know how to validate whatever requirement you put in that Jira ticket on their own, and also validate on production whether things broke or not. So I would take that opportunity to, again, build relationships with trust and enable them to do stuff.
Good call, good call.
Great, well, we will ask one final question before closing and this will be an opinion question between the two of you, I think. But if our audience can only take one thing away from this session, and there were lots of great things shared, what would each of you say is the most important thing for them to remember? Wow, that’s a good one.
I think it’s what Frederik talked about at the very beginning, and what we’ve talked about even now at the end here: it’s the relationships. None of this is one person; it’s not just you, it’s not just Frederik or me, it’s the team. And like Frederik just said, it’s building the relationships with the QA team, your stakeholders, your IT team. I think that would be the big takeaway: it takes all of us to do data quality. So my takeaway is, buy cookies and donuts and go make friends. That’s great. I shouldn’t have let you go first, because I was going to say the exact same thing. So yeah, also what was of course covered in the questions before: collect metadata on what is happening. Don’t just look at what your team is doing and what data you are collecting, but also look at how you’re collecting it: whether you’re working in sustainable ways, whether you have your democratization on track, and whether you have all those people working on the project with you. That’s really going to help you identify those bottlenecks; it’s going to help you identify any challenges. So collect metadata and act on what you see in there.
I’m glad we both got to talk because then we can cover all of it. So we got all our bases covered. We can keep going. Do you guys have some time left? We can just stay. Awesome. Thank you both for coming. Thank you both really for the great session. And we just wanted to say thank you everyone for joining. Hope everyone has a good rest of their days. Awesome. Thank you so much. See you guys.
Bye.