Analytics Grow Experience Makers Spotlight

Join us as we spotlight Andy Lunsford and Tyler Scott, two expert customers and Adobe Analytics users. Each will share their best Adobe Analytics tip or trick. Their session is followed by an opportunity to ask questions live. You don’t want to miss this.

Transcript

Next up, we have two fantastic speakers to spotlight, Andy Lunsford and Tyler Scott. First up is Andy Lunsford, Manager of Digital Analytics Implementation at First Financial Bank. A hallmark of Andy’s career is helping others, from helping an SEO team rank number one for a specific term, to helping consolidate analytics report suites to provide better data analysis and cross-device measurement, to helping people learn how to ice skate. And true to form, today Andy’s going to help us. Well, actually, he’s going to help us help ourselves. So without further ado, let’s welcome Andy to the Skill Exchange.

Thank you so much for the introduction, Sarah. My name is Andy Lunsford. I am the Manager of Digital Analytics Implementation at First Financial Bank. A little background about myself before we dive into the wonderful world of debugging variables. My responsibilities include maintaining and implementing all Adobe Analytics data collection at First Financial Bank across all of our web properties, and a lot of my work is done in AEP Data Collection Tags. Some of the work I’ve done includes being a lead on a large DTM to Launch migration and being the project owner for a report suite consolidation project. I’ve also defined a data layer structure at a few organizations, with my primary goal being to keep a strong focus on scalability and governance. On the personal side, pictured here is my wife, Lindsay, and our dog, Winnie. And we’re expecting not only our first child soon, but twins. Outside of work, I love to immerse myself in nerd culture of all sorts: board gaming, video games, technology, discovering new music, and refurbishing and restoring video game consoles. Pictured in this slide is an N64, actually, that I did an HDMI mod on.

For our agenda today, we will cover a common definition of debugging variables, since this is primarily a term I use to describe the variables we’ll be looking at today. We’ll be talking about what benefits adding them can bring to your report suite and SDR, cover some best practices for using them, and discuss two ways to quickly implement them, via Tags and through the AppMeasurement JavaScript library.

So, debugging variables: the what and the why. Debugging variables is a term I like to use to describe the variables that aren’t part of your measurements or business metrics but that you might still have an interest in collecting data about. These types of variables capture a data point about the implementation itself. I think an important callout with these variables is that variables such as page name, page URL, or site section aren’t really considered debugging variables for our definition, as these are used in regular reporting and could be needed to fulfill measurement items. On that same thread, I’d even consider app version to not be part of these variables. If you are discussing a mobile app, as a product owner, you want to know user adoption of the latest app version from a product standpoint, if your measurement is “how many users have adopted the latest app?” So the variables we are talking about today are items like rule name or AppMeasurement library version, variables that don’t directly translate to measures outside of the analytics function. So outside of rule name, here are a few more Tags-related debugging variables, some example outputs for them, and a quick description. The greatest thing about all the above variables is that they can be created as a data element using the Core extension via a quick dropdown selection inside Tags.
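To make the rule name example a little more concrete, here is a minimal sketch of how the same value can also be grabbed in code inside a Tags rule's custom code action, where the bound event object describes the triggering rule; the data element name used is an assumption, not something from the session:

```javascript
// Sketch: inside a Tags rule's custom code action, the bound `event`
// object describes the rule that fired. One option is to stash the name
// in a data element value that a later Analytics action maps to a prop.
if (typeof event !== 'undefined' && event.$rule) {
  _satellite.setVar('triggeringRuleName', event.$rule.name); // e.g. "DLR Push - Track Event" (assumed data element name)
  // event.$rule.id gives the obfuscated identifier instead, if rule names
  // feel too revealing (more on that later in the session)
}
```

In most cases, though, the dropdown data elements described next are the easier route.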
So event type, rule ID, property name, and library build date are all available inside the Tags interface, just waiting for you to configure them. Outside of Tags, a few environment variables that I’ve configured in implementations I’ve worked on include the three shown examples. I’ve provided a friendly descriptive name for each (because they’re really just JavaScript variables), the JavaScript path to access them from the console, an example output, and a description of what is being captured. These types of variables can be captured by using the JavaScript path on the page, which can also be configured easily inside the Core extension using the JavaScript variable data element type.

So why do we care about these variables? What value does implementing them give us? Is it just something else to capture? In my opinion, the three main benefits to using these variables in your implementation are increasing analytics health, gathering deeper implementation insights, and better governance and scalability. I’ve listed some questions on the slide that you can answer for each category, based on some of the debugging variables we’ve shown so far. You can see that there are a lot of interwoven threads amongst these three categories. One I’ve called out, “how many users are still loading the old library,” shows how some implementation insights can also be indicators of analytics health if you’re not aware of them. Because if you’re loading the old analytics library and you’re not aware of it, you are having some analytics health problems.

Quick disclaimer: all data I’m showing is simulated and not actually from a real workspace, but the concepts being shared can be applied to an implementation using these values. For this example, prop 1, i.e. Build Info, is a combination of property name, environment stage, build date, and AppMeasurement library version, and it looks very similar to the build and library info that I’ve captured in my own implementations. Looking at the build libraries being captured in Analysis Workspace this month, there are two things we can see from this Build Info prop. Most users are not on the latest version of the build library, 8.10, which could indicate a caching issue or a staged release. Either way, changes made to the 8.10 library are only being shown to 3.6% of users. The second item is a little bit harder to see, but the AppMeasurement library was updated in the latest release to 2.22.4, so those users are not getting the full benefits of the latest release. This impact is probably minimal compared to the other issue, looking at the release notes, but it’s a good thing to know if there was anything tied to the AppMeasurement library in the latest update that we wanted to utilize, as that could impact our analytics health.

Looking at our implementation on a by-rule basis, this workspace shows an example where c1 is now our rule name variable. It captures the name of the rule in Tags that is setting events, eVars, and props and sending the beacon to Adobe Analytics. We can see that the DLR push track event rule is by far the rule most often firing for event 1 when breaking down on a rule basis. The benefit gained from this is that it allows us to better predict what our server call usage may look like if track event is going to be deployed on a new web property, which is great for making sure you’re properly estimating your server call volume, especially in busy seasons.
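Circling back to that Build Info prop: below is a minimal sketch of how such a combined value could be assembled in a custom code data element in Tags. `_satellite.buildInfo` and `s.version` are real objects exposed by the Tags runtime and AppMeasurement respectively, while the “Property Name” data element referenced is an assumed name for one created via the Core extension dropdown:

```javascript
// Custom code data element (sketch): pipe-delimited "Build Info" value
// combining property name, environment, library build date, and
// AppMeasurement version into a single debugging prop.
var info = _satellite.buildInfo || {};
var parts = [
  _satellite.getVar('Property Name') || 'unknown property',  // assumed Core extension data element
  info.environment || 'unknown environment',                  // development / staging / production
  info.buildDate || 'unknown build date',                     // when this library was built
  (window.s && window.s.version) || 'unknown AppMeasurement'  // e.g. "2.22.4"
];
return parts.join('|');
```

The combined value can then be mapped to a prop in the Analytics extension’s global variables and, as discussed a little later, broken back out with classification rules, keeping an eye on the 100-byte prop limit.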
Looking at the implementation by rule also identifies if a rule is setting an event it shouldn’t be, and helps us identify any potential timing or configuration issues. For example, the other-button-click click-tracking rule shouldn’t be setting event 1, which would also be a very big problem.

For governance and scalability, we can now look at some of these debugging variables over a time period. By doing this, we can identify rules that are no longer triggering. In this example, we can see that over the past few months, the button-click click-tracking rule’s event has been on a steady downward trajectory, until having zero instances of the event firing in May. Now, the context may be that we modified the rules to no longer set event 1, and if that’s the case, perfect. That makes sense and it lines up with our reporting. But if nothing has changed on the Tags side, then we can have an instance where the event trigger for this rule no longer exists, whether because the CSS selector for the rule changed for some reason, or because the feature targeted by the CSS selector was removed entirely. Now, we want to confirm there’s no active defect, of course; that’s our first priority when we see something like this. But if we see that that’s not the case, and we see another month where event 1 doesn’t fire tied to this rule, it might be time to retire the rule, as pruning outdated rules from your Launch libraries is essential for good governance and preventing accidental triggers in the future. It is also essential to helping keep your library as streamlined and lightweight as possible.

Some best practices to follow. When using these variables, you want to set them on every beacon for most use cases, as unspecified values for items like build date aren’t very helpful. Your library was built at some point, and you want to try and minimize the unspecified values when using debugging variables as much as possible. The reason why I say most use cases is that I do have a debugging variable in my own implementation where I have two rules involved in a beacon: one triggered by the data layer push, and the other capturing the rule value from the data layer, which contains the name of the rule that triggered the initial push. That way I can have both rule names captured with the beacon, and there’s a valid reason why that push-rule value wouldn’t be set when the beacon isn’t tied to a push. So that is an example where unspecified does make sense. Putting multiple data points into a single prop when possible helps save on prop usage, as we showed a few slides back with the Build Info example. You can use classification rules to break out these props into their components for a more report-friendly view; just be mindful of your character limits when using a combined prop. And the last thing I want to mention is that your SDR should be your limiting factor. I’ve worked in environments where we had plenty of opportunities to use more props, and others where we had to prune the SDR monthly of underutilized props and eVars to free up space for more active measurements. The less free space you have, the more conservative and the more targeted you’ll have to be when choosing which types of debugging variables to use.

There’s always a but. These are the two biggest objections I hear in response to why someone feels they can’t implement these variables, and they’re the two biggest ones by far. The first one: I don’t have the eVars to spare.
Some concerns with using debugging variables as part of your SDR could just be the sheer number of eVars and props you can spare, which is a fair concern. You likely have more props available than eVars, though, which works out best for us since we want expiration after the hit anyway. We can also combine a lot of smaller debugging variables into a single prop, if we mind the character limit, as we’ve shown on previous slides. The other objection is one that I can completely understand from working in sensitive industries such as legal and, currently, banking, which is that we might be revealing too much of our implementation’s inner workings, or we have attributes in our property name that might cause a problem if they’re publicly visible and we don’t want them to be. For the rule- and property-related debugging variables, the solution is really simple: instead of using the rule name and property name, we just use the rule ID and property ID values, which utilize the unique hashed identifiers the Launch API uses to refer to these items. You can see these in the URL itself when viewing the rule or property while logged in. And we can even create a classification file to upload to Adobe Analytics for the prop, so the values are report-friendly and can be seen as rule name and property name inside Adobe Analytics, while anybody viewing the beacons only sees the obfuscated, hashed values.

So we’ll quickly highlight, at a high level, how to implement these types of variables using Tags and using AppMeasurement. Implementation via Tags is an absolute breeze for most of the debugging variables we’ve reviewed in this presentation, because a lot of them can be configured via a simple dropdown, which you can see on the slide, when creating a data element inside the UI. For the variables I demoed earlier that are tied to JavaScript variables, we can instead use the JavaScript variable data element type and enter the JS path to the variable. Once you’ve set up the data elements, you can configure the Adobe Analytics extension’s global variable settings to set the desired prop with your debugging variable, or combination of debugging variables. Or, if you have more custom code and are doing this inside the custom tracker code section of your settings and you’re using the doPlugins function, you might need to add them there, or consult your implementation professional to make sure that you’re setting them in the correct way and not impacting the global variables you’re currently capturing. There are a lot of different ways this can be done if you’re not using the global variable settings.

And if you have an instance where you’re unable to utilize Tags, which is a bummer, it’s a real bummer, you should be using Tags, you can still always resort to setting the desired prop directly with the value you want to capture. You will lose some variables such as rule name and property name, since technically you no longer have either of these without using Tags, but you can set these with a static string to indicate details about where it was implemented, if you’re in a hybrid situation where you’re using Tags for some web properties and not for others. For the JavaScript variables that fall outside of the environment ones that can easily be configured in Tags, we can set these props directly to the variable equivalent. So the library version one we used on an earlier slide could just be set to prop 5 by using s.prop5 = s.version.
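As a rough illustration of that non-Tags path, here is a minimal sketch using AppMeasurement’s doPlugins callback so the values go out on every beacon; the prop slots and the static label are purely illustrative, not recommendations from the session:

```javascript
// Sketch for a plain AppMeasurement (no Tags) setup: populate debugging
// props on every beacon from doPlugins. Prop numbers are illustrative.
s.usePlugins = true;
s.doPlugins = function (s) {
  s.prop5 = s.version;                      // AppMeasurement library version, e.g. "2.22.4"
  s.prop6 = 'hardcoded-site-implementation'; // assumed static label standing in for rule/property
                                             // name where there is no Tags library to report on
};
```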
For functions that return string output, or variables that contain information about the code used to set these props, we can even set values equal to the result to provide more insight into the implementation.

So, closing. Hopefully you now know what debugging variables are and how they might be helpful for your organization. Maybe you even know a few that you have already that you didn’t realize were debugging variables, or you thought of some new ones where, oh my God, this would be super helpful to use in our implementation. I hope so. You also know what benefits they can provide outside of your normal measurement items in your SDR, some best practices on using them effectively, and how to quickly implement them via Tags or the AppMeasurement JavaScript library. I’ll leave you with a quote that I find very relevant to this presentation: true wisdom is knowing what you don’t know. The best debugging variables are the ones that will help you understand more of what you don’t know and capitalize on filling those gaps in your implementation. Thank you all, and I look forward to your questions. Back to you, Sarah.

Thank you, Andy. I absolutely love those debugging tips. Our next spotlight speaker is Tyler Scott, the Manager of Digital Measurement for Major League Baseball. In this role, Tyler leads the solution design and implementation team for Adobe Analytics, Adobe Target, and Adobe Audience Manager. Currently, Tyler and his team are focused on transitioning to Adobe Experience Platform Web SDK. Now, everybody get your batting gloves on, because Tyler is about to share some learnings from a recent implementation project so that you can knock your next implementation out of the park. After Tyler’s presentation, both he and Andy will join us for live Q&A, so keep dropping your questions and comments in the chat. Welcome to the Skill Exchange, Tyler.

Thank you for the introduction. I’m Tyler Scott. I am the Manager of Digital Measurement here at Major League Baseball, and today I’m going to be talking about avoiding the pitfalls of over-engineering. A brief introduction on myself. Like I said, Tyler Scott, Manager of Digital Measurement at MLB. I run the solution architecture and digital measurement team. We work very closely with all of our Adobe Experience Cloud tools and manage the implementations therein. I’ve been an Adobe Analytics solution architect for going on 10 years, dabbled in other tools, and am very adept in the Adobe Experience Cloud suite. Outside of my day to day, I am a huge Seattle sports fan. Otherwise I’m an anime nerd and tattoo collector and an aspiring homesteader. I’ve also included my LinkedIn here if you’d like to connect with me; it’s provided as a link there.

So what we’re going to talk about today, brief overview: we’re going to introduce the tracking use case, the reporting requirements, what we really wanted to get out of the solution. Then I’m going to dive a little bit into how we built the solution. None of this is necessarily groundbreaking, but it’s just so you have an understanding of the context of what we’re talking about when we get to the learnings, which is the third step. We’re going to talk about what we learned in this process of going through this implementation. Finally, a point of reflection and what my takeaways were, and I hope you can learn from what I learned.

So, a brief overview of the problem we’re trying to work with. We have a hybrid app, that is, a mobile native app that uses web views to pipe in content. Very common, and these are notoriously difficult, especially when it comes to tracking. One issue that really complicates it is we had a native app report suite and a separate suite for our web data. These were built at different times. The app has one data set, the web has one data set. And currently, when you open up a web view, that opens the web content and the web implementation, and it fires the data to the web suite. You end up having some of the user data in the app suite, some of it in the web suite, and it’s hard to really tie those together and trace them as a single user.

We also had some existing tracking in place before this new solution I’m talking about, some of it specifically to mitigate those cross-web-view tracking and reporting issues. One of them is in the app itself: when a web view is opened, it fires what I’m calling a false page view or, in app parlance, a track state call. This basically told the app suite, by the way, they opened a web view. We don’t know anything about the web view that was opened, but a web view was opened, so we at least have that insight in the app suite. Additionally, previous to this, we tracked using our campaign parameter, which we call partner ID. We would tell the web suite, by the way, this came from the MLB app on iOS and it was a web view, or this came from the MLB app on Android and it was a referral. Really, it was just so that when the web suite did see these web views pop up in the web data, we would know what proportion of it came from an app versus from other general desktop or web traffic. Here’s a quick mockup I drew of the two parallel implementations, and the dotted line that I’m drawing here is where the app is opening a web view. It’s passing along a partner ID, but ultimately that data ends up in the MLB web suite.
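For a rough picture of that pre-existing approach, here is a minimal sketch of the web-side half; the parameter name is an assumption, and in practice this may simply be the campaign query-parameter setting in the Analytics extension rather than code:

```javascript
// Sketch of the old state (illustrative names): the app opens the web view
// with a partner/campaign parameter appended to the URL, and the web
// implementation records it so the web suite at least knows the hit came
// from an app web view.
//   e.g. https://www.mlb.com/somepage?partnerId=mlb-app-ios-webview
var params = new URLSearchParams(window.location.search);
var partnerId = params.get('partnerId'); // assumed parameter name
if (partnerId) {
  s.campaign = partnerId; // attributed in the web suite as app-referred traffic
}
```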
So the solution we came up with basically involves tying those together, and that involved two steps. One of those is updating the web client to understand where a user came from and then redirect the hits accordingly. So if a user came from the MLB app on iOS, we would send the data to the MLB app iOS suite. To be able to do that, though, we also had to make updates to the app itself to send along more data. Previously, as I said, we were sending along the partner ID, but the web client needs more data than that to redirect these things properly. The first piece is the adobe_mc parameter. This is provided by the SDK using its append visitor info method; I’ve included a link to that here. The idea is that if we want to tie a user’s journey together, Adobe has to use the same Marketing Cloud ID across both of them. So this communicates the Marketing Cloud ID, the timestamp, the MC org ID, and some other basic info that Adobe Analytics needs to understand that this is the same user who was using these web views.

But we also had to solve the report suite issue. Like I said, we have separate report suites. This is a little easier if you have everything going to a single suite, but because our app and web are separate suites, we had to also have the app pass along an explicit RSID. We could have used the partner ID that we mentioned previously, because that does also tell the web client where the user came from and what app suite it should go to, but that method would require some extra logic on the web client to map the RSID from the partner ID value I showed earlier. We decided to go ahead and add an explicit RSID parameter that spells out, character for character, what the RSID should be. It takes a little bit of the logic off of the web client and also makes it future-proof, in that we can add more apps to this logic without needing to update the web client every time we have a new app. We can just pass a new RSID; the web client will take it and send data to that RSID. Here’s another brief mockup I did showing that logic. The little brain indicates that the web implementation is doing something different: instead of just getting a partner ID, it’s getting an adobe_mc and an RSID parameter, then understanding who this user is on the native app and what native app suite that data should go to, and redirecting it with that second dotted line.

One other step that was a fairly heavy lift here, one that we were aware of but hadn’t really considered how heavy it would be, is that there were processing rules required to remap props and eVars. The reason for that is that these two suites had separate solutions with separate prop and eVar indexes, and the web implementation wasn’t using context data. It’s not using a data layer at all; it’s actually hard coding all of our props and eVars on the page. So we had to use a processing rule in the app suite to say, by the way, when you see a web view, remap all of these props and eVars to the indexes that we use in the app suite. So there were two steps here. One of them was that remapping, where we take what was a web view index and remap it to the app index. The second step is clearing the unused web props and eVars. The reason for that being, let’s say we didn’t remap a prop or an eVar because we didn’t have a use for it or didn’t have a destination for it in the app suite: if we left the value there, it would still appear in whatever that prop or eVar’s report is in the app suite.
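To make the redirect half of this more concrete, here is a minimal sketch of the kind of logic the web client takes on; the rsid parameter name is an assumption, and the adobe_mc value itself is consumed by the ECID/Visitor API when the page’s visitor instance is created, so the custom work is mainly pointing the beacon at the report suite the app asked for:

```javascript
// Sketch of the web-side redirect (illustrative parameter names): if the
// page was opened as a web view from a native app, send this page's hits
// to the report suite the app passed along instead of the web suite.
var params = new URLSearchParams(window.location.search);
var rsidFromApp = params.get('rsid'); // assumed parameter name, e.g. "mlbappios"
if (rsidFromApp && /^[a-z0-9,]+$/i.test(rsidFromApp)) { // basic sanity check on the value
  s.account = rsidFromApp; // override the default report suite for this page's beacons
}
```

Passing the literal RSID, as described above, keeps this logic simple on the web side: new apps just pass a new value and nothing here has to change.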
Without that clearing step, we’d end up having reports contaminated with values that don’t represent what they’re supposed to be in the app solution.

So through this implementation, we had a couple of hiccups, but we did learn a lot. The first thing that we found is that it did work, for the most part. When we pass over the MID and the RSID, and we have the web send that data to the right RSID with the correct MID, Adobe is able to string those together as if they are the same user, because it’s the same values across the board. Most of our web views now are reported in the app suite. I say most, and that’s important here for the next issue that we ran into, but all of our user IDs are connected. And again, I’ll demo that here and how we saw that. Here we actually see a flow report. The first two nodes there, you’ll notice, have MLBA in capitals. That’s our app syntax for our app page views. And that third node there is slightly different, because that is how our web implementation names pages. So what you’re seeing here is users go from the app; that second node is actually that false page view that I had mentioned; and that third node is the actual content being web viewed. And you see that they flow together. We also see in the third node that very, very few of those are entries and exits, which means that the user ID is being connected to an app user correctly.

The issue that we ran into, really, is that there wasn’t 100% coverage, and this appeared in two ways. One of which is just that our implementation, again, is hard coded. It is not run through a tag manager or data layer or anything else. So making these logic updates, we weren’t able to deploy globally. We still to this day have parts of the site that, if you were to web view into them, would not redirect the hits. This is a major issue for analysts now, who want to account for all page views. But now, instead of just having web views in the web suite and app states in the app suite, they actually have to understand which web pages redirect and which web pages don’t. It has made reporting a little bit more complicated. The second coverage issue is that some users haven’t updated their app. So even though we do have this working in a lot of locations, there are users who can still open up these pages and not have them redirected to the app suite. And you end up with the same page views being represented in two different places, depending on whether or not the user has updated their app.

The implementation also didn’t go super smoothly. The main issue we ran into there was just difficulty with validation. Our beta app did not point to our beta website, and since we were making both of these changes in parallel across both the app and the site, it was hard to validate them in turn. We ended up having to push the web changes to production and then test the app changes with the production URLs.

And it clashes now with the existing false page view implementation. Like I said, and we see it here, there’s this MLBA page view as the second node. There is a false page view that comes from the app that says, by the way, they opened a web view. And now, on top of that, we have the web view also redirecting to the app suite, so in some cases we are double counting these page views. One idea would have been to actually just turn off the false page views ahead of time, but we decided to keep them, because now we can compare the false page views to these web views, understand what’s being missed and what the delta is, and transition our year-over-year reporting, which is very important. But again, that’s one more thing the analysts have to remember when they’re doing this analysis: not to double count these, or which of them to count.

So, the takeaways, finally. This is pretty straightforward. Some of these seem really simple, but we missed on some of them, and if you take them for granted, it’s easy to get them wrong. Plan for deployment is the first one. This again goes back to maybe an MLB-specific issue, in that we don’t use a tag manager currently, so deploying and rolling these changes out just wasn’t perfectly smooth. Planning for validation, again, is something we assume and do every day and it usually goes smoothly, but in some of these more complex use cases, it’s very possible to have a validation scenario that you’re not able to validate against, like we were. So just make sure that you’ve thought through all of the ways that you’re going to need to validate, and that the app and pre-production test universe support those. Planning for analysis: this is one where, even though we accounted for how the changes would be expected to behave and how things wouldn’t be a hundred percent, I didn’t really think through how much extra work this was going to make for the analysts. Ultimately I was hoping to make the data better by tying the users together across app and web view, and it ended up really complicating the data for all of those cases where it didn’t work perfectly. And finally, planning for unideal user behaviors. When we’re building out a solution and idea, it’s really easy to assume that everything’s going to go smoothly, that the users are going to follow the happy path. But as we’ve seen, there are always going to be users who, whether it’s delaying an update or doing something else that is outside of the expected or anticipated path, can really throw off the data. So make sure you don’t just think about the ways that this can go right, but really list out and think through all of the ways that this can go wrong, from user behavior to complications in analysis and implementation, and really, really understand: is this going to ultimately add value to reporting, or, in our case, make it a little more heavy handed? So that’s the difficulties we ran into and our takeaways throughout this process. Like I said, it is a working solution, but thinking through some of these things a little more in depth would give you a better grasp on the issues as they arise. I hope that was helpful, and I look forward to answering questions in the Q&A session.

Hello, Tyler. How are you doing today? Excellent, thank you. Awesome. It’s so great that you could be here prior to the craziness that is playoffs. This is our crazy month, getting to crunch time, getting everything ready for playoffs, because we’ve got to have everything out the door before that starts.
So happy to be here and make the time today. Oh, you’re sweet. And hey, Andy, it’s so great to have you here as well. It’s an absolute pleasure to be here. Thank you so much, Sarah. Awesome, awesome, awesome.

Now, Tyler, I think I’m going to start with you. So with your apps that you were just talking about, one of the problems seemed to be that people weren’t updating their app. Can you talk a little bit more about that, and maybe what are some ways that you are dealing with it?

Yeah, absolutely. That did cause a lot of issues with our data, as we saw there. And I think that’s something that you ultimately can’t control from the app side unless you do a force update, which does happen occasionally. Some apps, you are able to do that with a release, but it is not an ideal user experience, so you really don’t want to do that more often than you absolutely need to. So one thing we’ve done in the past, with a different application and kind of a high-impact implementation change that really required people to do the update for it to work, is we held back on the implementation until other features came in where we did a major release and the product team wanted to do a force update. And so we said, hey, if you’re doing a force update, let’s do this one here too. But you do know that every time you do a force update, somebody’s going to leave a one-star review for you. People don’t like being told they’re not able to use the app unless they update. That’s not an ideal user experience. So that’s one option. One other thing about this is that with a lot of app implementations, when you’re adding something net new, it really is okay if people don’t update, because they just won’t have this net new tracking. One of the things that made this a little more complicated is that we weren’t adding something net new; we were actually shifting data from one place to another. And so what happens is the people who don’t update have their data in one place, the people who do have it in the new place, and that’s what causes the analysis problems. Whereas if this was just, like I said, net new tracking, it’s easier for analysts to say we have the new data for the users who have updated, and for the users who haven’t, they have no effect on the data. So they don’t have to juggle back and forth between the two sources. So it’s not an issue with every implementation, but where it is, and it’s going to be a big one, it’s worth holding back for a forced app update.

Oh, man. No, I’m with you. You don’t want to force people to update their apps, but yeah, you’re going to have to live with it. Oh, and Andy, oh my gosh, I love your debugging variables. Those are some of my favorite things. And we’ve had a question come in about the whole props and eVars thing. You talked about how you could put it in either one, but is there kind of a best practice, or what do you recommend on using the props or the eVars?

That’s a great question, and I think that, unfortunately, the answer is kind of, it always depends on your situation. So the benefit of putting it in eVars is you definitely have more options when it comes to expiration. So if you have a situation where you want it to kind of persist past the hit level; I think a great example, kind of a classic one that I’ve seen done a lot and I’ve used before, is having page name set in an eVar, and then you have the page name prop. That way you can have items from later hits, like things that happened on that existing page, still be included alongside these debugging variables.
So that’s one big benefit of having it in both. I think a big benefit of the eVar over the prop is if you have a very long piece of data: you have that 100-byte limit to work with in props, so if you’re setting it in a prop, it’s potentially going to be a little risky there. But honestly, props, if you can break up the data into nice chunks, are a great spot to put these in. So I think it really depends, though, on whether you copy them and have the situation where you have an eVar that’s duplicating a prop. There are definitely a lot of valid reasons for it.

True statement, true statement. But I think it is one of those debates that will always go on, but it’s fun. All right. Now, Andy, I’m going to stick with you for one more question. Sure. Can you give me an example of when one of these debugging variables really came in and saved you in the past?

Oh, yeah. I think I gave a few in the actual presentation, but I can give a couple of extra ones as well. So a big one that was a huge lifesaver for me was a situation where we had a lot of extra events, we’ll just say event 5 for this situation, that was over-firing. It seemed like it was coming in at almost double what it used to be. And when I was diving into it, I was kind of like, oh, this seems wrong. And then, being able to look at the rule name and cross that with the actual event that was being set, I was able to see that, oh, this is actually literally double. What ended up happening was that there was a rule firing off of a CSS selector and another one firing off of an ID, and an ID had been added to an existing CSS selector on another page. So both rules were now triggering correctly; that was the problem, because now we had a situation where both would fire, so we got double the data. Without the rule variable in place, we would probably have been scratching our heads for a bit and having to manually go and test it. And obviously we definitely still want to do that and go through that process, but a lot of times these will still get you to a better spot of where you need to be. You’re looking in the right neighborhood, as opposed to just searching the entire city to find out what the issue is.

True statement, true statement. Hi, Tyler. Okay, so I hate to kind of sound like a bummer, but, kind of like in the presentation, you seemed a little disappointed with how the outcome of the implementation went. Knowing what you know now, would you go back and do anything differently?

Yeah, and I think that is not a downer at all. It’s exactly the point of what I’m hoping to get across here, which is that I had kind of taken a lot for granted. I had assumed that things would go smoothly because I’ve done this a while, I’ve done things like this. And so I didn’t take the time to step back and think about where are the places this can go wrong. Even in my last slide, like I said, it’s pretty simple things, in analysis, in validation, in user behavior, all these factors where, as we’re planning a solution, it’s pretty natural to assume the best-case scenario. And so what I would do differently is definitely go back and revisit those factors and try to assume the worst-case scenario. What happens if I can’t validate this because the beta app is not speaking to the beta website correctly, and so we can’t test in beta on both applications at the same time? What happens if the user doesn’t update? Does this actually break our data, or, like the last question, does it just add something new? I think if I had revisited those and imagined how things could go wrong instead of assuming they would go right, I definitely would have said, let’s stick with the data we have right now. Another point here is that we did have data in place and it wasn’t wrong.
It was just that I thought we were trying to clean it up, and it ended up being a little over-engineering and didn’t work out exactly the way we expected, because I expected everything to go a hundred percent when things can always go wrong. So I would revisit it, and I would probably say, let’s hold this off for a bigger implementation update.

Yeah. Well, that’s kind of a good lead-in to talking about a big implementation update, because we talked about earlier in the lead-in that you guys are looking to move to the Web SDK. Can you talk about what you’re thinking about with your app and the Web SDK?

Yeah. So there’s a lot of new features and interesting things that we can take on there. I think, specific to our implementation, one of the main differences is going to be, well, one of the problems we solved for here is that the application and the website are on different report suites. And that, again, is legacy, just how our implementation has been done over the years. Actually, our web implementation was done before there were even virtual report suites. Pretty old school. And so the app was stood up separately; this is all kind of trying to layer these pieces together that were already in place. So what Web SDK gives us in this new implementation is really a fresh surface to just go in and say, let’s put everything in the same sandbox. Of course, there are different implications to how the sandbox can split out data, because sandboxes and views operate differently and can filter slightly differently from, say, report suites and virtual report suites. So you want to double check all those details and make sure you’re not painting yourself into a corner by just assuming you can put everything into one sandbox. But that is the path we’re going to be taking now, and so we’ll have a single sandbox, with all of our apps and web going to it. The one thing that wouldn’t change here is that even in the future state, this AEP universe with Web SDK, when you’re doing a hybrid app or connecting users across domains or anything else, you do still need to pass forward the Adobe MID, or whatever the identifier is. The sandbox will be the same, but Adobe still needs to know it’s the same user. So that won’t change, but again, that comes with functions that the implementation gives us. So that’s a little more straightforward and harder to mess up than this RSID logic that we had to sort of custom build.

Oh, my goodness. Sounds like a lot. I think that’s a really great way to look at it. So awesome. Let’s see, what else do we have in here? You guys, both presentations were just so jam-packed full of good info. Oh, here’s kind of, I’m going to say, a hard one, just because. Andy, someone wants to know, if you could only use one debugging variable in your implementation, what would it be?

Oh, that’s a hard one. That’s a good question. A lot of times I find myself going back to two, so I’m going to go with the two, and you can apply your own scenario of which one’s going to be better for your situation. The two I go to all the time are rule name, because of the example I just gave earlier, and the property and build information. Those are the two I go to most of the time. I mean, if I had to pick which ones were the most commonly used, those two are it. So I would use rule name if you have a situation where your libraries are being deployed in a pretty standard way.
You’re probably having them managed by Adobe and using Akamai to deploy them, as opposed to having a self-hosting option, so there’s not too much that can go wrong there. And in situations where you maybe don’t have as much caching on your website, rule name is going to give you a lot more mileage. That being said, if you have a situation where you’re uploading through SFTP and deploying the libraries directly on your site so they can be white-labeled, and you have your development team doing their own releases to deploy your analytics on your website through that SFTP upload, I think the build date is an absolute lifesaver. Because you’re going to have situations where you don’t want to be spending a lot of time trying to figure out what is going wrong with this tagging, I thought I fixed this, only to find out, oh, I did fix this, this is just the old library built last year. Hopefully nothing that old.

Oh my gosh. Between the apps not getting updated and this, oh my gosh. That’s awesome. No, I’m with you. One of my favorites too is capturing that rule name, because there’s just so much you can do with it. Especially if you just inherited an implementation, you can see how often a rule really fires. Does it fire at all? Maybe it’s not needed anymore. I think those debugging ones are really great. Are there any other, like, you covered a lot, but are there any other items that you like to grab when you’re debugging to help out?

That’s a good question. One of the ones I didn’t mention in my presentation that I use a lot, because it’s kind of specific to your implementation, is event name, which correlates with the data layer. A lot of times, in most of your reports, you’re probably not going to have people, the stakeholders, asking, well, what data layer event triggered all these events, eVars, and props? Because that’s usually more focused on the development side. However, it’s very helpful to know, because it’s good to know which events are triggering on your website, or if an event were to stop firing entirely. There are great tools out there like ObservePoint for monitoring your website and seeing that kind of stuff, but it’s great to be able to see it in your actual report suite. That’s a great one. I think another one that’s kind of just a catch-all is, well, there are a lot of extensions that use event.detail to pass a lot of useful information. The one I use personally a lot is the Adobe Client Data Layer. I will capture the event name using event.detail.event, I believe is what it is; I’d have to double check that, so don’t hold me to that exact one. But if you use the percent-sign variable bracket notation, you can use it directly in the UI without having to use any code at all, and you can just set that directly to the element. Or you can use what Frederik Werner has done in the past, which I think is absolutely fantastic, and make a constant data element. That way you can manage that, control that, and be a little bit better about documenting when you are using those kinds of solutions.

Yeah. Very awesome. Now, Tyler, I do have to ask this last question, because when I go watch my Royals play, you have apps that you can download at the ballpark, and MLB, I feel like you have lots of apps and ways that we can connect. Do those all run through you and your team? Yes.
We manage the solution design and implementation, validation, troubleshooting, user admin, all of it, for every single app that you can access an MLB property on digitally. So that’s Android apps, iOS apps, websites, and any streaming devices; if you’re on a console or set-top box or anything else you can use, we are doing all of those. Those are all in separate report suites, and this big project we’re working on is trying to get all of those into one sandbox so we can have one way of reporting video starts, content views, all these other things that we’re adding. We’re doing a lot of really cool stuff around tracking and understanding people’s favorite teams and players, and building out personalization using Target so that, based on what you’re doing, we can show you more of that. But yes, not just analytics; the Target implementation, the personalization, it all comes across my three-person team.

Oh, my gosh. A lot going on. That is cool. Do you guys and the NBA, do you guys talk about who’s doing what next, check in with each other? I work with their architects and chat with them when I have questions and stuff, I’m curious on how did you handle this and how are we handling this, so we’re definitely friendly in that respect. That’s good. Way to be good teammates. Andy and Tyler, thank you both so much for being here. Thank you so much for having us. Thank you. It’s been an absolute pleasure. Thank you so much, Sarah. Yeah, absolutely. Thanks for having us.
