Discover the power of Adobe Analytics' Attribution Panel and Lookback Window to better understand your customer journey.
When I first thought about the attribution panel and lookback window, I was immediately reminded of the concept of 'time travel'; then, of course, I was also reminded that our typical response to new tools like these is to simply put off using them, because they look so complicated.
I mean, honestly, just look at all those options, switches, panels, readouts, and knobs. And seriously, let's talk about those complicated flashing lights, hoses, gauges… WAIT!! This is not the time to get distracted talking about time machines; we just don't have the time… or do we?
I will admit the attribution panel is a fairly complex tool; however, our typical job as analysts, day in and day out, is to use another favorite and highly complex tool to look at what happened in the past. That tool is called Adobe Analytics! So yes, to answer that very relevant question: I believe we have plenty of time.
Therefore, why should we allow something like a little fear to stand in the way of such amazing, sophisticated, and powerful tools like these that literally allow us to look backward in time, each and every single day?
After all - this is TIME TRAVEL, folks!! We’re all about that kind of stuff. Right???!!
So, what are we waiting for - a shiny metal car, a police box, or a vintage telephone booth using the wiring of an old umbrella as its antenna to show up on our doorstep?
No! We’ve got something even better, so let’s strap in and hang on!
Well… you get the idea.
Now that we’re all excited about time travel, let’s take a deep breath, step back a little, establish what the attribution panel really is, and break things down a little bit:
Figure 1 - Numbers displayed inline with text further below
Attribution, simply put, is about working out how an event or action might be caused by an individual, several individuals, or any number of different events over time.
According to Adobe, attribution gives analysts the ability to customize how Dimension items receive credit for success events.
In fact, rarely is any given customer journey truly linear, and it is even less often predictable. Each customer will proceed at their own pace; often, they might double back, stall, drop out, or engage in other non-linear behavior. These organic actions make it difficult, or practically impossible, to know the impact of marketing efforts across the customer journey. They also hamper efforts to tie multiple channels of data together.
That’s right. Leave your “domino” analogies at the doorway and open your minds to concepts more along the lines of the butterfly effect and string theory - but like everything else, we need to start with some of the basics.
Attribution models
When we use the attribution panel, we can observe several different things. For instance, the attribution models show us how our conversions (i.e., ❶ success metrics) are distributed across the hits in any given group.
Simply put, if 10 people press a BIG RED BUTTON to step through a door, our attribution models are going to tell us which of those 10 people we want to assign “credit” - or even better said, how much “credit” we want to assign them - for pressing said button.
Keeping this in mind, here are a few examples of how the ❸ attribution models might affect those 10 people:
- First touch: This model works exactly like it sounds, giving 100% of the credit to the first person who walked through the door. Marketers are more likely to use this approach for tactics like social media or display; however, it is also a great fit for measuring on-site product recommendation effectiveness.
- Last touch: This model also works exactly like it sounds, but instead gives 100% of the credit to the LAST person who walked through the door. It is typically used to analyze things like natural (organic) search and other short-cycle marketing campaigns.
- Linear: This model distributes equal credit across EVERY SINGLE PERSON who walked through the door.
- U-Shaped: This approach assigns 40% of the credit to the first person in the door, 40% to the last one through, and spreads the remaining 20% across everyone in between. It is most often used when you have a long conversion/sales cycle containing several touchpoints along the way, and your goal is to primarily highlight the first and last marketing tactics that contributed to the customer converting.
- J-Shaped and Inverse J: Think of U-Shaped, but weighted toward one end: J-Shaped assigns 60% of the credit to the last person walking through the door, 20% to the first, and divides the remaining 20% across everyone in the middle. Inverse J does the exact opposite. The goal here is to put most of your emphasis at either the beginning or the end of your campaign; however, you still want to assign a certain amount of credit to the opposite end while acknowledging the "little guys" along the way.
- Time decay: Now, I would be remiss if I didn't share this one. This model literally has a half-life that decays exponentially over time! The default half-life for this model is 7 days. Weight is then applied to each touchpoint based on how much time passes between it and the moment the customer converts, so the more recent the touchpoint, the more credit it receives.
Time decay and U-Shaped attribution models are both typically used to measure longer-term campaigns, but as you can see, they have slightly different goals, based on how they ultimately weigh each touchpoint's value.
- Custom: You pick and choose who's going to get credit. It's your campaign!
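To make the credit math concrete, here is a minimal sketch in plain Python (not an Adobe API; the function names and the edge-case handling for one- and two-touch journeys are illustrative assumptions) of how a few of these models distribute credit across an ordered list of touchpoints:

```python
def first_touch(touchpoints):
    # 100% of the credit goes to the first touchpoint
    return [1.0] + [0.0] * (len(touchpoints) - 1)

def last_touch(touchpoints):
    # 100% of the credit goes to the last touchpoint
    return [0.0] * (len(touchpoints) - 1) + [1.0]

def linear(touchpoints):
    # equal credit for every touchpoint
    n = len(touchpoints)
    return [1.0 / n] * n

def u_shaped(touchpoints):
    # 40% first, 40% last, remaining 20% split across the middle
    n = len(touchpoints)
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]  # assumed edge case: split evenly
    middle = 0.2 / (n - 2)
    return [0.4] + [middle] * (n - 2) + [0.4]

def time_decay(days_before_conversion, half_life=7.0):
    # a touchpoint's weight halves for every `half_life` days
    # between the touch and the conversion (7-day default)
    raw = [0.5 ** (d / half_life) for d in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]

touches = ["display", "email", "search"]
print(u_shaped(touches))       # [0.4, 0.2, 0.4]
print(time_decay([14, 7, 0]))  # most credit to the touch nearest conversion
```

Note how every model returns weights that sum to 1: the models never change how much credit exists, only who receives it.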
For additional information about these and other attribution models, see Adobe's attribution models documentation.
To make this even more interesting, let’s talk about turning back the clock!
Lookback windows
Now it’s time to start taking your mind to the next level. This is where we literally add the time travel element to our analysis - and again, we are beginning with the basics.
Adobe defines ❹ lookback windows as “the amount of time a conversion should look back to include touch points. Attribution models that give more credit to first interactions see larger differences when viewing different lookback windows.”
In other words, lookback windows determine the time period during which conversions are considered and provide context to the attribution analysis. Adobe Analytics offers three types of lookback windows:
- Visit lookback window: Looks back to the beginning of the visit in which a conversion happened, providing insight into the immediate interactions leading up to the conversion. Remember, this is typically the shortest lookback window.
- Visitor lookback window: Looks back across all of the visitor's visits, up to the first of the month within the selected date range, offering a much broader view of the customer's interactions and helping identify patterns over time.
- Custom lookback window: Allows you to expand the attribution window beyond the reporting date range, up to a maximum of 90 days. This provides flexibility in capturing touchpoints that occurred outside the selected date range, ensuring a more comprehensive analysis.
By adjusting a given lookback window, analysts may then examine the impact of one or more touchpoints within specific time frames and gain greater insights into how different durations affect attribution results.
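As a rough sketch of those mechanics (plain Python, not an Adobe API; the field names, dates, and 30-day window are illustrative assumptions), a lookback window simply decides which touchpoints are even eligible to receive credit, before any attribution model is applied:

```python
from datetime import date, timedelta

def within_lookback(touchpoints, conversion_date, lookback_days):
    """Keep only touchpoints inside the lookback window ending at the conversion."""
    window_start = conversion_date - timedelta(days=lookback_days)
    return [t for t in touchpoints
            if window_start <= t["date"] <= conversion_date]

touches = [
    {"channel": "display", "date": date(2023, 5, 1)},
    {"channel": "email",   "date": date(2023, 6, 10)},
    {"channel": "search",  "date": date(2023, 6, 28)},
]
conversion = date(2023, 6, 30)

# A 30-day window drops the May display touch entirely, so it can
# never receive credit, no matter which attribution model runs next.
eligible = within_lookback(touches, conversion, lookback_days=30)
print([t["channel"] for t in eligible])  # ['email', 'search']
```

This is why first-touch-heavy models are so sensitive to the window you choose: widen it and a different touchpoint may become "first."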
Bringing it all together
So, what does all this mean to us as analysts?
The attribution panel and lookback window give us the power to look beyond the mundane, surface-level data, and dive deeper into the customer journey. By understanding which touchpoints had the greatest impact on conversions, we can make informed decisions about our marketing strategies and allocate resources more effectively.
Remember, after you have your attribution models and lookback windows selected, you may still further manipulate your data by filtering it with a ❺ segment, or any other component you wish at this point. Furthermore, after the panel is rendered, you have all the functionality of a traditional Workspace at your disposal.
Finally putting it into practice
Now that you’ve got the concepts down, imagine you’re running a marketing campaign and trying to determine which channel is the most effective for driving conversions. With the help of the attribution panel, not only can you see the last touch, but also the first touch, same touch, and any other model you choose to determine which channels are the most effective in driving your conversions. Then, this information can be used to optimize your campaigns and improve overall performance simply by turning back the clock with the lookback window of your choice!
Now that you’ve seen what it can do, don’t be fooled or intimidated by the seemingly complex features of the attribution panel. Face it. Embrace it. Understand it.
BUT MOST OF ALL - Use it to your advantage. The attribution panel and lookback window are the keys to unlocking a deeper understanding of your customers and their journey with your brand.
Now, we can travel “back in time” with confidence and use the power of our trusty time machine (a.k.a. Adobe Analytics) to make data-driven decisions.
It’s finally time. You put together a solid Solution Design Reference (SDR): the guide used to implement your metrics and dimensions, defining what they’re called and when they fire, and your devs loved it. You went through the entire process of deployment, writing acceptance criteria, working through your sprints, and QAing the entire thing. It was a lot of work, and now it’s done! Your instance of Adobe Analytics should have marketing and product jumping up and down as they dig into the data, gain new revelations about your customers, and find all the areas of success and, well, areas of less success. But you’re not hearing the accolades you were expecting.
From one camp, you hear complaints.
“Why can’t I figure out the conversion rate on this funnel?”
“Why isn’t there a metric for this?”
“I need way more detail about this! A metric alone isn’t enough. There are at least three different dimensions that I need in order to understand performance. Why didn’t you put them in?”
But it’s the other camp that’s an even bigger cause of concern. From them, you don’t hear anything at all. But much worse, you see charts that were very clearly taken from your old analytics solution, you know, the one that’s no longer being maintained, and every day is falling further into a swamp of decrepitude and dirty data. A sense of dread fills you as you think about the decisions that might be made with that mess.
What went wrong? Why are there gaps in measurement? Why aren’t your team members embracing this?
I’ll start by letting you off the hook a little bit. There’s always going to be some revision. If your site or app is complex enough to need an enterprise analytics solution, it’s basically guaranteed that you’ll miss something. But not enough to explain the measurement gaps I’m talking about here. What went wrong is a lot harder to put into a spreadsheet: you missed out on your first chances to build a collaborative data culture while you built your SDR. I want to walk you through a method that my colleagues and I have developed to both build a better SDR with fewer gaps and get end users invested, even occasionally excited, about their new instance of Adobe Analytics. Let’s go over the hows and the whys.
The How
The Measurement Conference:
- Get your stakeholders together, either in person or virtually with the goal of finding out what to measure. This should include some execs.
- Have some obvious examples on the board on sticky notes already, things like revenue, sales, or leads, the very core KPIs you know will be measured. Repeat with dimensions, things like logged in state, product categories, or search terms.
- Have everyone add their own sticky notes, grouping as necessary.
- Have people vote on the ones that they think are important. These are unlimited votes since maybe all of these metrics and dimensions matter.
- For any that have low votes, have the stakeholders who asked for them explain what they’ll use them for. If there’s a good use case, keep it in. If there’s a better way of getting that data, they can’t explain how it’s actionable, or there’s another good reason to leave it out, strike it from the board.
- Add these metrics and dimensions to your SDR for an initial review by the stakeholders who were present.
The Funnel Map
- Get a visualization of all the funnels, step by step with every state included
- With the designers and product managers, go through each step and talk through what they consider success on that funnel. Is it conversion rate? Is it choosing a particular path? Is it using certain features?
- Ask questions about what metrics and dimensions are necessary to understand funnel performance on each step of the funnel and overall.
- Above each step of the funnel, add the metrics and dimensions that will be measured on that step, including the calculated metrics.
- At the beginning of each funnel, write out the reports that will go in the dashboard that the product manager will use to track performance, things like a fallout report, current month and trended conversion rates, and anything more specific to that funnel.
- Add the new metrics and dimensions you’ve discovered to the SDR and send it to the stakeholders for a second review.
The Preview Dashboards
- Using the Funnel Map as a guide, create mockup dashboards.
- There should be an overall view, such as an Executive Summary Dashboard, and dashboards for each of the funnels.
- There will also be some more specific to your site or app, such as product performance or content performance.
- Distribute these to the relevant stakeholders and get feedback on the design.
- Make any updates requested and if new metrics or dimensions are needed, add them to your SDR.
- Send out the updated preview dashboards and SDR for a final review.
Data Democratization Tools
- Create a Data Dictionary. The SDR is for your devs. The Data Dictionary is for your end users. Make it readable for end users so they can easily look up what data is available and know how to use it. Your end users should be the final approvers of this.
- Annotations. In every organization, there are certain dates that matter every year and others that will come up. Make sure that you gather the relevant ones from your stakeholders and add them in as annotations to increase the understanding of the data they see.
- Curation. If your SDR is big, it might be overwhelming. Paralysis of choice doesn’t just apply to your clients. See what matters to each group of users and curate the elements they’ll see.
The Why
To Get Requirements
This is an obvious one, but there are other effective ways of getting requirements. I’ve personally used one-on-one interviews, questionnaires, and reviews of existing reports. Those will work, though I think not as well as the methods I’ve just outlined. I honestly don’t think the gap in requirements gathering is that big, though. The method I’ve described will get you 95% of the way there, and these other methods will get you 90% of the way. So, what’s the big why?
To Build Data Culture
With this process, you:
- Spark deep thought about how to measure success
- Create a sense of ownership in your stakeholders
- Make it easier for stakeholders to understand their data
Sparking Deep Thought about Data
For many of the people at your company, data is something they consume. They use it. They analyze it. They don’t think deeply about it. Some of them inherited reports and processes from their predecessors that they have never altered because of the need for continuity. They’ve never needed to think about the why of the data.
This process gives them an opportunity to really understand data. Asking the questions, what is success? How would you know if you were successful? How would you know what to change if you weren’t successful? This is an exercise that should be done at the beginning of creating every site, app, and product, but far too often is not. By asking these questions, you help deepen their understanding of not only the data, but also their own product.
Creating a Sense of Ownership over the Data
This is not something that was handed down from on high. This is not something that was a thirty-minute meeting three months ago. This is not that annoying questionnaire that they were hounded for a week to answer and that they did so in a hurry because they had a demo to get to so they could make the sprint release date. This is the product of their deep thought and their work with you and their colleagues, the thing they’ve looked over several times, provided ongoing feedback for, and that they’ve approved after that feedback was incorporated. It’s theirs! The fact that it’s useful is due to them. It’s their data and it’s the process that made it theirs.
Making the Data Easier to Understand
You’ve also shown them how they’ll use it and what it will look like through the preview dashboards. Any new solution is hard. There’s so much to learn and given the tremendous customizability of Adobe Analytics, the learning curve can be quite steep. You’ve removed 80% of that though. Even before the first line of code has been written, your stakeholders know what their dashboards will look like. They’ll know how to read them and get meaning from them. They’ll know what success literally looks like because they’ve told you what metrics and dimensions define success, and you’ve told them how that will be visualized for them. The delivery of the actual dashboards is a refresher, not a scary new learning task.
This isn’t the quickest way of getting an SDR together. It’s a lot of work and requires a lot of coordination of schedules, especially since it’s vital you have some execs in the mix. In the end though, an enterprise analytics solution is a huge investment of time and money, and you want to make sure adoption and satisfaction are high. This method goes a long way toward making that happen.