First use impact analysis

Learn how to use the first use view in Adobe Product Analytics, which compares how key indicators performed before and after a user tries a product feature for the first time.

Hi, this is Jen Lasser with Adobe Analytics Product Management. In this video, I’m going to show you the first use impact analysis in Adobe Product Analytics. First use impact analysis enables me to measure the impact of the first use of a feature on a key success event. I know my users won’t adopt something the first day it’s released, and sometimes my releases roll out over several days. In these cases, it’s more important to measure the impact of a user trying a feature for the first time to see if it influences my key indicators, rather than analyzing based on a fixed release date. The first use view does exactly that.

Let’s say I just released a new Add to My List capability over a three-week period, and I want to understand its impact on how often users view their list and start media content. I’ll select my key indicators of View My List, Media Starts, and Any Event so we can measure overall engagement impact. Then I’ll select Add to My List as my first use event. This represents using the feature that I released for the first time. And finally, I’ll select the beginning of my release window, May 10th. This tells the analysis when to start looking for the first use. This means that this type of view can apply both to new features and to existing features that you’re improving.

After making those selections in the query rail, I can sit back and let first use analysis do the work. I get an answer in the form of an insight, chart, and table. Let’s take a quick look at the chart. While the release view centers the analysis on a fixed date, the first use view centers the analysis on the first use event occurring. This means that day zero can be different for every user. Some might adopt on May 10th when my release started, while others might adopt on May 20th. And that’s not a problem for the first use view. In the table, I can view a before and after average and percent difference for each indicator.
I can quickly see that users viewed their My List page 47% more and started content 10% more on average after they tried my new Add to My List feature for the first time. It also looks like overall engagement was up about 8%. I can also inspect the impact chart to see how each day compares to the average for the period. This helps me understand whether there’s a certain window of time after users try my feature for the first time in which they tend to be more engaged with my key indicators. For example, seven days after, overall engagement is noticeably higher, which tells me something about my users’ behavior that I can apply to my future product decisions.

Now, while these events may be correlated, this analysis does not imply causation. However, this is a positive sign that my feature did not negatively impact my users’ experience.

If I want to repeat this analysis for a longer before and after period, it’s as easy as making a new selection in the settings. No need to spend hours redeveloping this analysis; it happens on the fly. For example, I wanted to see whether this trend continued when I looked four weeks out on either side. From this view, I can see that the impact is even more significant. In fact, it looks like I might be due for a raise, because the impact of using my new feature for the first time on Media Starts is 114% in the after period compared to before.

Whether you’re measuring a new feature or making changes to an existing one, first use impact analysis will help you understand the impact you’re having on your product experience.
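To make the mechanics concrete, here is a minimal Python sketch of the before/after comparison described above: day zero is each user’s first occurrence of the first-use event, and each indicator is averaged across users in a fixed window on either side. The event log, event names, and window length are illustrative assumptions for this sketch, not Adobe’s actual schema or implementation.

```python
from datetime import date
from statistics import mean

# Hypothetical event log of (user_id, event_name, date) tuples.
events = [
    ("u1", "add_to_my_list", date(2023, 5, 10)),
    ("u1", "view_my_list",   date(2023, 5, 8)),
    ("u1", "view_my_list",   date(2023, 5, 12)),
    ("u1", "view_my_list",   date(2023, 5, 13)),
    ("u2", "add_to_my_list", date(2023, 5, 20)),
    ("u2", "view_my_list",   date(2023, 5, 18)),
    ("u2", "view_my_list",   date(2023, 5, 22)),
]

WINDOW = 14  # days before and after first use (assumed window length)

# Day zero is each user's earliest occurrence of the first-use event,
# so it can differ per user, as the video describes.
first_use = {}
for user, name, day in events:
    if name == "add_to_my_list":
        if user not in first_use or day < first_use[user]:
            first_use[user] = day

def window_averages(indicator):
    """Average count of an indicator per user, before vs. after day zero."""
    before, after = [], []
    for user, day0 in first_use.items():
        offsets = [(d - day0).days for u, n, d in events
                   if u == user and n == indicator]
        before.append(sum(1 for o in offsets if -WINDOW <= o < 0))
        after.append(sum(1 for o in offsets if 0 < o <= WINDOW))
    return mean(before), mean(after)

b, a = window_averages("view_my_list")
pct_diff = (a - b) / b * 100 if b else float("nan")
```

With this toy data, u1 views their list once before and twice after day zero, and u2 once on each side, so the after average is higher and `pct_diff` is positive. The real product does this alignment and aggregation for you; the sketch only shows why a per-user day zero matters.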

For more information, please visit the documentation.