Troubleshooting and Frequently Asked Questions (FAQs) about Auto-Target activities in Adobe Target.
Consult the following FAQs and answers as you work with Auto-Target activities:
Decide if the business value of a Revenue per Visit (RPV) success metric is worth the additional traffic requirements. RPV typically requires at least 1,000 conversions per experience for an activity to work, compared with a conversion-based metric.
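As a back-of-the-envelope check on that requirement, you can estimate how long an activity would need to run to accumulate a given number of conversions per experience. A minimal sketch in Python; the traffic and conversion-rate figures are hypothetical, and the even traffic split is a simplifying assumption:

```python
# Rough estimate of days needed to reach a per-experience conversion target.
# All input figures below are hypothetical, for illustration only.

def days_to_reach(target_conversions, daily_visits, conversion_rate, experiences):
    """Days until each experience accumulates `target_conversions`,
    assuming traffic is split evenly across experiences."""
    daily_per_experience = daily_visits * conversion_rate / experiences
    return target_conversions / daily_per_experience

# RPV guidance above: ~1,000 conversions per experience.
# Assumed: 20,000 daily visits, 2% conversion rate, 4 experiences.
print(round(days_to_reach(1000, daily_visits=20000, conversion_rate=0.02, experiences=4)))
```

If the estimate runs to many months, a conversion-based metric may be a more practical choice for the activity.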
Decide on the allocation between control and personalized experiences before beginning the activity based on your goals.
Determine if you have sufficient traffic to the page where your Auto-Target activity runs for personalization models to build in a reasonable amount of time.
If you are testing the personalization algorithm, you shouldn’t change experiences or add or remove profile attributes while the activity is live.
Consider completing an A/B activity between the offers and locations that you are planning to use in your Auto-Target activity to ensure the locations and offers have an impact on the optimization goal. If an A/B activity fails to demonstrate a significant difference, Auto-Target is also unlikely to generate lift.
If an A/B test shows no statistically significant differences between experiences, it is likely that the offers you are considering are not sufficiently different from each other, that the locations you selected do not impact the success metric, or that the optimization goal is too deep in the conversion funnel to be affected by your chosen offers.
Try not to make substantial changes to the experiences during the activity.
Your optimal traffic allocation split depends on what you want to accomplish.
If your goal is to personalize as much traffic as possible, you can keep a 90% targeted and 10% control allocation for the lifetime of the activity. If your goal is to run an experiment comparing how the personalization algorithm performs versus the control, a 50/50 split for the lifetime of the activity is best.
Best practice is to maintain the traffic-allocation split for the lifetime of the activity so that visitors don’t switch between targeted and control experiences.
No, only visitors who qualify for and view the Auto-Target activity are counted in reporting.
There are four factors required for an Auto-Target activity to generate lift:
The best course of action is to first make sure the content and locations that make up the activity experiences truly make a difference to the overall response rates using a simple, non-personalized A/B test. Be sure to compute the sample sizes ahead of time to ensure there is enough power to see a reasonable lift and run the A/B test for a fixed duration without stopping it or making any changes.
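Computing the sample size ahead of time can be done with a standard two-proportion power calculation. The sketch below uses the normal approximation with fixed critical values for a two-sided alpha of 0.05 and 80% power; the conversion rates are hypothetical:

```python
from math import ceil

def sample_size_per_arm(p_control, p_variant):
    """Per-arm sample size for a two-proportion z-test
    (normal approximation, alpha = 0.05 two-sided, 80% power)."""
    z_alpha, z_beta = 1.96, 0.8416  # critical values for the chosen alpha and power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Example: detecting a lift from a 5.0% to a 5.5% conversion rate
# (a 10% relative lift) needs roughly 31,000 visitors per experience.
print(sample_size_per_arm(0.05, 0.055))
```

Run the A/B test until every experience reaches the computed sample size, rather than stopping when a result first looks significant.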
If the A/B test’s results show statistically significant lift on one or more of the experiences, then a personalized activity is also likely to work. Personalization can sometimes work even when the overall response rates of the experiences do not differ, because different visitor segments can respond differently to different experiences. When Auto-Target fails to generate lift, however, the issue typically stems from the offers and locations not having a large enough impact on the optimization goal to be detected with statistical significance.
Auto-Target can be used as “always on” personalization that constantly optimizes. Especially for evergreen content, there is no need to stop your Auto-Target activity.
If you want to make substantial changes to the content in your Auto-Target activity, the best practice is to start a new activity so that other users reviewing reports do not confuse or relate past results with different content.
The time it takes for models to build in your Auto-Target activity typically depends on the traffic to your selected activity locations and conversion rates associated with your activity success metric.
Auto-Target does not attempt to build a personalized model for a given experience until there are at least 50 conversions for that experience. Furthermore, if the model built is of insufficient quality (as determined by offline evaluation on hold-out “test” data, using a metric known as AUC), the model is not used to serve traffic in a personalized manner.
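Adobe does not publish the details of this offline evaluation, but the AUC metric itself is standard: it measures how often a model scores an actual converter above a non-converter on hold-out data (0.5 is no better than chance, 1.0 is perfect ranking). A minimal illustration in Python, with made-up scores and outcomes:

```python
def auc(scores, labels):
    """AUC via pairwise comparison: the probability that the model scores a
    random positive (converter) above a random negative (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical hold-out data: model scores vs. whether each visit converted.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
converted = [1, 1, 0, 1, 0, 0]
print(auc(scores, converted))  # 8 of 9 positive/negative pairs ranked correctly
```

A model whose hold-out AUC sits near 0.5 is not learning anything useful about visitors, which is why such a model would not be used to serve personalized traffic.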
Some further points to keep in mind about Auto-Target’s model building:
No, there must be at least two models built within your activity for personalization to begin.
You can begin looking at the results of your Auto-Target activity after at least two experiences have models built (indicated by a green checkmark).
This feature lets you route the entire control traffic to a specific experience, based on the traffic-allocation percentage configured in the activity. You can then evaluate the performance reports of the personalized traffic against control traffic to that one experience.
For more information, see Use a specific experience as control.
Adobe does not recommend that you change the goal metric midway through an activity. Although it is possible to change the goal metric during an activity using the Target UI, you should always start a new activity instead. Adobe cannot guarantee what happens if you change the goal metric in an activity after it is running.
This recommendation applies to Auto-Allocate, Auto-Target, and Automated Personalization activities that use either Target or Analytics (A4T) as the reporting source.
Using the Reset Report Data option for Auto-Target activities is not recommended. Although it removes the visible reporting data, this option does not remove all training records from the Auto-Target model. Instead of using the Reset Report Data option for Auto-Target activities, create a new activity and deactivate the original activity.
This guidance also applies to Auto-Allocate and Automated Personalization activities.
Target builds one model per experience, so removing one experience means Target builds one fewer model and does not affect models for the other experiences.
For example, suppose you have an Auto-Target activity with eight experiences and you don’t like the performance of one experience. You can remove that experience and it doesn’t affect the models for the seven remaining experiences.
Sometimes activities don’t go as expected. Here are some potential challenges that you might face while using Auto-Target and some suggested solutions.
Several aspects of your activity setup affect the expected time to build models, including the number of experiences in your Auto-Target activity, the traffic to your site, and your selected success metric.
Solution: Review your activity setup and see if there are any changes you are willing to make to improve the speed at which models build.
There are four factors required for an Auto-Target activity to generate lift:
Solution: First, make sure that your activity is personalizing traffic. Until models are built for all of the experiences, your Auto-Target activity still serves a significant portion of visits randomly in order to build the remaining models as quickly as possible, and those visits are not personalized.
Next, make sure that the offers and the activity locations truly make a difference to the overall response rates using a simple, non-personalized A/B test. Be sure to compute the sample sizes ahead of time to ensure there is enough power to see a reasonable lift, and run the A/B test for a fixed duration without stopping it or making any changes. If the A/B test’s results show statistically significant lift on one or more of the experiences, then a personalized activity is likely to work as well. Personalization can sometimes work even when the overall response rates of the experiences do not differ, but when Auto-Target fails to generate lift, the issue typically stems from the offers and locations not having a large enough impact on the optimization goal to be detected with statistical significance.
This is expected.
In an Auto-Target activity, once a conversion metric (whether the optimization goal or a post goal) is converted, the visitor is released from the experience and the activity is restarted.
For example, suppose an activity has a conversion metric (C1) and an additional metric (A1), where A1 depends on C1. When a visitor enters the activity for the first time and triggers A1 before converting C1, A1 is not counted because of the success metric dependency. If the visitor then converts C1 and afterward triggers A1, A1 is still not counted, because converting C1 released the visitor from the activity.
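This release behavior can be sketched as a small state machine. The class, method, and message strings below are purely illustrative assumptions, not a Target API:

```python
class VisitorState:
    """Toy model of the dependency-and-release behavior described above.
    Illustrative only; not part of any Adobe Target API."""

    def __init__(self):
        self.c1_converted = False
        self.released = False

    def hit(self, metric):
        if self.released:
            return "ignored: visitor released from activity"
        if metric == "C1":
            self.c1_converted = True
            self.released = True  # converting C1 releases the visitor
            return "C1 counted; visitor released"
        if metric == "A1":
            if not self.c1_converted:
                return "A1 not counted: depends on C1"
            return "A1 counted"

v = VisitorState()
print(v.hit("A1"))  # before C1: dependency not met
print(v.hit("C1"))  # converts the goal and releases the visitor
print(v.hit("A1"))  # after release: ignored
```

In this model, A1 can never be counted within a single activity session: before C1 it fails the dependency, and after C1 the visitor is already released, which is exactly the behavior the example above describes.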