List of frequently asked questions (FAQs) about Automated Personalization (AP).
You can select an experience to be used as control while creating an Automated Personalization (AP) or Auto-Target (AT) activity.
This feature lets you route all control traffic to a specific experience, based on the traffic-allocation percentage configured in the activity. You can then evaluate the performance reports of the personalized traffic against the control traffic served that one experience.
For more information, see Use a specific experience as control.
There is no turnkey option for comparing AP to a default experience. However, as a workaround, if a default offer or experience exists as part of the overall activity, you can understand its baseline performance by clicking the “Control” segment in the reports and locating that particular offer in the resulting offer-level report. The conversion rate recorded for this offer can then be compared with the conversion rate of the entire “Random Forest” segment, which shows how the machine is doing compared to the default offer.
If you are looking to personalize a lower-traffic page, or you want to make structural changes to the experience you are personalizing, consider using Auto-Target in place of Automated Personalization. See Auto-Target.
Consider completing an A/B activity between the offers and locations you are planning to use in your Automated Personalization activity to ensure the location(s) and offers have an impact on the optimization goal. If an A/B activity fails to demonstrate a significant difference, Automated Personalization likely will also fail to generate lift.
Use the Traffic Estimator to get a sense of how long it will take for personalization models to build in your Automated Personalization activity.
Decide on the allocation between control and targeted traffic before starting the activity, based on your goals.
There are three scenarios to consider based on the goal of your activity and the type of control you’ve selected:
Targeting rules should be used as sparingly as possible because they can interfere with the model’s ability to optimize.
Reporting groups can limit the success of your Automated Personalization activity. They should only be used under specific conditions.
Consult the following FAQs and answers as you work with Automated Personalization activities:
Target has a hard limit of 30,000 experiences, but it functions at its best when fewer than 10,000 experiences are created.
This same limit applies even when the Disallow Duplicates option is enabled for the activity.
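Because each AP experience is one combination of offers across locations (as the per-location example later in this FAQ illustrates), the experience count is the product of the number of offers in each location. A minimal sketch for checking a planned activity against these limits (the function names are hypothetical; the 30,000 and 10,000 thresholds come from this FAQ):

```python
from math import prod

# Thresholds stated in this FAQ.
HARD_LIMIT = 30_000
RECOMMENDED_MAX = 10_000

def experience_count(offers_per_location):
    """Each experience is one combination of offers across locations,
    so the total is the product of the per-location offer counts."""
    return prod(offers_per_location)

def check_activity(offers_per_location):
    n = experience_count(offers_per_location)
    if n > HARD_LIMIT:
        return f"{n} experiences exceeds the 30,000 hard limit"
    if n > RECOMMENDED_MAX:
        return f"{n} experiences is allowed but above the recommended 10,000"
    return f"{n} experiences is within recommended bounds"

# Example: 3 locations with 10, 8, and 5 offers -> 400 experiences.
print(check_activity([10, 8, 5]))
```

Note how quickly the product grows: trimming a few offers from one location can bring an activity from above the hard limit back into the recommended range.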
When each visitor arrives, the set of possible offers the visitor can see is determined by the offer-level targeting rules. Then, the algorithm chooses the offer that the model predicts will have the best expected revenue or chance of conversion from among those offers. Note that offer targeting impacts the efficacy of Target’s machine learning algorithms and, as a result, should be used as sparingly as possible.
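The filter-then-predict flow above can be sketched as follows. This is a minimal illustration only: the offer names, audience rules, and scores are hypothetical stand-ins for Target's actual offer-level targeting rules and Random Forest forecasts.

```python
# Hypothetical offers with audience rules attached (illustrative only).
offers = [
    {"name": "loyalty_banner", "audience": lambda v: v["is_member"]},
    {"name": "signup_banner", "audience": lambda v: not v["is_member"]},
    {"name": "sale_banner",   "audience": lambda v: True},
]

# Toy stand-in for the model's per-visitor forecast.
scores = {"loyalty_banner": 0.04, "signup_banner": 0.03, "sale_banner": 0.02}
predict = lambda visitor, offer: scores[offer["name"]]

def eligible_offers(visitor, offers):
    """Step 1: offer-level targeting rules narrow the candidate set."""
    return [o for o in offers if o["audience"](visitor)]

def choose_offer(visitor, offers, predict):
    """Step 2: pick the eligible offer with the best predicted outcome."""
    candidates = eligible_offers(visitor, offers)
    return max(candidates, key=lambda o: predict(visitor, o))

visitor = {"is_member": False}
best = choose_offer(visitor, offers, predict)
print(best["name"])  # loyalty_banner is filtered out for non-members
```

The sketch also shows why heavy targeting hurts the model: every rule shrinks the candidate set the algorithm is allowed to optimize over.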
There are four factors required for an AP activity to generate lift:
The best course of action is to first make sure the content and locations that make up the activity experiences truly make a difference to the overall response rates, using a simple, non-personalized A/B test. Be sure to compute the sample sizes ahead of time to ensure there is enough power to see a reasonable lift, and run the A/B test for a fixed duration without stopping it or making any changes. If the A/B test results show statistically significant lift on one or more of the experiences, then it is likely that a personalized activity will work. Of course, personalization can work even if there are no differences in the overall response rates of the experiences. Typically, though, the issue stems from the offers/locations not having a large enough impact on the optimization goal to be detected with statistical significance.
For more information, see Troubleshooting Automated Personalization.
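Computing sample sizes ahead of time, as recommended above, can be done with the standard two-proportion normal approximation. A minimal sketch, not part of Target itself; the function name and defaults are illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Per-variant sample size (normal approximation) needed to detect
    a relative `lift` over baseline conversion rate `p_base`."""
    p_var = p_base * (1 + lift)  # expected variant conversion rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_var) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p_base * (1 - p_base)
                              + p_var * (1 - p_var)) ** 0.5) ** 2
    return ceil(numerator / (p_var - p_base) ** 2)

# E.g. a 2% baseline rate with a 10% relative lift needs roughly
# 80,000 visitors per variant, which is why low-traffic pages struggle.
print(sample_size_per_variant(0.02, 0.10))
```

Running the numbers before the A/B test makes the fixed duration concrete: divide the required sample size by daily traffic per variant to get the minimum run time.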
Automated Personalization routes visitors to the experience that has the highest forecasted success metric, based on the most recent Random Forest models built for each modeling group. This forecast is based on the visitor’s specific information and visit context.
For example, assume an AP activity had two locations with two offers each. In the first location, Offer A has a forecasted conversion rate of 3% for a specific visitor, and Offer B has a forecasted conversion rate of 1%. In the second location, Offer C has a forecasted conversion rate of 2% for the same visitor, and Offer D has a forecasted conversion rate of 5%. Therefore, Automated Personalization would serve this visitor an experience with Offer A and Offer D.
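This selection amounts to taking, for each location, the offer with the highest predicted conversion rate for the visitor. A minimal sketch using the numbers from the example above (the dictionary layout is hypothetical; Target's real scoring uses Random Forest models over the visitor's profile and context):

```python
# Hypothetical per-visitor forecasts from the example above.
forecasts = {
    "location_1": {"Offer A": 0.03, "Offer B": 0.01},
    "location_2": {"Offer C": 0.02, "Offer D": 0.05},
}

def assemble_experience(forecasts):
    """For each location, choose the offer with the highest
    predicted conversion rate for this visitor."""
    return {
        location: max(offers, key=offers.get)
        for location, offers in forecasts.items()
    }

print(assemble_experience(forecasts))
# -> {'location_1': 'Offer A', 'location_2': 'Offer D'}
```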
Automated Personalization can be used as “always on” personalization that constantly optimizes. Especially for evergreen content, there is no need to stop your Automated Personalization activity. If you want to make substantial changes to the content that aren’t similar to offers currently in your Automated Personalization activity, the best practice is to start a new activity so that other users reviewing reports do not conflate past results with the new content.
The length of time it takes for models to build in your activity typically depends on the traffic to your selected activity location(s) and your activity success metric. Use the Traffic Estimator to determine the expected length of time it will take for models to build in your activity.
No, there must be at least two models built within your activity for personalization to begin.
You can begin to look at the results of your Automated Personalization activity once at least two experiences have models built, indicated by a green checkmark.
Review your activity setup and see if there are any changes you are willing to make to improve the speed at which models will build.
Automated Personalization activities are evaluated once per session. If a visitor’s active session has qualified for a particular experience and new offers have since been added to that experience, the visitor sees the new content alongside the previously shown offers. Because the visitor has already qualified for the experience, they continue to see it for the duration of the session. If you want the activity evaluated on every page visit, use the Experience Targeting (XT) activity type instead.
We do not recommend that you change the goal metric midway through an activity. Although it is possible to change the goal metric during an activity using the Target UI, you should always start a new activity instead. We cannot guarantee what happens if you change the goal metric in an activity after it has started running.
This recommendation applies to Auto-Allocate, Auto-Target, and Automated Personalization activities that use either Target or Analytics (A4T) as the reporting source.
Using the Reset Report Data option for Automated Personalization activities is not recommended. Although it removes the visible reporting data, this option does not remove all training records from the Automated Personalization model. Instead of using the Reset Report Data option for Automated Personalization activities, create a new activity and deactivate the original activity. (Note: This guidance also applies to Auto-Allocate and Auto-Target activities.)
One model is built to identify the performance of the personalized strategy vs. randomly served traffic vs. sending all traffic to the overall winning experience. This model considers hits and conversions in the default environment only.
A second set of models is built, one for each modeling group (AP) or experience (AT). For each of these models, hits and conversions across all environments are considered.
Requests will therefore be served with the same model, regardless of environment, but the plurality of traffic should come from the default environment to ensure the identified overall winning experience is consistent with real-world behavior.