Reference the Experience Platform Web SDK or at.js on every page on your site. For example, you can add one of these libraries to your global header. Alternatively, consider using tags in Adobe Experience Platform to implement Target.
The following resources contain detailed information to help you implement the Experience Platform Web SDK or at.js:
Each time a visitor requests a page that has been optimized for Target, a request is sent to the targeting system. The request helps to determine what content to serve to that visitor. This process occurs in real time. Every time a page is loaded, a request for the content is made and fulfilled by the system. The content is governed by the rules of marketer-controlled activities and experiences and is targeted to the individual site visitor. Content is served that each site visitor is most likely to respond to, interact with, or ultimately purchase. Personalized content helps maximize response rates, acquisition rates, and revenue.
In Target, each element on the page is part of a single experience for the entire page. Each experience can include multiple elements on the page.
The content that is displayed to visitors depends on the type of activity you create:
The content that displays in a basic A/B test is randomly chosen from the experiences you assign to the activity. You can assign the traffic allocation percentages for each experience. As a result of this random splitting of traffic, it can take a significant amount of initial traffic before the percentages even out. For example, if you create two experiences, the starting experience is chosen randomly. If there is little traffic, it’s possible that the percentage of visitors can be skewed toward one experience. As traffic increases, the percentages equalize.
You can specify percentage targets for each experience. In this case, a random number is generated and that number is used to choose the experience to display. The resulting percentages might not exactly match the specified targets, but more traffic means that the experiences should be split closer to the target goals.
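The traffic-splitting behavior described above can be sketched as follows. This is an illustrative simulation, not Target's actual implementation; `chooseExperience` and `observedSplit` are hypothetical helper names.

```javascript
// Pick an experience from marketer-defined traffic percentages using a
// random number. Weights are fractions that sum to 1.0.
function chooseExperience(experiences, rand = Math.random()) {
  let cumulative = 0;
  for (const exp of experiences) {
    cumulative += exp.weight;
    if (rand < cumulative) return exp.name;
  }
  // Guard against floating-point rounding at the top of the range.
  return experiences[experiences.length - 1].name;
}

// Simulate many visitors. With little traffic the observed split can be
// skewed toward one experience; with more traffic it converges toward
// the specified targets.
function observedSplit(experiences, visitors) {
  const counts = {};
  for (let i = 0; i < visitors; i++) {
    const name = chooseExperience(experiences);
    counts[name] = (counts[name] || 0) + 1;
  }
  return counts;
}
```

Running `observedSplit` with a 50/50 split and only a handful of visitors routinely produces lopsided counts, which illustrates why early percentages can look skewed before they equalize.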
See Create an A/B Test for more information.
Auto-Allocate identifies a winner among two or more experiences. Auto-Allocate automatically reallocates more traffic to the winning experience, which helps to increase conversions while the test continues to run and learn.
See Auto-Allocate for more information.
Auto-Target uses advanced machine learning to select from multiple high-performing marketer-defined experiences. Auto-Target serves the most tailored experience to each visitor. Experience delivery is based on individual customer profiles and the behavior of previous visitors with similar profiles. Use Auto-Target to personalize content and drive conversions.
See Auto-Target for more information.
Automated Personalization (AP) combines offers or messages, and uses advanced machine learning to match different offer variations to each visitor. Experience delivery is based on individual customer profiles to personalize content and drive lift.
See Automated Personalization for more information.
Experience Targeting (XT) delivers content to a specific audience based on a set of marketer-defined rules and criteria.
Experience Targeting, including geotargeting, is valuable for defining rules that target a specific experience or content to a particular audience. Several rules can be defined in an activity to deliver different content variations to different audiences. When visitors view your site, Experience Targeting (XT) evaluates them to determine whether they meet the criteria you set. If they meet the criteria, they enter the activity and the experience designed for qualifying audiences is displayed. You can create experiences for multiple audiences within a single activity.
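The rule evaluation described above can be sketched as a first-match lookup. The field names, audience functions, and experience names below are assumptions for illustration, not Target's API.

```javascript
// Marketer-defined rules: the first audience a visitor matches determines
// the experience shown. Visitors matching no rule see the default.
const RULES = [
  { audience: (v) => v.country === "DE", experience: "German homepage" },
  { audience: (v) => v.isReturning, experience: "Welcome back banner" },
];
const DEFAULT_EXPERIENCE = "Default homepage";

function selectExperience(visitor, rules = RULES) {
  const match = rules.find((r) => r.audience(visitor));
  return match ? match.experience : DEFAULT_EXPERIENCE;
}
```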
See Experience Targeting for more information.
Multivariate Testing (MVT) compares combinations of offers in elements on a page to determine which combination performs the best for a specific audience. MVT helps identify which element most impacts the activity’s success.
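The combinations an MVT activity compares are the cartesian product of each element's variations, which is why adding elements multiplies the number of experiences. A minimal sketch (the `combinations` helper is hypothetical):

```javascript
// Expand element variations into every combination to be tested.
// Example input: { hero: ["A", "B"], cta: ["Buy", "Shop"] } -> 4 combos.
function combinations(elements) {
  return Object.entries(elements).reduce(
    (combos, [name, variants]) =>
      combos.flatMap((c) => variants.map((v) => ({ ...c, [name]: v }))),
    [{}]
  );
}
```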
See Multivariate Test for more information.
Recommendations activities automatically display products or content that might interest your customers based on previous user activity or other algorithms. Recommendations help direct customers to relevant items they might otherwise not know about.
See Recommendations for more information.
An “Edge” is a geographically distributed serving architecture that ensures optimum response times for visitors requesting content, regardless of where they are located around the world.
To improve response times, Target Edges host only activity logic, cached profiles, and offer information.
Activity and content databases, Analytics data, APIs, and marketer user interfaces are housed in Adobe’s Central Clusters. Updates are then sent to the Target Edges. The Central Clusters and Edge Clusters are automatically synced to continually update cached activity data. All 1:1 modeling is also stored on each edge, so those more complex requests can also be processed on the edge.
Each Edge Cluster has all the information required to respond to the visitor’s content request and track analytics data on that request. Visitor requests are routed to the nearest Edge Cluster.
For more information, see the Adobe Target Security Overview white paper.
The Target solution is hosted on Adobe-owned and Adobe-leased data centers around the world.
Central Cluster locations contain both a data collection center and a data processing center. Edge Cluster locations contain only a data collection center. Each report suite is assigned to a specific data processing center.
Customer site activity data is collected by the closest of seven Edge Clusters. This data is directed to a customer’s pre-determined Central Cluster destination (one of three locations: Oregon, Dublin, or Singapore) for processing. Visitor profile data is stored on the Edge Cluster closest to the site visitor. Edge Cluster locations include the three Central Cluster locations plus Virginia, Mumbai, Sydney, and Tokyo.
Instead of responding to all targeting requests from a single location, requests are processed by the Edge Cluster closest to the visitor. This process helps mitigate the impact of network/Internet travel time.
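A simplified sketch of nearest-edge routing follows. The coordinates and the great-circle comparison are illustrative assumptions; Adobe's actual routing logic is not public in this form, and only the cluster city names come from this document.

```javascript
// The seven Edge Cluster locations named above (illustrative coordinates).
const EDGE_CLUSTERS = [
  { name: "Oregon", lat: 45.5, lon: -122.7 },
  { name: "Virginia", lat: 38.9, lon: -77.0 },
  { name: "Dublin", lat: 53.3, lon: -6.3 },
  { name: "Mumbai", lat: 19.1, lon: 72.9 },
  { name: "Singapore", lat: 1.35, lon: 103.8 },
  { name: "Sydney", lat: -33.9, lon: 151.2 },
  { name: "Tokyo", lat: 35.7, lon: 139.7 },
];

// Great-circle (haversine) distance in kilometers.
function distanceKm(a, b) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Route a visitor's request to the geographically closest cluster.
function nearestEdge(visitor) {
  return EDGE_CLUSTERS.reduce((best, c) =>
    distanceKm(visitor, c) < distanceKm(visitor, best) ? c : best
  );
}
```

For example, a visitor in London would be served from Dublin rather than a single fixed location, which is the latency benefit the Edge architecture provides.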
Target Central Clusters, hosted on Amazon Web Services (AWS), include:
Target Edge Clusters, hosted on AWS, include:
The Target Recommendations service is hosted in an Adobe data center in Oregon.
Adobe Target currently doesn’t have an Edge Cluster in China, so performance for visitors in China remains limited. Because of the firewall and the lack of in-country Edge Clusters, sites with Target deployed can be affected: experiences can be slow to render and page loads can be delayed. Marketers might also experience latency when using the Target authoring UI.
You can allowlist Target Edge Clusters, if desired. For more information, see allowlist Target edge nodes.
Adobe ensures that the availability and performance of the targeting infrastructure is as reliable as possible. However, a communication breakdown between a visitor’s browser and Adobe’s servers can cause an interruption in content delivery.
To safeguard against service interruptions and connectivity issues, all locations are set up to include default content (defined by the client). This default content is displayed if the user’s browser cannot connect to Target.
If the user’s browser cannot connect to Target within the defined timeout period (15 seconds by default), no changes are made to the page and the default content for each location is displayed.
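The timeout safeguard can be sketched as a race between the content request and a fallback timer. `renderLocation`, `fetchTargetOffer`, and `DEFAULT_CONTENT` are hypothetical names for illustration, not Target library APIs.

```javascript
// Client-defined default content, shown if Target cannot be reached.
const DEFAULT_CONTENT = "<div>default content</div>";

// Resolve with the default content if the request outlasts the timeout.
function withTimeout(promise, ms) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(DEFAULT_CONTENT), ms)
  );
  return Promise.race([promise, timeout]);
}

// Render whatever arrives first: the Target offer, or the default content
// once the timeout (15 s by default) is reached or the connection fails.
async function renderLocation(fetchTargetOffer, timeoutMs = 15000) {
  try {
    return await withTimeout(fetchTargetOffer(), timeoutMs);
  } catch (e) {
    return DEFAULT_CONTENT; // connection error: also fall back
  }
}
```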
Adobe protects the user experience by optimizing and safeguarding performance.
Adobe Target aligns with search engine guidelines for testing.
Google encourages user testing. Google states in its documentation that A/B and Multivariate Testing does not harm organic search engine rankings if you follow certain guidelines.
For more information, see the following Google resources:
Guidelines were presented in a Google Webmaster Central Blog post. Although the post dates back to 2012, it remains Google’s most recent statement on the matter and the guidelines remain relevant.
No cloaking: Cloaking is showing one set of content to your users and a different set of content to search engine bots. Cloaking is accomplished by specifically identifying bots and purposely feeding them different content.
Target, as a platform, is configured to treat search engine bots the same as any other user. As a result, bots can be included in activities, can be randomly assigned to experiences, and “see” the test variations.
Use rel="canonical": Sometimes an A/B test must be set up using different URLs for the variations. In these instances, all variations should contain a rel="canonical" tag that references the original (control) URL. For example, suppose that Adobe is testing its home page using different URLs for each variation. The following canonical tag would go in the <head> tag of each variation:
<link rel="canonical" href="https://www.adobe.com" />
Use 302 (temporary) redirects: Where separate URLs are used for the variation pages in a test, Google recommends using a 302 redirect to direct traffic into the test variations. The 302 redirect tells search engines that the redirect is temporary and is active only as long as the test is running.
Note, however, that Target uses a JavaScript window.location command to direct users to test variations. This method does not explicitly signify whether the redirect is a 301 or a 302.
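The distinction can be illustrated by contrasting the two mechanisms. `buildRedirect` is a hypothetical server-side helper, not a Target API; the response shape is a generic HTTP sketch.

```javascript
// A server-side redirect carries an explicit HTTP status code, so search
// engines can tell whether the move is temporary (302) or permanent (301).
function buildRedirect(variationUrl, { temporary = true } = {}) {
  return {
    status: temporary ? 302 : 301, // Google recommends 302 for running tests
    headers: { Location: variationUrl },
  };
}

// The client-side equivalent used by JavaScript-based redirects:
//   window.location = "https://www.example.com/variation-b";
// No HTTP status accompanies this navigation, so crawlers cannot tell
// whether the redirect is meant to be permanent or temporary.
```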
Adobe continues to look for viable solutions to completely align with search engine guidelines. For those clients that must use separate URLs for testing, Adobe is confident that proper implementation of the canonical tags mitigates the risk associated with this approach.
Run experiments only as long as necessary: Adobe believes “as long as necessary” to be as long as it takes to reach statistical significance. Target provides best practices to determine when your test has reached this point. Adobe recommends that you incorporate the hardcoded implementation of winning tests into your testing workflow and allot the appropriate resources.
Using the Target platform to “publish” winning tests is not recommended as a permanent solution. However, publishing the winning experience to 100% of users, 100% of the time, can be used as an interim measure while the winning test is hardcoded.
It’s important to consider what your test has changed as well. Simply updating the color of buttons or other minor non-text-based items on the page does not influence your organic rankings. Changes to text should be hardcoded, however.
It’s also important to consider the accessibility of the page you’re testing. If the page is not accessible to search engines and was never designed to rank in organic search in the first place, then none of the considerations above apply. An example is a dedicated landing page for an email campaign.
Google states that following these guidelines “should result in your tests having little or no impact on your site in search results.”
In addition to these guidelines, Google provides one more guideline in the documentation for its Content Experiments tool:
Google states as an example that “if a site’s original page is loaded with keywords that don’t relate to the combinations being shown to users, we may remove that site from our index.”
Adobe feels that it would be difficult to unintentionally change the meaning of the original content within test variations. However, Adobe recommends being aware of the keyword themes on a page and maintaining those themes. Changes to page content, especially adding or deleting relevant keywords, can result in ranking changes for the URL in organic search. Adobe recommends that you engage with your SEO partner as part of your testing protocol.
Adobe Target uses the DeviceAtlas metric “isRobot” to detect known bots based on the User Agent String passed in the Request Header.
For Server-Side requests, the value passed in the Request’s “Context” node is given precedence over the User Agent String for bot detection.
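The detection flow described above can be sketched naively as follows. Target actually uses the DeviceAtlas “isRobot” metric; the pattern list and the `contextIsRobot` field name below are illustrative stand-ins, not Target's implementation.

```javascript
// Stand-in patterns for known bots, matched against the User-Agent string.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /crawler/i, /spider/i];

function isKnownBot(userAgent) {
  return BOT_PATTERNS.some((p) => p.test(userAgent || ""));
}

// For server-side requests, an explicit value in the request's context
// takes precedence over User-Agent detection.
function detectBot(request) {
  if (typeof request.contextIsRobot === "boolean") return request.contextIsRobot;
  return isKnownBot(request.userAgent);
}
```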
Traffic identified as bot-generated is still served content. Bots are treated like regular users to ensure that Target stays in line with SEO guidelines. However, counting bot traffic as if it came from normal users can skew A/B test results and personalization algorithms. Therefore, if a known bot is detected in your Target activity, its traffic is treated slightly differently: removing bot traffic from measurement provides a more accurate view of user activity.
Specifically, for known bot traffic Target does not: