You must reference the AEP Web SDK or at.js on every page on your site. For example, you might add one of these to your global header. Alternatively, consider using Adobe Platform Launch.
The following resources will help you implement the AEP Web SDK or at.js:
Each time a visitor requests a page that has been optimized for Target, a request is sent to the targeting system to determine which content to serve. This process occurs in real time: every time the page loads, a request for content is made and fulfilled by the system. The content is governed by the rules of marketer-controlled activities and experiences and is targeted to the individual site visitor. To maximize response rates, acquisition rates, and revenue, Target serves the content that each site visitor is most likely to respond to, interact with, and ultimately purchase.
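The request/decide/serve cycle described above can be sketched as follows. In an at.js deployment this role is played by calls such as adobe.target.getOffer and adobe.target.applyOffer; the stub below is purely illustrative (the function names and content are assumptions, not Target APIs) and only shows the shape of the per-page-load flow.

```javascript
// Hypothetical sketch of the per-page-load cycle. A stub stands in for
// the real-time network request to the targeting system.
function getOffer(request) {
  // Stub for the targeting call. A real request carries the mbox name
  // plus visitor/profile parameters, and the system decides the content.
  return Promise.resolve({ mbox: request.mbox, content: '<h1>Welcome back!</h1>' });
}

function applyOffer(offer, render) {
  // The returned content replaces the default content on the page.
  render(offer.content);
}

// Every page load repeats the same request/decide/serve cycle.
getOffer({ mbox: 'target-global-mbox' }).then((offer) =>
  applyOffer(offer, (html) => console.log('rendering:', html))
);
```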
In Target, each element on the page is part of a single experience for the entire page. Each experience may include multiple elements on the page.
The content that is displayed to visitors depends on the type of activity you create:
See Create an A/B Test for more information.
The content that displays in a basic A/B test is randomly chosen from the assets you assign to the activity, according to the percentages you choose for each experience. Because traffic is split randomly, it can take a significant amount of initial traffic before the percentages even out. For example, if you create two experiences, the starting experience is chosen at random. With little traffic, the percentage of visitors can be skewed toward one experience. As traffic increases, the percentages should become more equal.
You can specify percentage targets for each experience. In this case, a random number is generated and that number is used to choose the experience to display. The resulting percentages might not exactly match the specified targets, but more traffic means that the experiences should be split closer to the target goals.
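The random splitting described above can be sketched as a small simulation (the function names are illustrative, not Target APIs): a random number selects the experience, so small samples are noisy, while large samples converge on the configured percentages.

```javascript
// Illustrative sketch of percentage-based random traffic splitting.
function chooseExperience(splits) {
  // splits: e.g. { A: 0.5, B: 0.5 } summing to 1; cumulative selection.
  const r = Math.random();
  let cumulative = 0;
  for (const [name, share] of Object.entries(splits)) {
    cumulative += share;
    if (r < cumulative) return name;
  }
  return Object.keys(splits).pop(); // guard against floating-point drift
}

function simulate(visitors, splits) {
  const counts = {};
  for (let i = 0; i < visitors; i++) {
    const exp = chooseExperience(splits);
    counts[exp] = (counts[exp] || 0) + 1;
  }
  return counts;
}

// With few visitors the split can be heavily skewed;
// with many it approaches the configured 50/50 target.
console.log(simulate(10, { A: 0.5, B: 0.5 }));
console.log(simulate(100000, { A: 0.5, B: 0.5 }));
```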
See Auto-Allocate for more information.
Auto-Allocate identifies a winner among two or more experiences and automatically reallocates more traffic to the winning experience to increase conversions while the test continues to run and learn.
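The idea behind this reallocation can be sketched with an epsilon-greedy rule: keep exploring every experience, but route most traffic to the current best performer. This is only an illustration of the concept, not Target's actual algorithm, and all names below are assumptions.

```javascript
// Hypothetical epsilon-greedy sketch of bandit-style traffic reallocation.
function pickExperience(stats, epsilon = 0.2) {
  // stats: [{ name, conversions, visitors }]
  if (Math.random() < epsilon) {
    // Explore: an occasional random pick keeps the test learning.
    return stats[Math.floor(Math.random() * stats.length)];
  }
  // Exploit: send the visitor to the best conversion rate so far.
  return stats.reduce((best, s) =>
    s.conversions / (s.visitors || 1) > best.conversions / (best.visitors || 1)
      ? s
      : best
  );
}

const stats = [
  { name: 'A', conversions: 2, visitors: 100 },
  { name: 'B', conversions: 10, visitors: 100 },
];

// With epsilon = 0 (pure exploitation), B's higher rate always wins.
console.log(pickExperience(stats, 0).name); // "B"
```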
See Auto-Target for more information.
Auto-Target uses advanced machine learning to select from multiple high-performing marketer-defined experiences, and serves the most tailored experience to each visitor based on their individual customer profile and the behavior of previous visitors with similar profiles, in order to personalize content and drive conversions.
See Automated Personalization for more information.
Automated Personalization (AP) combines offers or messages, and uses advanced machine learning to match different offer variations to each visitor based on their individual customer profile, in order to personalize content and drive lift.
Experience Targeting (XT) delivers content to a specific audience based on a set of marketer-defined rules and criteria.
Experience Targeting, including geotargeting, is valuable for defining rules that target a specific experience or content to a particular audience. Several rules can be defined in an activity to deliver different content variations to different audiences. When visitors view your site, Experience Targeting (XT) evaluates them to determine whether they meet the criteria you set. If they meet the criteria, they enter the activity and the experience designed for qualifying audiences is displayed. You can create experiences for multiple audiences within a single activity.
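Rule evaluation of this kind can be sketched as follows: the first marketer-defined audience the visitor qualifies for determines the experience shown. The audience definitions and function names below are illustrative assumptions, not Target's internal logic.

```javascript
// Hypothetical sketch of Experience Targeting (XT) audience evaluation.
function qualify(visitor, audiences) {
  for (const audience of audiences) {
    // First matching audience wins; its experience is displayed.
    if (audience.criteria(visitor)) return audience.experience;
  }
  return null; // visitor does not enter the activity
}

const audiences = [
  { experience: 'DE homepage banner', criteria: (v) => v.geo === 'DE' },
  { experience: 'Returning-visitor banner', criteria: (v) => v.visits > 1 },
];

console.log(qualify({ geo: 'DE', visits: 1 }, audiences));
console.log(qualify({ geo: 'US', visits: 5 }, audiences));
console.log(qualify({ geo: 'US', visits: 1 }, audiences));
```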
See Multivariate Test for more information.
Multivariate Testing (MVT) compares combinations of offers in elements on a page to determine which combination performs the best for a specific audience, and identifies which element most impacts the activity’s success.
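The combinations an MVT compares can be sketched with a full-factorial expansion: each page element has several variants, and every pairing of variants is a candidate combination. The element names below are illustrative assumptions.

```javascript
// Hypothetical sketch: full-factorial combinations for a multivariate test.
function combinations(elements) {
  // elements: { hero: [...], cta: [...] } expands to every variant pairing.
  return Object.entries(elements).reduce(
    (acc, [name, variants]) =>
      acc.flatMap((combo) => variants.map((v) => ({ ...combo, [name]: v }))),
    [{}]
  );
}

const page = {
  hero: ['image-a', 'image-b', 'image-c'],
  cta: ['buy-now', 'learn-more'],
};

// 3 hero variants x 2 CTA variants = 6 combinations to test.
console.log(combinations(page).length); // 6
```

This also illustrates why MVT needs more traffic than A/B testing: the number of combinations grows multiplicatively with each element added.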
See Recommendations for more information.
Recommendations activities automatically display products or content that might interest your customers based on previous user activity or other algorithms. Recommendations help direct customers to relevant items they might otherwise not know about.
An “Edge” is a geographically distributed serving architecture that ensures optimum response times for end-users requesting content, regardless of where they are located around the globe.
To improve response times, Target Edges host only activity logic, cached profiles, and offer information.
Activity and content databases, Analytics data, APIs, and marketer user interfaces are housed in Adobe’s Central Clusters. Updates are then sent to the Target Edges. The Central Clusters and Edge Clusters are automatically synced to continually update cached activity data. All 1:1 modeling is also stored on each edge, so those more complex requests can also be processed on the edge.
Each Edge Cluster has all the information required to respond to the user’s content request and track analytics data on that request. User requests are routed to the nearest Edge Cluster.
For more information, see the Adobe Target Security Overview white paper.
The Adobe Target solution is hosted on Adobe-owned and Adobe-leased data centers around the globe.
Central Cluster locations contain both a data collection center and a data processing center. Edge Cluster locations contain only a data collection center. Each report suite is assigned to a specific data processing center.
Customer site activity data is collected by the closest of seven Edge Clusters and directed to a customer’s pre-determined Central Cluster destination (one of three locations: Oregon, Dublin, Singapore) for processing. Visitor profile data is stored on the Edge Cluster closest to the site visitor (locations include the Central Cluster locations and Virginia, Amsterdam, Sydney, Tokyo, and Hong Kong).
Rather than respond to all targeting requests from a single location, requests are processed by the Edge Cluster closest to the visitor, thus mitigating the impact of network/Internet travel time.
Target Central Clusters, hosted on Amazon Web Services (AWS), are located in:
Target Edge Clusters, hosted on AWS, are located in:
The Target Recommendations service is hosted in an Adobe data center in Oregon.
Adobe Target currently doesn’t have an Edge Cluster in China, so end-user performance will continue to be limited for Target customers there. Because of the firewall and the lack of Edge Clusters within the country, experiences on sites with Target deployed will be slow to render and page loads will be affected. Marketers might also experience latency when using the Target authoring UI.
You can allowlist Target Edge Clusters, if desired. For more information, see allowlist Target edge nodes.
Adobe ensures that the availability and performance of the targeting infrastructure are as reliable as possible. However, a communication breakdown between an end-user’s browser and Adobe’s servers can cause an interruption in content delivery.
To safeguard against service interruptions and connectivity issues, all locations are set up to include default content (defined by the client), which is displayed if the user’s browser cannot connect to Target.
No changes are made to the page if the user’s browser cannot connect within a defined timeout period (by default: 15 seconds). If this timeout threshold is reached, default location content is displayed.
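The timeout-and-fallback behavior described above can be sketched with a race between the targeting response and a timer. The function names and timings here are illustrative assumptions, not Target's actual implementation.

```javascript
// Hypothetical sketch of timeout-based fallback: if the targeting request
// does not resolve within the timeout, default content is shown instead.
function withTimeout(offerPromise, timeoutMs, defaultContent) {
  const fallback = new Promise((resolve) =>
    setTimeout(() => resolve(defaultContent), timeoutMs)
  );
  // Whichever settles first wins: the offer, or the default content.
  return Promise.race([offerPromise, fallback]);
}

// Example: a slow offer loses the race, so default content is displayed.
const slowOffer = new Promise((resolve) =>
  setTimeout(() => resolve('personalized content'), 1000)
);

withTimeout(slowOffer, 100, 'default content').then((content) => {
  console.log(content); // "default content"
});
```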
Adobe protects the user experience by optimizing and safeguarding performance.
Adobe Target aligns with search engine guidelines for testing.
Google encourages user testing and has stated in its documentation that A/B and multivariate testing will not harm organic search engine rankings as long as a few simple guidelines are followed.
For more information, see the following Google resources:
Guidelines were presented in a Google Webmaster Central Blog post. Although the post dates back to 2012, it remains Google’s most recent statement on the matter and the guidelines remain relevant.
No cloaking - Cloaking is showing one set of content to your users and a different set of content to search engine bots by specifically identifying them and purposely feeding them different content.
Target, as a platform, has been configured to treat search engine bots the same as any user. This means that the bots might get included in tests you are running, if they are randomly selected, and “see” the test variations.
Use rel=“canonical” - Sometimes an A/B test needs to be set up using different URLs for the variations. In these instances, all variations should contain a rel="canonical" tag that references the original (control) URL. For instance, if Adobe were testing its home page using different URLs for each variation, the following canonical tag would go in the <head> tag for each of the variations:

<link rel="canonical" href="https://www.adobe.com" />
Use 302 (temporary) redirects - In the instances where separate URLs are used for the variation pages in a test, Google recommends using a 302 redirect to direct traffic into the test variations. This tells the search engines that the redirect is temporary and will only be active as long as the test is running.
Target, however, uses the window.location command to direct users to test variations, which does not explicitly signify whether the redirect is a 301 or 302.
Although we continue to look for viable solutions to completely align with search engine guidelines, for those clients that must use separate URLs for testing, we are confident that proper implementation of the canonical tags mentioned above mitigates the risk associated with this approach.
Run experiments only as long as necessary - We believe “as long as necessary” to be as long as it takes to reach statistical significance. Target provides best practices to determine when your test has reached this point. We recommend that you incorporate the hardcoded implementation of winning tests into your testing workflow and allot the appropriate resources.
Using the Target platform to “publish” winning tests is not recommended as a permanent solution, but as long as the winning test is published for 100% of users 100% of the time, this approach can be used while the process of hardcoding the winning test is completed.
It’s important to consider what your test has changed as well. Simply updating the color of buttons or other minor non-text-based items on the page will not have any influence over your organic rankings. Changes to text should be hardcoded, however.
It’s also important to consider the accessibility of the page you’re testing. If the page is not accessible to the search engines and was never designed to rank in organic search in the first place, such as a dedicated landing page for an email campaign, then none of the considerations above apply.
Google states that following these guidelines “should result in your tests having little or no impact on your site in search results.”
In addition to these guidelines, Google provides one more guideline in the documentation for its Content Experiments tool:
Google states as an example that “if a site’s original page is loaded with keywords that don’t relate to the combinations being shown to users, we may remove that site from our index.”
We feel that it would be difficult to unintentionally change the meaning of the original content within test variations, but we do recommend being aware of the keyword themes on a page and maintaining those themes. Changes to page content, especially adding or deleting relevant keywords, can result in ranking changes for the URL in organic search. We recommend that you engage with your SEO partner as part of your testing protocol.
Adobe Target uses DeviceAtlas to detect known bots. Traffic that is identified as bot-generated is still served content, like a regular user, to stay in line with SEO guidelines. However, bot traffic can skew A/B tests or personalization algorithms if it is treated like normal user traffic. Therefore, if a known bot is detected in your Target activity, the traffic is treated slightly differently. Removing bot traffic provides a more accurate measurement of user activity.
Specifically, for known bot traffic Target does not: