Running Adobe Target, Adobe Journey Optimizer Decisioning (AJO-D), and Offer Decisioning (OD) together offers powerful personalization but introduces complex technical challenges. This blog explores why enterprises adopt this multi-tool approach, the testing methodologies used to uncover configuration hurdles, and essential insights for implementation teams.
Why organizations deploy multiple personalization engines
The decision to run multiple personalization platforms concurrently isn't made lightly. Each system serves distinct use cases that, when combined, create a powerful personalization ecosystem.
Adobe Target has long been the gold standard for A/B testing and visual experience optimization. Its Visual Experience Composer (VEC) allows marketers to design and deploy experiments without deep technical knowledge, making it ideal for rapid iteration on page layouts, hero images, and call-to-action buttons. Target excels at testing hypotheses and optimizing conversion funnels through data-driven experimentation.
Adobe Journey Optimizer Decisioning (AJO-D) and Offer Decisioning (OD) bring sophisticated, API-driven decision-making to the table. These platforms enable real-time offer management across multiple channels, leveraging Adobe Experience Platform's unified customer profiles to deliver contextually relevant content. While Target focuses on on-page testing, AJO-D and OD orchestrate complex, multi-touchpoint customer journeys with dynamic offer selection based on business rules, constraints, and AI-powered ranking.
This multi-engine approach delivers three key advantages:
- Layered personalization: Different teams can manage distinct aspects of the customer experience. Marketing might control promotional offers through OD, while product teams optimize page layouts via Target.
- Operational flexibility: Decentralized teams maintain autonomy over their personalization strategies without stepping on each other's toes, provided the technical integration is sound.
- Sophisticated journey orchestration: Combining these platforms enables rapid experimentation across channels—web, mobile, email, and beyond—with each system contributing its specialized capabilities.
However, this flexibility comes at a price: ensuring these systems coexist without conflicts, data corruption, or degraded user experiences requires meticulous configuration and a deep understanding of the Adobe Experience Platform Web SDK.
Building a testing environment: methodology and approach
To truly understand how these platforms interact, I constructed a controlled testing environment that simulated real-world parallel execution. The goal was to identify potential conflicts before they impacted production systems.
Environment setup
The testing spanned two distinct environments:
- Adobe Internal Site: Used primarily for Adobe Target and AJO-D testing, this provided a sandbox for experimentation without customer impact.
- Customer Staging Site: This replicated a production environment where OD configurations were already in place, allowing me to test new AJO-D offers alongside existing decisioning logic.
By maintaining existing OD offers while building new AJO-D content from scratch, I could observe how fresh configurations interacted with legacy setups—a common scenario in enterprise environments where teams build incrementally rather than replacing entire systems.
Web SDK configuration variables
The Adobe Experience Platform Web SDK sits at the heart of modern Adobe implementations, replacing older, solution-specific libraries (like at.js) with a unified integration layer. Three flags proved critical to understanding system behavior (see the sketch after this list):
- renderDecisions: When set to true, the SDK automatically renders personalization content eligible for immediate display—typically HTML offers created in Target's VEC or Journey Optimizer's web channel. Setting this to false requires manual rendering, giving developers fine-grained control over timing and display logic.
- sendDisplayEvent: Controls whether the SDK automatically fires a display notification when content is fetched. This event is crucial for analytics accuracy and frequency capping. Premature display events can inflate impression counts or trigger capping rules before users actually see content.
- includeRenderedPropositions: Determines whether the SDK attaches already-rendered propositions to a subsequent event. This flag is essential for re-rendering scenarios in single-page applications where views change without full page reloads.
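To make the placement of these flags concrete, here is a minimal fetch-only sketch. The flag names and their nesting follow the Web SDK; the surrounding handling logic is illustrative, not prescriptive.

```js
// Fetch-only pattern: retrieve propositions without auto-rendering
// and without auto-firing display notifications.
alloy("sendEvent", {
  renderDecisions: false, // fetch content; we will render it ourselves later
  personalization: {
    sendDisplayEvent: false // defer display notifications until after render
    // includeRenderedPropositions belongs in a later, dedicated call
    // (see the single-page application example further down), not here.
  }
}).then((result) => {
  // result.propositions holds the fetched content for manual handling.
  console.log(result.propositions);
});
```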
Decision scopes
Think of a “scope” as a numbered request ticket. When Target asks for content, it uses the default ticket, __view__. If AJO-D grabs that same ticket, both systems try to deliver content to the same spot and end up fighting over the page.
During testing, I experimented with including and excluding these default scopes while adding custom decision scopes for OD and AJO-D. This revealed that default scopes must be reserved exclusively for Target to prevent unintended cross-contamination. Mixing scopes led to redundant calls, unexpected rendering, and analytics discrepancies—issues I'll detail in the observations section.
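In configuration terms, that discipline looks something like the following sketch; the custom scope names are hypothetical placeholders.

```js
// Strict scope separation: Target keeps the default __view__ scope
// (requested implicitly when renderDecisions is true), while OD and
// AJO-D get their own custom scopes.
alloy("sendEvent", {
  renderDecisions: true, // auto-render Target's __view__ content
  personalization: {
    decisionScopes: [
      "od-cart-offer",        // Offer Decisioning (hypothetical name)
      "ajod-homepage-banner"  // AJO-D (hypothetical name)
    ]
  }
});
```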
Sequence testing
One of the most revealing aspects of testing was varying the order in which propositions were requested and rendered. Does fetching Target content before OD change the outcome? What happens if display events fire in the wrong sequence?
By systematically testing different call patterns—fetching all propositions at once versus sequential calls, toggling flags between calls, and varying timing relative to page load events—I uncovered timing dependencies that aren't immediately obvious from Adobe's documentation.
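As a rough sketch, the two ends of that spectrum looked like this. Both assume the Web SDK is loaded as alloy, and the scope names are hypothetical.

```js
// Pattern A: one combined fetch for all engines.
async function fetchCombined() {
  return alloy("sendEvent", {
    renderDecisions: false,
    personalization: {
      sendDisplayEvent: false,
      decisionScopes: ["__view__", "od-cart-offer", "ajod-homepage-banner"]
    }
  });
}

// Pattern B: sequential calls, Target first, then OD/AJO-D.
async function fetchSequential() {
  await alloy("sendEvent", { renderDecisions: true }); // Target's __view__
  return alloy("sendEvent", {
    renderDecisions: false,
    personalization: {
      sendDisplayEvent: false,
      decisionScopes: ["od-cart-offer", "ajod-homepage-banner"]
    }
  });
}
```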
Critical observations
1. System distinctions
The good news: the Web SDK can differentiate between Adobe Target propositions and OD/AJO-D offers when properly configured. Each proposition returned by the SDK includes metadata identifying its source and scope, allowing developers to route content appropriately.
However, this distinction only works reliably when scopes are correctly assigned. If OD or AJO-D accidentally use the __view__ scope (Target's default), the SDK may treat them as Target propositions, leading to:
- Auto-rendering conflicts: Content meant for manual handling gets injected automatically, often in the wrong DOM location.
- Analytics pollution: Display events meant for Target get attributed to OD, corrupting reporting.
- Caching issues: Single-page application re-renders may pull stale propositions because the renderAttempted flag is set incorrectly.
The solution is strict scope discipline: custom decision scopes for OD and AJO-D, default scopes exclusively for Target.
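Put together, scope discipline plus routing might look like this sketch. The Target branch uses the SDK's applyPropositions command; renderOffer and the scope names are hypothetical.

```js
const TARGET_SCOPE = "__view__";

// Placeholder for custom OD/AJO-D display logic (DOM injection, etc.).
function renderOffer(proposition) {
  proposition.items.forEach((item) => console.log("render", item.id));
}

alloy("sendEvent", {
  renderDecisions: false,
  personalization: {
    sendDisplayEvent: false,
    decisionScopes: [TARGET_SCOPE, "od-cart-offer", "ajod-homepage-banner"]
  }
}).then(({ propositions = [] }) => {
  const targetProps = propositions.filter((p) => p.scope === TARGET_SCOPE);
  const offerProps = propositions.filter((p) => p.scope !== TARGET_SCOPE);

  // Let the SDK apply Target's VEC-authored changes...
  alloy("applyPropositions", { propositions: targetProps });

  // ...and handle OD/AJO-D offers with custom logic.
  offerProps.forEach(renderOffer);
});
```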
2. Timing and order of operations
The most complex issue revolves around when offers are fetched, rendered, and reported. Ideally, personalization happens before the pageview event fires, ensuring users see customized content immediately without flicker.
But in a multi-engine setup, timing becomes treacherous:
- Early fetch risks: If you fetch all propositions (Target + OD + AJO-D) in a single call with sendDisplayEvent: true, display events may fire before rendering completes. This creates false impressions in analytics and can prematurely trigger frequency caps.
- Late fetch risks: Delaying proposition fetches to avoid premature events can cause flicker—users see the default page before personalization applies—harming user experience and invalidating A/B test results.
- Flag timing conflicts: Using includeRenderedPropositions during the initial fetch call, rather than a dedicated post-render call, can cause the SDK to return incomplete or stale proposition data. This is particularly problematic for single-page applications where views change dynamically (see the two-call sketch after this list).
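One way to avoid that conflict is the two-call pattern the Web SDK supports for top-and-bottom-of-page events, sketched here. The eventType on the second call is the standard page-view type; when exactly each call fires is up to your application.

```js
// Top of page: fetch and auto-render, but hold back display notifications.
alloy("sendEvent", {
  renderDecisions: true,
  personalization: { sendDisplayEvent: false }
});

// Bottom of page (or after an SPA view settles): send the page view and
// attach the display details of everything rendered so far to this event.
alloy("sendEvent", {
  xdm: { eventType: "web.webpagedetails.pageViews" },
  personalization: { includeRenderedPropositions: true }
});
```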
The lesson: decouple fetching from display event firing. Fetch all propositions with sendDisplayEvent: false, render them manually with complete control over timing, then fire display events only after confirming successful rendering.
3. Display event management
Display events are the linchpin of accurate reporting, but they're also a common source of errors. When sendDisplayEvent is set to true during the fetch phase, several problems emerge:
- Premature reporting: Analytics platforms like Adobe Analytics (via A4T) receive impression events before users actually see content, inflating metrics.
- Frequency capping failures: If display events fire before rendering, users may hit frequency caps without ever viewing the intended offer, wasting budget and opportunities.
- Multi-system attribution errors: In parallel setups, it's unclear which system "owns" the display event, especially if scopes overlap.
The robust solution involves three steps, sketched in code below:
1. Disable automatic display events during all fetch calls (sendDisplayEvent: false).
2. Cache rendered propositions in a client-side variable or data layer.
3. Manually fire display events using alloy("sendEvent", {...}) with eventType: "decisioning.propositionDisplay" only after rendering is complete and timed correctly with the pageview.
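Here is a minimal end-to-end sketch of those three steps, assuming HTML-renderable offers. The scope names are hypothetical, and the XDM payload follows the decisioning.propositionDisplay shape the SDK expects for manual notifications.

```js
let pendingPropositions = [];

// Step 1: fetch everything with automatic display events disabled.
alloy("sendEvent", {
  renderDecisions: false,
  personalization: {
    sendDisplayEvent: false,
    decisionScopes: ["__view__", "od-cart-offer", "ajod-homepage-banner"]
  }
}).then(({ propositions = [] }) => {
  // Step 2: cache the propositions, then render them.
  pendingPropositions = propositions;
  return alloy("applyPropositions", { propositions });
}).then(() => {
  // Step 3: fire the display event only after rendering succeeds,
  // timed alongside the pageview.
  return alloy("sendEvent", {
    xdm: {
      eventType: "decisioning.propositionDisplay",
      _experience: {
        decisioning: {
          propositions: pendingPropositions.map((p) => ({
            id: p.id,
            scope: p.scope,
            scopeDetails: p.scopeDetails
          }))
        }
      }
    }
  });
});
```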
This approach mirrors traditional server-side analytics implementations, where you control exactly when impression pixels fire.