Journey Optimizer Experimentation Accelerator best practices
What is A/B testing?
A/B testing is the process of comparing two or more versions of something to determine which performs better against a defined goal.
Participants are randomly assigned to one version, known as a variant, and their behavior is tracked. The results show whether one version statistically outperforms the others.
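Random assignment is often implemented deterministically so a returning user always sees the same variant. Below is a minimal Python sketch of one common approach, hash-based bucketing; the function name and IDs are illustrative, not part of Journey Optimizer's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically assign a user to one variant.

    Hashing the user ID together with the experiment name gives each
    user a stable, effectively random bucket, so the same user always
    sees the same variant for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: the same user always lands in the same bucket
print(assign_variant("user-123", "homepage-urgency", ["control", "variant"]))
```

Because the bucket depends on the experiment name as well as the user ID, assignments in one experiment do not correlate with assignments in another.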
Key terminology
Best practices for running experiments
- Start with a clear hypothesis: A strong hypothesis includes what you're changing, what you expect to happen, and why. Example: "We believe that changing X will increase Y because of Z."
- Define a meaningful success metric: Choose a metric that aligns with your broader goals. Avoid "vanity" metrics that look good but do not reflect real impact.
- Test one change at a time (when possible): Isolating variables makes it easier to interpret results accurately. If you test multiple changes at once, you may not know what caused the effect.
- Let the test run long enough: Premature conclusions can be misleading. Wait for a statistically significant sample size before acting.
- Be aware of external factors: Seasonality, holidays, and other changes in your environment can skew results. Document anything that might influence behavior during your test.
- Use segmentation thoughtfully: Breaking down results by audience segment can reveal hidden patterns, but avoid over-interpreting small sample sizes.
- Document and share learnings: Keep a clear record of what was tested, why, and what you learned. This builds institutional knowledge and prevents repeat mistakes.
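The "wait for statistical significance" guidance above can be made concrete with a standard two-proportion z-test. This is a minimal Python sketch using only the standard library; the function name and the conversion counts are hypothetical, chosen only to illustrate the calculation.

```python
import math

def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates.

    conv_a/n_a are conversions and participants in the control,
    conv_b/n_b in the variant. A small p-value (commonly < 0.05) means
    the observed difference is unlikely to be due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: 500/10,000 control conversions vs 580/10,000 variant
p = z_test_two_proportions(500, 10_000, 580, 10_000)
print(f"p-value: {p:.4f}")
```

With these hypothetical numbers the p-value falls below the common 0.05 threshold, so the difference would be treated as statistically significant; with smaller samples the same rates might not be.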
Common metrics
What makes a good experiment?
A good experiment does not just produce a win; it produces a clear, actionable learning.
Here is what to look for:
✓ Statistical Confidence: The difference between variants is unlikely to be due to chance.
✓ Alignment with Goals: The primary metric reflects meaningful progress toward a business objective.
✓ Secondary Impact: No significant negative side effects on related metrics.
✓ Scalability: The result can inform future decisions or be generalized to other areas.
✓ Clarity: The cause of the outcome is reasonably isolated and understood.
Experimentation is not just about finding the "best" version; it is about building knowledge through testing and iteration. When done well, experiments reveal insights that drive smarter decisions, better user experiences, and improved outcomes.
Example:
- Company: Hotel chain
- Hypothesis: If we use more urgent language on the home page, it will lead to more bookings.
- Control: Original version
- Variant: New version with urgency added
- Primary Metric: Booking rate
- Secondary Metrics: Bounce rate, time on site
- Result: The variant produced a 14% lift in booking rate, with no negative change in other metrics.
- Action: Consider rolling out the variant and running follow-up experiments to test similar approaches in other areas.
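A "14% lift" in the example above means relative improvement over the control's booking rate, not a 14-percentage-point increase. A one-line Python sketch makes the calculation explicit; the booking rates below are hypothetical values chosen only to match a 14% relative lift.

```python
def relative_lift(control_rate: float, variant_rate: float) -> float:
    """Relative lift of the variant over the control, as a fraction."""
    return (variant_rate - control_rate) / control_rate

# Hypothetical booking rates: 5.0% control vs 5.7% variant
lift = relative_lift(0.050, 0.057)
print(f"{lift:.0%}")  # a 0.7-point absolute gain is a 14% relative lift
```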