When sending a subject-line A/B test, a warning flags the target size as too small, even though no documentation mentions a minimum target size.
The winning variant had to be selected manually because the follow-up deliveries were not picking a winner on their own.
The initial batch of customers was sent, but the two follow-up deliveries are now stuck in the Sending (reply) status and will not send, leaving 43% of the sends stuck.
Workaround: set the delivery's aggregation to none; without an aggregation period, the A/B test does not run into the issue described above.
The customer has a recurring A/B test delivery whose population is fed in from a workflow.
The learning population is set to 5% per variant, with two variants and a learning period of 45 minutes.
The 57% population is the initial population on which the A/B test ran; after that, the winner was pushed.
The aggregation period of the delivery is 24 hours, which is why population keeps being added to the delivery even after the A/B test has completed.
The product has a limitation: once the winner has been pushed, any population added afterwards does not get scheduled.
This is reported as a low-priority bug in ticket CAMP-47125.