The Next Era of Experimentation: How Agentic AI is Fueling Smarter Testing and Growth

In this Experience League Live session, we unveil Adobe Journey Optimizer Experimentation Accelerator — a new AI-first application built to transform how experimentation, product and growth teams test, learn, and optimize campaigns and customer journeys.

Powered by the Adobe Experience Platform Experimentation Agent, this new application automates experimentation analysis while reducing manual effort, so teams can:

  • Discover what works and why with clear, AI-powered insights
  • Identify high-impact opportunities ranked by predicted lift and conversion
  • Accelerate optimization using AI and adaptive experiments on active tests
  • Centralize learnings to align teams and scale experimentation impact and ROI

Whether you’re a growth marketer, product manager, or optimization strategist, see how generative and agentic AI is fueling experimentation with smarter, faster decision-making, driving measurable growth.

Adobe Journey Optimizer Experimentation Accelerator seamlessly integrates with Adobe Target and Journey Optimizer.

Transcript

Hi everyone and welcome to Experience League Live. I’m Sandra Hausmann, Senior Technical Marketing Engineer here at Adobe, and I am so excited to have you all with us today for a really special session. For those of you who might be new here, Experience League is Adobe’s customer learning community. It’s where you can find tutorials, best practices, product documentation, and the recordings of live sessions, like this one, to help you get the most out of Adobe Experience Cloud. Today, we’re diving into something brand new and pretty game-changing for experimentation and growth teams: the next era of experimentation, and how agentic AI is fueling smarter testing and growth. If you’ve ever wished you could spend less time manually crunching test results and more time actually learning, iterating, and scaling what works, this session is definitely for you. We’re going to introduce you to the Adobe Journey Optimizer Experimentation Accelerator, a new AI-first application that transforms how experimentation, product, and growth teams test, learn, and optimize. I’m joined today by three amazing guests who’ve been hands-on shaping this new era of experimentation here at Adobe. First up, let’s welcome Brent Kostak, Senior Product Marketing Manager for Digital Experience. Brent’s not only an expert on go-to-market strategy, but he’s also kayaked through Milford Sound in New Zealand and even gone hang gliding. Brent, you’re definitely braver than I am, and I’m going to ask you a couple of questions about this. Welcome to the show. But first, before we get to those questions, let’s bring in the other guests. Next we have Justin Grover, Principal Product Manager for Digital Experience.

Justin is the driving force behind the development of this product. Outside of work, Justin’s passion for discovery extends to the skies. He’s traveled to see the last two total solar eclipses in the US. Last but not least, we have David Arbour, Senior Research Scientist with our Core Technologies team. David, you bring the science behind the AI. Let me stick with you before I get back to Brent and Justin, and introduce you a bit. When you’re not developing experimentation algorithms, you’re probably out on a run somewhere. Are you one of those early morning runners, or do you save it for after work to clear your head? It’s usually early morning. Earlier the better. I think the more hills the better, basically. I like to start off the day at a good speed. Early morning, 5 a.m.? Oh my goodness. It depends. If I’m on my game, yes. If I’m not, then no. More recently it’s been after the kids leave for the bus, but hopefully it’s before. I imagine running gives you time to think. Do you ever get any research ideas mid-run? Honestly, I think it’s the only time I do get research ideas. It’s just complete lack of distraction. I think the main reason you do it is because you can’t do anything else. I don’t know about you, but otherwise I’m glued to my phone or my computer or just general life. It’s a great time to figure it out. Awesome. We’re very happy that you run before work so you can get your ideas into the office. It cuts down on the rants once I get there too. Justin, let me introduce you. As this is Experience League Live, I love to introduce you guys with your fun facts. You’ve traveled to the last two total solar eclipses in the U.S. Where is the best spot you’ve watched an eclipse from so far? Where did you go? The last one was in Texas, and then before that it was up in Idaho. Then we’ve gone to a few of the partial ones as well. They were both great. It’s just such a surreal experience when everything goes dark for a minute and all the birds quiet down, wondering what’s going on. It’s pretty cool. You can see some of the corona of the sun, which is fun to see. You had good weather. It sounds like you didn’t have cloudy skies. I remember I managed to go to a total eclipse in Germany way back when. I’m aging myself here as well. Literally, the minute the sun went away, the clouds closed and it started raining. It was an amazing experience anyway, but still, I was like, yay! Let me go to you. You’ve kayaked through Milford Sound in New Zealand, which sounds amazing. You’ve even gone hang gliding. I have to ask, which is scarier, the kayaking through those giant cliffs of Milford Sound or the hang gliding? Or is none of it scary for you? I would say hang gliding. We had a tour guide from Australia. I hiked up with him, carried some of the pack gear for the hang glider, which was cool. The story there was the first time, I was traveling around Europe, I was young. There was a slope that you have to practice to run down before you take off. It had rained the day before and the guide slipped in the mud. He got covered in mud. I was literally thinking to myself, this is the guy I’m about to run off this cliff with. He apologized. He’s like, that’s never happened before. You’re in good hands. I was like, oh man, get me out of this now. I was very nervous because of that instance. Both experiences were really unique. If you ever have a chance to go to the southernmost tip of New Zealand, South Island, Milford Sound is beautiful. It’s these fjord mountains that are right up against the water.
The guides down there tell these tales of all the gods that have chiseled all the rocks. It’s kind of this fairy-tale, majestic place with the weather. It’s really, really cool. The hang gliding one, I’m glad I survived that. That was pretty wild.

I’m guessing that might have inspired you to think about experimentation, maybe testing. On the way down, that was when the origin story happened. If I don’t make it out of this, I have to launch this product. Experimentation and hang gliding go hand in hand. Thanks so much. Awesome. Before we dive in, we’ll keep this session pretty conversational, as Experience League Live is demo-driven. For our viewers, please feel free to add your questions in the chat. We’ll keep an eye on it and answer as many as we can. Just a quick reminder: if you do want to join the conversation and drop questions or comments during the session, make sure you’re signed into YouTube. You’ll need to be logged in to post in the live chat. We’d really love to hear from you, so just go ahead, sign into YouTube, and join the conversation. Let’s get started. Brent, why don’t you kick off the session by setting the scene? What’s driving this shift to AI-powered experimentation? How is it changing how teams think about testing and growth? And something that’s on a lot of people’s minds here, especially our Target customers: how does this evolution of experimentation impact Target and AJO? Very good questions. I think I just have a couple of framing slides here. First off, I wanted to mention some of the Summit announcements we’ve had. The Experience Platform Agent Orchestrator is a new platform that we’ve launched for our AI agents. So as we’ve gone to market, you can think of the Agent Orchestrator platform as this control plane that is monitoring and governing all these agents. Of the agents that we’ve launched within Experience Platform, we’re going to be talking about the Experimentation Agent today and how it surfaces within this new AI-first product. In terms of customer experience orchestration, Adobe is focused on content, data, and journeys. Growth experimentation is a core differentiator within a business right now. There’s a lot of emphasis on how teams can test and experiment with better content, an understanding of data for context, and then apply this across your experiences, across campaigns and journeys. And Sandra, you mentioned Adobe Target: within our B2C customer journeys portfolio, we wanted to make sure that as we’re launching these new agents focused on experimentation, we have this seamless workflow across Adobe Target and Adobe Journey Optimizer. So that’s a key point that I’ll add a little more context to. But this is kind of our market architecture. I’ve been getting a lot of questions around, hey, we’ve been doing a lot with AI, so how are some of these innovations segmented? I like to call out these three pillars. Across the different applications of Experience Cloud, we’ve seen a reimagination, with us launching things like content generation and a lot of the embedded AI that you can see on this left-hand side. Then there’s the conversational UI, which is Adobe’s AI Assistant, that surfaces in our applications’ user interfaces. It’s a conversational piece where our customers and users can now interact with these new agents that we’ve launched. So this second pillar is really focused on where a lot of these AI agents are interacting, querying different data sets, all through the AI systems. And then there’s this third pillar, where we’re launching new AI-first applications on top of Agent Orchestrator and on top of Adobe Experience Platform.
And those are a complete product experience, a separate UI that you get a lot of benefits from, in terms of how it surfaces some of these core capabilities that the Experimentation Agent is powering. So for today’s conversation, we are excited. We’re excited to have you here this fall. We’ve had a lot of conversations, really excited conversations, around, hey, we can finally see some of this unlocked potential for Adobe Experience Platform and all these new agents with our Target customer base. We’ve had Adobe Target in the market for 15-plus years now. And this new application, you can envision as an intelligence layer on top of the experiments that teams are running, which might be in Adobe Target, primarily focused on our web and mobile applications. And this new AI-first product also works with tests and experiments, content experimentation, in Adobe Journey Optimizer. So for folks that are interested in seeing this evolution, this new era of experimentation, we wanted to make sure that this was purpose-built in terms of our full experimentation stack at Adobe. And we’ll get into more product details in the demo and focus on these core capabilities. But this is really exciting. This isn’t a replacement story for Adobe Target. It’s really exciting to bring these core capabilities, these generative and agentic capabilities, to our customers where they are. And these two different applications in our B2C customer journeys portfolio are really tying together this better-together story for how folks can experiment and personalize for all their use cases. And we’re very excited to dive deeper into Journey Optimizer Experimentation Accelerator today. So I wanted to frame up the story and make sure that as customers are getting more understanding of some of the new agents that we’ve launched, they know the Experimentation Agent is powering the core capabilities within Journey Optimizer Experimentation Accelerator. So, very excited. There have been a lot of great conversations, and I hope to see some comments and engagement in today’s session, but I wanted to pass it back to Sandra in terms of taking us more into what this product does. Let’s focus on seeing the product in action, right? Absolutely. And I don’t want to talk a lot, I really want to see it. So Justin, can we dive in? Can you show us? Yeah, let’s do it. Okay. So I want to start off with two things. First, how do you get to this product? If you just have Target, it’s available in the product switcher here. And then if you also have Journey Optimizer, let me grab that one here, then I can get to it from the left nav as well in Journey Optimizer. So let’s dive in and look at it just a little bit. Justin, very quick question. Is this automatically available? I mean, it’s currently marked as beta. How do I get this as a customer? Yeah. So this is a paid product. As soon as you purchase the product, we’ll get it all set up for you and ready to go. What you’re seeing here is a list of experiments. And so this is, let me clear some of the filters here, this is a list of experiments from both Target and AJO. If I only have one of the products, then it would only be from one of the products. But here we can see, I’m bringing in automatically the experiments that I have in Adobe Target. And this does a couple of things for me.
It helps me understand how my program is going and just keep track of all of the experiments that are being run in my organization. We’ll get into how we look across those experiments in a little bit. But I do want to say that this doesn’t require AJO. If you only have Target, and that’s the only thing that you have from us, this will still work exactly as you see it here. I’ll go through a couple of examples, and they are exactly the same for Target and for AJO. So we’ve tried to keep the parity there really, really tight and really useful. So let’s dive into an experiment. I’m going to do a couple of quick filters here just to get down to one that I know is good. The ones I’m showing you here are actual experiments that have been run on Adobe.com. So if you’re trying to buy Photoshop or you’re trying to buy Acrobat or Lightroom or any of those kinds of things, these are some of the experiments that we use to improve them. I wanted to show you some real stuff today so that it was useful for you and you could see how it works. So I’m going to grab this one. This one finished a little while ago. Here I’m looking at an overview of the experiment. This UI is really meant for, say, a growth product manager or a growth product marketer, the person who wants to run the experiment, but not necessarily the technical persona that is in charge of setting it up, doing all the QA, all of that kind of stuff. So this is just a place where they can go in and view the experiment without you having to give them full access to Target or full access to AJO, which can be kind of useful. Let me start with the setup. I’ll come back to the top in a second. There are two things that we ask you to do when you create an experiment. The first is that you tell us what your hypothesis is. Don’t worry if you’ve already started the experiment; you can come in afterwards and add it. But we want you to tell us what you’re trying to test. That will help us generate better insights, better opportunities for you. We’ll get into those in a second. The second piece is that we want you to upload screenshots of what the actual end user sees for each of the treatments. We’ll try to go get these automatically. For a lot of the AJO stuff, we can pull them automatically, and we’ll try to go crawl the website and get them. A lot of times, though, it’s behind a login form or something like that. In those cases, you’ll have to provide just a screenshot of it. All you do is go into this review, and then you can replace the image that’s there. While you’re in here, you’ll also confirm that the images are what you expect them to be. Once those are confirmed, then we start a lot of the additional processing that happens. Just a question. When I’m pulling the screenshot, I should pull it from a proof, for example, that I’ve sent, so that I actually have the personalization in there, rather than, if I’m in AJO, from the, let’s say, email designer, or the designer if I have a placement somewhere. It should actually be what we’re going to deliver. Yeah, what the end user would see. In the case of AJO, with email, we often pull those automatically. There are only a few cases where we can’t do that. You shouldn’t have to worry about that too much. It’s mostly adding them if you’re doing experimentation on the web. A lot of times there are bot restrictions and other things that make it hard for us to crawl. One more follow-up question. Speaking of, you mentioned web, you mentioned email.
So this supports any experimentation that we’re doing in the system? Whichever channel we have that supports experimentation, we can use Experimentation Accelerator to report on it? Yeah, as long as it’s an experiment in AJO or in Adobe Target, you can do it in here. I think there was a question on AI automation. What we’ve done here is we’ve tried to automate everything from the test execution to the decision and to the planning of the next experiment.

You’ll see a bunch of things from our journey agent that will automate the setup of the experiment and the configuration of it. Beyond that, we’ll show you some things here in a second that allow us to automatically prioritize some things.

Let’s dive into some of these things here. Here I just have a quick summary of the experiment, the details. Back up at the top, we try to give you a quick view into what’s going on with the experiment, whether it’s reached statistical significance or not, and if you need to wait more, those kind of things. I can always go over here in the update experiment, and this will take me over into the experiment, whether that be in Target or in AJO. It’ll link me over there automatically.

Then we have the results table. Again, none of the stuff I showed you so far is new or novel. It’s just a friendlier way to view it so that you can give it to more people and they can monitor these experiments.

The results table just shows me the lift and the confidence and all that kind of stuff.

I have another one pulled up in front of me so that I can look into the camera and still talk. We have our results table here. Again, lift, confidence, those kind of things.

We did cheat a little bit on this one. We didn’t quite hit 95% confidence, but we fudged a few things to make the demo work really well.
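For readers who want a feel for the numbers behind a results table like this, here is a minimal sketch of how lift and confidence are often computed for a conversion-rate A/B test, using a pooled two-proportion z-test. This is an illustrative calculation only; the exact statistics used by Target, AJO, or the Accelerator reporting are not described in this session, and the visitor and conversion counts below are made up.

```python
from math import sqrt
from statistics import NormalDist

def lift_and_confidence(control_visitors, control_conversions,
                        treatment_visitors, treatment_conversions):
    """Relative lift and two-sided 'confidence' for a difference in conversion rates."""
    p_c = control_conversions / control_visitors
    p_t = treatment_conversions / treatment_visitors

    # Relative lift of the treatment over the control.
    lift = (p_t - p_c) / p_c

    # Pooled two-proportion z-test for the difference in conversion rates.
    p_pool = (control_conversions + treatment_conversions) / (control_visitors + treatment_visitors)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / treatment_visitors))
    z = (p_t - p_c) / se

    # Report confidence as 1 - p-value (two-sided), the way many testing UIs do.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return lift, 1 - p_value

# Made-up example: 10,000 visitors per arm, 500 vs. 560 conversions.
lift, confidence = lift_and_confidence(10_000, 500, 10_000, 560)
print(f"lift = {lift:.1%}, confidence = {confidence:.1%}")
```

With these made-up counts, the treatment shows about a 12% relative lift at roughly 94% confidence, the kind of just-under-95% result mentioned in the demo.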

Now I want to get into some of the AI capabilities. The first part is the experiment insights. When you hit statistical significance, we’ll go ahead and we will analyze all the content that you have run in the experiment in each of the treatments.

We automatically categorize that content based on a whole bunch of attributes. We’ve got 125 that we’re working with right now and we’re adding to those all the time.

These are attributes of the content that help describe what’s going on with it. David, you were pivotal in coming up with this strategy. The first question that I wanted to ask you on this is how do we go through and do that categorization? Is that something the customer needs to do? Is that something that the AI does? Yes, it’s something that the AI does. Basically, what we do is we use a representation of the content itself. In this case, let’s talk about text. We have text, and we put it into an AI-based representation. Then, for each of the categories, we have the name of the category and representative pieces of content associated with that attribute. Essentially, we look at the similarity in this representation space and we get a score for it. We do that automatically. I will say that I know in the future there may be the ability to add additional attributes, but it would basically use the same approach. The nice thing here is that if you want to know what an attribute means, we have a very short description of what it means, but it’s always based on easily interpretable things. It’s words and phrases in the case of text.
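As a rough illustration of the embedding-and-similarity idea David describes, here is a small sketch: embed the treatment text and a few representative phrases per attribute, then score each attribute by cosine similarity. The embed function is a toy stand-in for a real sentence-embedding model, and the attribute names and example phrases are invented for illustration; they are not the product’s actual model or its 125 attributes.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real text-embedding model (e.g., a sentence encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# Each attribute is described by a name and representative phrases (illustrative only).
ATTRIBUTES = {
    "social proof": ["join thousands of professionals", "trusted by teams worldwide"],
    "urgency": ["offer ends soon", "only a few spots left"],
    "commitment to consistency": ["make your voice heard", "collaborate with others"],
}

def score_attributes(treatment_text: str) -> dict[str, float]:
    """Score a treatment against each attribute via cosine similarity in embedding space."""
    t = embed(treatment_text)
    scores = {}
    for name, examples in ATTRIBUTES.items():
        sims = [float(t @ embed(e)) for e in examples]  # unit vectors, so dot product = cosine
        scores[name] = max(sims)  # take the closest representative phrase
    return scores

print(score_attributes("Join the 10,000 other professionals using Acrobat for efficient teamwork."))
```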

Let’s look at an example of it. Here, this first one, we’ll read through it a little bit.

We have some different attributes of the different treatments. In this case, the treatments have a lot of similar attributes and so they’re just weighted differently.

What we can see in this one is that commitment to consistency is one that helps quite a bit. It says the top treatment emphasizes phrases like “make your voice heard” or “collaborate with others,” which reinforce user engagement, while the lower-ranked treatment lacks that strong call to action, using more passive language like “share your ideas,” which fails to instill the same level of commitment to consistency.

Just real quickly, I want to call this out. One thing that I think is different here from what you’ll typically see: there’s a version of this where you put your results into your favorite ChatGPT, LLM, whatever, and say, hey, this did really well, why? It gives you some explanation, and maybe it accords with reality, maybe it doesn’t. We’ve taken a very different tack here and made sure that everything is very grounded to data. The reason we go to those attribute scores is because, underneath the hood, we actually do a lot of statistics and make sure that anything that we’re telling you can be backed up with data and is reproducible. If you look back at this insight in two months, you’re going to get the same explanation of what happened. The insights are going to be consistent. Also, those attributes are consistent across experiments, maybe even across months, years, quarters, whatever.

Justin, let me quickly ask you to address a question that we have. We have a couple of questions here. The collective asks, and I know Mia Prova, thank you for answering. Do we need to be showing results in the app to get this? In the version of Target we’re using, there’s a choice between sending the results to analytics and showing them in app.

Yeah, that’s a very good question.

For example, in Target, when you create an experiment, there’s an option to use either Target as your reporting source or Analytics as your reporting source, and then, even more recently, CJA as your reporting source. Whatever you select there, we will pull from that reporting source.

For example, if I select Analytics and then choose a metric, that’s what will show up here in the UI. We support A4T, we support CJA for Target, we support just Target only. We also support the AJO reporting, and then there’s also an integration with CJA for AJO if you use that as well. We tried to cover all of the bases. If you aren’t using one of those reporting pieces where we can automatically calculate the lift and confidence, and you’re doing very manual tracking on your own, we don’t yet support that. There are plans to allow some of those things to happen in the future. Perfect. So you do not have to keep the data in Target in order to be able to use this accelerator.

Okay, great.

I wanted to go back and talk about the insights really quickly. You can see here there’s a couple of different insights for this one. The experiments will not always have an insight because there may not be anything that you learn from this. Usually there’s between one and four insights that are generated. What we do is we keep these as a history or a database of the things that you’ve learned about your audiences.

We store these in a fairly novel way. The reason we do that is so that as we get into trying to figure out what to do next, we come across these opportunities. Think of the opportunities as either what additional treatment could I add to this experiment that might help it or what things should I consider for my next experiment. Let me dive into one of these. These are pretty simple.

What we’ll do here is this one features social proof. In the past, we’ve seen social proof work pretty well. It’s saying, hey, in this experiment, you didn’t use social proof, but it has proven fairly effective in other experiments that you’ve done.

As you run more and more experiments, that knowledge base builds up over time and becomes more and more helpful. In this case, social proof is just build trust with others based on their positive experiences.

We give you a couple of examples. In this case, there’s only one. It’s like, hey, join the 10,000 other professionals using Acrobat for efficient teamwork. These are just ideas. We’re not saying that this is exactly what you have to do. This just gives you some ideas. As soon as you have a treatment, one treatment in the experiment, we’ll start to generate these. If you haven’t used experimentation accelerator before or don’t have any experiments completed, we do have a model that we use globally to try to assess the effectiveness of the experiment and then make these recommendations.

We’ll use that first. As soon as you have any experiments, we will heavily weight towards that.

David, I maybe want to have you jump in here just a little bit and talk about what it is that the model does and how it comes up with some of these insights. Yeah, absolutely. I think it’s probably good just quickly to anchor on the difference between an insight and an opportunity. An insight is like, I ran the experiment, and I want to know what about the content that I had in my experiment was associated with higher outcomes. That’s why we really anchor on what’s happening inside the experiment. I like to call it the “this is what happened.”

Yeah, absolutely. Opportunities are sort of the “what next” kind of thing. The way that works is we actually look at historical experiments. We take those content representations that we talked about before for the insights and the attributes. We actually build a model that correlates those representations to the outcome. We do some additional adjustments to make sure the results aren’t biased. We take advantage of the fact that we’re in experiments. We make sure that we have terms that control for per-experiment effects, things that may just be weird about your specific experiment. Then when you go to run your new piece of content, we say, okay, here are the attributes you have, and here are the things that we’ve noticed work really well historically. We find the differences between those, and essentially say, these are the attributes that we’ve noticed work really well historically, but they’re relatively low in your current piece of content, so maybe consider including them. David, can I ask you something? I’m pretty sure it’s a question that a lot of our viewers have. You said you’re looking at historical experiments. Is this only experiments that your company has run? Or do you take other, more commonly available data into consideration? If I’m getting started and I run my first experiment, is it only my company’s data? Which is great, but then there aren’t many experiments yet. That’s right. It’s like the evergreen cold start problem, right? I don’t know what to run if I’ve never run anything. It’s a great question. We obviously thought about this a lot. What we do in that case is actually warm start off of publicly available historical experimental data. We build that model. It’s the same methodology, except it’s using publicly available historical experimental data. Then once you have a large repository of your experiments, we can use the model on your data. Obviously, the more experiments you run, the more catered and customized it is to your setting and use case. But we actually found that even just using the cold start data and comparing to some of the experiments that we’ve run internally, it correlates pretty strongly with results that we’ve seen, despite the fact that it’s completely different data.
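Here is a heavily simplified sketch of the modeling idea David outlines: relate attribute scores to outcomes across historical experiments, include per-experiment terms so experiment-specific quirks are controlled for, then surface attributes that look strong historically but are weak in the new treatment. The plain least-squares fixed-effects regression and the toy data below are illustrative stand-ins, not Adobe’s actual model.

```python
import numpy as np

def fit_attribute_effects(X_attr, experiment_ids, outcomes):
    """
    Least-squares fit of outcome ~ attribute scores + per-experiment intercepts.
    The per-experiment dummy columns act as crude controls for experiment-level quirks.
    """
    experiment_ids = np.asarray(experiment_ids)
    dummies = np.stack([(experiment_ids == e).astype(float)
                        for e in np.unique(experiment_ids)], axis=1)
    X = np.hstack([X_attr, dummies])
    coef, *_ = np.linalg.lstsq(X, outcomes, rcond=None)
    return coef[: X_attr.shape[1]]  # keep only the attribute effects

def suggest_opportunities(attr_names, effects, new_treatment_scores, top_k=3):
    """Rank attributes that historically help but are weak in the new treatment."""
    gaps = effects * (1.0 - new_treatment_scores)  # large effect x low current score
    order = np.argsort(gaps)[::-1]
    return [(attr_names[i], float(gaps[i])) for i in order[:top_k]]

# Toy data: 3 attribute scores for 6 historical treatments across 3 experiments.
attr_names = ["social proof", "urgency", "commitment to consistency"]
X_attr = np.array([[0.9, 0.1, 0.2], [0.2, 0.8, 0.1], [0.7, 0.3, 0.9],
                   [0.1, 0.2, 0.8], [0.8, 0.1, 0.1], [0.3, 0.9, 0.2]])
experiment_ids = [0, 0, 1, 1, 2, 2]
outcomes = np.array([0.056, 0.041, 0.062, 0.050, 0.048, 0.039])  # conversion rates

effects = fit_attribute_effects(X_attr, experiment_ids, outcomes)
print(suggest_opportunities(attr_names, effects, new_treatment_scores=np.array([0.2, 0.5, 0.3])))
```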

Sandra, can I add to that piece real quick too? Of course. One of the exciting pieces, I think, of Adobe’s broader agentic AI strategy is, because of the orchestrator, some of the releases that we’ll be rolling out for the agent composer. There are a lot of conversations right now with customers around, hey, we would want to extend some of these new models, these capabilities, into maybe some custom agents that they’ve already built, or customize a little bit more of these capabilities that we’re talking about in the product.

That Agent Orchestrator platform is an important piece of how Adobe’s going to market with these out-of-the-box models, capabilities, and new applications. Then there’s more to come on developer tooling and extensibility to customize to the business, to those use cases. We’ve had a lot of conversations with customers who are excited about that. There’s a two-piece model here where you get all of the enriched, automated AI that’s surfacing in these insights and these new test ideas. Then you can also think about how, from a business context and more of an infrastructure and architecture perspective, there are a lot of companies that are already using some facet of AI and they want to build that in or extend that. Wanted to make that point as well.

Great. This is so exciting to be honest.

Justin, one thing that’s in my mind and you might be mentioning it, but looking at the opportunity now, I mean, obviously the input that we’re getting for the opportunity details is amazing. Now I’m getting these three opportunities and I would like to apply them.

Is there functionality yet, or is there a plan to have a button, just thinking of the integration with AJO, to click, okay, please apply this? Or, I see there’s a copy button, do I copy the text and then have to do it manually right now? What’s going on there? Yeah. First off, I want to state that we firmly believe, even with the current state of AI and everything like that, that whenever you’re dealing with the experience you offer to your customers, there should be a human in the loop. What we’re trying to do, and you’ll see us build this out over the course of the next few releases, is provide you everything that you would need to make a creative brief. This is just a really simple example, but we can get more complicated. The idea is that you can hand that to a designer, you can hand that to an AI, and they can come up with some alternatives. You could select the alternative, and then we allow you to open up the experiment and it’ll flip over to the tool.

That leads me into another direction that we’re going. I’m going to show you our multi-armed bandit that we released as part of this as well.

That’s the first step in a three or four step process to enable this concept of always on experiments. What we’re doing is allowing you to set up an experiment, say on a homepage banner, and then you can continually be adding content in of good ideas that you have. Then it will go through, figure out which ones are the good ones and which ones are the bad ones, and then recommend disabling those. We’ve done some interesting work with some professors at Berkeley that allow us to add treatments and still maintain the statistical power that you’ve gotten from the data you’ve run already. Pretty novel stuff. There’ll be more about that at a later date on how that works. We’ll make sure that it’s easy to understand and all that kind of stuff. Let me show you the first step.

I bring this in just because I wanted to switch to a different organization. Probably not so great if I’m demoing how to do this. I think we’re getting in trouble. Yeah, I’d probably get in real big trouble.

Before you continue, we have another question. The collective is asking, how about UX UI best practice opportunities? Can the AI suggest experiments based on any potential pain points based on best practices? Right now, we’ve got text out. In the next couple of months, we’re going to release images and layouts. The idea is that we’ll give you feedback on how is the composition of this image, how is this laid out well, those kind of things. We are looking at some more generic ones where you can just come in and say, hey, you know what, I’m looking for a test idea. I don’t know where to start. Here’s my website. Tell me what that is. Those are some of the things that we’re working on right now.

Interestingly, one of the activities that we’re doing to help train our layout model is we’re interviewing a whole bunch of designers and getting their feedback on different layouts and things like that. That will get fed into part of the model. Part of it will be based on results and whatnot. Those are some interesting concepts that we’re working with and should have available in the next couple of months.

Great question.

Here, I’ve just got a simple journey. I’m reading an audience and then I want to run an experiment.

This is the same for all the different experiments that you can run in AJO.

I’m going to show you the new optimized node. This is in a limited release and will be released in January.

I thought it would be interesting.

This optimized node is very similar to the split node that we had in the past.

What it does is it allows us to run experiments. All of this is just part of AJO. I haven’t gone into anything special yet. When I create the experiment, I can choose the metric that I’m interested in. I’m really interested in how many purchases there were. I can add the number of treatments. This is the number of splits in the journey. Do I have two arms? Do I have three arms? In this case, we’ll do three.

The part that’s new, the part that’s part of Experimentation Accelerator, is that I can then run a multi-armed bandit.

The multi-armed bandit uses newer technology like Thompson sampling.
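For context on what Thompson sampling does inside a multi-armed bandit, here is a minimal, generic sketch for a binary conversion metric: keep a Beta posterior per treatment, sample a plausible conversion rate from each posterior, and send the next visitor to the arm with the highest sample. This illustrates the general technique only, not the specific algorithm, priors, or guardrails shipped in the product.

```python
import random

class ThompsonSamplingBandit:
    """Beta-Bernoulli Thompson sampling over a set of treatments (arms)."""

    def __init__(self, n_arms: int):
        # Beta(1, 1) priors: one (successes, failures) pair per arm.
        self.alpha = [1] * n_arms
        self.beta = [1] * n_arms

    def choose_arm(self) -> int:
        # Sample a plausible conversion rate per arm and pick the best sample.
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def update(self, arm: int, converted: bool) -> None:
        # Fold the observed outcome into that arm's posterior.
        if converted:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

# Toy simulation with three treatments and hidden "true" conversion rates.
true_rates = [0.05, 0.06, 0.045]
bandit = ThompsonSamplingBandit(n_arms=3)
for _ in range(20_000):
    arm = bandit.choose_arm()
    bandit.update(arm, random.random() < true_rates[arm])
print("traffic per arm:", [a + b - 2 for a, b in zip(bandit.alpha, bandit.beta)])
```

Over time the bandit shifts most traffic toward the better-performing arms while still occasionally exploring the others.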

When I do this, now I can build up to three different journeys. That’s helpful because I can do an email, a pause of a day, and then another email. I can do the same email, a pause of two days, and then another email.

Those kinds of experiments are now possible.

I could also do an experiment and say, okay, which channel works better? Yes, although you want to be careful there because there’s a lot of bias that’s inherent in the channel.

Usually what you’ll do is you’ll split it out by channel and then you’ll run three different experiments, one with a control and one with a variant. Then you can start to compare the experiments.

It makes sense. It’s definitely possible. We let you do it in the tool. Just remember when you’re doing it that there’s some bias in desktop versus mobile versus an email.

Hey, Justin, one call-out to you. I wanted to make sure, for the audience, especially Target customers: the multi-armed bandit is new to AJO testing, but this same flow and these capabilities work within Adobe Target, where there are a lot of rich experimentation-type activities within the VEC. We’re showing AJO, but it also works seamlessly with Adobe Target.

Adobe Target has Auto-Allocate, which is a multi-armed bandit. It’s fantastic.

There’s one other option that I haven’t shown here. It’s one of those things that if you’re interested, you can just ask us and we’ll enable it for you. There’s the ability to bring your own multi-armed bandit. What we’ll do, the setup’s all exactly the same, except we expose an API for you to set the weights. If you have a custom algorithm that you want to run, you can run the algorithm, set the weights, and then we’ll adjust all of the rules and all that kind of stuff for that. That can be helpful if you’re playing around with different algorithms and you have some additional context about your business that you want to incorporate into it that may not be available to Adobe’s suite of solutions. That can be a useful thing. Say if it’s something you’re interested in, just reach out to your account manager and tell them Justin sent you and we’ll make sure you get hooked up with that.
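As a thought experiment for the algorithm side of “bring your own multi-armed bandit,” here is a sketch that turns observed results into traffic-allocation weights, estimated as each arm’s probability of being the best. The weight-setting API itself isn’t shown because its endpoints and payload aren’t described in this session; a custom algorithm would compute weights like these and then push them through whatever API is exposed for that purpose.

```python
import random

def thompson_weights(successes, failures, n_samples=10_000):
    """
    Estimate traffic weights as each arm's probability of being best, using
    Monte Carlo draws from Beta posteriors. A custom algorithm could replace
    this with any business-specific logic before setting the weights.
    """
    n_arms = len(successes)
    wins = [0] * n_arms
    for _ in range(n_samples):
        draws = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
        wins[draws.index(max(draws))] += 1
    return [w / n_samples for w in wins]

# Example: observed conversions and non-conversions for three treatments so far.
weights = thompson_weights(successes=[50, 62, 48], failures=[950, 938, 952])
print([round(w, 3) for w in weights])  # most traffic goes to the likely winner
```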

We’re just keeping it behind a feature flag for a little bit while we monitor and make sure it works, make sure the documentation’s really good, all that stuff. Anybody that has experimentation accelerator can use it.

Okay. We’ve covered a lot. I want to go back here really quickly. We’ll just talk a little bit about how the data flows through this.

I think this will just be informative as we go through it. Just a quick review of all the things that we’ve done so far. The first thing that you do is create an experiment. It can be in Target, it can be in AJO.

What will happen is we will actually make sure all of that information gets stored in this experimentation service that just holds all the metadata for these experiments. Justin, I want to make sure you’re sharing. I don’t know if it’s on the screen.

Are you sharing the data flow right now versus AJO? Yeah, I’m in the data flow. Oh, no, we’re not seeing that. Okay, that’s not good.

Let’s just do this really quickly.

Thanks, Brent.

Yeah, okay, here we go. That should be better.

Okay, give us a second. There we go. Fantastic.

All right. Yeah. Okay, so once an experiment’s in the experimentation service and it’s live, then what we’ll do is we’ll go out and try to grab the screenshot. We will go onto the website. Email, push, SMS, those are all pretty easy for us to get. We’ll try to get it on the website; sometimes it works, sometimes it doesn’t. Generally, what we’re asking for is just the full-page screenshot. You don’t need to zoom in on a particular treatment. We want to see the context that it’s deployed in, because that can help us as we’re learning. Then once that’s there and confirmed, we’ll go in and pull the reporting data, pull it from Target, pull it from AJO, pull it from A4T, from CJA for Target, and then CJA. Lots of options there. Hopefully, that covers most of the bases.

Then you’ll go in and just make sure that all of the treatments look good.

We’ll give you that kind of UI where you can go and replace the images or confirm that they’re good. Once that’s done, then we can generate the opportunities. As long as you have at least one treatment in the experiment, we’ll generate opportunities for you. Then after that, obviously, you wait until you hit statistical significance. Life is grand and great, you’ve had this amazing experiment, and then the insights get generated for it. That’s how the flow of these different things works. I thought that’d be helpful to go through just from an informational perspective.
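To recap the flow Justin walks through, here is a compact, illustrative sketch of the experiment lifecycle as a simple linear state progression. The stage names are paraphrased from the walkthrough; they are not an actual API, schema, or exact ordering guarantee.

```python
from enum import Enum, auto

class ExperimentStage(Enum):
    CREATED = auto()               # experiment created in Target or AJO
    METADATA_STORED = auto()       # synced into the experimentation service
    SCREENSHOTS_CAPTURED = auto()  # pulled automatically or uploaded manually
    TREATMENTS_CONFIRMED = auto()  # user verifies each treatment image
    REPORTING_CONNECTED = auto()   # results pulled from Target/AJO/A4T/CJA
    OPPORTUNITIES_READY = auto()   # generated once at least one treatment exists
    SIGNIFICANCE_REACHED = auto()  # experiment hits statistical significance
    INSIGHTS_GENERATED = auto()    # AI insights produced from the results

def next_stage(stage: ExperimentStage) -> ExperimentStage:
    """Advance to the next stage in this simplified, linear lifecycle."""
    members = list(ExperimentStage)
    i = members.index(stage)
    return members[min(i + 1, len(members) - 1)]

stage = ExperimentStage.CREATED
while stage is not ExperimentStage.INSIGHTS_GENERATED:
    stage = next_stage(stage)
    print(stage.name)
```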

That is the content that I have. I believe I’m going to hand it back to Brie. I have one more question. You did mention earlier, I think Brent, you mentioned that the AI assistant can also surface the accelerator data. Can you show that? Is that something? I know you’re not in the same work there.

All of this is available in the AI assistant.

I can ask things like what experiments finished in the last or actually here, let me do this one. I don’t know if we have it.

What experiments were created in the last week?

That’s awesome.

As this is going through, what it will let you do is it gives you all of the steps that it’s taking. You’ve seen this in other AI tools, but it can be helpful to see exactly what’s going on in the AI.

Here, it gives us the list of experiments that we have. I could come into here.

Let’s go grab our experiment here.

I’ll just grab it and get it right here.

While Justin’s doing that, on this conversational interface with the AI Assistant, a good question that I’ve gotten from customers is, based off what applications you have licensed, say if you do purchase this product, you have access to the Experimentation Agent.

Based off your prompt, these agents are reasoning, querying different data, sharing different data sets, and information with each other. Then you’d have this AI assistant interface as the front-facing door to all the agents in some of these tasks. That’s the strategy right around the AI assistant. We’ve launched several different agents. You can interact with these core capabilities in both the experimentation of the product and then how Justin’s shown the AI assistant here for interacting with the experimentation agent.

I would go into the experimentation agent, sorry, the accelerator, if I want to do in-depth analysis. If I want a quick overview, say I have to report to my boss or to the team, I could do that very quickly. There’s a question, hey, what happened with this experiment? I can go in and quickly ask the question and get some data out of there without necessarily- Exactly. You can also ask questions like, what did we learn from this experiment? It’ll go through and summarize that for you. Or, what should we try next? One of my favorite ones to ask is, did this validate my hypothesis? It gives you an analysis of whether what you actually tested really tests the hypothesis, or whether it’s maybe something a little bit different. Lots of different things that you can do. This is just a quick look at how this works. We don’t have enough time to go into this in detail, but it is also available there. I think we could spend another hour just going into detail on all the- One more point to make too is we’ve been doing some launch events for Experimentation Accelerator. Actually, when we went through some of the demos, one of the customers who was on the AI side was building out their MarTech platform.

I think one of the most exciting things these new agentic or generative capabilities bring for experimentation teams, whether you’re on the analyst side running deeper analysis in Analysis Workspace in CJA, or maybe you’re a UI/UX specialist and you don’t have a lot of background in stats and experimentation, is that you now have more contextual information from these new capabilities to inform decisions. For a lot of customers in the past, there’s a lot of information, a lot of data that they’ve had. Now, this new unlock with these agentic capabilities is bringing a lot of reasoning, logic, and context to help inform those decisions faster. So I’m excited that there’s a lot of potential for experimentation in general across our customers, and we’re seeing some of these trends now surface within these new features. But that’s definitely, I think, this next era of how generative and agentic capabilities from an AI perspective are helping decision-making and the strategic prioritization around these experimentation programs, not just faster testing or making sure you’re increasing the number of tests per quarter. There are a lot more strategic-level insights that you’re getting from the experimentation overall. So I just wanted to call that out. It’s really been great to get the feedback and response from customers as we’re doing in-person launch tours, as well as virtual sessions like this.

Yeah, I’m very excited about what you can do now and what the opportunities are. David, I think this is just the start, right? You’re the scientist here.

Yeah. Honestly, I think this is just scratching the surface. I’m super excited that this is out, and I’m really excited for folks to start using it. But as you look forward, I think you can imagine a world where this really allows us to put experimentation much tighter in the loop and, I think, democratize it across organizations. Because right now, when you go to run an experiment, a lot of the time it takes so much bespoke setup, the analysis takes a lot of work, and there are just time lags in between. And then afterwards, disseminating that information can be difficult. I think one of the things I’m so excited about here is leveraging the AI, leveraging the statistics. Part of that is that, going to one of the user questions, we can bake in a lot of those best practices and what I call experimental hygiene into the flow really naturally. So if you’re someone who’s experimentally minded but not necessarily a statistician, it’s accessible. And I think it makes it so every time you run an experiment, we can help you learn, and you can imagine a world where you ask the AI, hey, tell me about the experiments that we’ve been running over the last year. What have we learned? And so then you can talk about the trends of what’s working and what’s not, which in some ways sounds obvious, but in other ways I think is really powerful. Because right now when you do that, someone has to spend a week, two weeks, whatever, just going back through all the PowerPoint decks they made over the last six months. Great point. It’s been surprising too. Some organizations might not have a backlog of hypotheses that they know they want to test. There are many different variations we see of insights being stored in PowerPoint decks, wikis, Jira tickets. I mean, Dave, it’s a great, great point, where you have a lot of this centralization to extend and democratize the value of your experimentation program holistically. So I love that point.

Yeah. I mean, this is truly bringing us into the next era of experimentation. I mean, this is accelerating us.

That’s unbelievable. I’m very excited. I’m very excited. I’m very excited to see where the future is going and everything else that you guys are going to, David and Justin, you’re going to develop and yeah, this is just the beginning for sure.

So we have one more segment in our show.

The unrelated cool tip, which Brent is bringing us today, but I do have the feeling there was a bit of experimentation behind that as well.

You know, what’s ironic is there were probably a lot of failed tests when I’ve been on the road traveling for some of these events. And yeah, so the cool tip is, you know, there are different situations where you want to press or steam a dress shirt and you don’t have access to a steamer or an iron. If you throw a couple of ice cubes in a dryer with your clothes and you run it for five, ten minutes, it actually helps get some of those wrinkles out. So just a couple of ice cubes; you don’t want to overload your dryer with a whole tray. I just learned this this year. I’ve tried it out a few times and it’s worked to the degree where I can grab my shirt and go. Whether it’s between travels or you’re at your in-laws’ or whatever your situation is on the road, it has been helpful.

So that was the unrelated cool tip for today. That’s amazing. I’m definitely going to try that.

Your shirt looks amazing. That’s right. Yeah, exactly.

So that brings us to the end of today’s Experience League Live. A huge thank you to Brent, Justin, David, to the three of you for joining and sharing all these incredible insights. And also to Doug Moore, our producer who’s running the show in the background and responsible for the amazing sound effects as well.

And of course, thanks to all of you for tuning in, engaging in the chat, and exploring what’s next for experimentation with us. If you missed part of the session, or if you want to share it with your team, the full recording will be posted on Experience League so you can catch up any time. So check back in a day or two and it will be available. And we’re not done yet; we’d actually love to keep the conversation going. So join us for the upcoming Ask Me Anything session in the Experience League Community Forum on November 12th. It’s a live chat with Journey Optimizer’s product team where you can dive deeper into everything we discussed today, as well as other topics around our agents.

We’ve posted the link; it should be in the thread. And you can start posting your questions in the AMA thread by clicking on the link that you see in the chat.

We’ll be answering those questions live during the session. And yeah, you’ll find all the details in the chat. You can sign up for it or just join us during the session. And if today’s discussion sparked your curiosity about how AI is reshaping experimentation, be sure to check out the latest episode of the Conversion podcast featuring David, Brent and the Conversion team. And you’ll find the link in the chat as well. It’s definitely worth listening to. Very, very informative and fun. So thanks again for joining us for the next era of experimentation.

I hope you learned how agentic AI is fueling smarter testing and growth. Looking forward to what’s coming up next in this area. And until next time, stay curious, keep experimenting, and we’ll see you in the Experience League community. Bye, everyone.

Join us for the Adobe Journey Optimizer Community Ask Me Anything! on Wednesday, November 12th from 8am - 9am PT. We’ll be joined by Adobe Journey Optimizer experts: Cole Connelly (@coleconnelly) - Sr Product Manager, Huong Vu (@HuongVu) - Product Marketing Manager, Namita Krishnan (@Namita_Krishnan) - Product Manager, Brent Kostak (@bkostak) - Sr Product Marketing Manager, David Arbour (@user03474) - Sr Research Scientist, Justin Grover (@justin_grover) - Principal Product Manager, Sandra Hausmann (@SHausmann) - Sr Technical Marketing Engineer and Daniel Wright (@dwright) - Sr Technical Marketing Engineer.

We’ll be answering your questions during this live chat.

Additional resources
