Agents Among Us: Embedded Intelligence Shift

Discover how artificial intelligence is transitioning from concept to capability within Adobe Experience Cloud. Understand the evolution of embedded intelligence and explore early applications that enhance analytics, content, and customer engagement. Learn how organizations can prepare their data, teams, and governance to adopt these capabilities responsibly and effectively.

Transcript

Good morning. Good morning.

How’s everybody doing this morning? It’s almost time to take off for the holiday season, but we’ve got a couple of good webinars still planned for you guys. We’ve got one today, Agents Among Us, with Christos presenting that one. And then we do have one more tomorrow, December 18th, Modernizing Your Digital Foundation, the strategic value of moving to AEM as a cloud service. So a couple of great webinars still left before we all break for the holidays. Agents Among Us, the quiet shift towards embedded intelligence. That’s a great topic. I’m looking forward to what Christos has prepared for us today. I will turn it over to Christos to kick it off, but watch the chat and I will drop a link as well for that webinar for tomorrow as well. So excited to hear what Christos has to say, so I’ll pass it this way.

Thank you, John. And thank you, everyone, for taking the time to join here today in a very busy week, I imagine.

Agents Among Us, the quiet shift towards embedded intelligence. Welcome to this session.

I think this is going to be a really interesting topic, a timely topic, and hopefully relevant to practitioners, to leaders within organizations alike, as well as non-practitioners and non-analysts. So without further ado, let’s get into the agenda of what we’ll be covering as a part of our conversation today. So we’re going to start out very, very simple. We’re going to align on what agentic AI means, what AI means in very practical terms.

We’re going to demystify a little bit of the confusion that may exist within some of the terminology that is thrown around in the industry, outside of the industry, and hopefully you’ll leave with a little bit of a better understanding of that.

From there, we’ll look at how embedded intelligence is implemented in Experience Cloud and into the platform, how it shows up, what are some examples of agentic interfaces and capabilities that you as a practitioner can begin using today.

And then lastly, which is really the meat of the conversation, is readiness and how you as a practitioner or a leader within an organization can begin to think about what needs to be true before expanding agentic responsibilities and agentizing processes. This is a really interesting topic. It blends a combination of governance, of organizational structure, process, all into this concept that, well, we have the access to these agents, but how do we begin actually getting real value? And there are some really helpful blueprints for how we can actually go about doing that in that third section. So sit tight as we begin our conversation around the foundations.

So again, before we get into the tooling or use cases, I think it’s helpful to be precise about definitions, as well as some of the boundaries between some of the terminology that is being used.

So when thinking about the evolution of AI, we want to level set here. First we got predictive AI, which has been around, and organizations have been using it quite thoroughly over, I don't know, the last 10 to 15 years in some capacities. And that's really where AI is learning from your historical data to help classify, score, and predict outcomes. It's how we got to things like churn predictions, lookalike modeling, or forecasting.

Now, this is valuable data, valuable perspective, and valuable insights, but it's fundamentally reactive in nature. It tells you what is likely to happen, utilizing that historical data as its only source. From there, we have the advent of generative AI, which showed up and added what I would say is creation into the mix. And generative AI refers to the systems that are creating specific outputs in various media, whether that's text or images.

They're extremely powerful and continuing to grow in their abilities, but are generally completing single tasks in isolation.

Now, agentic AI, on the other hand, is leveraging those generative systems within larger workflows. So generative and agentic AI are closely related, but they're not the same. These agentic systems can handle multi-step projects. They can make decisions, respond to changing environments or goals, and even collaborate with other agents. And you'll find this to be the case with the agents that we talk about in Experience Cloud as well. If you're going to remember one thing from this slide, it's a quote that I heard: if generative AI is like a great playlist that you have curated for a dinner party, super thoughtful, sophisticated, but fixed, and that playlist is what's going to play all night, then agentic AI is like hiring a group of live musicians who can read the room, take requests, and adjust to the dynamic nature of the room on the fly. I think it's also an important distinction that agentic doesn't automatically mean autonomous. And again, that's another thing that I think people are overwhelmed by. This gets back to the theme of our conversation today: this is really a quieter shift toward this intelligence being embedded into your existing workflows, into where work is already happening.

Right, so now I want to cut through some of the Adobe jargon here. So those were some of our larger industry words that are being used and labels.

A lot of these terms are incorrectly used interchangeably, and they're not necessarily the same thing. So at the top, we have agentic AI, which we've discussed on our previous slide. This is a broad concept, but it's about the systems that can understand intent, reason about your goals, create plans, take actions, and have guardrails and oversight as part of the process. Next is our Adobe Experience Platform Agent Orchestrator. You're going to hear this term used a lot in the upcoming slides, but this is the foundation that makes agentic capabilities real inside of Adobe. It's the layer within Platform that coordinates specialized agents, also called Experience Platform Agents, and it manages those agents using reasoning and memory and orchestrates those actions across Adobe products, as well as potentially third-party systems. It's really sort of the control plane for the agents within your instance of Platform.

So taking it a step further into our third category here is AI Assistant. This is actually what most people will interact with day to day from an agentic and gen AI lens. This is the conversational interface that is more and more becoming embedded across Adobe applications. So we don't go directly to agents. You work through this AI Assistant, which you prompt, and it will interpret your intent and your context and will route you, using the orchestrator, to the right underlying capabilities while keeping you in control step by step. So I like to create that distinction between AI Assistant and the orchestrator, the orchestrator being that middle layer and the AI Assistant being the front pane of glass that you are interfacing with. From there, we have our Experience Platform Agents. These are commonly referred to as our specialized agents, but these are the purpose-built, domain-specific agents, things like Audience Agent and Data Insights Agent. These are designed to perform very specific skills inside your Adobe applications. So this is really where the agentic behavior is most evident.

And we’ll talk a little bit more about some of those examples and what type of unlocks that offers.

I did mention the term skills. So skills are within the realm of these Experience Platform Agents, and new skills are being added to these agents, it seems, very often, continuing to grow that list of skills, and we'll get into more details there. Now, the last two on this list are AI-first applications and Brand Concierge. We're not going to spend too much time on these today because we're going to really focus more on the products within Platform that are most pervasive: CJA, AJO, CDP.

But I wanted them on this slide here because you may hear them referenced in other conversations. They are examples of how agentic capabilities can get packaged into separate applications or, in the case of Brand Concierge, a customer-facing experience. If you haven't spent any time discovering or learning about Brand Concierge, it's a really interesting and cool way for you to create compelling customer experiences.

Ultimately, the takeaway here is these aren’t competing ideas. They’re essentially layers of the same system from the orchestration side of things to our specialized agents with their skills to that single interface that makes this usable for practitioners and non practitioners alike.
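To make those layers a little more concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: one assistant interface, an orchestrator acting as the control plane, and purpose-built agents behind it. Every class, method, and intent name here is hypothetical, a teaching sketch of the architecture, not actual Adobe Experience Platform APIs.

```python
# Hypothetical sketch of the layered pattern: a single assistant interface
# routes user intent through an orchestrator to specialized agents.
# All names are illustrative, not real Adobe APIs.

class AudienceAgent:
    """Purpose-built agent for audience questions."""
    def handle(self, request):
        return f"audience result for: {request}"

class DataInsightsAgent:
    """Purpose-built agent for analytics questions."""
    def handle(self, request):
        return f"insight result for: {request}"

class Orchestrator:
    """Control plane: selects the right specialized agent for an intent."""
    def __init__(self):
        self.agents = {
            "audience": AudienceAgent(),
            "insights": DataInsightsAgent(),
        }

    def route(self, intent, request):
        agent = self.agents.get(intent)
        if agent is None:
            raise ValueError(f"no agent registered for intent: {intent}")
        return agent.handle(request)

class AIAssistant:
    """Single pane of glass: the user prompts here, never an agent directly."""
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator

    def ask(self, prompt):
        # A naive keyword check stands in for real intent interpretation.
        intent = "audience" if "audience" in prompt else "insights"
        return self.orchestrator.route(intent, prompt)

assistant = AIAssistant(Orchestrator())
print(assistant.ask("trend orders for last week"))
```

The point of the sketch is only the shape: the user expresses intent in one place, and routing to the right capability happens underneath, which is the layering described on this slide.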

So when thinking about the benefits of these agents and the agentic experiences, it's quite simple, and maybe self-explanatory, but as a practitioner you're spending less time manually assembling context, there are fewer handoffs between systems, and the path from insight to action becomes shorter.

I will say that none of these benefits replace human judgment; they simply reduce the friction around applying human judgment. And they also provide democratization and empowerment to users who are outside the traditional power-user persona.

This is by no means a comprehensive benefit slide here, but I think some things to anchor ourselves on. And I would say also that the benefits today tend to be incremental and small, but they are meaningful first steps in realization of value from these agentic experiences.

So this slide here is to set some context. We’re not going to dwell too much on what this trend line represents, but we can probably all attest that AI hype is sky high. It’s mentioned all over the place in podcasts and the news and vendor conversations.

And we're clearly past this over here on the left, the technology trigger. Generative AI normalized conversational interactions, and agents are just emerging around this peak of inflated expectations.

What matters for you as a practitioner isn’t predicting where this curve is going to go next. I don’t think anyone really knows, but it’s recognizing that we’re moving from experimentation toward evaluation. And that’s the mindset shift I want you to take into account for the rest of this session and following this session.

We know the word agent is being used very loosely, and I will say in practice agents vary by scope, by authority, and by the risk profile that they have. I think it's important to know that agents very much operate on a spectrum, and that variation, I would say, is healthy. So while it adds some confusion around labeling things as agentic or not, I like to think of it as really good technology that is operationalizing things that were traditionally cumbersome and difficult to do.

And it’s a way for teams to apply intelligence where it’s most useful without overextending its responsibilities.

The bottom line here is just don't get lost in the hype. We want to balance risk and return by starting small, by focusing on existing tactics and processes you already do, getting your initial use cases right, and ultimately socializing those successes and the key lessons from the initial implementation and utilization of agents.

So I want to close this section by grounding us again here. One framing I find helpful is to think of agentic AI as an embedded capability rather than a far-off destination or future state that we're all marching toward.

Instead of really going somewhere to use AI, intelligence is present inside analytics and inside CJA, inside journeys and data operations and content workflows. And that’s where the real impact begins to show up in your day-to-day.

So with those foundations in place, we can look at how this is implemented in practice within AEP. We talked a bit about what Experience Platform Agent Orchestrator is. Simply put, it's the coordination layer. It manages how agents reason, how they share context, and how they execute actions within the defined constraints of your instance.

The role of it is to support these multi-step workflows while keeping the execution of those workflows grounded in governed, auditable steps. And this becomes increasingly important as responsibility of these agents begins to expand or you add additional capabilities within your stack.

So this slide is meant to show how agentic capability is actually grounded within AEP, not as its own standalone thing, but as something that's embedded within these workflows. So starting across the top, you'll see the three domains where agents are showing up most commonly today. And I think it's important for everyone to know what and where these agents are visible. We'll start over on the left, where we've got content, commerce, and workflows. This is where agents are helping teams operate at scale, supporting content creation, adaptation, and optimization while staying aligned to brand guidelines and governance. So this is part of overarching operational workflows. We're not going to get into these workflows too much today, but some of the concepts around readiness also apply to this area.

In the center, we have data insights and audiences. So this is where some of these agentic capabilities become especially practical. So agents can help surface insights, interpret performance, support audience decisions, utilizing your governance frameworks, your first party data, all grounded in the same models that your day to day practitioners are already relying on.

And then over on the right, we've got our customer journeys. So here agents are assisting with the design, the orchestration, and the optimization of journeys, from decisioning all the way out through execution. And then within the middle, we have our Experience Platform Agents. These are the purpose-built, domain-specific agents like our Audience Agent and Data Insights Agent, which we'll talk about. You see the Experimentation Agent and our Workflow Optimization Agent, and this list of agents is continuing to grow. So this is maybe a snapshot in time of what was GA in September, but we are announcing and releasing new agents. And these new agents, again, will function similarly: there are existing workflows in place, and the question is how these agents can be embedded in those workflows to help you as a practitioner get more value out of your Adobe investment, make you more efficient, and ultimately allow for more value from what you're trying to do as part of your role, in support of overarching business goals.

So I flipped forward one slide here, but an important last piece to call out is that Agent Orchestrator, which we've talked about at length, sits on top of Experience Platform. So all of your governance, data, identity, and permissions, the things that you're managing today from an AEP standpoint, that is inherited in the agentic experiences. So this is really what allows these capabilities to scale responsibly, as all of those existing access control policies are honored, and that's an important distinction and callout that I want to make sure everyone is aware of as well.
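The inheritance idea can be sketched in a few lines: an agent's action is gated by the same access controls the human already has, rather than by a separate agent permission model. This is a purely illustrative Python sketch; the permission names, the lookup table, and the function are all hypothetical, not how AEP actually implements policy enforcement.

```python
# Hypothetical sketch: agentic actions inherit the user's existing
# platform permissions instead of defining their own. All names here
# are illustrative, not an actual AEP policy API.

PERMISSIONS = {
    "marketer@example.com": {"read_audiences", "read_insights"},
    "admin@example.com": {"read_audiences", "read_insights", "edit_journeys"},
}

def run_agent_action(user, action, required_permission):
    """Execute an agent action only if the human's existing access allows it."""
    granted = PERMISSIONS.get(user, set())
    if required_permission not in granted:
        # The agent never exceeds what the prompting user could do directly.
        raise PermissionError(f"{user} lacks {required_permission}")
    return action()

result = run_agent_action(
    "marketer@example.com",
    lambda: "audience summary",
    "read_audiences",
)
```

The design point is that there is one source of truth for access: if a user cannot edit journeys themselves, an agent acting on their behalf cannot either.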

So earlier we drew the distinction between generative AI and agentic AI.

Generative systems produce outputs, while agentic systems are more so participating in the work, and this slide shows what enables that shift. So agents start by interacting. They are interpreting your intent through that conversational interface, so you can describe goals. They then reason: instead of returning a single response, the system takes your context, breaks it down into steps, and determines the approach that is most relevant within those defined rules. And then lastly, it's acting. That action is most commonly happening alongside the prompter, the human who is interacting with that agent. It can also act with other agents, which have the same permissions, constraints, and human in the loop as the one with which you initiated, let's say, a workflow.

So to go a little deeper, for those of you who have experience with AI Assistant, and maybe some of you are less exposed to this, this is how we actually access Orchestrator. AI Assistant is designed as that single pane of glass for interacting with these agents, so we don't need to navigate from one agent to the other or try to pick the right agent. We can express our intent in one place and let Agent Orchestrator route that to the right capabilities.

Now, for those of you who have been a part of our beta and alpha programs, you've seen some of the evolution already beginning to occur with this AI Assistant.

Over time, this interface is going to evolve beyond the right rail into what some people are already seeing, and that is multiple modalities: a full-screen experience, a split-screen experience, all with the same structured responses that tie to the task you're working on at hand.

We also have interactive conversational cards, which are a part of that evolution, and which, for myself personally, help me work through problems rather than just reading static text. And I think that's one of those very exciting next iterations and evolutions of this that are to come. But for many of us, this experience, the AI Assistant, is the first exposure to Agent Orchestrator. And I encourage you to explore here and push the boundaries.

It, I think, begins to crystallize some of what we've already discussed. And I know I always like to be able to touch the new technology, to experiment. If we're going to use a swimming analogy, this is the difference between reading a book about swimming and getting into the pool and actually learning how to swim. So we want you to interact with these, and we'll get into some best practices for how you can go about doing that and how teams are already beginning to get some value out of these.

To give a little bit of a deeper explanation of how Agent Orchestrator stays reliable once we get into these multi-step workflows: the reasoning engine is what introduces the structure that you see on this slide. Your request is first interpreted as a goal, which represents the outcome that you're trying to achieve. Then that goal is decomposed into tasks, which are the high-level jobs that need to be done. From there, those tasks are executed through actions, and these are atomic steps performed by agents within Platform apps. And then alongside that, we have constraints that apply validation, permissions, and business rules, so execution stays within your defined boundaries.

And I'll also call out that this maintains full conversational context, which is immensely helpful as you're working through, in this case, building an email campaign.
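The goal-to-tasks-to-actions structure just described can be sketched as plain data with constraints checked before anything runs. This is a minimal, illustrative Python sketch of that decomposition pattern; the class names, the action names, and the constraint check are hypothetical, not the reasoning engine's actual model.

```python
# Hypothetical sketch of the goal -> tasks -> actions structure, with
# constraints validated before any action executes. Illustrative only,
# not the actual Agent Orchestrator reasoning engine.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str  # atomic step performed inside a platform app

@dataclass
class Task:
    name: str                                  # high-level job to be done
    actions: list = field(default_factory=list)

@dataclass
class Goal:
    outcome: str                               # what the user wants to achieve
    tasks: list = field(default_factory=list)

def execute(goal, constraints):
    """Walk the decomposition, applying every constraint to every action."""
    log = []
    for task in goal.tasks:
        for action in task.actions:
            for check in constraints:
                if not check(action):
                    raise ValueError(f"constraint blocked: {action.name}")
            log.append(action.name)
    return log

goal = Goal(
    outcome="launch email campaign",
    tasks=[
        Task("build audience", [Action("query_profiles"), Action("save_segment")]),
        Task("draft email", [Action("generate_copy")]),
    ],
)
# Example constraint: only pre-approved action types may execute.
allowed = {"query_profiles", "save_segment", "generate_copy"}
steps = execute(goal, [lambda a: a.name in allowed])
```

Notice the constraints sit outside the decomposition itself, which mirrors the idea on the slide: the plan can be as elaborate as needed, but nothing runs outside the defined boundaries.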

So that is, at a very high level, how Experience Platform Agents work. To give even more of a flavor of what's possible, I wanted to spend a little bit of time talking about a selection of these agents.

Ones that I personally see a lot of value in. And I want to start with the Data Insights Agent. This is sort of the setup slide. As a user of Customer Journey Analytics, or a long-time user of Adobe Analytics, we know that democratization and self-service have been a challenge, and we've got a lot of different angles of approaching this. But incorporating Data Insights Agent into the arsenal of capabilities is really in a league of its own, with regard to other feature functionality in CJA, at alleviating the bottleneck we see all too often. We know that marketers, product managers, sales teams, the non-analyst personas in organizations, are dependent on data to do their jobs. And oftentimes we see that these individuals don't have the data fluency that the CJA core team members have. So they're creating this bottleneck where they're forced to bother the team that's working on developing regular insights.

Oftentimes that leads to insights arriving late, teams stop asking questions, and ultimately we become less data driven and we’re doing less experimentation as a result.

This is all while the data is there. It's just the ability to use it independently that isn't. So this is where Data Insights Agent comes in, automating that insight generation by allowing this non-analyst persona to come in and immediately begin answering questions on things that are probably pretty low-level asks for a day-to-day power user. You see in this screenshot some of the capabilities, but it's easing the onboarding for that non-analyst user by empowering them to self-serve via the Data Insights Agent, interacting through the AI Assistant. So if you haven't played around with this and you do have access to it: I remember first being completely blown away by how quickly I was able to spin up a visualization in Workspace by interacting with the AI Assistant to trend out orders for last week in the state of California, for instance.

And that immediately sort of unlocked this next level of literacy and potential to empower these teams even more.

Next, we have Audience Agent, which is available within CDP and AJO. This is designed to tackle some of the most persistent challenges that marketers face when building and activating audiences. Today we know that teams are manually configuring complex rules, diagnosing issues within audiences, and attempting to estimate and guess at audience performance. The actual creation of those audiences, at volume, is time consuming, and onboarding users into CDP and Journey Optimizer can be an overwhelming experience for people who have not worked in those tools. So this Audience Agent is really meant to accompany and expedite audience management and exploration. We can address some of those pain points by improving the findability and exploration of audience attributes. Instead of manually hunting for data, we can quickly surface the relevant attributes to better create audiences and reduce the onboarding friction, whether you're a new user or a day-to-day user. And what this means is that we can build more targeted and personalized audiences if the time to creation shrinks and there's more visibility into existing audiences. It tends to be the case, going back to the Data Insights Agent, that the more visibility into the data and true understanding of the data, the better the decisions being made. Of course, there's always the other side of things when we're granting access to things that are potentially too advanced or too difficult for non-analysts to grasp. That's a balancing act, and we'll talk about it in our readiness section. But something to consider.

You'll see this is available within CDP and Journey Optimizer. So lastly, I wanted to touch on Journey Agent. Journey Agent is helping teams understand existing journeys. Let's say you open up a journey that you didn't build, or one that you haven't touched in months. You can ask questions around how it's structured, what paths exist, where customers are flowing, and where drop-off is happening. And you'll see some of that integration from Data Insights Agent available within this agentic experience, which is really cool. You start seeing that multi-agent collaboration in your interactions with the AI Assistant that routes you to the Journey Agent.

We're also able to build and modify journeys. It can help validate logic, surface paths within journeys, and highlight any conditions where there are issues. I would also say it doesn't publish these changes for you, but it helps catch these issues earlier, before they show up in execution or in reporting. So it is an incredibly helpful co-pilot to have as you're launching and iterating on journeys in AJO.

So we talked about some of these capabilities. We’ve got Insights Agent, we’ve got Audience Agent, Journey Agent, all wrapped up within the interface of the AI assistant UI.

We can access these agents from our home screen. We can access them within the actual apps themselves.

And as mentioned, there’s this automatic orchestration happening when we begin asking one agent questions that a second agent may need to be referenced for. All a part of the capabilities of the Agent Orchestrator in real time.

So we talked about this in some detail, but these agents are sitting on top of the platform. There's the semantic understanding of your customer data and the metadata for your content, and they're not operating on disconnected silos or snapshots in time. They're on top of your existing data. And they're also purpose-built for experience orchestration, designed to reflect the real customer experience workflows that you're already working on.

The data structures that are in place within your organization are incorporated into this, as well as the practitioner needs that are most common within each of those steps of a workflow.

Also, something we didn't spend a lot of time on, but these are extensible and allow for enterprise scale. So you as an organization can build on top of these pre-built agents, and in the future, we'll have extensibility for partners to build on top of these agents while staying within the same orchestration framework.

Lastly, these are designed to be responsible, with accountability, provenance, and safeguards built in from the start. We talked about how this honors the governance criteria that you have already baked into AEP.

Now, let’s shift gears as we begin getting into the readiness portion. I wanted to pause here and share a few tactical things we’ve seen make a real difference in how these agentic experiences are landing in teams. The first one is scope and intent. So the fastest progress with these agents usually comes from starting smaller than you think you need to.

This isn't because your problems are small, but because the outcomes are familiar to practitioners, or whomever is using these agents. When your early usage maps to something people already understand and can personally sanity check, we build a lot more confidence, faster.

So in practice, clarity about what you're trying to accomplish tends to matter more than how you're building that prompt. And I think the biggest thing here is replicating your existing workflow and trying to isolate parts within your existing workflows that the AI agents can help with. And like I said, starting small here is a really critical best practice. Then moving to trust and review: the big shift here is treating review as part of the workflow, not just a safety net. So when you're interacting with the AI Assistant, you'll see citations coming through, and it will reference things within your architecture.

And these are helpful when they're used in the moment. So the recommendation here is that you validate those citations. We all know that mistakes can be made when leveraging any AI solution. So doing this within the workflow, while it may slow things down initially, helps you understand how responses are generated and where answers are coming from. And just as important, people do better when they know what to do if something doesn't quite land. Do we need to retry? Should we reframe this, or should we just move on entirely from what we're looking to do here? That level of clarity tends to keep people engaged instead of second-guessing the tool, which is a common thing that I think users experience when they're first interacting with chatbots in general.

Lastly is governance and progression. The teams that do this well aren't necessarily reinventing new controls. They're leaning on your existing roles and permissions, and expanding responsibility gradually to relevant teams. And they're also paying attention to how people are using these agentic experiences. What works early on tends to point pretty clearly to what we should expand to next. So this starts the conversation around readiness and thinking about a methodical approach to rolling out usage of these agents.

So let’s get into our readiness portion. So this is where the conversation begins to shift a little bit. Up until this point, we’ve been focused a lot on the capabilities. What we’re going to focus on now is the readiness, meaning the conditions that are required for these capabilities to operate responsibly at scale.

So what we tend to see pretty consistently is that readiness doesn't show up as this big switch you flip. Readiness, in the sense of agentic readiness, shows up in small, very deliberate steps in most cases. There are other philosophies out there, on the complete opposite end of the spectrum, that suggest we need to radically change how things are done and how processes run. What we've seen, and what early research is suggesting, is that most teams get real value by starting small, narrow in scope and with clear guardrails, not because they're cautious, but because that's how we can create meaningful value early on and prove out the success. Again, we want the early usage to be something people can recognize, understand, and ultimately trust. And as that confidence begins to build, responsibility can expand. So it's less about governance being the friction point and more about a standard, repeatable process for how we can incorporate agentic into our programs.

The important thing here is to call out that this readiness compounds through use. It's not about making that first perfect decision up front, but letting your own experience inform what the next step is.

And this goes back to that getting in the pool analogy.

So this next section, our next topic is around stewardship, stewards around these agents.

This is a new responsibility that many organizations are facing: agentic stewards and adoption. What we've seen is that adoption works best when we have stewards in place who have clear accountability, even if it's an informal title. They don't need a new job title, but they are individuals who have ownership for how agents are being used, what those agents are influencing, and where the edges are of what the agent can and can't do. And this is a big part of the shared clarity among the day-to-day users and the non-day-to-day users.

You need a common understanding of where the agents are contributing and where human judgment still needs to be the decision-making point. And in many cases, that is all the time. When that line is fuzzy or not defined, we get either over-reliance on these agents or quiet avoidance, and neither of those two scales. So we want to think about how this steward helps to reskill and upskill team members so that they feel empowered to utilize agents.

And going back to our previous comment, they're responsible for what comes next. What should we focus on within our existing processes to optimize with agents next? The scope of our agents needs to be earned. It's not something that we just decide haphazardly because the agent exists.

We’re not simply following the next trend. We’re really thinking about existing processes. And these stewards are acting as the internal evangelists, you could say, for these capabilities.

So within the realm of governance of these technologies, agentic AI is removing the buffer. And this points toward some uncomfortable realities that you’ll encounter as you begin to utilize agentic experiences. But I would say that agentic AI doesn’t introduce new problems. What it really does is remove the buffer that we’ve all been living with. In most systems, bad data, fragmented identities, or unclear ownership exist, but they don’t fail loudly. They just kind of get absorbed in your day-to-day.

A report may look a little off, or a segment is a little less precise. Someone eventually fixes it downstream and we all move on. With an agentic experience, fortunately or unfortunately, things don’t work that way. The agents are participating directly in decisions and immediately amplifying issues: if context is incomplete or outdated, the agent will use that data anyway. It doesn’t necessarily crash. It stays polite and confident; it just becomes less useful. In those situations, as relevance begins to erode, the confidence of end users erodes too, and the practitioners don’t get the value.

So as access begins to broaden, governance becomes the line between safe participation and unintended exposures within agentic experiences.

And this is something that we need to grapple with.

There’s a lot of upside to incorporating data governance, prioritizing clean data, even without the mention of agentic capabilities.

But when you layer in prioritized, clean data, it isn’t something that’s going to make the agents autonomous. It’s making the system and those agents more predictable, explainable, and easier to run at scale. So the real takeaway here is that the agents aren’t making your problems disappear.

They’re rather making them more difficult to ignore.

A quote that I saw was: agentic AI doesn’t reward optimism, it rewards readiness. And this is, I think, the imperative takeaway when thinking about preparing to use these capabilities in your day-to-day.

Now, lastly, we have a couple of slides with some frameworks around how we can approach using agentic AI in our day-to-day workflows.

It’s worth calling out that this particular piece of research comes directly from an Adobe research paper called the AI Inflection Point, which was informed by over 200 IT, compliance, and business leaders.

And I think one of the things that stood out was one of the biggest misconceptions: that readiness is a decision we make once. We’ve got the green light and we just move on from there. In practice, the organizations that are succeeding treat readiness as an ongoing loop, not a launch moment.

It starts with the assess phase, where discipline about what to focus on really matters. It’s not about whether the technology works or not. It’s about whether the organization is actually prepared to use it responsibly. Do we have clarity on data sensitivity and on governance, what we’ve talked about? Who owns the decisions when the system produces something unexpected? How are you going to measure the rollout? What does a successful launch of this agent look like? Most teams discover gaps here, and that’s generally a good thing, because it’s a lot easier and cheaper to discover gaps in this first step than later on.

In the next phase, the pilot portion, we are piloting these use cases very intentionally. It’s not to prove that AI is magical and is going to change everything, but to learn how it behaves within your environment, because every organization’s environment is different. So we’re focusing on those high-impact use cases and defining success beyond just the ROI.

We’re evaluating performance alongside responsibility criteria like transparency, like risk. And this is really in this pilot phase where confidence is built.

Then moving to our adopt phase, this is where things get much more real: the shift from experimentation to everyday workflows, with training, enablement, and clear usage expectations.

This is where things become critical because the value doesn’t come from simply having access to the AI. It comes from people knowing how and when to use it well, going back to our steward conversation.

And then the monitor phase really closes the loop. We know that systems evolve, data changes, and business needs shift. This is where continuous oversight, combining automated metrics and human judgment, sustains value over time.
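The assess, pilot, adopt, monitor cycle described above can be sketched as a simple loop. This is an illustrative sketch of the framework from the AI Inflection Point discussion, not an Adobe API; the phase names simply mirror the slide:

```python
from enum import Enum

class Phase(Enum):
    ASSESS = "assess"    # clarity on data sensitivity, governance, ownership, success
    PILOT = "pilot"      # narrow, high-impact use cases, evaluated for responsibility too
    ADOPT = "adopt"      # training, enablement, clear usage expectations
    MONITOR = "monitor"  # automated metrics plus human judgment

def next_phase(current: Phase) -> Phase:
    """Readiness is a loop, not a launch: monitoring feeds the next assessment."""
    order = [Phase.ASSESS, Phase.PILOT, Phase.ADOPT, Phase.MONITOR]
    return order[(order.index(current) + 1) % len(order)]
```

The wrap-around from `MONITOR` back to `ASSESS` is the point: there is no terminal "done" state, only another pass through the loop with what you learned.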

So I’m not going to spend too much time here; we’re coming up on time, and I want to make sure we have some time for questions. But here are some readiness checkpoints for when you’ve gone through that cycle a few times: how do we know we’re ready to expand from what we’re currently doing? This is a helpful checklist to reference that goes back to some of these core tenets: data quality, integration, permissions in governance and access controls, and then our organizational readiness. All critical pillars for determining whether or not an organization is ready for expansion of AI in day-to-day operations.
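As a minimal sketch, those checkpoints could be treated as a gate that must be fully satisfied before expanding agent scope. The pillar names here paraphrase the checklist mentioned in the session and are not an official framework:

```python
# Illustrative readiness checkpoints; pillar names paraphrase the session's
# checklist and are assumptions, not an official Adobe framework.
READINESS_CHECKPOINTS = {
    "data_quality": False,               # clean, trusted, consistently named data
    "integration": False,                # connected systems, resolved identities
    "permissions_and_governance": False, # access controls, clear ownership
    "organizational_readiness": False,   # stewards, training, success metrics
}

def ready_to_expand(checkpoints: dict) -> bool:
    """Expand agent scope only when every pillar is satisfied."""
    return all(checkpoints.values())
```

Treating expansion as all-or-nothing across the pillars reflects the "scope needs to be earned" point from the stewardship section: one unmet pillar is enough to hold at the current scope.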

And I’m going to leave you with this last slide, which, you know, I think is an important takeaway: agentic success isn’t something that we turn on. It’s not a feature that suddenly makes everything work. It’s a practice that starts small, with deliberate stewardship around it, and whose responsibility we increase by following standardized processes.

That being said, I’m going to stop sharing and flip it back over to John to see if we have any questions that have come through from the audience.

Thank you, Christos, for, again, an excellent presentation on the agents among us and for shedding some light on what I think is an important topic. Much appreciate your expertise and your time with us today. I love the quote you mentioned: it doesn’t reward optimism, it rewards readiness. We’ll jump into the Q&A for a minute here, but I’m also going to launch a poll, and we’d love to have you take just a minute to answer it today. It’ll help us with feedback on how we’re doing and, of course, help us refine future webinars as well. So I’ll launch that, and if you don’t mind, I’m going to kick off with my own question real quick: if we know data cleanup is needed, where do you actually recommend starting without boiling the ocean?

It’s a good question. The short answer is we don’t boil the ocean, and we don’t start by cleaning data. We start by cleaning how the data is exposed to and understood by the agent. I think the fastest and lowest-effort place to start is within your CJA data views, because that’s effectively the lens that agents are using to reason about your data. So we want to be intentional about what’s being exposed there. We don’t want to expose every single component in CJA.

Another one that’s important to mention is fixing your naming, which can be done in data views, along with standardizing and approving what matters. This isn’t about rewriting pipelines or replatforming data. We want to start with things that you should be doing anyway within CJA. Like I said, the advent of agentic is just amplifying where there are current gaps.

So what is the difference between the agent orchestrator and AI Assistant? If you can shed some light on that.

Yeah, so I know the terms are used very interchangeably, but simply put, AI Assistant is the interface, and the orchestrator is what makes this agentic. It’s that decisioning layer that coordinates the ask. So you don’t actually go to the agent orchestrator; you go to AI Assistant, and in the background, the underlying orchestrator is kicking off the analysis and routing you toward the specialized agents, which have those unique skills within them.

Awesome. Appreciate it, Christos. We are at the top of the hour, and I appreciate everyone who has taken the time to join us today. We’ll wrap up now. A recording will be made available shortly, so those who want to go back and review it, or those who signed up and missed it, will have a chance to come back and review it as well. So again, thank you, everybody, for your time today. Thank you to those who completed the poll, and we will wrap it up. Thank you so much. Thank you all.
