To get the most from Adobe's Experience Platform AI Assistant, be clear, contextual, and outcome-driven in your prompts—think “what, where, why.”
Introduction
I recently gave an internal enablement session on the AI Assistant. It was straightforward and didn’t go into too much depth. Out of it came many requests for whatever rules or methodology I was following.
My approach is simple: just use it to do my job. As I think of a question, I quickly frame it and ask the AI. Nothing fancy, not trying to trick it, no large verbose prompts or personas, just straightforward Q&A. I break more complex questions down into simpler prompts and then tie the answers together for more critical thinking.
Basic prompts:
- Do you know any documentation about Edge API?
- What are segment guardrails?
- What is the difference between rbac and obac?
- Can I do an upsert?
At some point, I started taking questions that others were sending me by email or posting more broadly to a group, then having the AI Assistant answer them as is, without changing anything.
Questions people were asking:
- How are profiles qualified for an edge audience?
- Does the _id need to be unique within one dataset or across all datasets?
- How can I subscribe to an alert for a streaming segment?
The interesting part? About 70% of the time, I got a solid answer without needing to rewrite my question. Other times, a bit of rewording helped. And occasionally, I hit bigger challenges.
It didn’t solve everything, but what I discovered was genuinely interesting. When you use vague prompts, especially ones that rely on context or pronouns like 'it', you’re not really asking a question. I hadn’t realized that, even after years in this industry, because I’d usually follow up with a few clarifying questions and get what I needed.
I gave a presentation on this recently, and it really opened some eyes. I showed examples where I thought the AI was wrong, but later realized my prompt was the problem: it was too vague.
Prompting 101 best practices
Be clear in your prompt
- Vague: Show me a list of audiences that were edited recently.
  - 'Recently' can mean different time frames. The AI gave me audiences that were edited in the last 30 days.
- Better: Show me a list of audiences that were edited in the last 2 days.
- Vague: How many profiles have consented for marketing emails.
  - The AI checked for consent values = y and dy.
- Better: How many profiles have consented for marketing emails = y.
Provide context
- Vague: How many people have been to my website recently.
  - 'Recently' again can mean different time frames.
  - The AI returned a list of people who had a computed attribute called 'visited a website in the last 30 days.'
  - I wanted people in an audience.
- Better: How many people are part of the audience 'recent website visitor.'
Be ready to reword a few times
- Prompt: List any audiences that have child audiences.
  - It gave me the parent audiences, but I wanted the children.
- Reworded prompt: List any audiences that have a parent audience.
  - Got the wrong answer.
- Reworded prompt: List any audiences that have child audiences and the name of the child.
  - Now I get the parent and child audiences.
Be aware that some words have multiple meanings or similar names (goes hand in hand with providing context)
- Vague prompt: What is a tag?
  - Gave me an answer about labeling objects in Adobe Experience Platform.
  - But I thought tags were things that collected data?
- Better: What is a tag in data collection?
  - By providing the data collection context, the answer switches and now I get better info.
- Misunderstood prompt: What attributes are in +[audience].
  - My audience is named 'Platinum or Gold Loyalty.'
  - The AI split on the OR, thinking that I was talking about two different audiences.
  - WHERE (LOWER(S.NAME) LIKE LOWER('%platinum%loyalty%') OR LOWER(S.NAME) LIKE LOWER('%gold%loyalty%'))
- Better: What attributes are in '+[audience].'
  - By adding quotes around it, I was able to bring it back together.
  - WHERE LOWER(S.NAME) LIKE LOWER('%platinum%or%gold%loyalty%')
- Best: Still working on this one...
  - Have the AI use an = rather than LIKE with wildcards so it doesn't include similarly named objects.
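To make that last point concrete, here is a minimal sketch of the difference. The generated clauses above only show the alias S.NAME, so the table name audience_segments below is a made-up placeholder, not the AI Assistant's actual schema:

  -- Wildcard match (what the LIKE version produces): can also pull in similarly
  -- named audiences such as 'Platinum or Gold Loyalty - Test'
  SELECT S.NAME FROM audience_segments S
  WHERE LOWER(S.NAME) LIKE LOWER('%platinum%or%gold%loyalty%');

  -- Exact match (the = version I want): returns only the one audience
  SELECT S.NAME FROM audience_segments S
  WHERE LOWER(S.NAME) = LOWER('Platinum or Gold Loyalty');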
Understand the answer the AI Assistant gives you
Read through the answer; many times it sounds right, but it may not be. The AI is being asked to read through all types of docs or internal sandbox objects and generate a response. That doesn’t mean you can check out and stop thinking. If it tells you it can’t save a Real-Time CDP audience because the tag on the website doesn’t have the right data element in the Journey, alarm bells should be going off everywhere. (I made that up — it never said that to me.)
Read the steps – do they make sense?
Validate the sources – go to the links and skim through them. Open the SQL to check it isn’t doing something out of left field.
Provide feedback
Make sure you give a thumbs up/down; otherwise, the AI might assume it gave you the right answer. Say why and give some feedback; it takes only a minute, including typing a sentence or two. Let’s all help it get better.
Get your job done, but push the boundaries
Given all that, the best thing I’ve done is to start pushing the boundaries. This is both the most fun and the most frustrating part of the process. I get excited because I know I’m about to cross the line to the edge of its scope and capabilities. That’s where I start to think… I wonder… and where new things can be tried. I like to expand and explore. My mind starts asking questions I never would have before, either because it would take too long to get the answer otherwise or because I’ve headed down a path that got me curious about a topic.
I know that running into a wall is common at some point, but it’s worth it since I might discover something new. I leave you with one train of thought from one of these explorations; it now shapes how I think about my prompts.
Note: Since I’m pushing the edge a little when I do this, I end up having to sometimes:
- Tell the AI “Ask Anyway”
- Start a new Conversation
- Reword a few more times than normal
- Get the syllabus out
- State the obvious
Deep dive: A use case based approach
Use Case #1: Lately there seems to be a lot of data going into the profile. Which datasets contribute the most?
Prompt: List the top 20 datasets, their size, if enabled for profile, and when created, sorted by created ascending.
Too many system datasets.
Prompt: List the top 20 datasets, their size, if enabled for profile, and when created, sorted by ascending. Exclude system datasets.
I only want profile enabled.
Prompt: List the top 20 datasets enabled for profile, their size, if enabled for profile, and when created, sorted by created ascending. Exclude system datasets.
Now sort by size.
Prompt: List the datasets enabled for profile, their size, if enabled for profile, and when created, sorted by size. Exclude system datasets.
Now I’ve got a list of the datasets that contribute the most to the profile. I need to dig into some of these results that are large. But where is this data coming from?
Prompt: List the datasets enabled for profile, their size, if enabled for profile, and when created, sorted by size descending. Exclude system datasets. Add to the list the dataflow.
Wow, I have a lot coming into my WebEvents. Need to check that out.
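Just to make the iteration concrete, here is a minimal sketch of the kind of query I picture the final prompt resolving to. The table dataset_metadata and its columns are made-up placeholders for illustration, not the AI Assistant's actual schema; the point is how each rewording maps onto a clause:

  -- Hypothetical sketch; dataset_metadata and its columns are placeholders
  SELECT d.name,
         d.size_bytes,
         d.profile_enabled,
         d.created_at,
         d.dataflow_name                  -- "Add to the list the dataflow"
  FROM dataset_metadata d
  WHERE d.profile_enabled = TRUE          -- "datasets enabled for profile"
    AND d.is_system = FALSE               -- "Exclude system datasets"
  ORDER BY d.size_bytes DESC;             -- "sorted by size descending"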
I wonder what else I could do?
Use case #2: Learning and exploring what AI Assistant knows about Adobe Journey Optimizer Journeys.
Prompts:
- List all journeys.
- List all journeys and their journey type.
- List all journeys, the journey type and audience they use.
- List all journeys, the journey type, the audience they use where the journey type starts with audience.
- List all journeys, the journey type, the audience they use, the sum of entry count, the sum of completed count and sum of failed count where the journey type starts with audience.
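As with the first use case, it helps me to keep a rough mental model of what the last prompt is asking for. Again, the tables journeys and journey_metrics below are made-up placeholders rather than Journey Optimizer's actual schema:

  -- Hypothetical sketch; journeys and journey_metrics are placeholders
  SELECT j.name,
         j.journey_type,
         j.audience_name,
         SUM(m.entry_count)     AS total_entries,
         SUM(m.completed_count) AS total_completed,
         SUM(m.failed_count)    AS total_failed
  FROM journeys j
  JOIN journey_metrics m ON m.journey_id = j.id
  WHERE LOWER(j.journey_type) LIKE 'audience%'   -- journey type starts with 'audience'
  GROUP BY j.name, j.journey_type, j.audience_name;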
Conclusion
This exercise is meant to inspire your thinking. Now it's your turn: what are you curious to explore with the Adobe Experience Platform AI Assistant?