Supercharge Onsite Experiences for AI Agents with LLM Optimizer
Explore how Adobe’s LLM Optimizer enhances AI visibility for your website, making it discoverable and future-proof without altering origin code. Learn how optimizing at the edge improves large-language-model responses, ensuring your content is fully visible to AI agents like ChatGPT. This empowers your brand to capture AI-driven opportunities swiftly and securely, enhancing digital experiences across Adobe platforms.
Hello everyone. Welcome to the session. I am Ashutosh Shroti from the AEM Product Engineering team.
Let me ask you a question. Is your website content fully visible to AI agents, or just part of it? Can AI search platforms like ChatGPT and Perplexity see what you have built? The truth is that most of a brand’s key website content is not visible to AI bots, and the content you have spent time and effort to create goes unseen by AI. Today I’ll talk about the solution that fixes that: Adobe’s new platform, LLM Optimizer, and the opportunities it offers to make your website AI visible. These optimizations can be deployed through the edge, which requires no code changes, so it’s simple and very fast. Let’s get started.

To set the context, let me introduce LLM Optimizer first. What does it do? It tracks your brand presence in AI search platforms like ChatGPT, Perplexity, Claude, and so on. When users ask questions, these platforms provide answers, and those answers may mention your website and page content or ignore it. LLM Optimizer helps you track that visibility and, more importantly, helps you improve it.

The platform detects two types of opportunities: on-site and off-site. On-site opportunities are the ones that belong to your own website, issues like crawlability, content structure, and visibility that you can control. Off-site opportunities are third-party ones, such as the way your brand is mentioned on social media or Reddit; those also influence AI search systems. You can deploy these optimizations through three options: on your CMS origin, on your edge, or on both. In this session we’ll focus on the edge deployment, because edge deployment does not require any code changes, it is quick and instant, it can be easily rolled back, so it is zero risk and safe for production, and it’s a simple setup.

Now let me explain the problem in detail. It has three parts: we detect issues instantly, we deploy the fixes slowly, and as a result we lose opportunities. Adobe has many platforms.
These platforms, such as LLM Optimizer and Sites Optimizer, run 24/7 and detect the opportunities and suggestions you can apply to fix your web content.
But the problem is that acting on them takes a lot of time. If you look at the numbers, 85% of optimizations remain in the backlog, the average time to production is three to six weeks, and every fix requires code changes, testing, and deployment. This slows us down, while on the other side AI models evolve fast and algorithms shift quickly. We move very slowly, and in those six weeks we add 20 more issues to the backlog, so it compounds, and while we wait we lose visibility and traffic. The result is that opportunities get missed, and that’s the problem we’ll try to solve today. So now let me talk about the solution.
Adobe’s optimization on edge. Remember, earlier we said three to six weeks; this platform can optimize the content in minutes. Let me explain how it does it. First, we optimize at the edge, not at the origin. That means we don’t touch your website code; all optimizations happen at the CDN edge, outside your CMS. Second, it works in real time, so changes apply instantly, with no waiting and no release cycle. Third, it is CMS agnostic, so it works with any CMS, whether AEM, WordPress, or any custom stack you have. Fourth, no deployment or testing is required; it can be done outside your code pipeline.
Fifth is zero risk. For the content visibility opportunities we are focusing on in this session, browser traffic will not be impacted, and if needed we can roll it back very easily. And sixth is onboarding: a simple, minimal one-time setup on your CDN, and then you are ready to start. To summarize: fast, easy, safe, and CMS independent, and best of all, what used to take weeks is now done in minutes. That’s the power of optimization on edge.
Let me quickly show you a real-life example.
Okay, so this is a page from Adobe’s Photoshop section. It has a lot of rich content, fully dynamic and very appealing, and it took a lot of effort to create. Most of the important content on this page is loaded and rendered using JavaScript, and that’s the problem for AI agents. AI search platforms like ChatGPT, Perplexity, and Claude don’t run JavaScript, so this content will not be visible to them. These platforms look at the raw HTML, and that’s the problem: they miss most of this content. We have built a Chrome extension to simulate what AI sees, so let me run a video quickly. This is how the page looked a month ago, and when we launched the Chrome extension on it, we could see that only 6% of the content was visible to AI agents. For example, the JSON-LD, the page metadata, the main content, the pricing and plan details, the FAQ, everything was invisible.
The bot’s view of the page was almost empty, and most of this content was invisible to AI agents. If ChatGPT can’t see your content, it will not mention your content in its answers, and that means lost visibility and lost mentions. Also, I bookmarked this a month ago, so if we look at how it appeared in the ChatGPT interface, we can see ChatGPT was not able to find or summarize much from this page.
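As a rough illustration of the idea behind that check (this is not the code of Adobe’s extension), here is a minimal TypeScript sketch that fetches the raw HTML the way a non-JavaScript crawler would and tests whether a few key pieces of content are present. The URL and the markers checked are assumptions for the example.

```typescript
// Minimal sketch: fetch the raw HTML (no JavaScript execution, as an AI crawler would)
// and check whether key content is present. URL and markers are illustrative only.
async function main(): Promise<void> {
  const url = "https://www.adobe.com/products/photoshop.html"; // example page

  const response = await fetch(url, { headers: { "user-agent": "GPTBot" } });
  const rawHtml = await response.text();

  const checks: Record<string, boolean> = {
    "JSON-LD structured data": /<script[^>]+application\/ld\+json/i.test(rawHtml),
    "meta description": /<meta[^>]+name=["']description["']/i.test(rawHtml),
    "pricing or plan details": /\bprice\b|\bplan\b/i.test(rawHtml),
    "FAQ section": /\bfaq\b|frequently asked/i.test(rawHtml),
  };

  for (const [label, present] of Object.entries(checks)) {
    console.log(`${present ? "visible" : "MISSING"}: ${label}`);
  }
}

main().catch(console.error);
```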
So let’s fix this problem. We go to the LLM Optimizer dashboard, into the Opportunities section, and open this particular opportunity, recover content visibility. In this opportunity we have all the URLs that are not visible to AI bots, and you can add more URLs to it. Then there is a simple button, deploy optimization. When you click this button, edge optimization takes over from that point, your live page gets optimized, and AI agents automatically start receiving a fully hydrated version of the content. You just need to click on it.
Let me go to that URL, the one we were talking about.
Okay, here is the URL, and if you click on the details, you will see the same or similar view I was showing with the extension earlier. This optimization was already deployed a month ago, as the caption says, and if I show you the citation readiness and readability scores now, you can see AI bots are seeing 100% of this page’s content. The whole content is now visible to AI agents, and if we go back to ChatGPT and ask the same prompt, ChatGPT is now able to get more data from this page. It can extract all the pricing and plan details, which earlier it was not able to do. The point is that we can improve a page’s AI visibility without even touching the code. That’s the best part of it, and it’s simple and safe.
So that’s all I had in the demo; let me go back to the slides. Now let me share how it works behind the scenes.
Step by step, there are six simple steps. In step one, a bot, say ChatGPT, wants to read your website, so it sends a request to your page. In step two, we detect whether the request is coming from a human or an agentic bot, and we trigger the remaining flow accordingly. In step three, if it’s a bot, we call Adobe’s Edge Optimization Service. In step four, if the page is not visible to AI, we apply the specific optimization to it. In step five, we optimize and deliver the response, and in step six we cache it. The first time we proxy the origin, and from the second request onwards we serve it from the cache, so even the first time the performance and experience are not degraded, and we continue serving from the cache for subsequent requests. The TTL and everything else are configurable.
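To make the flow concrete, here is a minimal, hypothetical sketch of those steps as a Cloudflare Workers-style edge function. The bot patterns, the service URL, and the TTL are illustrative assumptions, not Adobe’s actual implementation.

```typescript
// Hypothetical edge worker sketch of the flow above (user-agent check, pre-render call, caching).
// Bot patterns, service URL, and TTL are illustrative, not Adobe's actual values.
const AI_BOT_PATTERN = /GPTBot|ChatGPT-User|PerplexityBot|ClaudeBot|OAI-SearchBot/i;
const PRERENDER_SERVICE = "https://edge-optimizer.example.com/render"; // assumed endpoint
const CACHE_TTL_SECONDS = 86400; // configurable TTL

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const userAgent = request.headers.get("user-agent") ?? "";

    // Step 2: humans get the normal origin response, untouched.
    if (!AI_BOT_PATTERN.test(userAgent)) {
      return fetch(request);
    }

    // Step 6 (repeat visits): serve the previously optimized page from the edge cache.
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    // Steps 3-5: ask the optimization service for a fully hydrated version of the page.
    const optimized = await fetch(`${PRERENDER_SERVICE}?url=${encodeURIComponent(request.url)}`);
    const response = new Response(optimized.body, optimized);
    response.headers.set("Cache-Control", `s-maxage=${CACHE_TTL_SECONDS}`);

    // Step 6 (first visit): cache the optimized response for subsequent bot requests.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```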
If I talk about the architecture of the same flow: the blue part is your existing CDN, and this is the incoming traffic. You have two types of requests, from users and from AI agents such as ChatGPT, Perplexity, and Claude. Since those agents do not run JavaScript, we optimize their requests. In the CDN setup, we ask you to configure a one-time routing rule; once that is done, all the AI agent bot requests start coming to Adobe’s platform. From there we call the LLM Optimizer opportunity service to find all the relevant optimizations that are possible for a given URL. The edge worker is the core piece of this flow: it evaluates which URL should be optimized and which type of optimization needs to be applied. If the optimization is related to AI visibility, we call the pre-render worker, which is the layer that runs the headless browser. It executes all the JavaScript and does all the pre-rendering work that should happen before serving the traffic to the AI agents. Once that is done, it caches the result and continues serving the fully hydrated content of the page going forward.
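The pre-render worker is essentially a headless browser that executes the page’s JavaScript and returns the hydrated markup. As a hedged sketch of that idea (assuming Playwright in Node.js; this is not Adobe’s actual worker), it could look like this.

```typescript
// Hypothetical pre-render worker sketch: run the page's JavaScript in a headless browser
// and return the fully hydrated HTML for caching. Uses Playwright; not Adobe's actual code.
import { chromium } from "playwright";

export async function prerender(url: string): Promise<string> {
  const browser = await chromium.launch();
  try {
    const page = await browser.newPage();
    // Wait until network activity settles so client-side content (JSON-LD, pricing, FAQ) is rendered.
    await page.goto(url, { waitUntil: "networkidle", timeout: 30_000 });
    // Serialize the hydrated DOM; this is what gets cached and served to AI agents.
    return await page.content();
  } finally {
    await browser.close();
  }
}
```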
Now let me show you real results coming from production deployments right now. Looking at this data, first, we saw citation improvements from day one: not after weeks, not after a month, but day one. As soon as we deploy optimization on edge, citations start showing up immediately. Second, there were pages, new and old, that had zero citations and that no bots had seen before we onboarded them into this flow; after optimization, those also started getting cited, going from zero to visibility. Third, we tracked a few pages that already had some AI traffic and were top pages, and there we have seen a three to five times increase in their citations and mentions. Fourth, for some of those top pages, we are seeing jumps of 20 to 50 percent in their citations.
The numbers are very impressive. If we talk about the performance of the service, we are seeing a 102-millisecond response time at P90, our cache-hit ratio is above 94 percent, and the error rate is zero. The service is holding up well even as we scale across many customers.
Then, if I talk about the scale: LLM Optimizer and optimization on edge are already helping a wide range of websites across different types of content. In the commerce and retail sector, think of product pages, product listings, and PDPs; these websites want ChatGPT and other AI agents to see their products and recommend them to users. Then there are business websites with landing pages, FAQs, help centers, and so on; these pages answer real questions, which is exactly what AI is looking for, so businesses want to be discoverable by AI. Educational sites and content creators have articles, blogs, course content, how-to guides, and so on, and they also want their content to be cited in AI answers. We are working across all these categories, optimizing this wide variety of content, and visibility and content improvements translate into citation improvements. If you have very good quality content but it is not visible to AI agents, the citations will never improve; that is one thing we are trying to solve with this platform. Also, users coming to your website through AI arrive with intent, which is very different from web search, so conversion is much better with AI search platforms. That’s why this problem is worth solving for brands.

Here’s what I want you to remember. Web search has fundamentally changed: today AI is how people discover information; it’s not just search engines anymore. People ask questions to AI and AI gives answers, so this is something we should start addressing and fixing if there are gaps. If AI search platforms can’t see your content, you are invisible to the future; that’s the core problem we are solving with LLM Optimizer, and we are solving it instantly. The third takeaway I want you to leave with is: no code changes, no risk, no delay. It immediately makes your content AI visible for your brand and your products.
So that’s all I have; now, what should you do next? I have kept it very simple with five action items. The first is to try the free checker: we have built a Chrome extension, it’s free, no setup, no login required. Install it, check your website, and see how much of your web page content AI agents actually see; it takes just a minute, so do it today right after the session. The second action is to visit our documentation, log in to LLM Optimizer, onboard your website, and start looking at the insights we are able to set up for you. And here’s the email address: you can share your feedback, engage with us on LLM Optimizer and optimization on edge, and let us know how you find this offering and how much it is actually improving your productivity.
That’s all from my side. Thank you for your time.
This session — Supercharge Onsite Experiences for AI Agents with LLM Optimizer — features Ashutosh Shroti demonstrating how LLM Optimizer’s “Optimize at Edge” rewrites markup at the CDN layer to improve visibility in large-language-model responses. See how any website can become agentic-ready in seconds without changing origin code — making sites AI-discoverable and future-proof. Recorded live from San Jose.
Special thanks to our sponsors Algolia and Ensemble for supporting Adobe Developers Live 2025.
Next Steps
- Continue the conversation on Experience League
- Discover upcoming events