AEM Smart Tags: Streamlining Content & Enhancing Discoverability

Is your asset taxonomy strategy set up to help you scale your efforts in Adobe Experience Manager Assets?

In this educational webinar, Adobe Champions Melanie Bartlett and Deepak Khetawat explore how they use Smart Tags for better content discoverability and how Smart Tags help DAM operations scale. We’ll cover tag taxonomies, Smart Tag training, and leveraging AI for metadata.

Transcript

Welcome to this learn from your peers session for Adobe Experience Manager. We are glad you are all here, and we have a great session ahead. We're gonna be focusing on Adobe Smart Tags, specifically looking at streamlining content and enhancing discoverability. I'm your host, Michelle Bach. I'm a Senior Adoption Marketing Manager at Adobe. And before we jump in, I have a little bit of housekeeping to cover.

The first thing we'd love to do is learn more about you. So if anybody has joined and you haven't introduced yourselves in chat, we'd love to know who you are and where you're joining from. We're seeing a number of great places, Utah, Minnesota, Kristen from Texas. So welcome everybody. We'll spend a few minutes here. While we're here, we encourage you to look in the upper right side of your screen. We're doing a poll. We're just trying to understand, as you come into this session, what your confidence level is in using Smart Tags in your AEM instance. So are you confident, somewhat confident, moderately confident or not very confident about using this feature? And the goal obviously is that by the end of this session, we will hopefully have filled in a lot of the fuzziness for you and given you some key actionable insights to move forward with trying Smart Tags yourself.

Great, welcome everybody. I love all the intros.

Perfect.

All right, a couple of quick other bits of housekeeping. This webinar is being recorded and we will be sharing the link after the session. So feel free to just listen closely and not have to take notes. You'll be able to go back and look at any of the session contents later. We will share the slides on Community as well. So not only will you be able to watch the recording again, you'll be able to review the slides. And we're gonna do two different types of chat in today's session. You are all posting in an area called the attendee chat.

We will also be opening up on the left side an area that you can see now, called ask the presenters. So as we get into the content, if you have thoughts and specific questions for our speakers on the Smart Tags topic, put them in ask the presenters and we will circle back to those in a Q&A session after the presentation, and we'll get to all those questions. So don't be shy about putting questions in there as we go. And then at the end, we will also be putting up an end of event survey. We use those event surveys to understand how well the material resonated with you and also to get ideas on future sessions. So if you have ideas for future sessions, you'll get a chance to convey all those interesting ideas at the end of the session.

For now, I'd love to introduce our speakers. We are lucky enough today to have two of our Adobe Champions as speakers. Adobe Champions are some of our top customers and partners who are experts in a particular Adobe solution. In this case, Deepak and Melanie are specialists in Adobe Experience Manager. Champions act as thought leaders and brand ambassadors for Adobe and share their knowledge and best practices with the Adobe community. So we are thankful and lucky to have both Melanie and Deepak here today to present. I'll do a brief introduction of them and then we'll get them underway. So meet Deepak Khetawat. Deepak is a technology enthusiast and AEM architect with over 13 years of experience in MarTech solutions and enterprise integrations. He is a two-time AEM Champion and a winner of the AEM Rockstar Award. He is recognized for his leadership in the Adobe community in many ways. He specializes in building scalable, multilingual digital experiences and seamless integrations. He also has a strong interest in AI-driven innovation. And let's meet Melanie Bartlett. Melanie Bartlett is a Partner Development Director at MRM. With over 25 years of experience in advertising and tech, Melanie is a digital solutions expert specializing in enterprise asset management. She is a current AEM Champion and a qualified asset librarian, and she excels in AEM Assets, metadata, taxonomy and workflows. She's led global implementations across industries, aligning business goals with tech execution, and she's keen at shaping how leading brands manage assets at scale. So a lot of expertise here in the room today, we can't wait to get underway. So without further ado, let's get smarter with Smart Tags. Take it away, Melanie and Deepak.

Deepak, are you there? Hey Melanie, yep. Hey, so I’m Melanie Bartlett again, as you heard, director of partner development at MRM. I’m Deepak, I’m a principal engineer at Palo Alto Networks.

Perfect, and together we're going to tag team today's webinar to bring both strategic and technical perspectives. So today we're going to review the essentials of metadata and tagging, how Smart Tags boost discoverability, how to scale AEM Assets with Smart Tags and AI, and some best practices for your tagging taxonomies. We're also going to walk through some real-world use cases.

So let’s start with the foundation, metadata and tagging. These are the building blocks of content organization and discoverability. Now I’m going to tag Deepak to review some of those essentials.

Thanks, Melanie, for tagging me.

So it's important for us to understand the needs and essentials of metadata. Just imagine a library without a cataloging system; it would be nearly impossible to find a specific book. Tags and metadata serve the same purpose for our digital assets. Tags help organize content for easy retrieval, improve consistency and enhance search engine optimization.

There are two types of tags. First are manual tags, which are user applied. Smart Tags, on the other hand, are AI-driven auto labels added to the assets by Adobe Sensei, which offers faster search and streamlined workflows.

There are some challenges with manual tagging. The first is the exponential growth of assets: libraries are growing at an unprecedented rate, with thousands of images, videos and documents being added every month, and manually tagging this volume of content simply doesn't scale.

It's also very time consuming and labor intensive. Manually reviewing each asset and applying tags to them takes a lot of the team's effort. It drains resources that would be better spent on higher-value creative and strategic work. It also leads to inconsistent and incomplete tags, because tagging in this scenario depends on human input, which is very inconsistent. Different team members will use different terminology and sometimes skip tagging altogether, which leads to gaps in metadata. That results in lost productivity for the team. When tags are missing or inconsistent, teams waste time searching instead of finding what they need. They dig through all the folders and duplicate work because they can't locate the existing assets. All these factors lead to missed opportunities. Ultimately, if assets cannot be discovered, they cannot be reused. This leads to underutilization of our creative content, missed opportunities for campaigns and unnecessary recreation of assets the team already owns. Going to tag Melanie to review Smart Tags.

Thanks Deepak.

So now let’s explore how Smart Tags and AI can scale your DAM operations.

Smart Tags uses Adobe Sensei to analyze assets and apply relevant, descriptive tags. It automatically tags images, videos and text-based assets with specific keywords, allowing users to find those assets in just a few clicks.

So what can be Smart Tagged? Well, Smart Tags works across images, analyzing the visual aspects of the image, perhaps a person or a car. For videos, it analyzes the video for objects, scenes and actions. For text, it extracts keywords from the text. So as you can see, it identifies objects, scenes, actions and keywords, making all types of content more searchable and manageable.

So here’s how it works. What you’re going to do is upload an image, video or text-based asset to AEM assets.

AEM is going to trigger a workflow that sends a secure request to the Adobe Sensei Smart Content Service to analyze the asset. That workflow generates the Smart Tags with a confidence score and applies those Smart Tags to the asset's metadata, sorted by that score.
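For developers who want to see where those generated tags end up, here is a minimal Java sketch that reads them back from an asset. It assumes the tags are written to the commonly used predictedTags node under the asset's metadata, with name and confidence properties; verify that structure on your AEM version before relying on it.

```java
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ValueMap;

/**
 * Minimal sketch: list the Smart Tags written onto an asset, together with
 * their confidence scores. Assumes tags land under
 * <asset>/jcr:content/metadata/predictedTags with "name" and "confidence"
 * properties; verify this on your AEM version.
 */
public class SmartTagReader {

    public void printSmartTags(ResourceResolver resolver, String assetPath) {
        Resource predicted = resolver.getResource(
                assetPath + "/jcr:content/metadata/predictedTags");
        if (predicted == null) {
            System.out.println("No Smart Tags found for " + assetPath);
            return;
        }
        for (Resource tag : predicted.getChildren()) {
            ValueMap props = tag.getValueMap();
            String name = props.get("name", String.class);               // e.g. "campsite"
            Double confidence = props.get("confidence", Double.class);   // e.g. 0.87
            System.out.printf("%-25s %.2f%n", name,
                    confidence != null ? confidence : 0.0);
        }
    }
}
```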

So in this case, we uploaded an Adobe-provided night camping image to AEM Assets. Adobe Sensei analyzed the image and automatically generated the Smart Tags in its metadata. Smart Tags appear on the Basic tab when you view the asset properties. Let's zoom in on the Smart Tags that were actually generated. Here you can see both the color tags, which we're going to get to in a second, as well as the Smart Tags. The Smart Tags that were identified are keywords based on the image, such as night photography, sky and campsite; activities in the image, such as camping, backpacking, adventure, exploration or hiking; and objects in the image, such as the tent, the campsite and hiking equipment.

So now let's get back to the Smart Color Tags. Smart Tags can identify and tag colors in the images as well. This enhances your search capabilities and allows filtering by color composition using formats such as RGB or hex. Here you can see the Smart Color Tags that were identified are eight primary colors: black, dark gray, gray, dark brown, dark blue, orange, maroon and brown. All of these were found in that image.

Keep in mind, Smart Color Tags can automatically tag product images with their dominant colors. The benefit is more granular product categorization and more accurate color-based recommendations for your users, leading to improved experiences.

Tag back to Deepak.

Thanks, Melanie. Now let's talk about how Smart Tags improve content delivery across different roles and personas.

First, we will discuss Authors and Librarians.

Smart Tags mean we can quickly find the right image or file without digging through endless folders.

This reduces duplication because the right asset can be discovered and reused instead of being recreated each and every time, which is very helpful to DAM authors and asset librarians.

How is it helpful for developers? Developers also benefit a lot because Smart Tags provide structured, accurate metadata that can be leveraged for syndication across channels. Tags also power personalization engines, ensuring content variations are delivered to the right audience segments.

Now the most important part: how it's helpful for users, both our internal users and our external customers.

For end users working in AEM, Smart Tags make asset discovery much easier. Instead of guessing folder locations or relying on inconsistent metadata, they can search naturally and get relevant results faster. This makes navigating massive libraries very simple and super efficient.

From the external customer’s perspective, Smart Tags power personalization and search experiences. And since the assets are tagged with accurate and AI-driven metadata, the customers are more likely to see the right image, video or documents rendered to them at the right time.

Now we will discuss the need for custom training. We already discussed what Smart Tags are and how they are helpful for different personas, so now we will understand why custom training is needed. The Smart Tagging feature that comes out of the box is provided by Adobe Sensei and is trained on a massive, general-purpose dataset. Think of it like Adobe Stock-level variety. It is great for identifying everyday objects, but it doesn't understand our company's unique taxonomy.

So it can easily recognize something like a car or an office, but it won't know your product codes, model names or brand-specific terms.

You will need your assets tagged with very specific product names. For example, in the automotive sector, something like Sedan LX or Roadster GT would be very specific product names. The generic AI will only come back with broad terms like car, vehicle or automotive, which is not precise enough for our need. So our teams will have to spend time manually adding the details. This is where Smart Tags training comes in. It lets us train Adobe Sensei on our vocabulary, our brand names, our product lines and our industry terminology. Once we have trained it, the AI can recognize and apply these business-specific tags automatically. So instead of labeling something just as car, it will correctly tag it according to the product names we have trained, like Sedan LX or Roadster GT in this very basic example, which aligns with our taxonomy. This enables more accurate personalization and makes assets much easier to reuse.

There are some prerequisites for Smart Tags training.

First, in this slide, you will see a side by side view of what needs to be done in AEM 6.5 versus Cloud Service to enable Smart Tags and make sure they are working correctly. In Cloud Service, most of the things are enabled by default, but you can see a side by side comparison of on-prem versus cloud.

And also one important thing: once Smart Tags is enabled, it needs to be turned on either at the folder level as shown in the screenshot (a checkbox on-prem or a dropdown on cloud), or globally by editing the DAM Update Asset workflow to always include the smart tagging step. The folder-level control avoids unnecessary tagging and reduces processing overhead.

So just take a note of these snapshots of on-prem versus cloud to enable Smart Tags at your folder level.

So now I will discuss some of the steps we should use for our custom training. Here is the first step. We should define our custom tags.

These should come from our business taxonomy, and we should have them in our AEM Tag Manager.

For example, instead of a broad tag like car, we should define specific product tags like Products > Sedan LX. This ensures the AI is learning terms that matter to the business.

The other important step is to gather training assets. We should curate a set of training assets for each tag. Adobe recommends at least 30 assets per tag, and the more examples you provide, the stronger the model becomes.

We should ensure there is a variety of assets.

Variety is the key. If we only train with one angle or one background, the AI will struggle to recognize the tag in other contexts. By including images with different lighting, settings and perspectives, we help the AI generalize and become more accurate.

And then we have to apply these tags manually.

Hence, it’s critical that we manually apply the correct custom text to the assets in our training set. Think of it from this perspective that teaching the AI is important and if the training data is mislabeled or inconsistent at the first step itself, then AI will learn in the wrong thing, in a wrong way. Hence, clean consistent tagging is the foundation for good results outlined in this step one.
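As an illustration of this step, here is a minimal Java sketch, assuming AEM's standard Tag Manager API, that creates a business-specific tag and applies it to one curated training asset. The tag ID products:vehicles/sedan-lx and the asset path are hypothetical examples, not part of the webinar.

```java
import com.day.cq.tagging.InvalidTagFormatException;
import com.day.cq.tagging.Tag;
import com.day.cq.tagging.TagManager;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Minimal sketch: create a business-specific tag and apply it to a curated
 * training asset. The tag ID and asset path are illustrative placeholders.
 */
public class TrainingTagApplier {

    public void tagTrainingAsset(ResourceResolver resolver, String assetPath)
            throws InvalidTagFormatException {
        TagManager tagManager = resolver.adaptTo(TagManager.class);

        // Create (or fetch, if it already exists) the custom tag Sensei should learn.
        Tag sedanLx = tagManager.createTag(
                "products:vehicles/sedan-lx", "Sedan LX",
                "Training tag for the Sedan LX product line");

        // cq:tags for assets live on the metadata node, so apply the tag there.
        Resource metadata = resolver.getResource(assetPath + "/jcr:content/metadata");
        if (metadata != null) {
            tagManager.setTags(metadata, new Tag[] { sedanLx });
        }
    }
}
```

Repeating this over the 30 or more curated assets per tag gives Sensei the labeled examples it needs.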

This is the second step. We should select our training folder. Once the training data is ready, we initiate the training process. We navigate to the folder which contains the curated assets we have prepared and then start the Smart Tags training workflow from there. This workflow is designed specifically to package our assets and tags for training.

And then we should monitor the progress. The training workflow runs in the background, so we should monitor its status, and we will see whether the training completed successfully or something went wrong.

So as we discussed in the previous slides, behind the scenes, AEM sends our assets and their tags to Adobe Sensei. Sensei uses this data to train a custom model, a model which understands our business vocabulary, not just the generic tags. This is the foundation for enhanced smart tagging.
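If you prefer to trigger the training programmatically rather than from the Assets UI, a sketch along these lines is possible with the Granite Workflow API. The workflow model ID used here is a placeholder; look up the actual Smart Tags training model under /var/workflow/models on your instance before using it.

```java
import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkflowData;
import com.adobe.granite.workflow.model.WorkflowModel;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Minimal sketch: kick off the Smart Tags training workflow on a curated
 * training folder. TRAINING_MODEL_ID is a placeholder -- verify the real
 * model path on your instance before relying on it.
 */
public class SmartTagTrainingStarter {

    // Placeholder ID; the actual training model name can differ by version.
    private static final String TRAINING_MODEL_ID =
            "/var/workflow/models/dam/smart-tags-training";

    public void startTraining(ResourceResolver resolver, String trainingFolderPath)
            throws WorkflowException {
        WorkflowSession wfSession = resolver.adaptTo(WorkflowSession.class);
        WorkflowModel model = wfSession.getModel(TRAINING_MODEL_ID);
        WorkflowData data = wfSession.newWorkflowData("JCR_PATH", trainingFolderPath);
        wfSession.startWorkflow(model, data);
    }
}
```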

Now the last step. We should apply this new model. Now that the model is trained, it's time to put it into action.

We should run the DAM Smart Tag Assets workflow on the folder of untagged assets. AEM will apply both the generic tags, like car or vehicle, as well as the business-specific tags we have trained, like Sedan LX. We should review the results directly in the asset properties. Alongside each tag, AEM will also provide a confidence score, which helps us decide whether the tag is reliable enough to keep. And we should refine and repeat in an iterative way. Smart Tags training is not a one-time event: if we notice gaps or errors, the solution is to add more training data with more variety and rerun the training workflow. Over time, the model becomes more accurate as it learns from your content.

So after going through the steps for Smart Tags training, there are some best practices we should keep in mind. Point one: to get the most out of our custom tagging model, we should always prioritize quality over quantity. When we select our training assets, we should ensure they are high quality, clear and relevant, as that helps the AI learn more effectively. For example, in these two screenshots, the car on the right-hand side has a distracting background, which will confuse the AI. On the left-hand side, all the cars have very clear backgrounds with different angles, so that training dataset will be very helpful.

The second recommended best practice is that whenever we are training Smart Tags for the first time, we should not start with just one tag. We should include at least two distinct tags. Because the AI learns through comparison, if we only show it cars, it can't really learn what defines a car. But if we give it two categories, for example Sedan LX and Roadster GT, it can build a precise boundary between them. This improves accuracy not just for those two tags; it also helps the model recognize when an image doesn't fit either category. This makes our smart tagging much more reliable right from the start.

Best practice number three is very important.

One very important point for us to understand is that this training is, to some extent, an irreversible process. Once we send our training dataset to Adobe Sensei, the data becomes part of the custom model. There is no undo button. So we should make sure our training data is correct before we start. We should make sure our tags are consistent and accurate. For example, we don't include SUVs or trucks while training Sedan LX; it would confuse the AI, because it wouldn't know what features actually define a sedan. Secondly, the assets themselves need to be high quality and representative of that category.

Poor quality or irrelevant images weaken our model. The takeaway here is that it's better to spend more time upfront getting our training data right, because fixing a faulty model later takes much more time and effort than preparing well in the beginning.

As best practice number four, we should always remember that training our Smart Tags isn't just a one-time exercise. Since our business evolves with new product lines, updated branding or shifts in taxonomy, our model needs to evolve too. Hence periodic retraining is essential. And the process is very straightforward. We identify the new or changed assets, for example a new category in our taxonomy or a new car model. Then, just like earlier, we apply the correct tags manually to those assets. This ensures the AI has accurate, labeled examples to learn from. Once we have applied the manual tags, we re-initiate the Smart Tags training workflow we used in the training steps. Adobe Sensei doesn't throw away what it already knows. Instead it strengthens the existing model and adds new knowledge on top of it. Think of it as teaching an experienced employee about a new product line: they don't forget the old products, but now they can recognize and categorize the new ones too.

As a last best practice, which is very important, we should make use of the reporting feature AEM provides for Smart Tags training to understand how well our custom tags are performing and to identify areas for improvement. Once we have trained our custom tags, it's important to validate how the AI is performing.

Hence, AEM provides built-in reports to help with this. We can generate this report under Tools, Assets, Reports, and create the Smart Tag report. What we get is a summary of all our custom-trained tags and their training status.

It lets us quickly see whether the AI has successfully learned from the assets we provided. There are different color indicators to make this very clear. Green means success: the tag is well trained and the AI can confidently apply it going forward. Yellow means the training is partial: the AI needs more examples to boost confidence. This is essentially a nudge to expand our training dataset. Red means failure: the training didn't take, usually because the examples were too few or highly inconsistent. In this scenario, we'd want to review our training assets and try again. Think of this report as our health check for Smart Tags. It's not just about whether training ran, but whether the training was effective and whether our model is truly ready to support our business needs. Tag back to you, Melanie.

Thank you, Deepak.

All right, so now let's review how you build a clear taxonomy and categories, train Smart Tags using real-world examples, and maintain your governance with blocked tags.

So far, we've reviewed building a tag taxonomy and Smart Tags training. Now let's dive a little bit deeper with some governance tips. Tags are critical for findability, providing the necessary structure to manage assets and improve your content investment at scale. While Smart Tags specifically reduce manual effort and support an AI-led approach to asset management, a strong and well-defined taxonomy is critical to ensure consistency and long-term value. By investing in a robust metadata strategy, you can unlock significant long-term efficiency and improve your content's return on investment. So when building your tag taxonomy, use categories and subcategories, something that reflects your business-specific terminology. When you're training Smart Tags, use real-world examples that are going to help improve the accuracy, and make sure you exclude any irrelevant tags. For governance, maintain a list of your blocked or restricted tags.

And just so you know, this is not a set it and forget it. You always, always, always need to review and refine your tags on a regular basis.

So let's talk about managing your Smart Tags. AEM Assets provides the ability to manage Smart Tags. This allows asset administrators or asset librarians to review how assets are tagged by AI. You can quickly review the Smart Tags that were applied and remove tags that are not applicable or not accurate. You can also add additional tags from your taxonomy to your images. To manage tags, you select the asset and click on Manage Tags from the toolbar. The following screen appears: under Tags, you can select the folder icon and choose tags from your actual taxonomy. Here you can also select a tag that you want to be promoted, so that those tags or keywords will appear higher in your search results. In this case, you could select tent as an example from our camping image. Now, to remove unwanted tags, you manage the tags and you'll notice there's an X next to the Smart Tags. You simply click the X to remove the Smart Tags that you no longer want on the image, then hit Save and Close to save those changes. Also note that when you're in a folder, a quick way to manage your Smart Tags across the folder is the little arrows down at the bottom under the image; you can scroll and manage the tags of multiple assets by clicking back and forth between those images. Again, Save and Close will save your changes.

Now, the other feature that's available: while you're managing the Smart Tags, you may have identified keywords that you want to exclude from being applied to your assets. This is called block tags. It allows asset administrators or asset librarians to prevent the application of irrelevant tags. To access the block tags feature, which is available in AEM as a Cloud Service, you would go to your Assets view, select Block Tags, and then add the tags that need to be added to the list. There are a few common reasons why you'd want to block a tag. It may stem from categories that are culturally sensitive, irrelevant or controversial, or they could be Smart Tags that aren't relevant to your business. Left in place, these can lead to inconsistency in tagging and require additional manual cleanup and review later.

So overly specific, niche words, tags that are too granular, or tags that don't align with your broader taxonomy would be considered for block tags. This feature is intended to restrict those tags to help maintain your brand compliance, your taxonomy and your governance. Your business may have incredibly strict data governance requirements, or you can utilize this feature to block inappropriate terms. Perhaps it's a deprecated product name that you no longer want added to the image. Of course, blocking certain tags does not guarantee the content will not be found, but the intention is to minimize the chances of those keywords surfacing. Another common use case is to block gender assignments, maybe male, female, girl, boy, et cetera, for compliance purposes or because they're often inaccurate.

Another tip for block tags: always review your block tags list to adapt to changes in your content strategy or your governance. You also have the ability to export this list as a CSV so your legal teams or others can review it.

To recap, Manage Tags is used to manually remove tags from assets that already have them.

Block tags are used to prevent those keywords from being applied to assets when the workflow runs.

Let's also talk about AI-generated metadata. This is new functionality that's available in AEM as a Cloud Service, and it's complementary to Smart Tags. What it does is use generative AI to auto-generate titles, descriptions and keywords, currently for image asset types, saving time and reducing errors.

This is also another approach to help you eliminate some of those manual tagging efforts.

So a quick summary of benefits: Smart Tags save time, improve accuracy, boost discoverability, and support governance and customization, making them valuable across industries. Tag back to Deepak for a deeper dive into some use cases.

Thanks, Melanie. From e-commerce to banking and finance, Smart Tags streamline asset management and deliver consistent performance, compliance and enhanced collaboration. Let's look at the use cases which show the power of tailored tagging.

So we have a challenge for the e-commerce apparel store example. Our apparel store has thousands of product images; manually tagging each with details like color, fabric, sleeve type and neckline is very time consuming and error prone. Without proper tags, search and filtering for customers is inconsistent.

Hence the Smart Tags solution comes into the picture: use AI-powered smart tagging that learns from our product catalog. We train the AI on specific apparel attributes like V-neck versus crew neck, fabric types and even brand-specific details. Once trained, when we upload new product photos, the system auto-generates the tags we need, like blue, cotton, T-shirt, long sleeve, V-neck. What are the benefits? It saves manual effort for our merchandising team. It enables faster product launches, since assets are instantly searchable. It delivers better faceted search for customers, who can filter by neckline, sleeve or color, and overall, streamlined catalog management gives a better shopping experience.
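To show how those generated tags can drive discovery, here is a minimal QueryBuilder sketch that finds apparel assets whose tagged metadata matches a keyword such as "v-neck". The /content/dam/apparel path and the keyword are illustrative assumptions; the fulltext.relPath predicate simply scopes the search to the asset metadata where the tags live.

```java
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.Hit;
import java.util.HashMap;
import java.util.Map;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Minimal sketch: find assets whose (smart-)tagged metadata matches a
 * keyword. Paths and keywords are illustrative, not from the webinar.
 */
public class TaggedAssetSearch {

    public void findByKeyword(ResourceResolver resolver, QueryBuilder queryBuilder,
                              String keyword) throws RepositoryException {
        Map<String, String> predicates = new HashMap<>();
        predicates.put("path", "/content/dam/apparel");             // illustrative DAM folder
        predicates.put("type", "dam:Asset");
        predicates.put("fulltext", keyword);                         // e.g. "v-neck cotton"
        predicates.put("fulltext.relPath", "jcr:content/metadata");  // search metadata/tags only
        predicates.put("p.limit", "20");

        Query query = queryBuilder.createQuery(
                PredicateGroup.create(predicates), resolver.adaptTo(Session.class));
        for (Hit hit : query.getResult().getHits()) {
            System.out.println(hit.getPath());
        }
    }
}
```

The same pattern generalizes to the compliance scenario discussed next: adding a property predicate on an approval-status metadata field narrows the results to legally cleared assets.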

Let's discuss one more use case, for a financial services and banking team. The team needs quick access to the right approved assets, whereas compliance requires that only current, legally cleared content is used; for example, an image of a young couple signing mortgage papers. In AEM, every asset can carry metadata like an approval status. The compliance or legal team then updates this field whenever they review and approve the content. For example, an asset might be marked legal, pending or expired.

When marketers search for assets, this metadata shows up as a filter in the search panel. So they can type in mortgage and then simply tick the box for approved by legal. This way, they only see images that are not just relevant, but are also cleared for use. What Smart Tags do is make the image discoverable by its actual content, like couple, mortgage, signing papers.

What the metadata filter does, on top of that, is add the compliance control. Together, it means the marketer gets the right image instantly, with the confidence that it's the most up-to-date, legally approved version, which is very important in the financial services and banking domain. Tagging Melanie to review the summary.

Thanks Deepak.

So in our summary and final thoughts, AEM Smart Tags is a powerful solution that directly addresses the critical problem of asset discoverability, with its core service providing powerful generic tagging right out of the box. To truly unlock its potential, custom training is essential for recognizing your business-specific taxonomy, allowing the AI to understand your unique products and brands. No tag backs, Deepak.

Yeah, Melanie.

Now we understand that the training process itself is very straightforward: we simply curate, tag, and then run a workflow. And the payoff is a more efficient digital asset management system, which in turn leads to higher productivity and better content experiences for everyone. So be smart in tagging, and we hope this boosts your confidence in becoming smart in tagging. Thanks for playing along with us.

Amazing. Thank you, Deepak and Melanie. That was a huge roundup of tips, best practices and how-tos. We are going to move into the Q&A round, and there are a lot of great questions already in there. For those of you who are joining us, the screen is gonna change a little bit. Directly under the slides, we will have the ask the presenters area. That is where you can continue to put questions in for our panelists. There will also be some resources for you to take with you, and some surveys along the right side. So let's dive into some of the questions. We have a really, really great list to get us started.

So the first one is, how are tags and metadata passed along with images as they are published? And are these hosted by Dynamic Media? Deepak, I think you wanted to cover this one? Yeah, sure. This is a nice question and a good use case where we leverage Smart Tags. Whatever Smart Tags we added are carried along when we publish those assets to Dynamic Media. The end URL provided by Dynamic Media, that is the CDN URL, also carries the tags we have applied. So in this whole workflow, whatever assets we upload, Dynamic Media serves them as different renditions to touchpoints like mobile and web, and the URL served via the CDN carries this Smart Tags metadata with it to increase searchability. So yes, this works with the Smart Tags workflow and Dynamic Media as well. Amazing. Thanks Deepak. Here's another one. We've trained our instance on Smart Tags, but it has an error rate in identifying humans in images. Is there any advice on this? I can chime in here. I would definitely say take a look at managing your tags and looking at those images as we just reviewed.

When it tags men or women, there are some difficulties, and it may not be able to recognize them accurately. So if that's important in your taxonomy, perhaps use block tags to remove those keywords and manually add the more accurate tags as needed, or use Manage Tags to go through them and check them out. And one more thing: I would say go through the tips we discussed in the slides; if our training set is good, that will help, and as Melanie mentioned, Manage Tags will also be a great addition. Great advice. Thank you. Is there any relation between the image name or smart tag and the prompt used to search for an asset? So this is on the content consumer side. How is the image name or smart tag surfaced? That's a two-part question. You actually have the ability to search by the image name or by the title by just clicking on search and typing that information in. If you're specifically looking for the title to show up as a smart tag, you would have to train your taxonomy and train Smart Tags to be able to identify that, or with the generative AI, the title is also generated.

Excellent. Basically, yeah. Just to add on this, we should make sure this metadata is indexable, whether we want to do a specific mapping to our structure or just leverage what's out of the box. If you have the Smart Tags generated with the proper titles, they will be indexed, and in the omni-channel search we can search for them. So, as Melanie mentioned, proper training and titles are important. The indexing is very important so that everything is easily searchable and discoverable.

Perfect. Thank you.

I'm wondering if the process is different for text-heavy assets like sell sheets or brochures. I'll take that. Smart Tags, when you upload any type of text-based asset, is going to look at the asset and pull out some of the keywords found inside of it. You would notice in a PDF file, for example, that has been uploaded, AEM Smart Tags is going to scan the document and pull in multiple keywords that it sees, assigning a confidence score. And again, if you need to add additional keywords to that, you can manage the tags and add the tags there, or go into your own taxonomy and add the tags through how you have that identified. But it will pull in those keywords from text-based assets.

Excellent. That is great news. All right.

Can Smart Tags be trained in any language, or is it English specific at this time? I will take this question. Currently it's English specific. If you want to do it for a different language, think of a custom implementation at your end, but currently it's only for English.

Okay, thank you. I missed where to actually access the block tag information. Can you confirm where in Assets you can see block tags? So in AEM as a Cloud Service, you would go to what's called the Assets view, which you can reach by going to experience.adobe.com and into your instance. And in there, on the left-hand side, you will see Block Tags available.

Perfect. Thanks, Melanie. Okay, there’s two questions that are almost the same thing. So I’ll read those out. How does smart tagging differ from the newest AI generated tags? Is it recommended to use one or the other or both? So let me take this question. So this newly generated functionality currently is we need to have an agreement. We need to accept it. So internally what we are doing, we are using the custom models, like your custom GPT models to get the data and improve your discretability. So one thing good with it is it is using the new set of models and the titles descriptions will be more better. But in that context, you need to have your permission for training some of the data set may be confidential. And in the other way, currently Adobe uses Adobe Sensei service for producing the smart text, which is also very great model, which is a past model and Adobe is consistently working on the top of it. So these are the two differences. One is internally for smart text, we are using Adobe Sensei service. And for the AI generated metadata, we might be using custom GPT models and it needs your licensing and purchase.

Yeah, I think there's a related question there. We are on Cloud Service, but we don't see the AI-generated title and description. Is there a step we need to take? Yes. If you go to the Experience League URL, you'll see that there's a note there. You do need to contact Adobe as well as sign a Gen AI agreement to have that enabled in your instance.

Thanks, Melanie. All right. Does this smart tagging process apply only to new assets ingested into the DAM? I can take that one. Once you've enabled Smart Tags, all image assets uploaded after that, as Deepak covered, will have Smart Tags applied. If you had existing assets already in your system, you would simply need to select the asset and run a reprocess workflow. That way it will actually generate the Smart Tags, but you do need to make sure you have it turned on for the folder that asset is located in.

Okay, perfect.

Yeah, related to that, and maybe we covered it: does the retraining allow you to start from scratch? This is a question from Tom. I don't know if this hearkens back to it being irreversible, or just applying to a new set of assets, or going back to a previous asset. But the question simply was, does the retraining allow you to start from scratch? Yeah, I think we can remove the set of assets which we don't want, and we can train on a new set of assets. For example, if our business model has completely evolved, then I think we can remove those assets which are already tagged and have a new set of assets with new tags. So in that case, where we just don't want the old data and we want a complete retraining, that is the process we should look for. Excellent. All right, we're gonna hearken back to the Dynamic Media question.

Will those tags be searchable on sites hosted by AEM Sites when the content is delivered by Dynamic Media? Yep, they will be. As I mentioned, the tags are carried forward; it just depends on how you repurpose or use them, whether you use them in Sites as a filter or in a different way, but those tags will be carried forward. And depending on your functionality, they will be shown to the end user.

Excellent. Thank you, Deepak. All right, at my company, we have a custom metadata schema so we can select product names in a metadata field and use it to filter. Is there a benefit to also training Smart Tags to recognize our products and have the product tagged as well? I can take this one.

So it's very common to have custom metadata schemas with your products and your brands. You do want to evaluate whether you actually need to have it trained. If you're currently already tagging your assets, why would you also need Smart Tags to do that as well? However, training Smart Tags to recognize your products is going to be helpful. So some of it would be evaluating your current processes to determine the effectiveness of training Smart Tags to recognize those. If there are key products that are always tagged, perhaps training the Smart Tags would be applicable here. So it can be a little bit of both.

Perfect. Thanks, Melanie. All right, this is a little bit related. What is the best practice to follow? Do you want to manually apply the smart tag workflow at the folder level or as part of a DAM Update Asset workflow? The question is not only around how to do it, but also the performance aspects.

So one thing I want to add on this: if we add the smart tagging step to the DAM Update Asset workflow and process every single asset, it might hit processing performance a bit. For example, there might be unwanted folders which don't need smart tagging, and we don't want to waste processing effort on those. So the better way is to enable it at the folder level, for example at the top of the project folders we want to include, and in the DAM Update Asset workflow disable the option that applies smart tagging by default everywhere. That way, only the content our business actually needs gets processed. So we should be smart with this and, by default, not process everything through the DAM Update Asset workflow.

All right, thank you. So here's another question. Consider that you had tags that had a low confidence score. Is there a bulk way to manage those tags? Two things I want to add. One is that at the config level itself, we can define what confidence score threshold we want to keep for Smart Tags. And secondly, for doing it in bulk, I think we would need to go through Manage Tags manually and delete them. The better way is to manage it through configuration at our end: set a confidence score of, say, 0.8 or 0.75, and only keep Smart Tags above that. That is a better approach than keeping all the tags and manually removing the ones with low confidence.
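For teams that still want to clean up low-confidence tags in bulk after the fact, a small script along these lines is one option. It assumes the predictedTags structure with a confidence property shown earlier; treat it as a sketch and test it on a copy of your content first.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.sling.api.resource.PersistenceException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Minimal sketch: remove predicted Smart Tags below a confidence threshold
 * from a single asset. Assumes the predictedTags structure described earlier;
 * verify it on your instance before running this in bulk.
 */
public class LowConfidenceTagCleaner {

    public void removeBelowThreshold(ResourceResolver resolver, String assetPath,
                                     double threshold) throws PersistenceException {
        Resource predicted = resolver.getResource(
                assetPath + "/jcr:content/metadata/predictedTags");
        if (predicted == null) {
            return;
        }
        // Collect first, then delete, so we don't mutate the tree mid-iteration.
        List<Resource> toRemove = new ArrayList<>();
        for (Resource tag : predicted.getChildren()) {
            Double confidence = tag.getValueMap().get("confidence", Double.class);
            if (confidence != null && confidence < threshold) {
                toRemove.add(tag);
            }
        }
        for (Resource tag : toRemove) {
            resolver.delete(tag); // drop tags the model was unsure about
        }
        resolver.commit();
    }
}
```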

Okay, great suggestion. Great suggestion. All right, the last question that we have so far in the chat is, can we undo the AI smart tagging or smart training already done? Can we have different models for different folders or sections such that only the respective training model is applied? Here also, I want to add a few things. As I mentioned in the previous portion, if the training which we did in the past is all out of context, then we should delete all the old assets and tags and just upload a new set and train them. That is one perspective. And secondly, I think we can train our products along different lines. For example, for e-commerce, we can train it in different ways. V-neck versus crew neck was one of the types; you can also train on the basis of fabric, whether it's linen or cotton. So you can train your products on the basis of different aspects as well.

Excellent. All right. I think we’re through with most of the questions. So I would love to, first of all, thank our speakers today. Melanie and Deepak, you’ve shared a wealth of information and bringing in your actual experiences using these is extremely helpful and insightful. There are a lot of great key takeaways for you guys who attended today to go try this.

And so we really welcome, really encourage all of you to go ahead and try that. As we wrap up, I have a few things to say. We do really want your feedback; along the right side there are a few different surveys. We'd like to understand your comfort level after learning about assets today. So definitely jump into those surveys before you drop. I also want to encourage you to join our Adobe Experience Manager User Groups. Melanie Bartlett actually happens to head up a special Adobe User Group just for AEM Assets. So if you want to keep learning more about AEM Assets and learn from your peers in sessions just like this, that is an option for you. We also have many other Adobe Experience Manager User Groups around the globe focused on different topics. So definitely visit our user group site and sign up. If you sign in, you can join different chapters, and by joining different chapters, you'll be notified about their upcoming events. So it's a great way to jump into lots more learn from your peers sessions. And finally, Adobe Experience League is our learning site for all Adobe Experience Cloud products. If you are visiting experienceleague.adobe.com for learning, we now have a much more personalized experience. If you create a profile there, you can indicate products of interest and your experience and expertise level, and we will give you personalized content and recommendations for learning based on that profile. It will also start learning what you're working on and tee up related or similar topics that other users like you have read and leveraged. So we highly encourage you to take advantage of that more personalized experience on Experience League. And above and beyond all, thanks everyone for joining today. We were thrilled to have such a huge group from around the world here, and we will be posting the slides and recording. So if you want to go back and revisit some of the questions and some of the session topics and content, this will be posted soon. You will get a post-event email with details on how to access all that. So thank you everyone for joining today. We really appreciate our speakers, we couldn't thank our champions enough for all they do for Adobe, and we're thrilled that we had such a great community join us today. Have a great day everybody. Thank you.

What you’ll learn

  • How Smart Tags function within AEM’s metadata framework, and how they support different personas across the content lifecycle
  • How to train and refine them for better accuracy
  • Tag taxonomies, metadata schema setup, and the latest AI-driven capabilities for auto-generating metadata like titles, descriptions, and keywords.

Whether you’re an author looking to improve searchability, a developer syndicating content, or a DAM librarian scaling operations, this session will equip you with actionable insights and real-world use cases to elevate your asset strategy.

You can access the presentation slides here

Unlocking Efficient Asset Discoverability with AI

Discover how Adobe Experience Manager leverages Smart Tags and AI to revolutionize digital asset management:

  • AI-Powered Tagging: Automates metadata creation, reducing manual effort and boosting search accuracy.
  • Custom Training: Tailors tagging to your unique business taxonomy, ensuring relevant and precise asset categorization.
  • Best Practices: Emphasizes quality training data, iterative improvement, and governance for long-term value.
  • Real-World Impact: Demonstrated benefits in e-commerce and finance, from faster product launches to compliance assurance.

Harnessing these strategies can streamline workflows, improve content ROI, and empower teams to find and use assets more effectively.
