Dynamic Media - Adobe Experience Manager Assets Series

Use this five-part webinar series to build your knowledge base and maximize your investment in Adobe Experience Manager Assets. Whether you’re a practitioner who is new to Adobe Experience Manager Assets or you’ve been using it for a while and are looking to brush up on your existing skills, this is the ideal way to get a deep dive into five of the most important areas of the solution. Adobe experts will review the basics and also provide advanced insights that will leave you with actionable next steps you can put into practice immediately.

Transcript

All right, let’s get started. I’m pleased to introduce our presenters for today: Bridget Roman, Senior Product Marketing Manager, AEM Assets, and Joe Pearl, Senior Practice Lead Solutions Consultant. With that, Bridget, you now have the floor. Great. Thank you so much, Anu, and welcome everyone. We’re very happy to have you all here today. I’m seeing some familiar names in our list of attendees today, so welcome back if you’ve been participating in the series. And if this is your first one, welcome. We’re glad to have you here. I just wanted to take a minute to review where we stand with our AEM Assets Skill Builder webinar series. This is number four out of five. So of course, you’re here today to learn from the pro, Joe Pearl. He is going to take the floor and use the vast majority of our remaining 55 minutes together to really take you through everything you need to know about the basics of dynamic media. So welcome to Joe. And then next week, we’ll finish up the series with Greg Klebus, who is our resident expert on Adobe Asset Link. And if you’re not familiar with what Asset Link is, it’s the native connection with Creative Cloud. So this is where users of Photoshop, Illustrator, InDesign, and XD can actually access AEM assets in the DAM through a panel. So be sure to join us for that one. And you are going to receive all of the recordings from each of these sessions. You’ve been receiving them over time. If you’ve registered, feel free to share those with your colleagues. Okay. So I think with that, I’m going to just take a second to talk about what we’re going to cover today. Joe is going to take us through, just high level, what is dynamic media? So if you happen to be new to dynamic media and you’re here today to learn from the very basics, that’s what we’re going to do first: talk about what is dynamic media.
We’re going to go through the key use cases, talk about smart crops, image presets, viewers, the dynamic media architecture, and then Joe is going to dive into a demo. We will do Q&A at the end, but we also have some experts here taking your questions as you load them into the Q&A pods. We’ve got Mark Dean on the line. He’s going to help answer a lot of the questions as we go along, and then we’ll do more questions at the end. So with that, Joe, I’m going to hand it over to you to take it away. Thanks, Bridget. What is dynamic media? All right. So, first of all, let me just start out by saying I’m Joe Pearl. I’ve been with Adobe six years and I have been part of the retail group for the entire time that I’ve been here. I love this stuff. I love assets. I’m a photographer on the side in my, well, I don’t have a lot of spare time, but in what little spare time I have, I am a photographer. So I understand assets. I understand some of the challenges that come about with managing assets, albeit at a smaller scale. Dynamic media is an add-on to AEM assets or sites. It allows you to use a single file and create multiple derived renditions. It works with both images and videos, and there are many benefits. And I do see that there’s a question about dynamic media versus dynamic media classic, and there may be more than one of you who has that question. Dynamic media classic is the rebranded name of Scene7. The underlying infrastructure is exactly the same. We’ve changed the name; we like to keep a theme with our names. The dynamic media classic image and video delivery servers are, in fact, the exact same delivery servers that handle dynamic media. Where it changes is in the management of those assets. Dynamic media, not classic, adds Adobe Experience Manager assets (I will refer to it as AEM going forward), which provides the digital asset management functionality on top of the delivery.
Some of you have answered the poll questions regarding a lot of different renditions, having the need to be able to find them, having to have all of those assets available for multiple screen sizes. And I think we all understand that screen sizes keep changing. With a single file, such as what you see here, that original is almost 4,000 by 2,600 pixels at 300 DPI. But you’re not going to deliver that through a website. That’s just way too big. You may want to create the landing page version of it, a hero version of it, a PDP version. Wherever that image needs to go, it needs to be delivered at the right size. And looking at the screen, traditionally, without dynamic media, you’d be looking at a minimum of four images. Now, that’s four images that you have to manage, which means if this melon on the table needs to be retouched after you’ve created all of these images, well, now you’ve got to retouch it and then generate all of those other renditions. With dynamic media, that’s not the case. You’re working off of a single primary file. With video, we have similar challenges. We need to be able to have renditions that go out in different sizes, different formats, different crops, and so on. When you think about the video, you’re not necessarily going to be able to deliver a full-size video over, believe it or not, a 3G cellular network. And I did actually have a 3G connection this past weekend, so it’s still out there. I was in a remote area and I was camping, and that’s the best I could do. When you’re used to having a high-speed connection in your home office and you’re out remotely, it really changes your experience. Being able to deliver a video to a 3G mobile device so that it looks good is just as important as having a full-size video for a large device with a high-speed network. We’ll talk a little bit more about this as we go.
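The one-file, many-renditions idea Joe describes maps directly onto Dynamic Media’s URL-based delivery: each size is just a different set of query parameters on the same source asset. Here is a minimal sketch; the server, company, and asset names are hypothetical placeholders, while `wid`, `hei`, `fmt`, and `qlt` are standard Image Serving commands.

```python
# Sketch: one primary file, many delivery sizes via URL parameters.
# Server/company/asset names below are hypothetical placeholders.
BASE = "https://s7d1.scene7.com/is/image/MyCompany/melon-table"

def rendition_url(width, height, fmt="jpg", quality=85):
    """Build a delivery URL for one sized rendition of the primary file."""
    return f"{BASE}?wid={width}&hei={height}&fmt={fmt}&qlt={quality}"

# The kinds of renditions described above, all derived from the same source:
hero = rendition_url(1600, 600)
landing = rendition_url(1200, 700)
pdp = rendition_url(500, 700)
thumb = rendition_url(250, 250, fmt="png")
```

Retouching the melon then means updating one source file; every URL above picks up the change, instead of four exported files needing regeneration.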
Smart imaging is a function of dynamic media and dynamic media classic that allows us to determine what the device is, what the browser is, and be able to deliver an asset so that it is best suited for that particular browser and device. For example, with a Chrome browser, that can handle a WebP image in addition to PNG and JPEG, and that WebP is typically going to be smaller than PNG and JPEG, and it doesn’t have the kind of lossy compression that a JPEG image might have, so you get better fidelity on the screen. All of this is built in and it’s all done automatically. So if you’re currently using dynamic media or dynamic media classic, turning on smart imaging will allow you to take advantage of it without changing your images. You start to think about some of the use cases with dynamic media, or let’s say some of the use cases that dynamic media solves. I’ll show you some of this in the demo. But in this particular case, I’ve got an image here of a woman with some jewelry and some really sharp makeup. You have the issue of how do I manage all of this? I was talking to a client the other day who has a particular workflow where at the end of the workflow they’re creating at least six versions of every single asset. Some will go to partners, some will go to digital channels, some will be created for print. That gets to be very challenging. In addition, you want to have a version history. When somebody makes a change to an asset, you want to know what that fundamental change is, and you might need to go back based on what the change was, where it was being made, what happened to it along the way, whether it was approved by a creative director or not. Dynamic media gives you the ability to solve the too many files to manage problem. Video delivery. Now, I did mention a little bit about this before, but you want to have a video delivered to a device so that it doesn’t buffer or stall. 
That requires the ability to step down and step up the video rendition as a user moves, let’s say, from a 5G environment to an LTE environment to a directly connected environment. You want the video to continue to play in its most optimal form for that particular device. With adaptive video renditioning, dynamic media takes care of this for us. Single source of truth. As I said before, I’ve been working for Adobe for six years. I understand digital asset management, and I cannot tell you how many times I go to a client and I hear, well, we’ve got most of our assets on Jim’s computer. Or I’ll hear something like, we have a network share, but we don’t have any security, so anybody and everybody can access it and work with it. Putting Adobe Experience Manager assets or a digital asset management system on top of a dynamic delivery environment that has been proven gives you the opportunity to keep all your assets in one place, provide role-based security and permissions, ensure that only the right people see the right assets, and have a folder hierarchy. That folder hierarchy can look an awful lot like a Windows Explorer or a Mac Finder window, and you can search via metadata. When it comes time to find an asset, marketing departments may be looking for a particular word in the description or a title. Merchandisers may be looking for a SKU. If you’re dealing with finance or media and entertainment, there’s metadata that’s part of those assets that you can use to search. Dynamic media allows you to search via any metadata field, and I do mean any one, including the brand of camera that was used to photograph that particular image. We have smart cropping as part of the solution. I will show this to you as well, and this will work really well when you’ve got subjects such as what you see here. You can create as many smart crops as you would like, and this is all done using our AI-powered Sensei functionality. We take a look at the image.
We try to determine what we believe the subject area is, and we then automatically crop it. Creatives want to spend time doing something other than cropping and then file, save as. You’ll see more of this in the demo, but this is just to introduce the topic. A little bit further on smart crop: we will automatically look for the focal point. We try to determine what that area of interest is. If you don’t like the way that Adobe Sensei has done it, you can easily modify it without going into Photoshop or any other image editing tool. Again, I’ll show this in the demonstration. Additionally, for video, we have smart crop capabilities. The smart crop with video, and I actually have a video where I show this as well, can take a horizontally shot video and fit it into a vertical screen. More of that in the demonstration. When you’re setting up your smart crop, you’re looking at a screen that looks a lot like this, so you don’t have to be a developer to do this. This is a configuration. Like much of the other functionality within AEM dynamic media, this is permissioned. So the people that know how to do it can be permissioned. These could be administrators, but they definitely do not need to be developers. Image presets. There is so much you can do with image presets. You can consider a preset to be a recipe for how you want the image to be displayed and delivered. In this particular screenshot, we’ve got some of the elements about sharpening and about color management. You can create as many image presets as you like. I do suggest utilizing a naming scheme that makes sense, because I’ve gone into different accounts where they have shown me what they have, and the names don’t necessarily make sense. And you’ll see in my demo, I’ve got some names that do not make sense, but I cleaned it up a little bit so you won’t see something that says Demo Tuesday. But there’s no code necessary.
This again is a configuration option and not a customization option. Why do you want to use image presets? I’ve got several specific reasons why you want to use the presets. If you look at this particular screenshot, these images perhaps might be 250 by 400 pixels. We all know that sometimes, depending on the container size on a home page, on a product listing page, on an informational page, or, if you’re in travel and hospitality and you’re talking about different kinds of adventures and experiences, you might find that a certain size image works better than a different size image. What if you wanted to change, in this particular example, a 250 by 400 pixel image to 350 by 500 pixels? Without image presets, you would have somebody go into Photoshop and resize all of the images that you see here to a different size. You then save them out. You then publish them out to the web page. Or you use an image preset. You go into that preset, you make the change once, and every single asset that is using that preset will then take advantage of that change to the preset. And you’re done. Again, I’ll show some of this as we go into the demonstration. All right. So this is another part of the image preset where you can give it its name and you can specify the height and the width. All right. In terms of viewers, this particular example shows the ability to have a viewer preset that is going to allow you to use multiple images. So this can be a mixed media set. I have an illustration of this as well in the demonstration, where you can select the images that you want, put them into a single viewer that you can then publish out to a web page, which then gives your user the ability to navigate through them. And some of these include zoom functionality as well. But this is a built-in viewer. It’s an out-of-the-box viewer. If you need additional functionality, you can create custom viewers. We also have a viewer that allows for mixed media.
So you can have a video as part of your image set. When you deliver it, you can have stills and the video. In terms of 3D, we’re constantly making adjustments and advancements in what we can do with 3D. We now have a 3D viewer built into AEM dynamic media. You can take a 3D model or a 3D rendered scene, put it into Adobe Experience Manager, and then, utilizing our built-in viewer, you can add that to a web page and have that 3D capability available wherever you need to deliver that page. All right. The architecture of dynamic media. I did talk about how it utilizes the same servers as dynamic media classic. So what you see in the cloud in the middle are the delivery servers. They are the same for dynamic media and dynamic media classic. Everything to the right of that, with those assets being delivered through a CDN or to whatever endpoint they need to go, is the same for dynamic media and dynamic media classic. The left side of the screen shows the asset authors or your contributors. They add assets into AEM assets, which is, how many times can you put the word assets on the same slide? I’m the one that built this slide, and I’ve got it one, two, three, four, five, eight times. Anyway, it’s like many creatives. I see a squirrel, and I want to go chase it. You’ve got the assets sitting in the repository, and then they get published to the delivery server. Your source of truth is AEM assets. All right. Let’s switch over to a demo. Anu, thank you so much for anticipating my need. Right now, you should be seeing a browser which has Adobe Experience Manager assets. If this is not the case, somebody let me know. You have a bunch of folders here. I have several that I’ll probably dive into as we go, but I want to start off by pointing out this demo MS2020 folder. There are two tags on this particular folder. One is the basic Smart Crop profile, and one is the video Smart Crop.
I’ve assigned these profiles as part of my folder so that my assets that go into this folder will automatically be processed by those different profiles, depending on whether it’s an image or a video. You also might have seen how my page adjusted when I made it bigger. This is a responsive page, and it is built for a touch screen first. I’m just going to put it back down to a regular size so I know where things are. When I go into my assets folder, you’ll see that I’ve got several videos here. They’re tagged with new. I added these this morning as I was trying to put together an additional piece of the demo. You also see that I have the canning line that you saw. There’s an image set in my canning line. You also see the woman that I showed you as part of the screenshots. There are also some 3D assets, such as this armchair GLB on the left side of the screen. All of these assets are here, including, for example, this one that we saw when we started. If I look at the renditions for this particular asset, I’m going to point out several things. I’ve got my original, which is about 3,900 by 2,600 pixels, and then I’ve got several static images. Wait a minute. This is dynamic media. Why do I have static images? Well, these static images, for the most part, are used throughout AEM as thumbnails, and you can use them elsewhere, but really where the power comes in is with these dynamic assets. We’ve got a 1,200 by 700 pixel version, and you’ll notice that there’s a dark bar on either side. The reason for that is that the proportions of this delivered image do not match the proportions of the original image. Rather than change the image ratio, what dynamic media does is it goes to the largest it can in one dimension and then fills the rest of the space with white. This particular image preset uses a command.
There are about 70 different commands that I can utilize to change that white to a black, or in this case, it’s just a very dark color background, and I do this specifically for demonstration purposes. You’ll see that in this case, I’ve gone to a 500 by 700 also. Same filling. When we get down here to some of these others, we notice that I’ve got smart crop, but not all of them have been processed. When an asset gets ingested into dynamic media, we will process the smart crop. The smart crop will allow us to take a look at that image and determine the subject area. We do this by calling a service. If we’ve added additional smart crops after this asset has been ingested, we cannot determine what that smart crop would be until we’ve reprocessed the asset. And that’s a pretty straightforward task. We have the large, we’ve got the medium, and if I actually go up to the left and click on the smart crop, we’ll see what smart crops have actually been created as I roll over these. Now the ones that are unprocessed don’t have the blue handles, but if I don’t like the way it was cropped, I can adjust it. So I can change the size and the location of my crop, but not the aspect ratio. The aspect ratio is defined by the smart crop settings itself, and I’ve just moved this around and changed the image that will be delivered for this particular square smart crop. All that’s done without creating a brand new rendition or version of this image. So here you can see that I’ve got 10 different images and a swatch. The swatch in this particular instance doesn’t necessarily make a lot of sense, but for clothing and a variety of other use cases, it does make sense. What our AI-powered Sensei functionality does is it looks for an area of consistent image and color density and considers that to be the swatch. Well, like you saw before, if that’s the wrong swatch, you can move it.
When I save this, I’ve just made changes to the renditions of this asset that will be delivered. Let’s go a little bit further. Let’s go in and take a look at viewer presets. Oh, wait, sorry, I want to go to image presets. Let’s go to tools, assets, image presets. Here’s that DMMO medium preset. If I edit this one, you’ll see that there’s where the size got configured. I can set my JPEG quality. I can also set the output of that particular asset. I can go PNG or TIFF depending on what that endpoint needs to be. Under the advanced tab, I can change my color space between RGB, grayscale, and CMYK. Now this almost always brings up a question: RGB and CMYK from the same asset? And the answer is yes. If you change to CMYK, you can change your color profile as well, and you can deliver pretty much whatever you need. I was working with one client who had a need to put all of their assets into AEM in CMYK for print, but they needed to deliver an RGB version to a mobile device. So by utilizing the combination of color space and color profile, we were able to put together exactly what they needed from a single CMYK asset. Now if you look down at the bottom, here you can see BGC, which is background color, and I set that to 111111, which is actually a dark gray. That’s an RGB value. Let’s create a brand new image preset, and I’m going to call this one Demo Tuesday. And I’m going to set this to, let’s go with 1700 pixels wide and 1650 pixels tall. I’m going to bump up my quality a little bit. I’m going to come over to advanced, and I’m going to rotate my image by 45 degrees, and I’m going to add in a background color of, that’s a dark red. Let’s go with a brighter red. Did I get that right? Red, green, blue. No, let’s just go with the purple. Okay. And I’m going to go ahead and save that. And you’ll notice it says unpublished. Now this will automatically go to a published state, and there it goes.
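Under the hood, a preset like the one just created amounts to a bundle of Image Serving commands; pages then reference the preset by name with the `?$PresetName$` URL macro rather than spelling out each command. A sketch, with hypothetical server and asset names (this is illustrative, not the exact string AEM generates):

```python
# Roughly what a "Demo Tuesday"-style preset bundles as raw commands.
# Server/asset names are hypothetical placeholders.
commands = {
    "wid": 1700,      # width in pixels
    "hei": 1650,      # height in pixels
    "qlt": 90,        # output quality
    "rotate": 45,     # rotation in degrees
    "bgc": "800080",  # purple fill behind the rotated image (hex RGB)
}
query = "&".join(f"{key}={value}" for key, value in commands.items())
raw_url = f"https://s7d1.scene7.com/is/image/MyCompany/melon-table?{query}"

# Pages normally reference the preset by name instead, via the URL macro:
preset_url = "https://s7d1.scene7.com/is/image/MyCompany/melon-table?$DemoTuesday$"
```

The macro form is what makes the change-once behavior work: editing the preset definition in AEM updates every page that references it, with no URL changes on the pages themselves.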
Now that I’ve added this new image preset, let’s come back here and come back to my renditions. And it’s not here, but let’s go ahead and just refresh the page. Now you’ll see there’s my Demo Tuesday version. Let’s go ahead and take this URL, and when everything is properly published, I’m actually going to change to a different asset in just a moment. Actually, I think I know why I just did that. This will give me the opportunity to utilize that particular asset with a new preset. Now these dynamic presets are available theoretically as soon as they’re created, whereas the Smart Crop has to be processed. While I’m on the topic of Smart Crop, let’s take a look at this particular video. For this video, I have multiple viewers. I’m going to take the Smart Crop video social viewer and grab the URL. We do provide the embed code if you wanted to add that directly into a webpage, and this will include the player. In this case, let’s just take the viewer URL. I’m going to show you something here. I’m going to paste this in and allow it to start playing. It’s a 30-second video, and you can see that I’ve got the woman on the right side and the flame in the middle. Think about editing this video so that it works well on a tall device such as a smartphone held in a vertical or portrait fashion. Where do you edit this? How do you go ahead and create a pleasing experience? So, I’m going to restart this video after shrinking my browser.

And this is what Smart Crop has decided is the subject area. This, without any human intervention at all, conveys the entire story of that video in a vertical format so that the viewer can see and understand what’s going on. This is 100% AI-driven. Now, if that video is playing and I change the size, it adapts immediately to what can now be delivered through that particular viewport. This is a really cool and useful way of editing videos, saving a lot of time when you need to deliver that kind of an experience. All right. Coming in and talking about my image processing profiles, this is how I defined all of those Smart Crops for any assets that I bring in. This is for still images. You can see that I’ve defined all of the names and the sizes and I’ve turned on the color and image swatch. I can choose different kinds of cropping. I could crop by pixel, but I like showing the Smart Crop because it’s very, very effective. Now, with that particular Smart Crop capability, every single still asset that I drop in will be processed according to the definition that I’ve put in place. So if I come in and just take this one, for example, and drop this asset in, it’ll be uploaded and processed, and all of the metadata within that asset will be extracted. It will process all of the Smart Crops and it will also process the asset so that it can be used with any of those image presets. Now, while that’s processing, let’s go ahead and take a look at setting up my video profiles. One that I’m using is this one, which is my Video Smart Crop. And I’ve got two different Smart Crop ratios. One is 16 by 9 and the other is 9 by 16. I can easily add another one just by clicking Add New and giving it a name; let’s just call this Old Fashioned. And we’ll give this a ratio of 4 by 3. All right, so the ratios that are permissible are 1 by 1, which essentially means you can make it a square, 4 by 3, 4 by 5, 9 by 16, and 16 by 9.
So once I’ve added this in here, any new videos that are added will be processed according to these Smart Crops. Let’s come back here. Should be done by now. It is. I’m into my renditions. And here are all of those Smart Crops. So let’s see how Sensei did. All right, well, this looks pretty good. Some of these don’t make any sense. And I include some of these to show you some of the capabilities that are built in that you may or may not use. But I can actually have any kind of an image ratio that I like. And Sensei did an excellent job with this. And it gave me something that I can use. But maybe this one I want to focus a little bit more on the hands. So I can do that just like that and convey a sense of companionship just with the hands all off of the exact same image. And now I’m done with that. So I can save that. Finished. Done. Well, I did mention something about viewers before. And we do have several viewers. Let’s take a look at the flyout viewer. And this particular one will allow me to mouse over my image and zoom in right within that image but right next to it. Inline gives me the ability to zoom in within the container defined by the delivered image. The zoom light and dark is pretty much the default. So I can zoom in. And as long as I’ve got the image quality in my original image, I can get really great fidelity. And then, of course, I’ve got a zoom vertical where I can zoom in over like that. And it just zooms in the space next to it. When I look at the image sets, remember I said I have an image set here. Let’s take a look at the image set. I’ve got the same zoom, but I also have the light and dark image set, which allows me to just look through and see all of the different images within this set. Now this gives me the opportunity to take this code and put it into a website. It drops in the JavaScript viewer and it understands exactly what I’m building and delivering. All right. 
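Conceptually, the embed code Joe copies for each of these viewers boils down to an out-of-the-box HTML5 viewer page plus the path of the asset or image set to display. A rough sketch of that structure follows; the viewer page name and server path here are hypothetical, and in practice you would use the embed code or URL that AEM generates for the chosen viewer preset.

```python
# Sketch: the shape of a link to an out-of-the-box HTML5 viewer.
# Viewer page name and server path are hypothetical placeholders;
# copy the real embed code/URL from AEM for production use.
def viewer_link(server, company, asset, viewer_page):
    """Compose a viewer page URL that points at one asset or image set."""
    return f"{server}/s7viewers/html5/{viewer_page}?asset={company}/{asset}"

flyout = viewer_link("https://s7d1.scene7.com", "MyCompany",
                     "canning-line-set", "FlyoutViewer.html")
```

Because the viewer is addressed by URL, the same pattern drops into any page, inside or outside of AEM.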
Let me just take a moment to think back and see if there was anything else I wanted to cover here. Okay. I feel like I went really quickly. Oh, search. There’s a big one. And many of you have commented that there’s a challenge with search. And I also have to show 3D. All right. When I want to search for an asset, I want to be able to just type something that makes sense. Perhaps it’s dog. And I get back images that match dog. Now this doesn’t mean that it has dog in the description. It could be dog in a tag. It could be dog in the metadata that’s deeply buried. Well, what about something like Canon? And this will show me all images that were shot with a Canon camera. All right. Well, some of you are thinking, yeah, well, that’s not really meaningful. Well, what if you’re running an organization that sends representatives out to photograph an end cap and they’re going to use an Apple phone, for example? By searching for Apple, I can see which of these were photographed with an Apple device. Now these images might not have been tagged with it, but if we come over here to the camera data, this was ingested along with the asset when it was ingested. Every single element that you see here can be used for searching, in addition to the dynamically created smart tags applied upon asset ingestion. That dog search probably hit on a smart tag. And this again uses Adobe Sensei, or AI, capabilities. So when I’m looking for any kind of an image whatsoever, root, red, abstract concepts work as well.

And we can also look inside of some assets such as PDFs and Word documents.

Now the Sensei does understand, and this is the example of the Sensei that I was bringing up, it does understand what this image is. It’s woman, people, girl, female, Caucasian, so on. But there are cases where these abstract concepts will be tagged appropriately by our smart image search capabilities. All right. Lastly let’s take a look, actually let’s go back here and go to armchair. Armchair. And here is my armchair GLB file. I have a dimensional viewer which will allow me to see what this armchair looks like.

Takes a moment to load. And just like you saw before, okay, not like you saw before, my AEM seems to be stuck.

Well, that’s unfortunate.

Maybe it’s my browser. So I will stop sharing and then reshare. Good browsers. Sometimes they get confused. All right, Anu, can you still hear me? Yes, I can hear you. We can hear you. All right. It appears my computer is stuck. All right. So I will keep on going until my computer comes back or my voice goes away completely, meaning that my computer rebooted itself, which does not happen very often. So shall I move the slides to see if you can advance there? It looks as though someone has shared some feedback that your voice is breaking up. I see that. But you do sound fine to us.

This might not be a bad time to go to questions. Yes, something is going a little left. Here we go. I’m coming back. Technical snafu. And some of you may actually understand what snafu stands for. Okay, Joe, we can see the embed code. Excellent. And you can hear me. Yes, we can hear you. Even better. Okay, so there’s the embed code. And when we go and look at that URL, it will pull up that armchair, and now I can interact with the armchair, which is actually a 3D model stored inside of AEM. I also have quick thumbnail adjustments that I can go to and see what that looks like. Now, this is just the beginning of what we’re able to do with 3D. This is not a webinar about 3D. This just touches on 3D because you can host these 3D viewers inside of AEM. This is an out-of-the-box viewer delivered with the product, and you can utilize this URL outside of AEM. It uses dynamic media to deliver, and you can paste this URL into any web code you like. You can also use it as part of a product listing page, a product detail page, wherever you need to use it. Now, I think I have covered everything that I’ve wanted to cover. Let’s switch back to the presentation. All right. I see a couple of different questions there. One of the questions is, if we have our own products that we want in the 3D viewer, do we have to shoot all the angles? Well, there’s a two-part answer to that. One part is the spin-set viewer that we’ve had for many years. That spin-set viewer is still available, and to use it, yes, you have to shoot 12 to 16 images rotating around the entire circle. So 360 degrees divided by 12 means every 30 degrees you’d have to rotate the product. Make sure that your lighting is set up properly, and then you can shoot your product. Assemble all of those in a spin set, and then you would have the ability to use the spin-set viewer, which rotates around all of those 12 to 16 images.
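The spin-set capture arithmetic above is easy to generalize: n evenly spaced shots around a full rotation means one capture every 360/n degrees.

```python
# Spin-set capture plan: n shots evenly spaced around a full 360-degree
# rotation. 12 shots means one every 30 degrees; 16 means one every 22.5.
def capture_angles(shots):
    step = 360 / shots
    return [round(i * step, 1) for i in range(shots)]

twelve = capture_angles(12)   # 0.0, 30.0, 60.0, ...
sixteen = capture_angles(16)  # 0.0, 22.5, 45.0, ...
```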
What I showed with the armchair is actually not a photograph. It is a 3D model that has been layered with fabric, and you can layer on textures. It’s a 3D file itself that is using our 3D viewer. So the answer is yes, you’d have to have all of the correct angles if you’re shooting a product. If you’ve got the 3D model, you just upload the model into Adobe Experience Manager assets, and then you’re able to deliver it as a 3D element. All right. Next question: what’s the impact on renditions if a file is renamed or moved in AEM, and does updating or retouching a source file impact renditions? Okay. If you rename a file, that will force the URL to change. So republishing will allow you to address that. If you’re changing the location, the actual location inside of AEM will not change the delivered URL for dynamic media delivered files. So the path name is not important. The actual name is. On updating or retouching a source file: I have one client that actually keeps a smart collection of non-retouched assets. And you heard Bridget mention early on about Adobe Asset Link. They use Asset Link inside of Photoshop. It’s a panel that allows them to navigate through the AEM repository. They can see the smart collection of the non-retouched assets. When they retouch those assets, they can flip a flag, or when an asset gets checked back in, the workflow will process it. It will impact renditions, and that’s what you want it to do, because you’ve just made a change to the source. So you do want it to impact renditions, which means a republish. Now if you think back to the architecture I showed you, there is caching. And that caching takes place at several different levels. The default time to live for an asset is 10 hours. You can either wait for the 10 hours to expire, at which point you will utilize the newest rendition, or you can flush out the cache and force it to utilize the new rendition. So I think that answered that one.
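That caching behavior has a concrete timing consequence worth spelling out: a republished rendition may stay invisible to a given viewer until the cached copy expires, unless the cache is flushed. A small sketch, assuming the 10-hour default time to live mentioned above (the timestamps are made up for illustration):

```python
from datetime import datetime, timedelta

# Default time to live mentioned in the answer above.
TTL = timedelta(hours=10)

def visible_at(cached_at):
    """Worst case: a viewer whose request was cached at `cached_at`
    keeps seeing the old rendition until the TTL expires."""
    return cached_at + TTL

# A rendition cached at 9:00 serves stale until 19:00 without a flush.
stale_until = visible_at(datetime(2021, 3, 2, 9, 0))
```

Flushing the cache collapses that window to effectively zero, which is why it is the right move for urgent corrections.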
If you have any questions whatsoever, add them in. I don’t see any other questions at the time being. All right, Bridget, while we’ve got, oh, here we go. We’ve got two more that have come in. Can we use any Lightroom profile or Photoshop action in DAM folders without restriction? I showed you Adobe Experience Manager in the managed service flavor. In the cloud service flavor, you will be able to use Lightroom and Photoshop profiles soon; it is currently in beta. And actually, I’ve got time, so I’m going to pull it up. Let me make sure that it’s up and running. Inside of the cloud service version of Adobe Experience Manager, which I’m just going to make sure I can log in to before I pull it up, you have processing profiles, and this is new functionality.

Nope, it’s sleeping at the moment; I can’t bring it up. So the processing profile will allow you to utilize a Lightroom preset or a Photoshop action. There’s more to come on this as we get further into 2021. I believe it’s going to be announced fully at Summit. But yes, you can use Lightroom profiles and Photoshop actions. As for the clause “without restriction,” not quite. If a Photoshop action requires user input, it’s not going to work, because it is a service-oriented call that processes that action in the background, utilizing microservices, and to have it stop for a user to interact with it would defeat the purpose of processing at scale. Is there a way to populate custom metadata in AEM Assets from Photoshop using Asset Link? Not quite. There is a panel that has been created by a group at Adobe that will give you the opportunity to add custom metadata in Photoshop, Bridge, Lightroom. Sorry, not Lightroom. Photoshop, Bridge, and Illustrator. And Mark, I don’t know if you’ve got that URL handy, but we do have a panel that has been built and is accessible for free download to handle custom metadata. Anu, are we able to send out a URL to the attendees after the fact? Yes, we can upload it to the resource pod on the page. In your resource pod, you can access our previous recordings, and there you’ll find the resources that we are sharing. Okay, I will provide the link to that metadata panel, but I can’t look it up while I’m still answering questions.

Or maybe I can. All right, so we’ll provide that metadata panel. Under which domain will the assets be distributed? Can you use your own domains when you’re on Managed Services? The assets are distributed on the domain that is configured within your AEM. So, if your domain is mycompanyname.scene7.com, that’s where your assets will be distributed. If it’s something like s7d2.scene7.com, then that’s where your assets will be distributed.

Any other questions at this time? Actually, I’m going to share, Anu, can you give me sharing capability? Thank you.

All right, so this is the custom metadata panel. It works for Photoshop, Illustrator, and Bridge, and it allows you to add custom metadata to your assets. And Anu, I’ll give you this URL so that you can add it to the resources.

All right, now, there’s one other piece about that domain. If you work with. Sorry for the interruption. Were you still planning on sharing your screen, or should I? Oh, no, I’m done sharing. We can switch back. All right, so in terms of the domain name where you’re publishing your assets, you can set up a custom domain by working with Adobe Support. That is required for Smart Imaging, and as I mentioned before, Smart Imaging is free. It gives you greater capabilities, and it just needs to be turned on. Working with Adobe Support for Dynamic Media will allow you to create that custom domain. All right, Bridget, I think we are back to you, as I don’t see any other questions. Great. Thank you so much, Joe. Awesome presentation. Really appreciate it, and thanks, everyone, for your interesting and thoughtful questions. So before we sign off, just a couple of things. I wanted to put in a plug for Adobe Summit, in case not everyone is aware that Adobe Summit is happening in April. It’s scheduled for April 27th and 28th. It is, of course, all virtual. Normally we would be in Las Vegas, but for the past couple of years it has happened virtually, and it is free. So be sure to click that link down in the resources pod. Link number one takes you to the landing page for Adobe Summit, and we have some really great sessions in there, specifically for AEM Assets and Dynamic Media. We have a handful of sessions you definitely want to check out. If you want more information on Dynamic Media, an even deeper dive than what we did today, under the training workshops we’ve got a session called Deliver Intelligent Image Crops and Swatches with Dynamic Media. But there’s a whole host of other topics around metadata, content velocity, and all kinds of great customer stories, so be sure to check them out.
And also make sure, as you’re looking at that resources pod, to check out all the very interesting assets on the Dynamic Media story here: all the different use cases and how to kickstart your strategy. There’s a whole blog series where you can go and learn more, plus information on the custom metadata panel.

Automate the output of assets for all channels and screens with Dynamic Media.

Resources

* Dynamic Media Videos
* Rich Media Strategy Kickstart Guide
* Rich Media Strategy Image Preset Guide
* Image is Everything Blog Series

Series Recordings
