AEM Masterclass: Asset Workflows, Permissions & Integration
In this session, Admins and DAM Librarians will learn how to use AEM asset workflows efficiently, how to understand and apply permissions and access control within the DAM, and how to approach different integration strategies with the DAM.
- Efficient and correct usage of AEM asset workflows (including review, approval, and custom automation)
- Understanding and applying permissions and access control within DAM
- Capabilities of DAM, Best Practices, and Integration Strategies with DAM
Hello everyone, welcome to the Skill Exchange session on the Asset Masterclass. Today we will discuss the key pillars that truly unlock the efficiency and control of your digital asset management strategy. My name is Deepak Khitaawat, and I work as a Principal Software Engineer at Palo Alto Networks.
Here is a brief introduction. I have been privileged to be part of the Adobe community for a long time, and I am currently an AEM Champion.
Here is the brief agenda of what we will cover today.
We will first discuss the power of assets in the digital landscape.
Then we will move into the core of automation: workflows.
Then we will cover access and permissions.
Next we will connect the dots by discussing integration strategies.
Finally, we will cover best practices and key takeaways from this session.
Then we will open the floor for Q&A.

Assets are not just a simple place to dump your images and videos. Think of them as a central hub, a place to manage all your digital content and a single source of truth for it.
The entire lifecycle of an asset, from ingestion (upload) through review, approval, archiving, and delivery, is managed in Assets. It ensures that every team member and every campaign uses the same version of an asset, which improves brand consistency and reusability. It lets different teams collaborate on assets in a single unified platform, removing the chaos and confusion of email threads. It improves efficiency. It ensures the right renditions are available for web and mobile, enabling multi-channel delivery. It leverages smart tags via intelligent tagging and provides full-text and metadata search. And it can hold millions of assets in one place and render them to users in real time, without performance lag.

Now we will discuss workflows. Workflows are the core of automation. So what are they? They are automated sequences of steps to process and manage assets. Why are workflows so important for assets? They help automate repetitive tasks, for example sending notifications, generating renditions, or extracting metadata.
They enforce business rules, so compliance is maintained through workflows. And since all assets follow a predefined path, workflows enhance asset quality and keep it consistent.
This automation reduces time to market for campaigns, which is a real-world requirement.
Manual effort is reduced, freeing up our team members' time for strategy. Now, in this slide, we will walk through an example scenario in which a new marketing image is created. The designer uploads an initial draft to a particular folder, and the marketing manager reviews it once they receive a notification.
Once feedback is left, the designer makes the appropriate changes and the asset is routed to the concerned person, such as a brand director. Once approved, the asset is automatically moved to a different folder. It can then be processed for delivery as the business requires, and published downstream to a website or to social channels. And once the campaign is finished, the asset can be deleted or moved to an archive folder.

There are some out-of-the-box workflows in the DAM I want to cover. I will start with the most important and fundamental one, the DAM Update Asset workflow. It is triggered whenever an asset is uploaded or updated. Its key steps include rendition generation for different touchpoints such as web and mobile, metadata extraction, and smart tag processing. It essentially makes your assets ready for multi-channel use. One difference in AEM as a Cloud Service is that the heavy lifting of asset processing, that is, metadata extraction and rendition generation, is offloaded to the Asset Compute microservices.
This helps with performance and scalability. In Cloud Service, DAM Update Asset is a transient workflow: its instances are not persisted in the JCR repository, which provides superior scalability. Here is the flow in Cloud Service whenever an asset is uploaded: the binary is uploaded to cloud storage, and requests such as metadata extraction and rendition generation are handled by the Asset Compute microservices. Once that is done, AEM is notified and updates the asset's references with the new renditions.

Some other workflows I want to discuss. One is DAM Metadata Writeback, which keeps the asset metadata consistent with the original file: whenever the metadata changes, this workflow writes the changes back to the binary.
The Dynamic Media Process Asset workflow is a very powerful one. It handles different image sets, such as spin sets and 360-degree images, and takes care of video encoding: when a video is uploaded, it is encoded at different bit rates and resolutions and served to the user adaptively, reducing lag. Another important workflow I want to discuss is Request for Activation. It takes care of two steps: first the asset is sent to the concerned group for review, and once reviewed, it is published to live.

These out-of-the-box workflows are a very robust foundation, but in most cases we should extend or optimize them per business needs. For customizing workflows, the workflow console is very important. It can be found under Tools > Workflow > Models, where you can create your own models and customize them using the out-of-the-box components provided by Assets. Some core workflow components I want to discuss.
First is the process step. Custom Java code can be plugged in by implementing the WorkflowProcess interface, or ECMAScript can be used, to handle complex logic. For example, an integration with an external system can be achieved with a process step, as shown in the sketch below.
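For reference, here is a minimal sketch of such a process step using the Granite workflow API; the component label, class name, and the commented-out external call are hypothetical placeholders:

```java
import javax.jcr.RepositoryException;
import javax.jcr.Session;

import org.osgi.service.component.annotations.Component;

import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;

@Component(service = WorkflowProcess.class,
        property = {"process.label=Notify External System"})
public class NotifyExternalSystemProcess implements WorkflowProcess {

    @Override
    public void execute(WorkItem workItem, WorkflowSession wfSession, MetaDataMap args)
            throws WorkflowException {
        // The payload is the asset path the workflow was triggered on.
        String payloadPath = workItem.getWorkflowData().getPayload().toString();
        try {
            Session session = wfSession.adaptTo(Session.class);
            if (session == null || !session.nodeExists(payloadPath)) {
                throw new WorkflowException("Payload not found: " + payloadPath);
            }
            // Hypothetical integration call, e.g. an HTTP client wired in elsewhere:
            // externalClient.notifyAssetUpdated(payloadPath);
        } catch (RepositoryException e) {
            throw new WorkflowException("Failed to read payload " + payloadPath, e);
        }
    }
}
```

Once deployed, a class like this shows up in the process step's dropdown in the model editor under its process.label.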
Next is the participant step. If there is a review task, it is assigned to a specific user or group with the participant step, and only after approval does the workflow move to the next step. Another important step is the dynamic participant step, where the group to assign the task to is computed at runtime. For example, if an asset belongs to the German domain, the task is assigned to the German manager group, and if the asset belongs to the English or US site, it is assigned to the US manager group, as in the sketch below.
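As an illustration of that locale-based routing, here is a minimal sketch of a dynamic participant chooser using the Granite ParticipantStepChooser interface; the folder convention and group IDs are hypothetical:

```java
import org.osgi.service.component.annotations.Component;

import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.ParticipantStepChooser;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.metadata.MetaDataMap;

@Component(service = ParticipantStepChooser.class,
        property = {"chooser.label=Locale Based Approver Chooser"})
public class LocaleApproverChooser implements ParticipantStepChooser {

    @Override
    public String getParticipant(WorkItem workItem, WorkflowSession wfSession, MetaDataMap args)
            throws WorkflowException {
        String payloadPath = workItem.getWorkflowData().getPayload().toString();
        // Route by folder convention; both group IDs are hypothetical examples.
        if (payloadPath.startsWith("/content/dam/de/")) {
            return "german-manager-group";
        }
        return "us-manager-group";
    }
}
```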
OR and AND splits are very helpful. An OR split is used for conditional branching: for example, after an asset is sent for approval, it is either published or sent back for changes. An AND split is helpful for parallel processing: for example, before an asset is published to live, we should make sure both that the SEO metadata is proper and that the content and imagery are proper. Only when both steps are complete does the workflow proceed to the next step.
Another important step I want to mention is the container step. If our workflow is very complex, a group of related steps can be moved into a separate workflow, and that workflow can be invoked as a container step. This helps to clean up an already complex workflow.
There are a lot of real-world customization examples. Some important ones I have mentioned here: applying dynamic watermarking, integrating notifications with Slack or Teams whenever an asset is sent for review, and pushing metadata to downstream systems like a PIM. I also want to discuss some advanced workflows that are event-driven. Whenever there are changes to JCR properties, for example a node is created or updated, an asset is moved to a different folder, or a replication event is triggered, workflow launchers can be initiated. A workflow launcher internally calls a model, which takes care of the appropriate steps. For more advanced cases, we can use custom event listeners and start workflows programmatically through the workflow API, as in the sketch below. I have also shared screenshots of the different launcher types: the path is essentially the payload, the asset path on which the launcher is triggered, based on events such as created, deleted, replicated, or moved.
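Here is a minimal sketch of starting a workflow programmatically from an event listener, assuming a Sling ResourceChangeListener, a service user mapping named workflow-service, and a custom model path; all three are hypothetical placeholders:

```java
import java.util.List;

import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.api.resource.observation.ResourceChange;
import org.apache.sling.api.resource.observation.ResourceChangeListener;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.model.WorkflowModel;

@Component(service = ResourceChangeListener.class, property = {
        ResourceChangeListener.PATHS + "=/content/dam/campaigns", // watched folder (example)
        ResourceChangeListener.CHANGES + "=ADDED"
})
public class CampaignAssetListener implements ResourceChangeListener {

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public void onChange(List<ResourceChange> changes) {
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(
                java.util.Collections.<String, Object>singletonMap(
                        ResourceResolverFactory.SUBSERVICE, "workflow-service"))) {
            WorkflowSession wfSession = resolver.adaptTo(WorkflowSession.class);
            // Hypothetical custom model path under /var/workflow/models.
            WorkflowModel model = wfSession.getModel("/var/workflow/models/custom-asset-review");
            for (ResourceChange change : changes) {
                wfSession.startWorkflow(model,
                        wfSession.newWorkflowData("JCR_PATH", change.getPath()));
            }
        } catch (Exception e) {
            // Listeners must not throw; real code would log and move on.
        }
    }
}
```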
Now I also want to discuss some workflow best practices we should follow. First, always optimize performance. A workflow is a sequence of many steps, and some steps can take a long time, so we should use asynchronous execution so that long-running work happens on a separate thread and does not block the workflow engine. Since CPU and memory are precious, we should keep our custom Java code lightweight, and we should not traverse the whole repository to find an asset: use JCR-SQL2 or Query Builder instead of traversals, as in the sketch below. We should also make sure that only the needed renditions are processed, to optimize asset processing.
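Here is a minimal JCR-SQL2 sketch of a targeted lookup instead of a tree traversal; the folder path, the 50-result cap, and the helper name are illustrative, and an Oak index on the queried node type should exist for this to stay fast at scale:

```java
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class Sql2AssetLookup {

    // Finds assets under one specific folder only, instead of walking the repository.
    static QueryResult findAssetsInFolder(Session session, String folder)
            throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        // Note: real code should validate 'folder' before concatenating it in.
        Query q = qm.createQuery(
                "SELECT * FROM [dam:Asset] AS a WHERE ISDESCENDANTNODE(a, '" + folder + "')",
                Query.JCR_SQL2);
        q.setLimit(50); // never fetch unbounded result sets inside a workflow step
        return q.execute();
    }
}
```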
We should make sure that our workflow models are versioned. The workflow model definitions live under /conf or /var, and we should push them to Git so that changes are tracked and the team can see the history of when a model was updated or modified.
We should always be resilient with error handling. External systems may sometimes be down, so we should always implement retry logic. We should make sure our logs are robust so issues can be easily debugged by the team. We should always have fallback mechanisms in place so that failures can be handled at our end. And finally, we should throw a WorkflowException on failure, so the workflow is clearly marked as failed and a notification can be sent to a group, or other logic applied as needed, as sketched below.
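Here is a minimal retry sketch for calling an external system from a step, assuming a hypothetical ExternalClient interface; throwing WorkflowException at the end is what marks the step as failed:

```java
import java.io.IOException;

import com.adobe.granite.workflow.WorkflowException;

public class RetryingPush {

    /** Hypothetical client for the downstream system. */
    interface ExternalClient {
        void push(String assetPath) throws IOException;
    }

    static void pushWithRetry(ExternalClient client, String payloadPath)
            throws WorkflowException {
        final int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                client.push(payloadPath);
                return; // success, no retry needed
            } catch (IOException e) {
                if (attempt == maxAttempts) {
                    // Failing the step makes the error visible in the workflow
                    // console and lets notification logic kick in.
                    throw new WorkflowException("Push failed for " + payloadPath, e);
                }
                // Otherwise fall through and retry; real code would back off here.
            }
        }
    }
}
```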
Now we will discuss one of the important aspects of governance: how to control access, which is the foundation of asset security. Why are permissions so important for the DAM? First, they safeguard our assets so that confidential assets can be accessed only by the appropriate users and groups.
They make sure that only approved assets are published; for example, an outdated logo will not be published because it will not be accessible. They help enforce regulatory compliance with rules such as GDPR and other privacy laws. And they prevent unauthorized deletions and changes, since the right users have the right permissions.

What is the permission framework behind this? It is based on the Jackrabbit access control list (ACL) mechanism, which offers granular control but requires a good understanding of it. It is managed through users, groups, and privileges, which we will discuss in the next slide.

So now let us discuss the key building blocks of asset security. First are users and groups. Users are simple: the individual accounts that log into our AEM Assets instance. What are groups? Groups are collections of users; they can be either system groups or custom groups created per need and department. We should always make a note to assign permissions to groups, not individuals: when a new user arrives we add them to the group, and when someone leaves we remove them. What are privileges? Privileges are the permissions assigned to those users and groups, defining what they can do: read, write, add child nodes, remove nodes, manage versions, replicate. These are some of the privileges for users and groups.
Most important are access control lists, which hold the permissions being granted. What are they? They are rules applied at a specific path; that path can be a folder or an asset itself. Internally they contain access control entries (ACEs), which grant or deny privileges to users or groups. What a principal can access, read, or write is decided by the ACEs that make up the ACL. A few points to note: permissions are always inherited down the content tree, and a deny rule takes precedence over an allow rule, which we will discuss in the further slides.

Here is the permissions UI I wanted to explain. For the group dam-users, you will see the ACEs listed, and on the right side you can add a new path and grant privileges such as jcr:read; the type also indicates whether the entry is allow or deny. How can we access this? We can go to Tools > Security > Users and Groups, where we can create custom groups or update the ACEs for existing groups. Permissions can also be managed in code, as in the sketch below.
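Here is a minimal sketch of setting an ACE programmatically with the Jackrabbit AccessControlUtils helper; the group ID and folder path are examples, and real code would run under a properly authorized session:

```java
import java.security.Principal;

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.security.Privilege;

import org.apache.jackrabbit.api.JackrabbitSession;
import org.apache.jackrabbit.commons.jackrabbit.authorization.AccessControlUtils;

public class DamPermissions {

    // Grants read-only access on a DAM folder to a group.
    static void grantReadOnly(Session session) throws RepositoryException {
        Principal group = ((JackrabbitSession) session)
                .getPrincipalManager().getPrincipal("external-agency"); // example group
        AccessControlUtils.addAccessControlEntry(
                session,
                "/content/dam/brand-confidential", // example folder path
                group,
                new String[]{Privilege.JCR_READ},  // read privilege only
                true);                             // true = allow, false = deny
        session.save();
    }
}
```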
Setting permissions at the asset folder level is very important, and here is how we can do it. Go to the particular folder, open the Properties tab, and from there open the Permissions tab, as shown in the screenshot on the right-hand side. There we can add a user or a group and assign a role: owner, editor, or viewer. This helps us manage our assets. And as we discussed, permissions are inherited down the content tree, so a parent folder's permissions are inherited by all its child folders and assets.
The permission evaluation logic works like this: deny always takes precedence over allow, and more specific paths always take precedence over more general paths. For example, if permissions are set on the parent folder /content/dam, an entry on a child path like /content/dam/a.jpg takes precedence over the parent, and its permissions are the ones applied.
We should always consider the order of access control entries in our permission design, and make sure the order is correct, since for the same set of permissions the first match wins. Now we will discuss some core principles for permissions. The first and most important is the principle of least privilege: if a user needs read access, just give them read, not write. We should always use group-based permissions; it keeps things simple: if someone is new, we add them to the group, and if we want to remove someone, we can do so easily, which greatly simplifies our audit process. And we should keep permissions simple rather than complex. Since by default all users and groups are denied access in Assets, we should selectively grant allow-based permissions, and use deny only sparingly, when truly needed.
Use a deny entry only when a very specific group or user must be excluded; otherwise prefer allow. From an operational point of view, there are some best practices we should take note of. We should audit our permissions regularly, removing any permissions that are no longer needed. We should make sure all our lower environments have the same permissions, so changes can be properly tested and issues are not carried forward to the higher environments. And we should document our permissions; that is very helpful for the team, and new team members can easily understand the setup.
We should always make sure our permissions are replicated to the publish instances, since those serve the content downstream to platforms like websites and social channels. Now that we have discussed workflows and permissions, we come to integration, which is the most important point.
So think of your DAM as a central hub. It is not a silo, it is a connector, and it should be integrated with your broader marketing and technology stack.
Why is integration so essential? First, all assets stay aligned across all systems in real time, which enables a consistent brand experience, since the same visuals are delivered to each and every channel. Content delivery becomes very fast. We can enrich our already great metadata through integrations with other systems, such as a PIM, a CRM, or AI services. It also ensures workflows are optimized across other systems, like a CMS, a PIM, or commerce platforms.

Integration is possible with a large number of systems; on this slide I have mentioned some of the important ones we use in our projects. With a PIM, we can sync product images and descriptions. We can use a CRM for personalized asset delivery, and an ERP to enrich assets with business data. A marketing automation platform integration, such as Marketo, can feed campaign-ready assets directly. We can integrate with a CDP to serve customized assets based on a user's past behavior, and with a CDN to deliver assets very fast. And we can integrate with Creative Cloud applications like Photoshop.

There are several integration patterns for Assets; I will discuss the three or four important ones we use. First is AEM as the master DAM: AEM is the single source of truth and the authoritative source for all your content, and all assets flow from AEM downstream to the web and social channels. This is the pattern for new projects, and for large enterprises where AEM is the primary content hub. Then there are legacy projects, very old projects that did not start on AEM, which can use AEM as a consumer DAM: assets live in that legacy DAM and are gradually ingested into AEM, and the AEM assets are then served downstream to the website and other channels. In this way they slowly move from the external systems into AEM.
The other important model, used in most projects, is the hybrid model, where some assets are managed by AEM and some by external systems. And then there is delivery optimization, where a CDN is very helpful. If you have Dynamic Media, it takes care of rendering those assets; otherwise we can use a CDN like Akamai or Cloudflare to make sure assets are served to users at very high speed. For example, if a user in India requests an asset and the origin server is in the US, an edge server near India should serve that content. Now, having covered the integration patterns, I want to discuss some integration use cases.
One is syncing content from other platforms into AEM. For example, from a PIM we can pull product images and SKU details and use a custom workflow step internally so that AEM assets are linked to the PIM metadata.
We can also serve AEM assets to different platforms like Marketo or social channels, so that all approved assets are available for marketing campaigns.
We should also integrate with Dynamic Media and a CDN, as I mentioned previously.
We should also integrate with creative tools like Photoshop and Illustrator, so a photographer can make updates directly against AEM, check the work in from Photoshop, and let teams leverage those assets. Now, beyond integration, there is some governance we should incorporate into our asset projects. We should regularly troubleshoot and maintain operational visibility: see which workflows are active and which have completed, monitor the logs regularly, and always watch queue health. We should see which workflows are stuck or slow so we can optimize them. We should use JCR-SQL2 and Query Builder queries instead of traversals, and use the Sling console for diagnostics.
We should make sure a proper asset lifecycle is implemented at our end through the needed workflows and processes. Folders should follow a proper naming convention, and asset usage should be reviewed: if an asset is redundant, we should get rid of it. We should always make sure performance tuning is done, using Lucene (Oak) indexes wherever we need faster asset search. Our infrastructure, CPU and memory, should be adequately sized, and we should always leverage the Dispatcher and a CDN for high-speed delivery. Now I want to mention some key takeaways from the session.
First, workflows, the engine of automation, are very powerful, and we should leverage them in our asset projects.
We should make sure proper governance is in place by applying proper permissions. And we should integrate Assets with the whole available digital ecosystem to unlock its maximum potential.
We should do regular monitoring and troubleshooting, and keep optimizing our current AEM Assets implementation so it becomes more efficient and performant over time. We should always establish clear policies for asset management and security, since governance helps and is very useful.
Now we will open the floor for Q&A. Thank you so much Deepak for that presentation. I love how you made some complex and technical topics feel so manageable.
Thanks, Will, for having me in the session. It was very nice. I'm thankful to you, to Adobe, and to my team at Palo Alto Networks for encouraging me to do this Skill Exchange session.
Anytime, of course. So with that, we know the drill. Let's take some questions; make sure to drop yours in the chat if you haven't already. We're going to get started with a question from Leanne, who is asking: how does the workflow notification come through to the approver? Is it coming through via email or some other channel?

Sure. Once we configure a participant step, we assign the user or group the notification should go to. Normally it goes to the inbox: in the AEM console, they can go to the Inbox and check. Getting an email is also good practice; for that, we separately need to configure the mail service, the basic email service, and make sure our email notifications are in place so the email reaches the concerned users or groups and they can take action. A sample mail service configuration is sketched below. But if email is not configured, notifications go to the AEM Inbox by default; approvers can open the bell notifications, review the item, and if it looks fine, approve it so the workflow moves to the next step, or act as the configuration dictates.
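For reference, the mail service is typically configured through the Day CQ Mail Service OSGi configuration, e.g. a file named com.day.cq.mailer.DefaultMailService.cfg.json. Here is a minimal sketch with placeholder values; the $[secret:...] syntax applies to Cloud Service environment secrets:

```json
{
  "smtp.host": "smtp.example.com",
  "smtp.port": 465,
  "smtp.user": "$[secret:EMAIL_USERNAME]",
  "smtp.password": "$[secret:EMAIL_PASSWORD]",
  "smtp.ssl": true,
  "from.address": "noreply@example.com"
}
```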
Gotcha. Okay, thank you.
Next up, we have a question from someone using AEM Assets on-prem 6.5.
And this person is asking what can be done when assets get stuck in a workflow? They're not sure why that occurs, and is it possible to set up or track a clear failed status? For more context, their use case is that multiple uploads share the same SHA-1 hash; they're not duplicates, but they sometimes end up in a failed state.
Anything you can share on that note? Yeah, that’s a nice question and it can happen sometimes.
So sometimes our workflow assets can get stuck. The recommended steps: go to the workflow instances console, check for workflows in the Running or Suspended state, and click into them. Check the history and logs to see where they are failing, then either terminate or suspend them. If we can see the proper reason why it failed, we can reprocess the workflow: run it again on that asset, and it should be fine. If we trace the logs and see it failing consistently, it might be a code issue, a permission-level issue, or a repository access issue, and that should be rectified depending on the use case you mentioned. We can discuss this in more detail during our post-Skill Exchange office hours. But it is almost certainly related to some specific functionality on your end and can be debugged and fixed.
And for the initial question: go to the workflow instances console, check all the workflows in the Running and Suspended states, and take a look at your logs and take the appropriate actions.
Perfect, thank you. That series of troubleshooting steps you detailed is super helpful, so I appreciate that. And a great shout-out: if anything is extra technical or requires more nuance or context, those follow-up sessions in September that we've been dropping links for throughout the day are a great time to dig deeper into some of these questions. Another question: is it a best practice to maintain all permissions as code artifacts in OSGi config (.cfg) files? That's a great question. In previous versions of AEM, we handled permissions with the package manager.
With the newer versions of AEM, on-prem and Cloud Service, the recommended best practice is to manage them with repoinit scripts. That is recommended over the OSGi configuration approach you mentioned.
And repoinit scripts help in one other way: they are the same across your environments. You can deploy the same scripts to your lower environments, like int, as well as to the higher environments, including production.
In the past we also handled this with the package manager, which is fine too. But since repoinit is repository-level configuration and more granular, the recommended way is to keep the base permissions in your repoinit scripts; a minimal example follows below. Gotcha, gotcha. Thank you.
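For reference, here is a minimal repoinit sketch, deployed through an OSGi factory configuration for org.apache.sling.jcr.repoinit.RepositoryInitializer; the group name and paths are examples:

```
create group dam-external-agency

set ACL for dam-external-agency
    allow jcr:read on /content/dam/brand
    deny jcr:read on /content/dam/brand/confidential
end
```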
A few more questions coming in. Michael is wondering if you have anything you can share about how you tune your queries. Sure, that's a nice question. We should make sure we have proper indexes in place so our queries are fine-tuned and we don't traverse the whole DAM. For example, whenever we implement custom Java code in a process step or a participant step, make sure the queries, whether Query Builder or JCR-SQL2, are efficient: don't traverse all the assets, target only the needed nodes, and make sure indexes are added so results are retrieved faster. And in AEM workflows there is one good lever: the workflow payload. You should work from the payload rather than traversing the whole repository or all assets, because the payload gives you the exact path. For example, if the workflow is triggered on asset A, you get asset A's path in the payload, and in the backend you can compute from there instead of scanning the whole repository, as in the sketch below. And again, just make sure the proper indexes are added. Yeah, that's a nice question.
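Here is a minimal Query Builder sketch of a payload-scoped query along those lines; the status property, its value, and the helper name are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

import javax.jcr.Session;

import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.SearchResult;

public class TargetedAssetQuery {

    // Scopes the search to the payload's folder instead of all of /content/dam.
    static SearchResult findPendingAssets(QueryBuilder builder, Session session,
                                          String payloadFolder) {
        Map<String, String> params = new HashMap<>();
        params.put("path", payloadFolder);                         // e.g. payload's parent folder
        params.put("type", "dam:Asset");
        params.put("property", "jcr:content/metadata/dam:status"); // hypothetical property
        params.put("property.value", "pending");
        params.put("p.limit", "50");                               // cap the result set

        Query query = builder.createQuery(PredicateGroup.create(params), session);
        return query.getResult();
    }
}
```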
Gotcha, gotcha. Thank you. Another great question here from Eric: Deepak, how do you modify rendition pixel sizes on Cloud Service, if you're familiar with that? And as a follow-up: if there are multiple renditions, is it possible to download just one type of rendition, or does one need to download the full rendition pack for multiple rendition sizes?

Sure. I have not worked much on Cloud Service; we have explored it mainly via a sandbox. Currently I'm working on-prem, but I've explored the cloud sandbox and know the differences between how cloud and on-prem handle this. In our workflow models there is a step where we can define the renditions. For example, if you are using a customized DAM Update Asset workflow, or an out-of-the-box workflow that you are going to extend and customize, you can configure the renditions in the appropriate step. If you have Dynamic Media, you can configure the renditions in the Dynamic Media step, or if you just want a thumbnail rendition, you can update that step with whatever rendition size you want. And if you want to rerun it after updating your workflow model, you can simply reprocess your assets and it will take care of it. On-prem, we basically used Scene7 for Dynamic Media assets: if you have the Scene7 configuration set, AEM pushes the assets to Scene7 after applying the workflow, so whatever dynamic renditions you configure in the workflow model step take effect. So it is possible, and in my post-session office hours I can also show the configuration and detailed steps.

Perfect. Yeah, I love that use case. That sounds great. I am seeing another question that I really like from Priya, who is asking about the right way to set up groups in Assets. For creating groups, should she create them directly in Adobe IMS, or should she be leveraging Active Directory, creating them there and having them synced to IMS and AEM?

Yeah, I think that's a great question, and the answer has evolved across AEM versions. In past versions, on AEM on-prem, we basically set this up in the console itself: we created the new groups and exported them with the package manager. With the newer versions of AEM, on-prem and cloud, as I mentioned, we handle the initial permissions with repoinit. You can also configure groups in IMS and have them sync to Cloud Service, or someone with admin-level access on cloud can manage them directly. I think both approaches are feasible; I have also heard of many teams using IMS with group sync to cloud, and that is a fine way, especially when a super admin manages it. Either way, we should make sure our repoinit scripts are properly configured and pushed to the code base.
So that's a fine question, and I think I can cover it in more detail in the post-session Champion office hours as well, with a detailed use case. Awesome. Yeah, I love that.
Another great question from Rajesh here. He’s wondering how he can better troubleshoot asset ingestion issues.
So in your session, Deepak, you mentioned the Asset Compute service used in Cloud Service for processing. So, one, what's your advice for troubleshooting asset ingestion issues, and two, how are you monitoring that over time? Sure. It also depends on how we are doing asset ingestion. In the traditional way, we can bulk-select assets and upload them to a particular folder, and with the advancement of higher AEM versions and cloud, we can also do it via Azure: keep the assets in Azure and ingest them into AEM. One thing I will always recommend: do asset ingestion in batches, not in one big bulk.
And also watch what you actually need. For example, if something isn't needed in your business context, say you don't need the metadata writeback functionality, or you don't need a large number of metadata fields or renditions for your business use case, just extend the workflow and omit those steps. If your business case says you don't need them, it is better and advisable to drop them, and then do your ingestion in batches, not all assets at once, whether you upload directly or import via Azure into AEM. It is very doable; I think most organizations are doing it this way. Asset ingestion in batches is fine and should not be an issue. We should monitor, and we should also be clear about our business priorities; as I mentioned, unnecessary steps should be omitted.
Gotcha. And following up, speaking of workflows, a follow-up question to that, specifically in Managed Services, if you're familiar: Priya is asking whether it is possible to offload workflow processing to a different server in Managed Services. Yeah, I think that would be done with customization only. In cloud, as I mentioned, processing is offloaded to the Asset Compute microservices. On AEM on-prem, if you have custom logic, you can override or extend the out-of-the-box workflow and handle all the logic yourself in the backend, the way you want. We could also look at integrating with APIs and check with Adobe whether something like the Compute microservice can be made available as a package, or how we could leverage it. But it's a good use case, how to improve workflow performance on-prem. Yeah, that's a nice perspective.
Excellent. I think we'll have time for maybe one or two more, but I'm curious, Deepak, specifically on workflows: is there any best practice you can share about optimizing the out-of-the-box workflows? Yeah, I covered some steps in the earlier questions. We should understand our business need: if we don't need extra renditions or metadata writeback, we should omit those steps. If we don't have Scene7, for example, we should just remove that step, and the same with smart tags: if we need smart tagging, configure that model and use it, and if we don't, just remove it. So the first thing is: always optimize your workflow for your business need and context, and preferably clone or extend the out-of-the-box workflow rather than modifying it directly. Second, I would watch my launchers. Sometimes, on any JCR event like a modification, the out-of-the-box workflow launchers can get triggered, which creates chaos and unnecessary processing. So monitor your launchers, keep only the needed ones, and see which workflows they trigger. Third, I would make sure my workflow code is committed to the code base so changes are not lost when moving from a lower environment to a higher environment; with higher AEM versions the configuration is available under /conf, so it can be pushed to the higher environments. And fourth, as I mentioned, we should have proper governance and follow proper logging practices: good code practices, proper logging, and retry mechanisms, so that our workflows are as robust as possible. Workflows are super important for automation; we should leverage them well and always keep our business use case in view. So yeah, I think it's a very powerful feature.
Amazing. And that's perfect timing, Deepak. Thank you so much for joining us and for sharing your expertise with all of us today. Yeah, thanks a lot, Will, for having me. Thanks to the entire Adobe team and the Palo Alto Networks team for supporting me. Thank you all. Bye.
Unlocking Efficient Digital Asset Management
Discover how to elevate your digital asset management (DAM) strategy for maximum efficiency and control:
- Centralized Asset Hub: Manage the entire asset lifecycle, from upload to approval, delivery, and archiving, in one unified platform.
- Workflow Automation: Streamline repetitive tasks, enforce business rules, and reduce time-to-market with customizable workflows.
- Granular Access Control: Safeguard assets and ensure compliance with robust permissions and group-based access.
- Seamless Integrations: Connect DAM with marketing, creative, and business systems for real-time alignment and enriched metadata.
Mastering these pillars empowers teams to boost collaboration, maintain brand consistency, and optimize asset delivery across channels.
Permission Strategies for Asset Security
- Assign permissions at the group level for easier management and auditing.
- Use Access Control Lists (ACLs) to grant or deny privileges (read, write, replicate) at specific asset or folder paths.
- Follow the principle of least privilege: only grant necessary access, default to deny, and use allow rules selectively.
- Regularly audit, document, and replicate permissions across environments for consistency and compliance.
- Prefer repoinit scripts for permission management in modern AEM deployments.
Workflow Automation Essentials
- Automate asset processing with workflows for tasks like notifications, rendition generation, and metadata extraction.
- Enforce business rules and compliance, ensuring assets follow consistent, quality-controlled paths.
- Use process, participant, and dynamic participant steps for custom logic and flexible approvals.
- Optimize performance by leveraging asynchronous execution, lightweight code, and targeted queries.
- Monitor and troubleshoot workflows via the instance console, logs, and error handling mechanisms.