AEM 6.4 has reached the end of extended support and this documentation is no longer updated. For further details, see our technical support periods. Find the supported versions here.
This page provides general guidelines on how to optimize the performance of your AEM deployment. If you are new to AEM, please go over the following pages before you start reading the performance guidelines:
Illustrated below are the deployment options available for AEM:
| AEM Product | Topology | Operating System | Application Server | JRE | Security | Micro Kernel | Datastore | Indexing | Web Server | Browser | Marketing Cloud |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Sites | Non-HA | Windows | CQSE | Oracle | LDAP | Tar | Segment | Property | Apache | Edge | Target |
| Assets | Publish-HA | Solaris | WebLogic | IBM | SAML | MongoDB | File | Lucene | IIS | IE | Analytics |
| Communities | Author-CS | Red Hat | WebSphere | HP | Oauth | RDB/Oracle | S3/Azure | Solr | iPlanet | FireFox | Campaign |
| Forms | Author-Offload | HP-UX | Tomcat | | | RDB/DB2 | MongoDB | | | Chrome | Social |
| Mobile | Author-Cluster | IBM AIX | JBoss | | | RDB/MySQL | RDBMS | | | Safari | Audience |
| Multi-site | ASRP | SUSE | | | | RDB/SQLServer | | | | | Assets |
| Commerce | MSRP | Apple OS | | | | | | | | | Activation |
| Dynamic Media | JSRP | | | | | | | | | | Mobile |
| Brand Portal | J2E | | | | | | | | | | |
| AoD | | | | | | | | | | | |
| LiveFyre | | | | | | | | | | | |
| Screens | | | | | | | | | | | |
| Doc Security | | | | | | | | | | | |
| Process Mgt | | | | | | | | | | | |
| desktop app | | | | | | | | | | | |
The performance guidelines apply mainly to AEM Sites.
You should use the performance guidelines in the following situations:
This chapter gives a general overview of the AEM architecture and its most important components. It also provides development guidelines and describes the testing scenarios used in the TarMK and MongoMK benchmark tests.
The AEM platform consists of the following components:
For more information on the AEM platform, see What is AEM.
An AEM deployment has three important building blocks: the Author instance, which content authors, editors, and approvers use to create and review content; the Publish instance, to which approved content is published and from which it is accessed by end users; and the Dispatcher, a module that handles caching and URL filtering and is installed on the web server. For additional information about the AEM architecture, see Typical Deployment Scenarios.
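As a hedged illustration of the Dispatcher's caching role, a farm in `dispatcher.any` typically contains a `/cache` section like the minimal sketch below; the docroot path and glob rules are illustrative assumptions, not values from this document:

```
# Minimal sketch of a /cache section in a dispatcher.any farm (assumed values)
/cache
  {
  # Directory on the web server where cached files are stored
  /docroot "/var/www/html"
  # Which requests may be cached
  /rules
    {
    /0000 { /glob "*" /type "allow" }
    }
  # Which paths replication agents may invalidate
  /invalidate
    {
    /0000 { /glob "*.html" /type "allow" }
    }
  }
```

In practice the caching and filtering rules are tuned per project; consult the Dispatcher documentation for the full configuration reference.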
Micro Kernels act as persistence managers in AEM. There are three types of Micro Kernels used with AEM: TarMK, MongoDB, and Relational Database (under restricted support). Choosing one to fit your needs depends on the purpose of your instance and the deployment type you are considering. For additional information about Micro Kernels, see the Recommended Deployments page.
In AEM, binary data can be stored independently from content nodes. The location where the binary data is stored is referred to as the Data Store, while the location of the content nodes and properties is called the Node Store.
Adobe recommends TarMK to be the default persistence technology used by customers for both the AEM Author and the Publish instances.
The Relational Database Micro Kernel is under restricted support. Contact Adobe Customer Care before using this type of Micro Kernel.
When dealing with a large number of binaries, it is recommended that an external data store be used instead of the default node stores in order to maximize performance. For example, if your project requires many media assets, storing them in the File or Azure/S3 Data Store makes accessing them faster than storing them directly inside MongoDB.
For further details on the available configuration options, see Configuring Node and Data Stores.
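As a hedged example, a File Data Store is usually enabled by dropping an OSGi configuration file into `crx-quickstart/install`; the file name below follows the documented Oak PID, while the property values are illustrative assumptions to verify against your own sizing:

```
# crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config
# Location on local disk where binaries are stored (assumed path)
path="./repository/datastore"
# Binaries smaller than this (bytes) are inlined in the node store (assumed value)
minRecordLength="4096"
```

Equivalent PIDs exist for the S3 and Azure data stores; see Configuring Node and Data Stores for the full set of supported properties.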
Adobe recommends deploying AEM on Azure or Amazon Web Services (AWS) using Adobe Managed Services, where customers benefit from a team with the experience and skills to deploy and operate AEM in these cloud computing environments. See our additional documentation on Adobe Managed Services.
For recommendations on how to deploy AEM on Azure or AWS, outside of Adobe Managed Services, we strongly recommend working directly with the cloud provider or one of our partners supporting the deployment of AEM in the cloud environment of your choice. The selected cloud provider or partner is responsible for the sizing specifications, design and implementation of the architecture they will support to meet your specific performance, load, scalability, and security requirements.
For additional details also see the technical requirements page.
Listed in this section are the custom index providers used with AEM. To learn more about indexing, see Oak Queries and Indexing.
For most deployments, Adobe recommends using the Lucene index; use Solr only for scalability in specialized and complex deployments.
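To make the recommendation concrete, a custom Lucene property index is defined as a node under `/oak:index`; the sketch below uses the documented Oak node structure, but the index name and indexed property are illustrative assumptions:

```
# Hedged sketch of a custom Lucene property index under /oak:index
+ /oak:index/productName
  - jcr:primaryType = "oak:QueryIndexDefinition"
  - type = "lucene"
  - async = "async"          # indexed asynchronously by the async indexer
  + indexRules
    + nt:base
      + properties
        + productName
          - name = "productName"
          - propertyIndex = true
```

After the definition node is saved, the async indexer picks it up; queries with a constraint on the indexed property can then use the index instead of traversing the repository.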
Develop for AEM with performance and scalability in mind. A number of best practices that you can follow are presented below:

- Don’t use JCR APIs directly if you can avoid it.
- Don’t change /libs; use overlays instead.
- Avoid using queries wherever possible.
- Don’t use Sling Bindings to get OSGi services in Java code.
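As a hedged illustration of the last point, OSGi services are normally injected through Declarative Services annotations rather than looked up via Sling Bindings; the class and service names below are hypothetical:

```java
// Hedged sketch: obtaining an OSGi service via Declarative Services
// (@Reference) instead of Sling Bindings.
// MyService and MyComponent are hypothetical names, not AEM APIs.
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = MyComponent.class)
public class MyComponent {

    // Injected by the OSGi framework when the component is activated
    @Reference
    private MyService myService;

    public void doWork() {
        myService.process();
    }
}
```

The framework manages the service lifecycle and rebinding, which is why this pattern is preferred over scripting-layer lookups in Java code.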
For further details about developing on AEM, read Developing - The Basics. For additional best practices, see Development Best Practices.
All the benchmark tests displayed on this page have been performed in a laboratory setting.
The testing scenarios detailed below are used for the benchmark sections of the TarMK, MongoMK, and TarMK vs MongoMK chapters. To see which scenario was used for a particular benchmark test, read the Scenario field from the Technical Specifications table.
- Single Product Scenario: AEM Assets
- Mix Products Scenario: AEM Sites + Assets
- Vertical Use Case Scenario: Media
This chapter gives general performance guidelines for TarMK specifying the minimum architecture requirements and the settings configuration. Benchmark tests are also provided for further clarification.
Adobe recommends TarMK to be the default persistence technology used by customers in all deployment scenarios, for both the AEM Author and Publish instances.
For more information about TarMK, see Deployment Scenarios and Tar Storage.
The minimum architecture guidelines presented below are for production environments and high-traffic sites. These are not the minimum specifications needed to run AEM.
To establish good performance when using TarMK, you should start from the following architecture:
Illustrated below are the architecture guidelines for AEM sites and AEM Assets.
Binary-less replication should be turned ON if the File Datastore is shared.
Tar Architecture Guidelines for AEM Sites
Tar Architecture Guidelines for AEM Assets
For good performance, you should follow the settings guidelines presented below. For instructions on how to change the settings, see this page.
| Setting | Parameter | Value | Description |
|---|---|---|---|
| Sling Job Queues | queue.maxparallel | Set value to half of the number of CPU cores. | By default, the number of concurrent threads per job queue is equal to the number of CPU cores. |
| Granite Transient Workflow Queue | Max Parallel | Set value to half of the number of CPU cores. | |
| JVM parameters | | 500000; 100000; 250000; True | Add these JVM parameters in the AEM start script to prevent expansive queries from overloading the systems. |
| Lucene index configuration | | Enabled; Enabled; Enabled | For more details on the available parameters, see this page. |
| Data Store = S3 Datastore | | 1048576 (1MB) or smaller; 2-10% of max heap size | See also Data Store Configurations. |
| DAM Update Asset workflow | Transient Workflow | checked | This workflow manages the update of assets. |
| DAM MetaData Writeback | Transient Workflow | checked | This workflow manages XMP write-back to the original binary and sets the last modified date in JCR. |
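The JVM-parameter row above lists only values; as a hedged sketch, such values are typically wired into the AEM start script via Oak system properties like the ones below. The property names are assumptions inferred from the values, so verify them against your AEM version:

```shell
# Hedged sketch: query-safeguard JVM flags appended in the AEM start script.
# Property names are assumptions matching the values in the table above.
CQ_JVM_OPTS="${CQ_JVM_OPTS:-} -Doak.queryLimitInMemory=500000"
CQ_JVM_OPTS="${CQ_JVM_OPTS} -Doak.queryLimitReads=100000"
CQ_JVM_OPTS="${CQ_JVM_OPTS} -Dupdate.limit=250000"
CQ_JVM_OPTS="${CQ_JVM_OPTS} -Doak.fastQuerySize=true"
```

These limits abort or cap expansive queries before they exhaust heap or disk I/O, which is the stated intent of the table row.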
The benchmark tests were performed on the following specifications:
|   | Author Node |
|---|---|
| Server | Bare metal hardware (HP) |
| Operating System | RedHat Linux |
| CPU / Cores | Intel® Xeon® CPU E5-2407 @2.40GHz, 8 cores |
| RAM | 32GB |
| Disk | Magnetic |
| Java | Oracle JRE Version 8 |
| JVM Heap | 16GB |
| Product | AEM 6.2 |
| Nodestore | TarMK |
| Datastore | File DS |
| Scenario | Single Product: Assets / 30 concurrent threads |
The numbers presented below have been normalized to 1 as the baseline and are not the actual throughput numbers.
The primary reason for choosing the MongoMK persistence backend over TarMK is to scale the instances horizontally. This means having two or more active author instances running at all times and using MongoDB as the persistence storage system. The need to run more than one author instance results generally from the fact that the CPU and memory capacity of a single server, supporting all concurrent authoring activities, is no longer sustainable.
For more information about MongoMK, see Deployment Scenarios and Mongo Storage.
To establish good performance when using MongoMK, you should start from the following architecture:
In production environments, MongoDB will always be used as a replica set with a primary and two secondaries. Reads and writes go to the primary and reads can go to the secondaries. If storage is not available, one of the secondaries can be replaced with an arbiter, but MongoDB replica sets must always be composed of an odd number of instances.
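As a hedged illustration of such a three-member replica set, it can be initiated from the mongo shell as sketched below; the replica set name and host names are assumptions:

```javascript
// Hedged sketch: initiating a primary + two secondaries replica set
// from the mongo shell. Set name and hosts are illustrative assumptions.
rs.initiate({
  _id: "aem",
  members: [
    { _id: 0, host: "mongo1.example.com:27017" },
    { _id: 1, host: "mongo2.example.com:27017" },
    { _id: 2, host: "mongo3.example.com:27017" }
  ]
})
```

If one secondary is replaced with an arbiter, it is added with `rs.addArb()` instead, keeping the member count odd as required.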
Binary-less replication should be turned ON if the File Datastore is shared.
For good performance, you should follow the settings guidelines presented below. For instructions on how to change the settings, see this page.
| Setting | Parameter | Value (default) | Description |
|---|---|---|---|
| Sling Job Queues | queue.maxparallel | Set value to half of the number of CPU cores. | By default, the number of concurrent threads per job queue is equal to the number of CPU cores. |
| Granite Transient Workflow Queue | Max Parallel | Set value to half of the number of CPU cores. | |
| JVM parameters | | 500000; 100000; 250000; True; 60000 | Add these JVM parameters in the AEM start script to prevent expansive queries from overloading the systems. |
| Lucene index configuration | | Enabled; Enabled; Enabled | For more details on available parameters, see this page. |
| Data Store = S3 Datastore | | 1048576 (1MB) or smaller; 2-10% of max heap size | See also Data Store Configurations. |
| DocumentNodeStoreService | | 2048; 35 (25); 20 (10); 30 (5); 10 (3); 4 (4); ./cache,size=2048,binary=0,-compact,-compress | The default size of the cache is 256 MB. This setting has an impact on the time it takes to perform cache invalidation. |
| oak-observation | | min & max = 20; 50000 | |
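The DocumentNodeStoreService row above lists only values; as a hedged sketch, such values usually map onto an OSGi configuration like the one below. The parameter names are assumptions inferred from the values and their defaults, so verify them against your AEM version before use:

```
# crx-quickstart/install/org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.config
# Parameter names are assumptions matching the values in the table above.
cache="2048"
nodeCachePercentage="35"
childrenCachePercentage="20"
diffCachePercentage="30"
docChildrenCachePercentage="10"
prevDocCachePercentage="4"
persistentCache="./cache,size=2048,binary=0,-compact,-compress"
```

Raising the overall cache size from the 256 MB default reduces round trips to MongoDB, at the cost of longer cache-invalidation work.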
The benchmark tests were performed on the following specifications:
|   | Author node | MongoDB node |
|---|---|---|
| Server | Bare metal hardware (HP) | Bare metal hardware (HP) |
| Operating System | RedHat Linux | RedHat Linux |
| CPU / Cores | Intel® Xeon® CPU E5-2407 @2.40GHz, 8 cores | Intel® Xeon® CPU E5-2407 @2.40GHz, 8 cores |
| RAM | 32GB | 32GB |
| Disk | Magnetic - >1k IOPS | Magnetic - >1k IOPS |
| Java | Oracle JRE Version 8 | N/A |
| JVM Heap | 16GB | N/A |
| Product | AEM 6.2 | MongoDB 3.2 WiredTiger |
| Nodestore | MongoMK | N/A |
| Datastore | File DS | N/A |
| Scenario | Single Product: Assets / 30 concurrent threads | Single Product: Assets / 30 concurrent threads |
The numbers presented below have been normalized to 1 as the baseline and are not the actual throughput numbers.
The basic rule that needs to be taken into account when choosing between the two is that TarMK is designed for performance, while MongoMK is used for scalability. Adobe recommends TarMK to be the default persistence technology used by customers in all deployment scenarios, for both the AEM Author and Publish instances.
The primary reason for choosing the MongoMK persistence backend over TarMK is to scale the instances horizontally. This means having two or more active author instances running at all times and using MongoDB as the persistence storage system. The need to run more than one author instance generally results from the fact that the CPU and memory capacity of a single server, supporting all concurrent authoring activities, is no longer sustainable.
For further details on TarMK vs MongoMK, see Recommended Deployments.
Benefits of TarMK
Criteria for choosing MongoMK
The numbers presented below have been normalized to 1 as the baseline and are not actual throughput numbers.
|   | Author OAK Node | MongoDB Node |
|---|---|---|
| Server | Bare metal hardware (HP) | Bare metal hardware (HP) |
| Operating System | RedHat Linux | RedHat Linux |
| CPU / Cores | Intel(R) Xeon(R) CPU E5-2407 @2.40GHz, 8 cores | Intel(R) Xeon(R) CPU E5-2407 @2.40GHz, 8 cores |
| RAM | 32GB | 32GB |
| Disk | Magnetic - >1k IOPS | Magnetic - >1k IOPS |
| Java | Oracle JRE Version 8 | N/A |
| JVM Heap | 16GB | N/A |
| Product | AEM 6.2 | MongoDB 3.2 WiredTiger |
| Nodestore | TarMK or MongoMK | N/A |
| Datastore | File DS | N/A |
| Scenario | | |
To support the same number of authors with MongoDB as with one TarMK system, you need a cluster with two AEM nodes. A four-node MongoDB cluster can handle 1.8 times the number of authors of one TarMK instance, and an eight-node MongoDB cluster can handle 2.3 times the number of authors of one TarMK instance.
|   | Author TarMK Node | Author MongoMK Node | MongoDB Node |
|---|---|---|---|
| Server | AWS c3.8xlarge | AWS c3.8xlarge | AWS c3.8xlarge |
| Operating System | RedHat Linux | RedHat Linux | RedHat Linux |
| CPU / Cores | 32 | 32 | 32 |
| RAM | 60GB | 60GB | 60GB |
| Disk | SSD - 10k IOPS | SSD - 10k IOPS | SSD - 10k IOPS |
| Java | Oracle JRE Version 8 | Oracle JRE Version 8 | N/A |
| JVM Heap | 30GB | 30GB | N/A |
| Product | AEM 6.2 | AEM 6.2 | MongoDB 3.2 WiredTiger |
| Nodestore | TarMK | MongoMK | N/A |
| Datastore | File DS | File DS | N/A |
| Scenario | | | |
The guidelines presented on this page can be summarized as follows:

- TarMK with File Datastore is the recommended architecture for most customers.
- MongoMK with File Datastore is the recommended architecture for horizontal scalability of the Author tier.
- The Nodestore should be stored on the local disk, not on network-attached storage (NAS).
- When using Amazon S3:
- A custom index should be created in addition to the out-of-the-box index, based on the most common searches.
- Customizing workflows can substantially improve performance; for example, removing the video step in the “Update Asset” workflow, or disabling listeners that are not used.
For more details, also read the Recommended Deployments page.