AEM crashes randomly, and OutOfMemoryError is observed in the logs
AEM gets slower over time and eventually crashes
AEM is unresponsive
Diagnosing a memory issue
Search the log files for OutOfMemoryError; any match indicates a memory issue.
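The log scan above can be sketched with grep. The demo below creates a sample log file so it runs anywhere; in practice, point LOG_DIR at your real crx-quickstart/logs directory.

```shell
# Demo setup: a temporary directory standing in for crx-quickstart/logs.
LOG_DIR=$(mktemp -d)
printf 'java.lang.OutOfMemoryError: Java heap space\n' > "$LOG_DIR/error.log"

# List every log file containing an OutOfMemoryError.
grep -l "OutOfMemoryError" "$LOG_DIR"/*.log
```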
Review the http://aem-host:port/system/console/memoryusage screen
If usage of the old generation pool (shown as “Old Gen” or “Tenured Gen”, depending on the garbage collector in use) is high, this could be a sign of a heap memory utilization issue. Click “Run Garbage Collector” to request a full heap garbage collection from the JVM. If heap utilization stays high after the GC completes, there is likely an issue. On an AEM instance with Oak Tar storage, tenured usage above 3 GB may indicate a problem. High heap utilization on a system with MongoDB storage could be due to the in-memory cache configuration.
Take thread dumps along with top output and perform thread analysis. Check whether the threads causing high CPU utilization are native JVM garbage collection threads. If the threads using the most CPU time are the “VM Thread” or any garbage collection threads, there is likely a memory issue.
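To match a hot thread from top -H with its entry in a jstack thread dump, convert the decimal thread id to the hexadecimal “nid” value jstack uses. A minimal sketch; the thread id below is a made-up example:

```shell
# `top -H -p <pid>` reports thread ids in decimal; jstack labels the same
# threads with a hexadecimal "nid". Convert to hex to correlate them.
TID=14342                         # hypothetical hot thread id from top -H
NID=$(printf 'nid=0x%x' "$TID")
echo "$NID"                       # search the thread dump for this token
```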
What causes memory issues
Java application memory leak
Java finalizer pile-up due to incorrect use of finalizers in custom code
The best way to identify the cause of a memory issue is to analyze a heap dump.
Once you’ve captured a heap dump file, open it in the Eclipse Memory Analyzer (MAT) or IBM Memory Analyzer tool. In Eclipse MAT, run the Leak Suspects report and open the “Thread Details” view to see potential causes of the memory issue.
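If you do not have a heap dump yet, one standard way to capture one is the HotSpot HeapDumpOnOutOfMemoryError flag. A sketch of adding it to the quickstart start script; the CQ_JVM_OPTS variable name and dump path are assumptions, adjust to your setup:

```shell
# Dump the heap automatically the next time an OutOfMemoryError occurs.
# CQ_JVM_OPTS is the quickstart start-script convention; adjust as needed.
CQ_JVM_OPTS="${CQ_JVM_OPTS:-} -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp/aem-dumps"
echo "$CQ_JVM_OPTS"
```

Make sure the dump path has enough free disk space to hold a file roughly the size of the configured heap.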
Solutions to common memory issues
If you notice long garbage collection pauses, optimize your application code to use less memory. Most garbage collection issues are better solved by optimizing the application than by tuning the JVM.
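Before optimizing or tuning, enable GC logging so pause times can actually be measured. A sketch of the standard HotSpot options; the log file paths are examples:

```shell
# JDK 8 and earlier:
GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -Xloggc:/var/tmp/gc.log"
# JDK 9 and later (unified logging replaces the flags above):
GC_LOG_OPTS="-Xlog:gc*:file=/var/tmp/gc.log"
echo "$GC_LOG_OPTS"
```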
A replication queue can be in one of three states.
Active: items are being processed.
Idle: the queue is empty.
Blocked: items are in the queue but cannot be processed; for example, when the agent points to a host that is down or does not exist.
Review the replication agent configuration if your server was cloned or the agent was configured recently. For details, see here.
Review the replication agent log at http://host:port/etc/replication/agents.author/AgentName.log.html#end. If you can’t identify the cause, collect this log and provide it to AEM Support.
Review the server error.log under AEMinstall/crx-quickstart/logs. If you can’t identify the cause, collect this log and provide it to AEM Support.
If the replication queue is in the “idle” state and none of the above applies, the problem is most likely caused by workflows: if workflows are not being processed, the replication item never reaches the replication queue. To monitor workflow status, check the workflow dashboard for the number of running workflow instances. You can read about administering workflows here.
Replication slows down when the system is under high load or experiences other performance issues.
Log files or compaction command output report SegmentNotFoundException.
What causes corruption issues
The segment is removed by manual intervention (e.g. rm -rf ).
The segment is removed by revision garbage collection.
The segment cannot be found due to a bug in the code.
Various maintenance tasks are not performed on time, leading to repository growth and low disk space.
Forcefully stopping AEM by killing the Java process.
Diagnosing repository corruption issues:
Review the error.log file and check whether it contains SegmentNotFoundException or IllegalArgumentException.
To determine whether a segment has been removed by revision garbage collection, check the output of the org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC logger (enable debug logging). That logger logs the segment ids of all segments removed by the cleanup phase. Revision garbage collection is the cause of the exception only if the offending segment id appears in that logger’s output.
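Once debug logging is enabled, this check is a log search for the segment id. The demo below fabricates a log line so it runs standalone (the segment id and message text are made up); in practice, grep crx-quickstart/logs/error.log for the id from the SegmentNotFoundException.

```shell
# Demo: fabricated GC debug log entry (real message text will differ).
SEGMENT_ID="a1b2c3d4-e5f6-7890-abcd-ef1234567890"   # id from the exception
LOG=$(mktemp)
printf 'DEBUG TarReader-GC removed segment %s\n' "$SEGMENT_ID" > "$LOG"

# If the id appears in the GC logger output, revision GC removed it.
grep -q "$SEGMENT_ID" "$LOG" && echo "segment removed by revision GC"
```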
In case of corruption in an external datastore, search the log files for all occurrences of the error “Error occurred while obtaining InputStream for blobId”. This error means that files are missing from your AEM datastore directory.
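A sketch of that search; a demo log file is created inline here, whereas in practice you would grep crx-quickstart/logs/error.log.

```shell
# Demo: a log containing one datastore error (the ERROR prefix and blobId
# value are illustrative).
LOG=$(mktemp)
printf 'ERROR Error occurred while obtaining InputStream for blobId [deadbeef]\n' > "$LOG"

# A nonzero count means binaries are missing from the datastore directory.
grep -c "Error occurred while obtaining InputStream for blobId" "$LOG"
```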
Solution to repair corruption issues:
Determine the last known good revision of the segment store using the check run mode of oak-run, then manually revert the corrupted segment store to that revision. This operation reverts the Oak repository to an earlier point in time, so completely back up the repository before performing it.
To perform the check and restore, follow the steps mentioned in this article.
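The consistency check is run with oak-run while AEM is stopped. A sketch of the command line; the -p and --deep flags and the -Xmx value vary by oak-run version, so verify against your version’s help output before running it.

```shell
# Build the oak-run check command; run it with AEM stopped, against a
# backed-up repository. Flags shown here are version-dependent assumptions.
SEGMENTSTORE="crx-quickstart/repository/segmentstore"
CHECK_CMD="java -Xmx4g -jar oak-run.jar check -p $SEGMENTSTORE --deep"
echo "$CHECK_CMD"
```

The oak-run jar version must match the Oak version of your AEM instance.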
If the check fails with “ConsistencyChecker - No good revisions found”, implement the steps in part B of this article.
If you are not using a datastore, configure an external datastore (File, S3, or Azure) instead of storing binaries in the default segment store.
Using a datastore provides better performance.
Migrate the instance to one with a datastore using crx2oak.
Apply the latest Service Pack and Cumulative Fix Pack and Oak Cumulative Fix Pack.