During the Pipeline execution, a number of metrics are captured and compared to either the Key Performance Indicators (KPIs) defined by the business owner, or standards set by Adobe Managed Services.
These are reported using the three-tier gating system as defined in this section.
There are three gates in the pipeline: Code Quality Testing, Security Testing, and Performance Testing.
For each of these gates, there is a three-tier structure for issues identified by the gate.
In a Code Quality Only Pipeline, Important failures in the Code Quality Testing gate cannot be overridden since the Code Quality Testing step is the final step in the pipeline.
This step evaluates the quality of your application code. It is the core objective of a Code Quality Only pipeline and is executed immediately following the build step in all non-production and production pipelines. Refer to Configuring your CI/CD Pipeline to learn more about the different types of pipelines.
In Code Quality Testing, the source code is scanned to ensure that it meets certain quality criteria. Currently, this is implemented by a combination of SonarQube and content package-level examination using OakPAL. There are over 100 rules combining generic Java rules and AEM-specific rules. Some of the AEM-specific rules are created based on best practices from AEM Engineering and are referred to as Custom Code Quality Rules.
You can download the complete list of rules here.
The results of this step are delivered as ratings. The table below summarizes the ratings for the various test criteria:
| Metric | Definition | Category | Failure Threshold |
|---|---|---|---|
| Security Rating | A = 0 Vulnerabilities<br>B = at least 1 Minor Vulnerability<br>C = at least 1 Major Vulnerability<br>D = at least 1 Critical Vulnerability<br>E = at least 1 Blocker Vulnerability | Critical | < B |
| Reliability Rating | A = 0 Bugs<br>B = at least 1 Minor Bug<br>C = at least 1 Major Bug<br>D = at least 1 Critical Bug<br>E = at least 1 Blocker Bug | Important | < C |
| Maintainability Rating | Based on the outstanding remediation cost for code smells as a percentage of the time already invested in the application:<br>A = <= 5%<br>B = 6-10%<br>C = 11-20%<br>D = 21-50%<br>E = > 50% | Important | < A |
| Coverage | A mix of unit test line coverage and condition coverage computed as `Coverage = (CT + CF + LC) / (2 * B + EL)`<br>where:<br>CT = conditions that have been evaluated to ‘true’ at least once while running unit tests<br>CF = conditions that have been evaluated to ‘false’ at least once while running unit tests<br>LC = covered lines = lines_to_cover - uncovered_lines<br>B = total number of conditions<br>EL = total number of executable lines (lines_to_cover) | Important | < 50% |
| Skipped Unit Tests | Number of skipped unit tests. | Info | > 1 |
| Open Issues | Overall issue types: Vulnerabilities, Bugs, and Code Smells. | Info | > 0 |
| Duplicated Lines | Number of lines involved in duplicated blocks. For a block of code to be considered duplicated, non-Java projects require at least 100 successive and duplicated tokens spread across at least 30 lines, while Java projects require at least 10 successive and duplicated statements. Differences in indentation as well as in string literals are ignored while detecting duplications. | Info | > 1% |
| Cloud Service Compatibility | Number of identified Cloud Service Compatibility issues. | Info | > 0 |
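As a worked illustration of the coverage formula, the sketch below plugs in invented sample counts (these numbers are not from any real scan):

```java
public class CoverageExample {

    // SonarQube-style coverage: (CT + CF + LC) / (2*B + EL), as a percentage.
    // ct/cf: conditions evaluated to 'true'/'false' at least once in unit tests,
    // lc: covered lines, b: total conditions, el: total executable lines.
    static double coverage(int ct, int cf, int lc, int b, int el) {
        return 100.0 * (ct + cf + lc) / (2.0 * b + el);
    }

    public static void main(String[] args) {
        // Sample project: 10 conditions, 100 executable lines, 90 of them covered;
        // 8 conditions hit 'true' and 6 hit 'false' at least once.
        System.out.printf(java.util.Locale.ROOT,
                "Coverage = %.1f%%%n", coverage(8, 6, 90, 10, 100));
    }
}
```

With these numbers the result is (8 + 6 + 90) / (20 + 100), roughly 86.7%.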
Refer to Metric Definitions for more detailed definitions.
To learn more about the custom code quality rules executed by Cloud Manager, please refer to Custom Code Quality Rules.
The quality scanning process is not perfect and sometimes incorrectly identifies issues that are not actually problematic. This is referred to as a “false positive”.
In these cases, the source code can be annotated with the standard Java @SuppressWarnings annotation, specifying the rule ID as the annotation attribute. For example, one common problem is that the SonarQube rule for detecting hardcoded passwords can be aggressive about what it identifies as a hardcoded password.
To look at a specific example, this code would be fairly common in an AEM project that connects to an external service:
```java
@Property(label = "Service Password")
private static final String PROP_SERVICE_PASSWORD = "password";
```
SonarQube will then raise a Blocker Vulnerability. After reviewing the code, you identify that this is not a vulnerability and can annotate this with the appropriate rule id.
```java
@SuppressWarnings("squid:S2068")
@Property(label = "Service Password")
private static final String PROP_SERVICE_PASSWORD = "password";
```
If, on the other hand, the code was actually this:
```java
@Property(label = "Service Password", value = "mysecretpassword")
private static final String PROP_SERVICE_PASSWORD = "password";
```
Then the correct solution is to remove the hardcoded password.
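One hedged sketch of removing the literal is to keep only the property name in source and resolve the secret at runtime. The environment variable name SERVICE_PASSWORD below is illustrative, not something Cloud Manager defines:

```java
public class ServiceCredentials {

    // Only the *name* of the credential lives in code; the value is supplied
    // by the runtime environment at deploy time.
    static final String PROP_SERVICE_PASSWORD = "password";

    // SERVICE_PASSWORD is a hypothetical environment variable used for
    // illustration; in AEM the value would more typically come from an
    // OSGi configuration.
    static String resolvePassword() {
        String fromEnv = System.getenv("SERVICE_PASSWORD");
        return fromEnv != null ? fromEnv : "";
    }

    public static void main(String[] args) {
        System.out.println(resolvePassword().isEmpty()
                ? "no password configured" : "password configured");
    }
}
```

The point is only that the secret itself stays out of the source and out of version control.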
While it is a best practice to make the @SuppressWarnings annotation as specific as possible (that is, to annotate only the specific statement or block causing the issue), it is possible to annotate at the class level.
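As a sketch, class-level suppression looks like this (the class and field names are illustrative, not from Cloud Manager):

```java
// Suppressing squid:S2068 at the class level silences the hardcoded-password
// rule for every declaration in the class, which is broader than annotating
// only the offending field; prefer the narrower placement when possible.
@SuppressWarnings("squid:S2068")
public class ExternalServiceProperties {

    private static final String PROP_SERVICE_USERNAME = "username";
    private static final String PROP_SERVICE_PASSWORD = "password";

    static String passwordPropertyName() {
        return PROP_SERVICE_PASSWORD;
    }

    public static void main(String[] args) {
        System.out.println(passwordPropertyName());
    }
}
```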
Cloud Manager runs the existing AEM Security Health Checks on the stage environment following deployment and reports the status through the UI. The results are aggregated from all AEM instances in the environment.
If any of the instances report a failure for a given health check, the entire environment fails that health check. As with Code Quality and Performance Testing, these health checks are organized into categories and reported using the three-tier gating system. The only distinction is that there is no threshold in the case of security testing. All the health checks are simply pass or fail.
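The aggregation rule can be sketched as follows; this is an illustrative model, not Cloud Manager's actual implementation:

```java
import java.util.List;

public class HealthCheckAggregation {

    // An environment passes a given health check only if every instance
    // in the environment passes it; a single instance failure fails the
    // environment for that check.
    static boolean environmentPasses(List<Boolean> instanceResults) {
        return instanceResults.stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        System.out.println(environmentPasses(List.of(true, true)));  // all instances pass
        System.out.println(environmentPasses(List.of(true, false))); // one failure fails the environment
    }
}
```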
The following table lists the current checks:
| Name | Health Check Implementation | Category |
|---|---|---|
| Deserialization firewall Attach API Readiness is in an acceptable state | Deserialization Firewall Attach API Readiness | Critical |
| Deserialization firewall is functional | Deserialization Firewall Functional | Critical |
| Deserialization firewall is loaded | Deserialization Firewall Loaded | Critical |
| AuthorizableNodeName implementation does not expose the authorizable ID in the node name/path | Authorizable Node Name Generation | Critical |
| Default passwords have been changed | Default Login Accounts | Critical |
| Sling default GET servlet is protected from DoS attacks | Sling Get Servlet | Critical |
| The Sling Java Script Handler is configured appropriately | Sling Java Script Handler | Critical |
| The Sling JSP Script Handler is configured appropriately | Sling JSP Script Handler | Critical |
| SSL is configured correctly | SSL Configuration | Critical |
| No obviously insecure user profile policies found | User Profile Default Access | Critical |
| The Sling Referrer Filter is configured to prevent CSRF attacks | Sling Referrer Filter | Important |
| The Adobe Granite HTML Library Manager is configured appropriately | CQ HTML Library Manager Config | Important |
| CRXDE Support bundle is disabled | CRXDE Support | Important |
| Sling DavEx bundle and servlet are disabled | DavEx Health Check | Important |
| Sample content is not installed | Example Content Packages | Important |
| Both the WCM Request Filter and the WCM Debug Filter are disabled | WCM Filters Configuration | Important |
| Sling WebDAV bundle and servlet are configured appropriately | WebDAV Health Check | Important |
| The web server is configured to prevent clickjacking | Web Server Configuration | Important |
| Replication is not using the ‘admin’ user | Replication and Transport Users | Info |
Performance testing in Cloud Manager is implemented using a 30-minute test.
During pipeline setup, the deployment manager can decide how much traffic to direct to each bucket.
You can learn more about bucket controls in Configure your CI/CD Pipeline.
To set up your program and define your KPIs, see Setup your Program.
The following table summarizes the performance test matrix using the three-tier gating system:
| Metric | Category | Failure Threshold |
|---|---|---|
| Page Request Error Rate % | Critical | >= 2% |
| CPU Utilization Rate | Critical | >= 80% |
| Disk IO Wait Time | Critical | >= 50% |
| 95th Percentile Response Time | Important | >= Program-level KPI |
| Peak Response Time | Important | >= 18 seconds |
| Page Views Per Minute | Important | < Program-level KPI |
| Disk Bandwidth Utilization | Important | >= 90% |
| Network Bandwidth Utilization | Important | >= 90% |
| Requests Per Minute | Info | >= 6000 |
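As an illustrative sketch (not Cloud Manager's actual gating code), two of the Critical thresholds from the table can be checked like this:

```java
public class PerformanceGate {

    // Thresholds taken from the performance test matrix above:
    // Page Request Error Rate is Critical at >= 2%,
    // CPU Utilization Rate is Critical at >= 80%.
    static boolean errorRateCritical(double errorRatePct) {
        return errorRatePct >= 2.0;
    }

    static boolean cpuCritical(double cpuUtilPct) {
        return cpuUtilPct >= 80.0;
    }

    public static void main(String[] args) {
        System.out.println(errorRateCritical(2.5)); // sample run breaching the 2% threshold
        System.out.println(cpuCritical(75.0));      // sample run below the 80% CPU threshold
    }
}
```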
New graphs and download options have been added to the Performance Test Results dialog.
When you open the Performance Test Results dialog, the metric panels can be expanded to display a graph, provide a link to a download, or both.
For Cloud Manager Release 2018.7.0, this functionality is available for the following metrics:
* Disk I/O Wait Time
* Page Error Rate
* Disk Bandwidth Utilization
* Network Bandwidth Utilization
* Peak Response Time
* 95th Percentile Response Time
The following images display the performance test graphs: