This article is about configuring indexes in AEM 6. For best practices on optimizing query and indexing performance, see Best Practices for Queries and Indexing.
Unlike Jackrabbit 2, Oak does not index content by default. Custom indexes need to be created when necessary, much like with traditional relational databases. If there is no index for a specific query, many nodes may be traversed. The query may still work, but it will probably be very slow.
If Oak encounters a query without an index, a WARN level log message is printed:
*WARN* Traversed 1000 nodes with filter Filter(query=select ...) consider creating an index or changing the query
The Oak query engine supports the following languages: XPath (recommended), JCR-SQL2, JCR-SQL (deprecated), and the JCR Query Object Model (JQOM).
The Apache Oak based backend allows different indexers to be plugged into the repository.
One indexer is the Property Index, for which the index definition is stored in the repository itself.
Implementations for Apache Lucene and Solr are also available by default, which both support fulltext indexing.
The Traversal Index is used if no other indexer is available. This means that the content is not indexed and content nodes are traversed to find matches to the query.
If multiple indexers are available for a query, each available indexer estimates the cost of executing the query. Oak then chooses the indexer with the lowest estimated cost.
The above diagram is a high level representation of the query execution mechanism of Apache Oak.
First, the query is parsed into an Abstract Syntax Tree. Then, the query is checked and transformed into SQL-2 which is the native language for Oak queries.
Next, each index is consulted to estimate the cost for the query. Once that is completed, the results from the cheapest index are retrieved. Finally, the results are filtered, both to ensure that the current user has read access to the result and that the result matches the complete query.
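To make this flow concrete, the following is a minimal sketch of how a query is submitted through the JCR API from application code; the node type and path used in the statement are illustrative assumptions, not examples taken from this article.

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class QueryExample {
    // Submits a JCR-SQL2 query and prints the paths of the matching nodes.
    // "session" is assumed to be an already authenticated JCR session.
    public static void run(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
                "SELECT * FROM [cq:Page] WHERE ISDESCENDANTNODE('/content')",
                Query.JCR_SQL2);
        RowIterator rows = query.execute().getRows();
        while (rows.hasNext()) {
            Row row = rows.nextRow();
            System.out.println(row.getPath());
        }
    }
}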
For a large repository, building an index is a time consuming operation. This is true for both the initial creation of an index, and reindexing (rebuilding an index after changing the definition). See also Troubleshooting Oak Indexes and Preventing Slow Re-indexing.
If reindexing is needed in very large repositories, especially when using MongoDB and for fulltext indexes, consider text pre-extraction, and using oak-run to build the initial index and to reindex.
Indexes are configured as nodes in the repository under the oak:index node.
The type of the index node must be oak:QueryIndexDefinition. Several configuration options are available for each indexer as node properties. For more information, see the configuration details for each indexer type below.
The Property Index is generally useful for queries that have property constraints but are not full-text. It can be configured by following the below procedure:
Open CRXDE by going to http://localhost:4502/crx/de/index.jsp
Create a new node under oak:index
Name the node PropertyIndex, and set the node type to oak:QueryIndexDefinition
Set the following properties for the new node:
type: property (of type String)
propertyNames: jcr:uuid (of type Name)
This particular example will index the jcr:uuid property, which exposes the universally unique identifier (UUID) of the node it is attached to.
Save the changes.
The Property Index has the following configuration options:
The type property specifies the type of index; in this case it must be set to property.
The propertyNames property indicates the list of properties that will be stored in the index. If it is missing, the node name is used as the property name reference value. In this example, the jcr:uuid property, which exposes the universally unique identifier (UUID) of its node, is added to the index.
The unique flag, which, if set to true, adds a uniqueness constraint on the property index.
The declaringNodeTypes property allows you to restrict the index to a certain node type.
The reindex flag, which, if set to true, triggers a full content reindex.
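The same definition can also be created programmatically. The following is a minimal sketch using the JCR API, assuming an existing administrative session; it is an alternative to the CRXDE steps above, not an additional requirement.

import javax.jcr.Node;
import javax.jcr.PropertyType;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class PropertyIndexExample {
    // Creates the PropertyIndex definition described above under /oak:index.
    public static void createIndex(Session session) throws RepositoryException {
        Node oakIndex = session.getNode("/oak:index");
        Node index = oakIndex.addNode("PropertyIndex", "oak:QueryIndexDefinition");
        index.setProperty("type", "property");
        index.setProperty("propertyNames", new String[]{"jcr:uuid"}, PropertyType.NAME);
        session.save();
    }
}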
The Ordered index is an extension of the Property index. However, it has been deprecated. Indexes of this type need to be replaced with the Lucene Property Index.
A full text indexer based on Apache Lucene is available in AEM 6.
If a full-text index is configured, then all queries that have a full-text condition use the full-text index, no matter if there are other conditions that are indexed, and no matter if there is a path restriction.
If no full-text index is configured, then queries with full-text conditions will not work as expected.
Because the index is updated via an asynchronous background thread, some full-text search results will be unavailable for a small window of time until the background processes are finished.
You can configure a Lucene full-text index by following the below procedure:
Open CRXDE and create a new node under oak:index.
Name the node LuceneIndex and set the node type to oak:QueryIndexDefinition
Add the following properties to the node:
type: lucene (of type String)
async: async (of type String)
Save the changes.
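As a rough equivalent of the CRXDE steps above, the following sketch creates the same definition through the JCR API; it assumes an existing administrative session.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class LuceneIndexExample {
    // Creates the LuceneIndex definition described above under /oak:index.
    public static void createIndex(Session session) throws RepositoryException {
        Node index = session.getNode("/oak:index")
                .addNode("LuceneIndex", "oak:QueryIndexDefinition");
        index.setProperty("type", "lucene");
        index.setProperty("async", "async");
        session.save();
    }
}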
The Lucene Index has the following configuration options:
Since Oak 1.0.8, Lucene can be used to create indexes which involve property constraints that are not full-text.
In order to define a Lucene Property Index, the fulltextEnabled property must always be set to false.
Take the following example query:
select * from [nt:base] where [alias] = '/admin'
In order to define a Lucene Property Index for the above query, you can create a new node under oak:index with the name LucenePropertyIndex and node type oak:QueryIndexDefinition.
Once the node has been created, add the following properties:
type: lucene (of type String)
async: async (of type String)
fulltextEnabled: false (of type Boolean)
includePropertyNames: [alias] (of type String)
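For reference, a minimal JCR API sketch of the same definition is shown below, assuming an existing administrative session.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class LucenePropertyIndexExample {
    // Creates the LucenePropertyIndex definition described above under /oak:index.
    public static void createIndex(Session session) throws RepositoryException {
        Node index = session.getNode("/oak:index")
                .addNode("LucenePropertyIndex", "oak:QueryIndexDefinition");
        index.setProperty("type", "lucene");
        index.setProperty("async", "async");
        index.setProperty("fulltextEnabled", false);
        index.setProperty("includePropertyNames", new String[]{"alias"});
        session.save();
    }
}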
Compared to the regular Property Index, the Lucene Property Index is always configured in async mode. Thus, the results returned by the index may not always reflect the most up-to-date state of the repository.
For more specific information on the Lucene Property Index, see the Apache Jackrabbit Oak Lucene documentation page.
Since version 1.2.0, Oak supports Lucene analyzers.
Analyzers are used both when a document is indexed, and at query time. An analyzer examines the text of fields and generates a token stream. Lucene analyzers are composed of a series of tokenizer and filter classes.
The analyzers can be configured via the analyzers node (of type nt:unstructured) inside the oak:index definition.
The default analyzer for an index is configured in the default child of the analyzers node.
For a list of available analyzers, please consult the API documentation of the Lucene version you are using.
If you wish to use any out-of-the-box analyzer, you can configure it by following the below procedure:
Locate the index you wish to use the analyzer with under the oak:index node.
Under the index, create the analyzers child node (of type nt:unstructured) and, under it, a child node called default of type nt:unstructured.
Add a property to the default node with the name class, the type String, and the value org.apache.lucene.analysis.standard.StandardAnalyzer. The value is the name of the analyzer class you wish to use.
You can also set the analyzer to be used with a specific Lucene version by using the optional luceneMatchVersion string property. A valid syntax for using it with Lucene 4.7 would be a property with the name luceneMatchVersion, the type String, and the value LUCENE_47.
If luceneMatchVersion is not provided, Oak will use the version of Lucene it is shipped with.
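As an illustration, the following sketch creates such a default analyzer configuration through the JCR API; the index path /oak:index/LuceneIndex is an assumption carried over from the earlier example, and the luceneMatchVersion property is optional.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class AnalyzerConfigExample {
    // Configures the default analyzer of a Lucene index by class name.
    public static void configure(Session session) throws RepositoryException {
        Node index = session.getNode("/oak:index/LuceneIndex"); // assumed index path
        Node analyzers = index.addNode("analyzers", "nt:unstructured");
        Node defaultAnalyzer = analyzers.addNode("default", "nt:unstructured");
        defaultAnalyzer.setProperty("class",
                "org.apache.lucene.analysis.standard.StandardAnalyzer");
        defaultAnalyzer.setProperty("luceneMatchVersion", "LUCENE_47"); // optional
        session.save();
    }
}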
If you wish to add a stopwords file to the analyzer configuration, you can create a new node under the default one with the name stopwords and the type nt:file.
Analyzers can also be composed based on Tokenizers, TokenFilters, and CharFilters. You can do this by specifying an analyzer and creating child nodes for its optional tokenizers and filters, which will be applied in the listed order. See also https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Specifying_an_Analyzer_in_the_schema
Consider this node structure as an example:
Name: analyzers
  Name: default
    Name: charFilters (Type: nt:unstructured)
      Name: HTMLStrip
      Name: Mapping
    Name: tokenizer
      Property name: name, Type: String, Value: Standard
    Name: filters (Type: nt:unstructured)
      Name: LowerCase
      Name: Stop
        Property name: words, Type: String, Value: stop1.txt, stop2.txt
        Name: stop1.txt (Type: nt:file)
        Name: stop2.txt (Type: nt:file)
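The node structure above could also be created programmatically. Below is a minimal JCR API sketch under the same assumptions as before (an existing administrative session and a Lucene index at /oak:index/LuceneIndex); the stopword files themselves would still need to be added as nt:file child nodes of the Stop node.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class ComposedAnalyzerExample {
    // Builds the composed analyzer structure shown above.
    public static void configure(Session session) throws RepositoryException {
        Node defaultAnalyzer = session.getNode("/oak:index/LuceneIndex") // assumed index path
                .addNode("analyzers", "nt:unstructured")
                .addNode("default", "nt:unstructured");

        // Char filters, applied in the listed order
        Node charFilters = defaultAnalyzer.addNode("charFilters", "nt:unstructured");
        charFilters.addNode("HTMLStrip", "nt:unstructured");
        charFilters.addNode("Mapping", "nt:unstructured");

        // Tokenizer
        defaultAnalyzer.addNode("tokenizer", "nt:unstructured")
                .setProperty("name", "Standard");

        // Token filters, applied in the listed order
        Node filters = defaultAnalyzer.addNode("filters", "nt:unstructured");
        filters.addNode("LowerCase", "nt:unstructured");
        Node stop = filters.addNode("Stop", "nt:unstructured");
        stop.setProperty("words", "stop1.txt, stop2.txt");

        session.save();
    }
}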
The names of the filters, charFilters, and tokenizers are formed by removing the factory suffixes. Thus:
org.apache.lucene.analysis.standard.StandardTokenizerFactory becomes standard
org.apache.lucene.analysis.charfilter.MappingCharFilterFactory becomes Mapping
org.apache.lucene.analysis.core.StopFilterFactory becomes Stop
Any configuration parameter required for the factory is specified as a property of the node in question.
For cases such as loading stop words, where content from external files needs to be loaded, the content can be provided by creating a child node of type nt:file for the file in question.
The purpose of the Solr index is mainly full-text search but it can also be used to index search by path, property restrictions and primary type restrictions. This means the Solr index in Oak can be used for any type of JCR query.
The integration in AEM happens at the repository level so that Solr is one of the possible indexes that can be used in Oak, the new repository implementation shipped with AEM.
AEM can also be configured to work with a remote Solr server instance:
Download and extract the latest version of Solr. For more info on how to do this, please consult the Apache Solr Installation documentation.
Now, create two Solr shards. You can do this by creating folders for each shard in the folder where Solr has been unpacked:
<solrunpackdirectory>\aemsolr1\node1
<solrunpackdirectory>\aemsolr2\node2
Locate the example instance in the Solr package. It is usually located in a folder called "example" in the root of the package.
Copy the following folders from the example instance to the two shard folders (aemsolr1\node1 and aemsolr2\node2):
contexts
etc
lib
resources
scripts
solr-webapp
webapps
start.jar
Create a new folder called "cfg" in each of the two shard folders.
Place your Solr and ZooKeeper configuration files in the newly created cfg folders.
For more info on Solr and ZooKeeper configuration, consult the Solr Configuration documentation and the ZooKeeper Getting Started Guide.
Start the first shard with ZooKeeper support by going to aemsolr1\node1 and running the following command:
java -Xmx2g -Dbootstrap_confdir=./cfg/oak/conf -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
Start the second shard by going to aemsolr2\node2 and running the following command:
java -Xmx2g -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
After both shards have been started, test that everything is up and running by connecting to the Solr interface at http://localhost:8983/solr/#/
Start AEM and go to the Web Console at http://localhost:4502/system/console/configMgr
Set the following configuration under Oak Solr remote server configuration:
http://localhost:8983/solr/
Choose Remote Solr in the drop down list under Oak Solr server provider.
Go to CRXDE and login as Admin.
Create a new node called solrIndex under oak:index, and set the following properties:
type: solr (of type String)
async: async (of type String)
reindex: true (of type Boolean)
Save the changes.
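A minimal JCR API sketch of that step is shown below, assuming an existing administrative session; the property values mirror the ones listed above.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class SolrIndexExample {
    // Creates the solrIndex definition described above under /oak:index.
    public static void createIndex(Session session) throws RepositoryException {
        Node index = session.getNode("/oak:index")
                .addNode("solrIndex", "oak:QueryIndexDefinition");
        index.setProperty("type", "solr");
        index.setProperty("async", "async");
        index.setProperty("reindex", true);
        session.save();
    }
}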
Below is an example of a base configuration that can be used with all three Solr deployments described in this article. It accommodates the dedicated property indexes that are already present in AEM and should not be used with other applications.
In order to properly use it, you need to place the contents of the archive directly in the Solr Home Directory. In the case of multi-node deployments, it should go directly under the root folder of each node.
Recommended Solr configuration files
AEM 6.1 also integrates two indexing tools present in AEM 6.0 as part of the Adobe Consulting Services Commons toolset: Explain Query and Oak Index Manager.
You can now reach them by going to Tools - Operations - Dashboard - Diagnosis from the AEM Welcome screen.
For more information on how to use them, see the Operations Dashboard documentation.
The ACS Commons package also exposes OSGi configurations that can be used to create property indexes.
You can access it from the Web Console by searching for “Ensure Oak Property Index”.
Situations may arise where queries take a long time to execute, and the general system response time is slow.
This section presents a set of recommendations on what needs to be done in order to track down the cause of such issues and advice on how to resolve them.
The easiest way to get required information for the query being executed is via the Explain Query tool. This will enable you to collect the precise information that is needed to debug a slow query without the need to consult the log level information. This is desirable if you know the query that is being debugged.
If this is not possible for any reason, you can gather the indexing logs in a single file and use it to troubleshoot your particular problem.
To enable logging, you need to enable DEBUG level logs for the categories pertaining to Oak indexing and queries. These categories are:
org.apache.jackrabbit.oak.plugins.index
org.apache.jackrabbit.oak.query
com.day.cq.search
The com.day.cq.search category is only applicable if you are using the AEM provided QueryBuilder utility.
It is important that the logs are set to DEBUG only while the query you want to troubleshoot is being executed; otherwise, a large number of events will be generated in the logs over time. Because of this, once the required logs are collected, switch back to INFO level logging for the categories mentioned above.
You can enable logging by following this procedure:
Point your browser to https://serveraddress:port/system/console/slinglog
Click the Add new Logger button in the lower part of the console.
In the newly created row, add the categories mentioned above. You can use the + sign to add more than one category to a single logger.
Choose DEBUG from the Log level drop down list.
Set the output file to logs/queryDebug.log. This will correlate all the DEBUG events into a single log file.
Run the query or render the page that is using the query you wish to debug.
Once you have executed the query, go back to the logging console and change the log level of the newly created logger to INFO.
The way the query gets evaluated is largely affected by the index configuration. It is important to capture the index configuration so it can be analyzed or sent to support. You can either get the configuration as a content package or get a JSON rendition.
Since, in most cases, the indexing configuration is stored under the /oak:index node in CRXDE, you can get the JSON version at:
https://serveraddress:port/oak:index.tidy.-1.json
If the index is configured at a different location, change the path accordingly.
In some cases it is helpful to provide the output of index-related MBeans for debugging. You can do this by following the below procedure:
Go to the JMX console at https://serveraddress:port/system/console/jmx
Search for the following MBeans:
Click each of the MBeans to get the performance statistics. Create a screenshot or note them down in case submission to support is required.
You can also get the JSON variant of these statistics at the following URLs:
https://serveraddress:port/system/sling/monitoring/mbeans/org/apache/jackrabbit/oak/%2522LuceneIndex%2522.tidy.-1.json
You can also provide consolidated JMX output via https://serveraddress:port/system/sling/monitoring/mbeans/org/apache/jackrabbit/oak.tidy.3.json. This would include all Oak related MBean details in JSON format.
You can gather additional details in order to help troubleshoot the problem, such as:
the version of the org.apache.jackrabbit.oak-core bundle
the output of the Explain Query tool at https://serveraddress:port/libs/cq/search/content/querydebug.html