Configuring events for a custom implementation

Parts of this configuration involve custom development and require the following:

  • Working knowledge of JSON, XML, and JavaScript parsing in Adobe Campaign.
  • Working knowledge of the QueryDef and Writer APIs.
  • Working notions of encryption and authentication using private keys.

Since editing the JavaScript code requires technical skills, do not attempt it without the proper expertise.

Processing events in JavaScript

JavaScript file

Pipeline uses a JavaScript function to process each message. This function is user-defined.

It is configured in the NmsPipeline_Config option, under the "JSConnector" attribute. This JavaScript is called every time an event is received. It is run by the pipelined process.

The sample JavaScript file is cus:triggers.js.

JavaScript function

The pipelined JavaScript must start with a specific function.

This function is called once for every event:

function processPipelineMessage(xmlTrigger) {}

It should return as

<undefined/>

You should restart pipelined after editing the JavaScript.

Trigger data format

The trigger data is passed to the JS function in XML format.

  • The @triggerId attribute contains the name of the trigger.
  • The enrichments element contains, in JSON format, the data generated by Adobe Analytics and attached to the trigger.
  • @offset is the “pointer” to the message. It indicates the order of the message within the queue.
  • @partition is a container of messages within the queue. The offset is relative to a partition.
    There are about 15 partitions in the queue.

Example:


<trigger offset="1500435" partition="4" triggerId="LogoUpload_1_Visits_from_specific_Channel_or_ppp">
  <enrichments>{"analyticsHitSummary":{"dimensions":{"eVar01":{"type":"string","data":["PI4INE1ETDF6UK35GO13X7HO2ITLJHVH"],"name":"eVar01","source":"session summary"},"timeGMT":{"type":"int","data":[1469164186,1469164195],"name":"timeGMT","source":"session summary"}},"products":{}}}</enrichments>
  <aliases/>
</trigger>

Enrichment data format

NOTE

This is a specific example from among many possible implementations.

The content is defined in JSON format in Adobe Analytics for each trigger.
For example, in a trigger LogoUpload_uploading_Visits:

  • eVar01 can contain the Shopper ID in string format, which is used to reconcile the event with Adobe Campaign recipients.
    It must be reconciled to find the Shopper ID, which is the primary key.

  • timeGMT can contain the time of the trigger on the Adobe Analytics side in UTC Epoch format (seconds since 01/01/1970 UTC).

Example:

{
  "analyticsHitSummary": {
    "dimensions": {
      "eVar01": {
        "type": "string",
        "data": ["PI4INE1ETDF6UK35GO13X7HO2ITLJHVH"],
        "name": "eVar01",
        "source": "session summary"
      },
      "timeGMT": {
        "type": "int",
        "data": [1469164186, 1469164195],
        "name": "timeGMT",
        "source": "session summary"
      }
    },
    "products": {}
  }
}
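The timeGMT values in the sample above are UTC Epoch seconds; multiplying by 1000 turns them into JavaScript Date objects. A minimal runnable sketch (plain Node.js, outside Campaign):

```javascript
// Convert the UTC Epoch seconds delivered in timeGMT to ISO-8601 dates.
// The values below are taken from the sample enrichment above.
const timeGMT = [1469164186, 1469164195];

// Date expects milliseconds since 01/01/1970 UTC, hence the * 1000.
const isoDates = timeGMT.map(seconds => new Date(seconds * 1000).toISOString());

console.log(isoDates); // ["2016-07-22T05:09:46.000Z", "2016-07-22T05:09:55.000Z"]
```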

Events processing order

The events are processed one at a time, in order of offset. Each thread of the pipelined process handles a different partition.

The ‘offset’ of the last event retrieved is stored in the database. Therefore, if the process is stopped, it restarts from the last message. This data is stored in the built-in schema xtk:pipelineOffset.

This pointer is specific to each instance and each consumer. Therefore, when several instances access the same pipeline with different consumers, each of them receives all the messages, in the same order.

The consumer parameter of the pipeline option identifies the calling instance.
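The consumer name and the JavaScript connector are both declared in the NmsPipeline_Config option mentioned earlier. The exact structure of this option can vary by product version; the sketch below is an assumption built only from the attributes named in this article (consumer, JSConnector, cus:triggers.js), with "customer_dev" as a placeholder consumer name. Check the option value on your own instance for the authoritative format.

```json
{
  "topics": [
    {
      "name": "triggers",
      "consumer": "customer_dev",
      "JSConnector": "cus:triggers.js"
    }
  ]
}
```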

Currently, there is no way to have different queues for separate environments such as ‘staging’ or ‘dev’.

Logging and error handling

Logs such as logInfo() are directed to the pipelined log. Errors such as logError() are written to the pipelined log and cause the event to be placed in a retry queue; in this case, check the pipelined log.
Messages in error are retried several times, at intervals set in the pipelined options.

For debugging and monitoring purposes, the full trigger data is written into the trigger table in the “data” field in XML format. Alternatively, a logInfo() containing the trigger data serves the same purpose.

Parsing the data

This sample JavaScript code parses the eVar01 value in the enrichments.

function processPipelineMessage(xmlTrigger)
{
  // (…)
  var shopper_id = "";
  if (xmlTrigger.enrichments.length() > 0)
  {
    if (xmlTrigger.enrichments.toString().match(/eVar01/) != undefined)
    {
      var enrichments = JSON.parse(xmlTrigger.enrichments.toString());
      shopper_id = enrichments.analyticsHitSummary.dimensions.eVar01.data[0];
    }
  }
  // (…)
}

Be cautious when parsing, to avoid errors.
Since this code runs for all triggers, most data is not required and can be left empty when not present.
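One way to keep the parsing cautious is to guard every level of the enrichment structure and fall back to an empty value. The helper below is a plain Node.js sketch of that idea (the function name is hypothetical); inside pipelined, the same logic would apply to xmlTrigger.enrichments.toString().

```javascript
// Defensively extract eVar01 from the enrichments payload.
// The payload arrives as a JSON string; any missing level yields "".
function extractShopperId(enrichmentsJson) {
  try {
    const e = JSON.parse(enrichmentsJson);
    const dims = e && e.analyticsHitSummary && e.analyticsHitSummary.dimensions;
    const eVar01 = dims && dims.eVar01;
    return (eVar01 && eVar01.data && eVar01.data[0]) || "";
  } catch (err) {
    // Malformed JSON: leave the field empty rather than fail the event.
    return "";
  }
}

console.log(extractShopperId(
  '{"analyticsHitSummary":{"dimensions":{"eVar01":{"data":["PI4INE1"]}}}}'
)); // "PI4INE1"
console.log(extractShopperId("{}")); // ""
```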

Storing the trigger

NOTE

This is a specific example from among many possible implementations.

This sample JS code saves the trigger to the database.

function processPipelineMessage(xmlTrigger)
{
  // (…)
  var event =
    <pipelineEvent
      xtkschema = "cus:pipelineEvent"
      _operation = "insert"
      created = {timeNow}
      lastModified = {timeNow}
      triggerType = {triggerType}
      timeGMT = {timeGMT}
      shopper_id = {shopper_id}
      data = {xmlTrigger.toXMLString()}
    />;
  xtk.session.Write(event);
  return <undef/>;
}

Constraints

Performance for this code must be optimal since it runs at high frequency: poorly tuned code can negatively affect other marketing activities, especially when processing more than one million trigger events per hour on the Marketing server.

The context of this JavaScript is limited: not all functions of the API are available. For example, getOption() or getCurrentDate() do not work.

To enable faster processing, several threads of this script are executed at the same time. The code must be thread safe.

Storing the events

NOTE

This is a specific example from among many possible implementations.

Pipeline event schema

Events are stored in a database table. This table is used by marketing campaigns to target customers and to enrich emails using triggers.
Although each trigger can have a distinct data structure, all triggers can be held in a single table.
The triggerType field identifies which trigger the data originates from.

Here is a sample schema code for this table:

| Attribute       | Type      | Label         | Description |
| --------------- | --------- | ------------- | ----------- |
| pipelineEventId | Long      | Primary key   | The trigger's internal primary key. |
| data            | Memo      | Trigger Data  | The full contents of the trigger data, in XML format. For debugging and audit purposes. |
| triggerType     | String 50 | TriggerType   | The name of the trigger. Identifies the behavior of the customer on the website. |
| shopper_id      | String 32 | shopper_id    | The shopper's internal identifier. Set by the reconciliation workflow. If zero, the customer is unknown in Campaign. |
| shopper_key     | Long      | shopper_key   | The shopper's external identifier, as captured by Analytics. |
| created         | Datetime  | Created       | The time when the event was created in Campaign. |
| lastModified    | Datetime  | Last Modified | The last time the event was modified in Adobe. |
| timeGMT         | Datetime  | Timestamp     | The time when the event was generated in Analytics. |
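The table above could be declared as a custom source schema. The sketch below is one possible cus:pipelineEvent declaration, assuming Campaign's usual schema syntax: autopk="true" lets Campaign generate the primary key, and the names and labels simply mirror the table. It is an illustration, not the definitive schema; adapt it to your own conventions.

```xml
<srcSchema name="pipelineEvent" namespace="cus" label="Pipeline Event">
  <element name="pipelineEvent" label="Pipeline Event" autopk="true">
    <attribute name="data" type="memo" label="Trigger Data"/>
    <attribute name="triggerType" type="string" length="50" label="TriggerType"/>
    <attribute name="shopper_id" type="string" length="32" label="shopper_id"/>
    <attribute name="shopper_key" type="long" label="shopper_key"/>
    <attribute name="created" type="datetime" label="Created"/>
    <attribute name="lastModified" type="datetime" label="Last Modified"/>
    <attribute name="timeGMT" type="datetime" label="Timestamp"/>
  </element>
</srcSchema>
```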

Displaying the events

The events can be displayed with a simple form based on the events schema.

NOTE

The Pipeline Event node is not built-in: it needs to be added, and the related form needs to be created in Campaign. These operations are restricted to expert users. For more on this, refer to the Navigation hierarchy and Editing forms sections.

Processing the events

Reconciliation workflow

Reconciliation is the process of matching the customer from Adobe Analytics against the Adobe Campaign database. For example, the matching criterion can be the shopper_id.

For performance reasons, the matching must be done in batch mode, by a workflow.
Set the frequency to 15 minutes to optimize the workload. As a consequence, the delay between the reception of an event in Adobe Campaign and its processing by a marketing workflow is up to 15 minutes.
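The matching the workflow performs can be pictured as a keyed lookup over the batch. The plain Node.js sketch below is an illustration only: the recipient list and field names are hypothetical, and in Campaign this is done with query and reconciliation workflow activities, not application code.

```javascript
// Illustration of batch reconciliation: resolve each event's Analytics
// identifier (shopper_id, from eVar01) to an internal recipient id.
const recipients = [
  { recipientId: 101, shopperId: "PI4INE1ETDF6UK35GO13X7HO2ITLJHVH" },
  { recipientId: 102, shopperId: "ABC" },
];

// Index recipients once, then match the whole batch against the index.
const byShopperId = new Map(recipients.map(r => [r.shopperId, r.recipientId]));

function reconcile(events) {
  // Unmatched events keep 0, meaning the customer is unknown in Campaign.
  return events.map(e => ({
    shopper_id: e.shopper_id,
    recipientId: byShopperId.get(e.shopper_id) || 0,
  }));
}

const out = reconcile([{ shopper_id: "ABC" }, { shopper_id: "UNKNOWN" }]);
console.log(out);
```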

Options for unit reconciliation in JavaScript

It is possible to run the reconciliation query for each trigger in the JavaScript. This has a higher performance impact but gives faster results; it can be required for specific use cases where reactivity is needed.

It can be difficult to implement if no index is set on shopper_id. If the criterion is on a separate database server from the marketing server, it uses a database link, which performs poorly.

Purge workflow

Triggers are processed within the hour, and the volume can reach about one million triggers per hour. This is why a purge workflow must be put in place. The purge runs once per day and deletes all triggers that are older than three days.
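The three-day retention rule reduces to a simple cutoff comparison. A plain Node.js sketch (the workflow itself would do this with a delete activity, not application code; dates here are for illustration):

```javascript
// Keep only triggers newer than three days; everything else is purged.
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

function purge(events, nowMs) {
  const cutoff = nowMs - THREE_DAYS_MS;
  return events.filter(e => e.created >= cutoff);
}

const now = Date.UTC(2016, 6, 25); // 2016-07-25T00:00:00Z
const kept = purge(
  [
    { created: Date.UTC(2016, 6, 21) }, // 4 days old: purged
    { created: Date.UTC(2016, 6, 24) }, // 1 day old: kept
  ],
  now
);
console.log(kept.length); // 1
```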

Campaign workflow

The trigger campaign workflow is often similar to other recurring campaigns already in use.
For example, it can start with a query on the triggers looking for specific events during the last day; that target is then used to send the email. Enrichments or data can come from the trigger. This workflow can be safely used by Marketing, as it requires no configuration.
