For this configuration, three computers are required:
The two servers in the DMZ handle tracking, mirror pages, and deliveries, and are redundant for high availability.
The application server in the LAN serves the end users and performs all recurrent processes (workflow engine). Thus, when peak loads are reached on the frontal servers, the application users are not impacted.
The database server can be hosted on a separate computer from these three. Alternatively, the application server and database server can share the same computer within the LAN, as long as the operating system is supported by Adobe Campaign (Linux or Windows).
General communication between servers and processes is carried out according to the following schema:
This type of configuration can handle a large number of recipients (500,000 to 1,000,000) as the database server (and the available bandwidth) is the main limiting factor.
JDK on all three computers,
Web server (IIS, Apache) on both frontals,
Access to a database server on all three computers,
Bounce mailbox accessible via POP3,
Creation of two DNS aliases:
Firewall configured to open the SMTP (25), DNS (53), HTTP (80), HTTPS (443), and SQL (1521 for Oracle, 5432 for PostgreSQL, etc.) ports. For further information, refer to the Database access section.
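As a quick pre-flight check before installing, you can verify that the firewall actually lets these ports through. The sketch below is illustrative, not part of the product: it uses bash's `/dev/tcp` pseudo-device, and the host names and the PostgreSQL port are examples (use 1521 for Oracle).

```shell
#!/bin/bash
# Hypothetical pre-flight check: confirm the required TCP ports are reachable.
# Host names and ports are examples; adjust them to your own deployment.

port_open() {   # port_open HOST PORT -> exit 0 if a TCP connection succeeds
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

check_host() {  # check_host HOST PORT... -> report every closed port
  local host=$1 rc=0 p
  shift
  for p in "$@"; do
    port_open "$host" "$p" || { echo "port $p closed on $host"; rc=1; }
  done
  return $rc
}

# Example: ports the firewall must leave open for this configuration
# check_host tracking.campaign.net 25 53 80 443 5432
```

Run it from the machine that will host each server, since firewall rules can differ per source host.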
Follow the steps to install a standalone instance, from installing the Adobe Campaign application server up to creating the database (step 12). Refer to Installing and configuring (single machine).
Since this computer is not a tracking server, you can skip the Web server integration step.
In the following examples, the parameters of the instance are:
The installation and configuration procedure is identical on both computers.
The steps are as follows:
Install the Adobe Campaign server.
Follow the Web server integration procedure (IIS, Apache) described in the following sections:
Create the demo instance. There are two ways of doing this:
Create the instance via the console:
For more on this, refer to Creating an instance and logging on.
Create the instance using command lines:
nlserver config -addinstance:demo/tracking.campaign.net*
For more on this, refer to Creating an instance.
The name of the instance is the same as that of the application server.
The connection to the server with the nlserver web module (mirror pages, unsubscription) will be made from the URL of the load balancer (tracking.campaign.net).
Change the internal password so that it matches the one used on the application server.
For more on this, refer to this section.
Link the database to the instance:
nlserver config -setdblogin:PostgreSQL:campaign:demo@dbsrv -instance:demo
In the config-default.xml and config-demo.xml files, enable the web, trackinglogd, and mta modules.
For more on this, refer to this section.
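As an illustration only, the module declarations might look like the fragment below. The element and attribute names follow the default configuration file shipped with the product, but verify them against your own config-default.xml before editing, as they can vary by build:

```xml
<?xml version='1.0'?>
<serverconf>
  <shared>
    <!-- Sketch: enable the modules needed on a tracking frontal.
         Check attribute names against your installed config-default.xml. -->
    <web autoStart="true"/>
    <trackinglogd autoStart="true"/>
    <mta autoStart="true"/>
  </shared>
</serverconf>
```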
Edit the serverConf.xml file and populate:
the DNS configuration of the MTA module:
<dnsConfig localDomain="campaign.com" nameServers="192.0.0.1, 192.0.0.2"/>
The nameServers parameter is only used on Windows.
For more on this, refer to Delivery settings.
the redundant tracking servers in the redirection parameters:
<spareServer enabledIf="$(hostname)!='front_srv1'" id="1" url="https://front_srv1:8080"/>
<spareServer enabledIf="$(hostname)!='front_srv2'" id="2" url="https://front_srv2:8080"/>
For more on this, refer to Redundant tracking.
Start the website and test the redirection from the URL: https://tracking.campaign.net/r/test.
The browser should display the following messages (depending on the URL redirected by the load balancer):
<redir status="OK" date="YYYY/MM/DD HH:MM:SS" build="XXXX" host="tracking.campaign.net" localHost="front_srv1"/>
<redir status="OK" date="YYYY/MM/DD HH:MM:SS" build="XXXX" host="tracking.campaign.net" localHost="front_srv2"/>
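To automate this test, a small helper can fetch the test URL several times through the load balancer and confirm that each answer reports status="OK". The URL comes from the document; the helper itself is an illustrative sketch, not a product tool:

```shell
#!/bin/sh
# Hypothetical verification helper for the /r/test redirection check.

check_redir() {   # check_redir XML -> exit 0 if the <redir> reports status="OK"
  printf '%s' "$1" | grep -q 'status="OK"'
}

# Example (requires the site to be up; -k skips certificate checks in a lab):
# for i in 1 2 3 4; do
#   check_redir "$(curl -sk https://tracking.campaign.net/r/test)" \
#     || echo "redirection KO on attempt $i"
# done
```

Running the loop several times matters here: the load balancer should alternate between front_srv1 and front_srv2, so a single request only proves that one frontal answers.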
For more on this, refer to the following sections:
Start the Adobe Campaign server.
In the Adobe Campaign console, connect using the admin login without a password and launch the deployment wizard.
For more on this, refer to Deploying an instance.
Configuration is identical to a standalone instance apart from the configuration of the tracking module.
Populate the external URL (that of the load balancer) used for redirection, as well as the internal URLs of the two frontal servers.
For more on this, refer to Tracking configuration.
Use the instance created previously on the two tracking servers, and connect with the internal login.