A dataflow is a scheduled task that retrieves and ingests data from a source to a Platform dataset. This tutorial provides steps to configure a new dataflow using your customer success account.
This tutorial requires a working understanding of the following components of Adobe Experience Platform:
Additionally, this tutorial requires that you have already created a customer success account. A list of tutorials for creating different customer success connectors in the UI can be found in the source connectors overview.
After creating your customer success connector, the Select data step appears, providing an interactive interface for you to explore your file hierarchy.
You can use the Search option at the top of the page to quickly identify the source data you intend to use.
The search source data option is available to all tabular-based source connectors excluding the Analytics, Classifications, Event Hubs, and Kinesis connectors.
Once you find the source data, select the directory, then click Next.
The Mapping step appears, providing an interactive interface to map the source data to a Platform dataset.
Choose a dataset for inbound data to be ingested into. You can either use an existing dataset or create a new dataset.
To ingest data into an existing dataset, select Use existing dataset, then click the dataset icon.
The Select dataset dialog appears. Find the dataset you wish to use, select it, then click Continue.
To ingest data into a new dataset, select Create new dataset and enter a name and description for the dataset in the fields provided.
You can assign a schema by entering a schema name in the Select schema search bar. You can also select the drop-down icon to see a list of existing schemas. Alternatively, you can select Advanced search to access a screen of existing schemas, including their respective details.
During this step, you can enable your dataset for Real-time Customer Profile and create a holistic view of an entity’s attributes and behaviors. Data from all enabled datasets will be included in Profile and changes are applied when you save your dataflow.
Toggle the Profile dataset button to enable your target dataset for Profile.
The Select schema dialog appears. Select the schema you wish to apply to the new dataset, then click Done.
Based on your needs, you can choose to map fields directly, or use data prep functions to transform source data to derive computed or calculated values. For more information on mapper functions and calculated fields, refer to either the Data Prep functions guide or the calculated fields guide.
Platform provides intelligent recommendations for auto-mapped fields based on the target schema or dataset that you selected. You can manually adjust mapping rules to suit your use cases.
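Platform's auto-mapping recommendations are generated internally and are not documented as an algorithm, but the general idea of matching source fields to target schema fields can be sketched. The following is an illustrative, simplified sketch only (the function names and the normalized-name matching strategy are assumptions, not Platform's actual logic):

```python
# Illustrative sketch: one simple way name-based auto-mapping could work.
# Platform's real recommendation engine is internal and more sophisticated.

def normalize(name: str) -> str:
    """Lower-case a field name and strip common separators for comparison."""
    return name.lower().replace("_", "").replace("-", "").replace(" ", "")

def auto_map(source_fields, target_fields):
    """Suggest source -> target mappings where normalized names match."""
    targets = {normalize(t): t for t in target_fields}
    return {
        s: targets[normalize(s)]
        for s in source_fields
        if normalize(s) in targets
    }

# Example: three of four source fields find a target; "birthDate" has no match.
suggested = auto_map(
    ["First_Name", "Email Address", "crm_id"],
    ["firstName", "emailAddress", "crmId", "birthDate"],
)
```

Unmatched fields would be left for you to map manually, mirroring how the UI lets you adjust mapping rules after the recommendations are applied.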
Select Preview data to see mapping results of up to 100 rows of sample data from the selected dataset.
During the preview, the identity column is prioritized as the first field, as identities are the key information needed to validate mapping results.
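The preview behavior described above can be sketched as a small function. This is only an illustration of the two rules the UI applies (sample cap of 100 rows, identity column first); the function and its signature are assumptions for explanatory purposes:

```python
def preview(rows, identity_column, limit=100):
    """Return up to `limit` sample rows with the identity column first.

    rows: list of dicts mapping field name -> value.
    """
    sample = rows[:limit]  # the UI previews at most 100 rows
    reordered = []
    for row in sample:
        # Put the identity field first, then the remaining fields in order.
        ordered = {identity_column: row[identity_column]}
        ordered.update({k: v for k, v in row.items() if k != identity_column})
        reordered.append(ordered)
    return reordered
```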
Once your source data is mapped, select Close.
The Scheduling step appears, allowing you to configure an ingestion schedule to automatically ingest the selected source data using the configured mappings. The following table outlines the different configurable fields for scheduling:
| Field | Description |
| --- | --- |
| Frequency | Selectable frequencies include |
| Interval | An integer that sets the interval for the selected frequency. |
| Start time | A UTC timestamp indicating when the very first ingestion is set to occur. |
| Backfill | A boolean value that determines what data is initially ingested. If Backfill is enabled, all current files in the specified path are ingested during the first scheduled ingestion. If Backfill is disabled, only the files loaded between the start time and the first flow run are ingested. Files loaded prior to the start time are not ingested. |
| Delta Column | An option with a filtered set of source schema fields of type date or time. This field is used to differentiate between new and existing data. Incremental data is ingested based on the timestamp of the selected column. |
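The Backfill behavior in the table above can be sketched as a file-selection rule. This is an illustrative sketch of the documented semantics only; the function name and `(path, loaded_at)` representation are assumptions:

```python
from datetime import datetime, timezone

def files_to_ingest(files, start_time, backfill):
    """Decide which files the first scheduled ingestion picks up.

    files: list of (path, loaded_at) tuples, loaded_at as a UTC datetime.
    With Backfill enabled, every current file in the path is ingested.
    With Backfill disabled, only files loaded at or after start_time are.
    """
    if backfill:
        return [path for path, _ in files]
    return [path for path, loaded_at in files if loaded_at >= start_time]
```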
Dataflows are designed to automatically ingest data on a scheduled basis. Start by selecting the ingestion frequency. Next, set the interval to designate the period between two flow runs. The interval value must be a non-zero integer greater than or equal to 15.
To set the start time for ingestion, adjust the date and time displayed in the start time box. Alternatively, you can select the calendar icon to edit the start time value. Start time must be greater than or equal to your current UTC time.
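The two constraints above (an interval of at least 15 and a start time no earlier than the current UTC time) can be sketched as a validation step. This is an illustrative sketch, not Platform's actual validation code; the function name is an assumption:

```python
from datetime import datetime, timezone

def validate_schedule(frequency, interval, start_time):
    """Raise ValueError if the schedule violates the rules in this step."""
    if frequency != "once":
        # Interval must be a non-zero integer >= 15.
        if not isinstance(interval, int) or interval < 15:
            raise ValueError("Interval must be an integer >= 15.")
    # Start time must be at or after the current UTC time.
    if start_time < datetime.now(timezone.utc):
        raise ValueError("Start time must be at or after the current UTC time.")
```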
Select Load incremental data by to assign the delta column. This field provides a distinction between new and existing data.
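Conceptually, the delta column acts as a filter on each scheduled run: only rows whose delta-column timestamp is newer than the previous run are ingested. The sketch below is illustrative only; the function name and row representation are assumptions:

```python
def incremental_rows(rows, delta_column, last_run_time):
    """Return only rows newer than the previous flow run.

    rows: list of dicts; delta_column names a date/time field used to
    distinguish new data from data that was already ingested.
    """
    return [row for row in rows if row[delta_column] > last_run_time]
```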
To set up a one-time ingestion, select the frequency drop-down arrow and select Once.
Interval and Backfill are not visible during a one-time ingestion.
Once you have provided appropriate values to the schedule, select Next.
The Dataflow detail step appears, allowing you to name your new dataflow and provide a brief description of it.
During this process, you can also enable Partial ingestion and Error diagnostics. Enabling Partial ingestion provides the ability to ingest data containing errors up to a certain threshold. Once Partial ingestion is enabled, drag the Error threshold % dial to adjust the error threshold of the batch. Alternatively, you can manually adjust the threshold by selecting the input box. For more information, see the partial batch ingestion overview.
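The error-threshold behavior can be sketched as a check on a batch's error rate. This is an illustrative sketch of the idea only; the exact boundary behavior and status names are assumptions, so refer to the partial batch ingestion overview for the authoritative semantics:

```python
def batch_status(total_rows, error_rows, error_threshold_pct):
    """Classify a batch under partial ingestion.

    A batch whose error rate exceeds the configured threshold fails
    outright; below the threshold, valid rows are ingested and bad
    rows are surfaced via error diagnostics. (Boundary behavior here
    is an assumption for illustration.)
    """
    error_rate = 100.0 * error_rows / total_rows
    if error_rate > error_threshold_pct:
        return "failed"
    return "partially ingested" if error_rows else "success"
```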
Provide values for the dataflow and select Next.
The Review step appears, allowing you to review your new dataflow before it is created. Details are grouped within the following categories:
Once you have reviewed your dataflow, click Finish and allow some time for the dataflow to be created.
Once your dataflow has been created, you can monitor the data that is being ingested through it to see information on ingestion rates, successes, and errors. For more information on how to monitor dataflows, see the tutorial on monitoring accounts and dataflows in the UI.
You can delete dataflows that are no longer necessary or were incorrectly created using the Delete function available in the Dataflows workspace. For more information on how to delete dataflows, see the tutorial on deleting dataflows in the UI.
By following this tutorial, you have successfully created a dataflow to bring in data from a customer success source and gained insight on monitoring datasets. Incoming data can now be used by downstream Platform services such as Real-time Customer Profile and Data Science Workspace. See the following documents for more details:
The following sections provide additional information for working with source connectors.
When a dataflow is created, it immediately becomes active and ingests data according to the schedule it was given. You can disable an active dataflow at any time by following the instructions below.
Within the Authentication screen, select the name of the account that’s associated with the dataflow you wish to disable.
The Source activity page appears. Select the active dataflow from the list to open its Properties column on the right-hand side of the screen, which contains an Enabled toggle button. Click the toggle to disable the dataflow. The same toggle can be used to re-enable a dataflow after it has been disabled.
Inbound data from your source connector can be used to enrich and populate your Real-time Customer Profile data. For more information on populating your Real-time Customer Profile data, see the tutorial on Profile population.