Provide dataflow details
In the Experience Platform UI, select Sources in the left navigation. On the Catalog view, navigate to the Local system category. Under Local file upload, select Add data.
The Map CSV XDM schema workflow appears, starting on the Dataflow detail step.
Select Create a new schema using ML recommendations, causing new controls to appear. Choose the appropriate class for the CSV data you want to map (Profile or ExperienceEvent). You can optionally use the dropdown menu to select the relevant industry for your business, or leave it blank if the provided categories do not apply to you. If your organization operates under a business-to-business (B2B) model, select the B2B data checkbox.
From here, provide a name for the schema that will be created from the CSV data, and a name for the output dataset that will contain the data ingested under that schema.
You can optionally configure the following additional features for the dataflow before proceeding:
| Input name | Description |
| --- | --- |
| Description | A description for the dataflow. |
| Error diagnostics | When enabled, error messages are generated for newly ingested batches, which can be viewed when fetching the corresponding batch in the API. |
| Partial ingestion | When enabled, valid records for new batch data are ingested within a specified error threshold. This threshold lets you configure the percentage of acceptable errors before the entire batch fails. |
| Dataflow details | Provide a name and optional description for the dataflow that will bring the CSV data into Platform. The dataflow is automatically assigned a default name when starting this workflow, so changing the name is optional. |
| Alerts | Select from a list of in-product alerts that you want to receive regarding the status of the dataflow once it has been initiated. |
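The partial ingestion threshold described in the table above can be sketched as follows. This is an illustrative model only; the function name and exact failure semantics are assumptions, not Platform's server-side implementation.

```python
# Illustrative sketch of the partial ingestion threshold (not Platform's
# actual implementation, which runs server-side during batch ingestion).

def batch_outcome(total_records: int, error_records: int, threshold_pct: float) -> str:
    """Return 'ingested' if the error rate stays within the configured
    threshold percentage, otherwise 'failed' (the whole batch is rejected)."""
    error_rate = (error_records / total_records) * 100 if total_records else 0.0
    if error_rate <= threshold_pct:
        # Valid records are ingested; erroneous records are skipped.
        return "ingested"
    return "failed"

# With a 5% threshold, 30 errors in 1000 records (3%) is acceptable...
print(batch_outcome(1000, 30, 5.0))   # ingested
# ...but 80 errors (8%) exceeds the threshold and fails the entire batch.
print(batch_outcome(1000, 80, 5.0))   # failed
```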
When you are finished configuring the dataflow, select Next.
Select data
On the Select data step, use the left column to upload your CSV file. You can select Choose files to open a file explorer dialog and select the file, or you can drag and drop the file directly onto the column.
After the file uploads, a sample data section appears showing the first ten rows of the received data so you can verify that it uploaded correctly. Select Next to continue.
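If you want to sanity-check the file before uploading, you can reproduce the same ten-row preview locally. A minimal Python sketch (the file path in the usage comment is a placeholder):

```python
import csv
from itertools import islice

def preview_csv(path: str, rows: int = 10) -> list[list[str]]:
    """Return the header row plus the first `rows` data rows of a CSV file,
    mirroring the sample data preview shown in the UI."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        return list(islice(reader, rows + 1))  # +1 accounts for the header row

# Example usage (replace with your actual file path):
# for row in preview_csv("profiles.csv"):
#     print(row)
```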
Configure schema mappings
The ML models run to generate a new schema based on your dataflow configuration and your uploaded CSV file. When the process is complete, the Mapping step populates to show the mappings for each individual field alongside a fully navigable view of the generated schema structure.

From here, you can optionally edit the field mappings or alter the field groups they are associated with according to your needs. When satisfied, select Finish to complete the mapping and initiate the dataflow you configured earlier. The CSV data is ingested into the system and populates a dataset based on the generated schema structure, ready to be consumed by downstream Platform services.