Provide dataflow details

In the Experience Platform UI, select Sources in the left navigation. On the Catalog view, navigate to the Local system category. Under Local file upload, select Add data.

The Sources catalog in the Platform UI, with Add data under Local file upload being selected.

The Map CSV XDM schema workflow appears, starting on the Dataflow detail step.

Select Create a new schema using ML recommendations, causing new controls to appear. Choose the appropriate class for the CSV data you want to map (Profile or ExperienceEvent). You can optionally use the dropdown menu to select the relevant industry for your business, or leave it blank if the provided categories do not apply to you. If your organization operates under a business-to-business (B2B) model, select the B2B data checkbox.

The Dataflow detail step with the ML recommendation option selected. Profile is selected for the class and Telecommunications for the industry.

From here, provide a name for the schema that will be created from the CSV data, and a name for the output dataset that will contain the data ingested under that schema.

You can optionally configure the following additional features for the dataflow before proceeding:

| Input name | Description |
| --- | --- |
| Description | A description for the dataflow. |
| Error diagnostics | When enabled, error messages are generated for newly ingested batches, which can be viewed when fetching the corresponding batch in the API. |
| Partial ingestion | When enabled, valid records for new batch data will be ingested within a specified error threshold. This threshold allows you to configure the percentage of acceptable errors before the entire batch fails. |
| Dataflow details | Provide a name and optional description for the dataflow that will bring the CSV data into Platform. The dataflow is automatically assigned a default name when starting this workflow. Changing the name is optional. |
| Alerts | Select from a list of in-product alerts that you want to receive regarding the status of the dataflow once it has been initiated. |
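The partial ingestion threshold amounts to a simple percentage check on each batch. The sketch below is illustrative only (the function name and the at-or-below comparison are assumptions, not Platform internals): valid records are ingested as long as the error rate stays within the configured threshold, and the whole batch fails once it is exceeded.

```python
def batch_passes(total_records: int, error_records: int, threshold_pct: float) -> bool:
    """Illustrative check: a batch ingests its valid records as long as
    the share of erroneous records stays at or below the threshold."""
    if total_records == 0:
        return True
    error_rate = 100 * error_records / total_records
    return error_rate <= threshold_pct

# With a 5% threshold, 30 errors in 1,000 records (3%) still lets the
# 970 valid records ingest; 80 errors (8%) fails the entire batch.
print(batch_passes(1000, 30, 5.0))  # True
print(batch_passes(1000, 80, 5.0))  # False
```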

When you are finished configuring the dataflow, select Next.

The Dataflow detail section is completed.

Select data

On the Select data step, use the left column to upload your CSV file. You can select Choose files to open a file explorer dialog and select the file, or you can drag and drop the file directly onto the column.

The Choose files button and drag-and-drop area highlighted within the Select data step.

After uploading the file, a sample data section appears that shows the first ten rows of the received data so you can verify it has uploaded correctly. Select Next to continue.
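If you want to sanity-check the same rows locally before uploading, a short standard-library sketch (the file name below is a placeholder) mirrors the ten-row preview shown in the UI:

```python
import csv
from itertools import islice

def preview_csv(path: str, rows: int = 10) -> list[list[str]]:
    """Return the header row plus the first `rows` data rows of a CSV file,
    mirroring the sample-data preview shown in the UI."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        return list(islice(reader, rows + 1))  # header + data rows

# Example with a placeholder file name:
# for row in preview_csv("customers.csv"):
#     print(row)
```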

Sample data rows are populated within the workspace.

Configure schema mappings

The ML models are run to generate a new schema based on your dataflow configuration and your uploaded CSV file. When the process is complete, the Mapping step populates to show the mappings for each individual field alongside a fully navigable view of the generated schema structure.
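Conceptually, each generated mapping pairs a source CSV column with a target XDM field path. The sketch below is a simplified illustration (the column names and dotted XDM paths are hypothetical examples, not output of the ML model) of how such a mapping reshapes one ingested row into the nested schema structure:

```python
# Hypothetical source-column -> XDM-path mapping
MAPPING = {
    "first_name": "person.name.firstName",
    "last_name": "person.name.lastName",
    "email": "personalEmail.address",
}

def map_row(row: dict) -> dict:
    """Place each CSV value at its dotted XDM path in a nested dict."""
    out: dict = {}
    for col, path in MAPPING.items():
        if col not in row:
            continue
        node = out
        *parents, leaf = path.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = row[col]
    return out

print(map_row({"first_name": "Ana", "last_name": "Silva", "email": "ana@example.com"}))
```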

The Mapping step in the UI, showing all CSV fields mapped and the resulting schema structure.

NOTE
You can filter all fields in your schema based on a variety of criteria during the source-to-target field mapping workflow. The default behavior is to display all mapped fields. To change the displayed fields, select the filter icon next to the search input field and choose from the dropdown options.
The mapping stage of the CSV to XDM schema creation workflow with the filter icon and dropdown menu highlighted.

From here, you can optionally edit the field mappings or alter the field groups they are associated with according to your needs. When satisfied, select Finish to complete the mapping and initiate the dataflow you configured earlier. The CSV data is ingested into the system and populates a dataset based on the generated schema structure, ready to be consumed by downstream Platform services.

The Finish button being selected, completing the CSV mapping process.