Ingest data using CRM source connectors
Topics: Sources
Created for:
- Intermediate
- Developer
Learn how to batch ingest data from CRM sources into Adobe Experience Platform’s Real-Time Customer Profile and data lake. For more detailed product documentation, see customer relationship management (CRM) on the Source Connectors overview page.
Standard workflow
Learn how to configure the source connector for Salesforce CRM using the standard workflow. The standard workflow requires upfront creation of schemas and identity namespaces. Other CRM source connectors may only support the standard workflow.
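The standard workflow assumes that the target XDM schema and any custom identity namespaces already exist before you configure the connector. As a rough, unofficial sketch of that prerequisite, the request below creates a custom identity namespace through the Identity Service API; the namespace name, code, and credential placeholders are illustrative and should be verified against the Identity Service API reference.

```python
import requests

# Placeholder credentials from an Adobe Developer Console project (not part of this tutorial).
BASE = "https://platform.adobe.io"
HEADERS = {
    "Authorization": "Bearer {ACCESS_TOKEN}",
    "x-api-key": "{API_KEY}",
    "x-gw-ims-org-id": "{ORG_ID}",
    "x-sandbox-name": "prod",
    "Content-Type": "application/json",
}

# Hypothetical custom namespace for the CRM identifier used later in the mapping step.
payload = {
    "name": "Salesforce CRM ID",   # display name (illustrative)
    "code": "CRMID",               # identity symbol referenced in schemas (illustrative)
    "idType": "CROSS_DEVICE",
    "description": "CRM identifier ingested through the Salesforce source connector",
}

resp = requests.post(f"{BASE}/data/core/idnamespace/identities", headers=HEADERS, json=payload)
resp.raise_for_status()
print(resp.json())
```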
Transcript
Hi there. I’m going to give you a quick overview of how to ingest data from your CRM systems into Adobe Experience Platform. Data ingestion is a fundamental step in getting your data into Experience Platform so you can use it to build 360-degree, real-time customer profiles and use them to provide meaningful experiences. Adobe Experience Platform allows data to be ingested from various external sources and gives you the ability to structure, label, and enhance incoming data using Platform services. You can ingest data from a wide variety of sources, such as Adobe applications, cloud-based storage, databases, and many others. Experience Platform provides tools to ensure that the ingested data is XDM compliant and helps prepare the data for Real-Time Customer Profile and other services.

When you log in to Platform, you will see Sources in the left navigation. Clicking Sources takes you to the source catalog screen, where you can see all of the source connectors currently available in Platform. Note that there are source connectors for Adobe applications, CRM solutions, cloud storage providers, and more. Let’s explore how to ingest data from CRM systems into Experience Platform. Each source has its own configuration details, but the general configuration for CRM source connectors is somewhat similar. For our video, let’s use the Salesforce CRM system. Select the desired source. When setting up a source connector for the very first time, you are given the option to configure it. For an already configured source connector, you are given the option to add data. Since this is our first time creating a Salesforce account, let’s choose the option to create a new account and provide the source connection details. Complete the required fields for account authentication, and then initiate a source connection request. If the connection is successful, click Next to proceed to data selection.

In this step, you can explore the list of accessible objects in Salesforce CRM. Let’s search for the loyalty object and quickly preview the object data before we continue. Let’s proceed to the next step to assign a target dataset for the incoming data. You can choose an existing dataset or create a new dataset. Let’s choose the new dataset option and provide a dataset name and description. To create a dataset, you need to have an associated schema. Using the schema finder, assign a schema to this dataset.

Upon selecting a schema for this dataset, Experience Platform performs a mapping between the source file fields and the target fields. This mapping is based on the title and type of each field. This pre-mapping of standard fields is editable. You can quickly clear all mappings and add a custom mapping between a source field and a target field. To do so, choose an attribute from the source file and map it to a corresponding schema attribute. To select a source field, you can either use the dropdown option or type to find a field, and then map it to a target field. Just as we mapped the loyalty field, let’s map the CRM ID field to its target schema field. Similarly, you can complete the mapping for the other fields. The Add calculated field option lets you run functions on source fields to prepare the data for ingestion. You can choose from a list of pre-defined functions that can be applied to your source fields. For example, we can combine the first name field and the last name field into a calculated field using the concatenation function before ingesting the data into a dataset field.
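For readers who prefer to see the mapping step as data rather than UI clicks, here is a minimal, purely illustrative sketch of the field mappings described above, including the concatenated calculated field. The source attribute names, destination XDM paths, and the `sourceType` values are assumptions; confirm the exact mapping-set structure and the concat() function in the Data Prep documentation before relying on anything like this.

```python
# Illustrative only: a conceptual list of the mappings configured in the walkthrough above.
# Source attribute names (e.g. "Loyalty__c", "FirstName") and destination XDM paths are
# hypothetical, as is the "EXPRESSION" source type used for the calculated field.
mappings = [
    {"sourceType": "ATTRIBUTE", "source": "Loyalty__c", "destination": "_tenant.loyalty.level"},
    {"sourceType": "ATTRIBUTE", "source": "Id", "destination": "_tenant.crmId"},
    # Calculated field: combine first and last name with the Data Prep concat() function
    # before ingesting into a single dataset field, as shown in the video.
    {"sourceType": "EXPRESSION", "source": 'concat(FirstName, " ", LastName)', "destination": "person.name.fullName"},
]

# Entries like these would normally be submitted as part of a Data Prep mapping set when
# the dataflow is created; the exact payload is omitted here because it varies by source.
for m in mappings:
    print(f'{m["source"]:<40} -> {m["destination"]}')
```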
Upon selecting a function, you can see the function documentation on the right-hand side of the screen. You can also preview the sample result of a calculated field. Let’s save all the changes and leave the window. You can see the calculated field displayed as a source field. Now, let’s quickly map the calculated field to a schema target field. After reviewing the field mapping, you can also preview data to see how the ingested data will be stored in your dataset. If the mapping looks good, let’s move to the next step.

Scheduling lets you choose the frequency at which data should flow from the source to a dataset. Let’s select a frequency of 15 minutes for this video and set a start time for the dataflow. To allow historical data to be ingested, enable the Backfill option. Backfill is a Boolean value that determines what data is initially ingested. If backfill is enabled, all current files in the specified path will be ingested during the first scheduled ingestion. If backfill is disabled, only the files that are loaded in between the first run of ingestion and the start time will be ingested. Files loaded before the start time will not be ingested. For Load incremental data by, select a field that helps distinguish between new and existing data. Let’s move to the Dataflow step. Provide a name for your Dataflow.
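If the same dataflow were created through the Flow Service API instead of the UI, the 15-minute schedule and the backfill setting chosen above would appear as schedule parameters on the flow request. The sketch below is a minimal, hedged illustration; the flow spec ID, connection IDs, and exact field names are placeholders to verify against the Flow Service API reference.

```python
import time

import requests

BASE = "https://platform.adobe.io/data/foundation/flowservice"
HEADERS = {
    "Authorization": "Bearer {ACCESS_TOKEN}",
    "x-api-key": "{API_KEY}",
    "x-gw-ims-org-id": "{ORG_ID}",
    "x-sandbox-name": "prod",
    "Content-Type": "application/json",
}

# Minimal dataflow sketch: connects a previously created source connection to a target
# connection on a 15-minute schedule with backfill enabled. The flow spec ID and the
# connection IDs are placeholders returned by earlier API calls (assumed, not shown here).
flow = {
    "name": "Luma customer loyalty dataflow",
    "flowSpec": {"id": "{FLOW_SPEC_ID}", "version": "1.0"},   # assumed: look up via GET /flowSpecs
    "sourceConnectionIds": ["{SOURCE_CONNECTION_ID}"],
    "targetConnectionIds": ["{TARGET_CONNECTION_ID}"],
    "scheduleParams": {
        "startTime": int(time.time()),  # epoch seconds for the first run
        "frequency": "minute",
        "interval": 15,                 # every 15 minutes, as in the video
        "backfill": True,               # ingest historical data on the first run
    },
}

resp = requests.post(f"{BASE}/flows", headers=HEADERS, json=flow)
resp.raise_for_status()
print("Dataflow ID:", resp.json().get("id"))
```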
In the Dataflow detail step, the partial ingestion toggle allows you to enable or disable the use of partial batch ingestion. The error threshold allows you to set the percentage of acceptable errors before the entire batch fails. By default, this value is set to 5%. Let’s review the source configuration details and then save your changes.

We do not see any dataflow run statuses yet, as we set a frequency of 15 minutes for our dataflow runs, so let’s wait for the dataflow to run. Let’s refresh the page, and you can now see that our dataflow run status has been completed. Open the dataflow run to view more details about the activity. Our last dataflow run was completed successfully without any failed records. If there were any failed records, since we enabled error diagnostics for our dataflows, we would be able to view the error code and error description for the failed records. Experience Platform also lets users preview or download the error diagnostics to determine what went wrong with the failed records. Let’s go back to the Dataflow activity tab.

At this point, we have verified that the dataflow completed successfully from the source to our dataset. Let’s open our dataset to verify the dataflow and activities. You can open the Luma customer loyalty dataset right from the dataflow window, or you can access it using the Datasets option in the left navigation. Under the dataset activity, you can see a quick summary of ingested batches and failed batches during a specific time window. Scroll down to view the ingested batch ID. Each batch represents actual data ingestion from a source connector to a target dataset. Let’s quickly preview the dataset to ensure that data ingestion was successful and our calculated fields are populated. We now have the dataset populated with data from Salesforce CRM.

Finally, let’s see how to enable this data for Real-Time Customer Profile. In the Real-Time Customer Profile, you can see a holistic view of each customer that combines data from multiple channels, including online, offline, CRM, and third-party data. To enable our dataset for the Real-Time Customer Profile, ensure that the associated schema is enabled for Profile. Once the schema is enabled for Profile, it cannot be disabled or deleted, and fields cannot be removed from the schema after this point. These implications are essential to keep in mind when working with data in your production environment. It is recommended to verify and test the data ingestion process to capture and address any issues before enabling the dataset and schema for Profile. Now, let’s enable Profile for our dataset and save all the changes. In the next successful batch run, data ingested into our dataset will be used for creating real-time customer profiles. Adobe Experience Platform allows data to be ingested from external sources by providing you with the ability to structure, label, and enhance incoming data using Platform services.
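The final step, enabling the dataset for Real-Time Customer Profile, can also be performed outside the UI. The sketch below tags a dataset for Profile through the Catalog Service API, assuming the associated schema is already Profile-enabled; the tag name and endpoint path are assumptions to check against the Catalog Service and Profile documentation, and, as the video notes, enabling Profile cannot be undone.

```python
import requests

BASE = "https://platform.adobe.io/data/foundation/catalog"
HEADERS = {
    "Authorization": "Bearer {ACCESS_TOKEN}",
    "x-api-key": "{API_KEY}",
    "x-gw-ims-org-id": "{ORG_ID}",
    "x-sandbox-name": "prod",
    "Content-Type": "application/json",
}

dataset_id = "{DATASET_ID}"  # the Luma customer loyalty dataset created earlier (placeholder)

# Tag the dataset for Real-Time Customer Profile. The associated schema must already be
# enabled for Profile, and this change cannot be reversed. The tag name and value below
# follow the Catalog Service convention and should be verified against the documentation.
payload = {"tags": {"unifiedProfile": ["enabled:true"]}}

resp = requests.patch(f"{BASE}/dataSets/{dataset_id}", headers=HEADERS, json=payload)
resp.raise_for_status()
print(resp.json())
```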
Template workflow (Salesforce)
Learn how to configure the source connector for Salesforce CRM using the template workflow. This workflow auto-generates assets needed for ingesting Salesforce data based on templates. It saves you upfront time, and the assets can be customized according to your needs. This workflow is not supported for all CRM source connectors.
Transcript
For more information, please see the Sources overview documentation.