This video shows how to stream data to Adobe Experience Platform in real time using the HTTP API endpoint.
Data ingestion is a fundamental step to getting your data into Experience Platform, so you can use it to build 360-degree real-time customer profiles and use them to provide meaningful experiences. For this video, we are using a fictional retail brand called Luma. When a Luma customer visits the online store using a web or mobile device, user interactions and events can be sent to Platform with the help of a web or mobile SDK and an edge configuration that tells where the data should go. Let's say our fictional customer Edith likes a cool jacket on the Luma site and then visits a nearby Luma store to try it on. Finally, Edith decides to make a purchase. Since the transaction happened in store, we do not want to lose the customer event details; we should still be able to send the data to Platform. So, in this video, let me show you how to stream data to Experience Platform in real time using the HTTP API endpoint. Let's switch screens and open our Platform homepage. When you log into Platform, you will see Schemas in the left navigation. Click to open it, and let's browse through the schema list to find a custom schema that I built for this video. Open the Luma In-Store Events Schema. The schema is based on the XDM ExperienceEvent class and contains a custom mixin that captures in-store purchase event details. For the schema, we have also identified the loyalty ID field as the primary identity field, and it is marked as required. This means the data ingested into a dataset based on the Luma In-Store Events Schema should contain a loyalty ID value for each record that gets ingested. We will cover this with an example later in this video. We now have a schema to capture the in-store purchase events. Now let's create a streaming source connection. From the Platform homepage, navigate to Sources in the left navigation.
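As a rough sketch, a streaming ingestion request body for this scenario could look like the following. The schema ID, dataset ID, organization ID, and the `_lumaTenant` namespace and its fields are all illustrative placeholders, not the actual values used in the video:

```python
# Sketch of a streaming ingestion payload for an in-store purchase event.
# Every ID and the "_lumaTenant" tenant namespace below are placeholders.
payload = {
    "header": {
        "schemaRef": {
            "id": "https://ns.adobe.com/_lumaTenant/schemas/abc123",  # placeholder schema ID
            "contentType": "application/vnd.adobe.xed-full+json;version=1",
        },
        "imsOrgId": "EXAMPLE_ORG@AdobeOrg",  # your IMS organization ID
        "datasetId": "5e9f0123abcd",         # target dataset ID (placeholder)
    },
    "body": {
        "xdmMeta": {
            "schemaRef": {
                "id": "https://ns.adobe.com/_lumaTenant/schemas/abc123",
                "contentType": "application/vnd.adobe.xed-full+json;version=1",
            }
        },
        "xdmEntity": {
            "_id": "event-0001",
            "timestamp": "2021-06-01T19:20:30Z",
            "_lumaTenant": {
                "loyaltyId": "L123456",   # primary identity -- required by the schema
                "storeId": "STORE-042",
                "purchaseTotal": 129.99,
            },
        },
    },
}
```

Because the loyalty ID is the primary identity and marked required, every event sent to this dataset needs a value like the `loyaltyId` above; we will see later what happens when it is missing.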
Clicking Sources will take you to the source catalog screen, where you can see all of the source connectors currently available in Platform. Let's navigate to the streaming source category and add the HTTP API connector. Provide a name and description for the connection. Enabling authentication lets you make sure that data is coming from a trusted source. You can skip the authentication for now and then click "Connect to source." If the connection is successful, let's proceed to the next step and assign a target dataset for the incoming data. Let's choose the new dataset option and provide a dataset name and description. To create a dataset, you need to have an associated schema, so using the schema finder, assign a schema to this dataset. Click Next to move to the data flow step, and then finally review the streaming source configuration. Click Finish to save your changes, and you will be redirected to the Sources window with the list of data flows associated with your account. Please note the streaming endpoint, as we will need it in the next section to stream data to our dataset. Let's open the target dataset; currently there is no data streamed to it. In the next section, let's stream data to Platform using the streaming endpoint that we obtained in the previous step. We will also need the dataset ID to stream data, so let's copy that as well. To stream data into Platform, I'm using a Postman collection, but Postman is not required to use Experience Platform APIs. Postman makes API workflows easier, and Adobe Experience Platform provides a set of Postman collections to help you execute API calls and learn how they operate. In Postman, make sure you choose the right environment and that the environment variables are populated before you execute an API call. I have a collection created in Postman that authenticates and generates an access token.
In the Platform UI, we have already created a streaming connection and obtained the streaming endpoint, so we can skip the create-connection requests in the collection. In the stream profile record collection, we have completed the first two steps in the Platform UI, so now it's time to stream data to Platform. Streaming ingestion APIs support two modes of validation, synchronous and asynchronous. Asynchronous validation is a method of validation that does not provide immediate feedback; instead, records that fail validation are sent to a failed batch in the data lake to prevent data loss. This failed data can later be retrieved for further analysis and replay. This method should be used in production. Unless otherwise requested, streaming ingestion operates in asynchronous validation mode. Validation checks make sure that each XDM field value is in the right format before ingesting the data into Platform. Let's take a look at the asynchronous validation method when streaming data to Platform using the streaming endpoint. We are making a POST call to the streaming endpoint. Let's make sure that we are passing the necessary request headers, and let's also explore the body of the request. You can note that it contains information about the schema, IMS organization, and destination dataset. If you scroll down, you can also view the XDM entity that includes information on the in-store purchase event by Edith. One thing to note here is that the loyalty ID is missing for this event. A moment ago in the schema UI, we saw that the loyalty ID is the primary identity and required. Let's see what happens in the asynchronous mode when data is sent to Platform and does not meet the schema definition. We received a 200 OK response, so clearly the request did not fail, but we are not sure if the data got ingested successfully. We can check that later in this video. Now, let's make another streaming request, this time with the validation mode set to synchronous.
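Outside of Postman, the same POST call can be scripted. The sketch below builds (but does not send) the request with Python's standard library; the inlet URL is a placeholder, and the `syncValidation=true` query parameter is my assumption for the optional synchronous-validation switch described next:

```python
import json
import urllib.request


def build_streaming_request(endpoint: str, payload: dict, sync_validation: bool = False):
    """Build (but do not send) a POST request to a streaming inlet endpoint.

    With sync_validation=False, the inlet validates asynchronously: it returns
    200 OK immediately and routes records that fail XDM validation to a failed
    batch in the data lake rather than rejecting the call.
    """
    url = endpoint
    if sync_validation:
        # Assumption: the optional query parameter enabling synchronous validation.
        url += "?syncValidation=true"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder inlet URL; use the streaming endpoint from your own source connection.
req = build_streaming_request("https://dcs.adobedc.net/collection/EXAMPLE_ID", {"body": {}})
```

Sending `req` with `urllib.request.urlopen` would perform the actual ingestion call; here we stop at request construction since the endpoint is illustrative.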
By default, synchronous validation is not turned on. To enable it, you must pass in the optional query parameter. Again, let's make sure that we are passing the necessary request headers, and let's explore the request body. In the XDM entity, we now have the loyalty ID and other in-store purchase event details. To see what happens when data is sent to Platform and does not meet the schema definition, let's remove the loyalty ID and make a streaming ingestion request. We received a 400 Bad Request with an error message for the missing loyalty ID field. Let's add the loyalty ID back to our request and make the call.
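Synchronous validation rejects the call with a 400 Bad Request when a required field like the loyalty ID is missing. You can catch the same mistake before sending with a local pre-flight check. This helper is purely illustrative, not part of the Platform API, and `_lumaTenant` is a placeholder tenant namespace:

```python
def missing_required_fields(payload: dict, required=("loyaltyId",)) -> list:
    """Return the required field names absent from the event's tenant object.

    Illustrative pre-flight check: the required-field list mirrors the schema
    in the video, where loyaltyId is the primary identity and marked required.
    """
    entity = payload.get("body", {}).get("xdmEntity", {}).get("_lumaTenant", {})
    return [field for field in required if field not in entity]


# An event with the required identity, and one without it.
good = {"body": {"xdmEntity": {"_lumaTenant": {"loyaltyId": "L123456"}}}}
bad = {"body": {"xdmEntity": {"_lumaTenant": {"storeId": "STORE-042"}}}}
```

Checking `missing_required_fields(bad)` flags `loyaltyId` before the request is ever sent, which mirrors the error message the synchronous endpoint returns.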
This time, we received a 200 OK response, and the synchronous validation passed. This means the data will stream successfully into the dataset. Synchronous validation is a method of validation that provides immediate feedback about why an ingestion failed. However, upon failure, records that failed validation are dropped and prevented from being sent downstream. As a result, synchronous validation should only be used during the development process. When doing synchronous validation, callers are informed of both the result of the XDM validation and, if it failed, the reason for the failure. It might take a few minutes for the data to land in our dataset. Let's switch screens, open our Platform homepage, navigate to Datasets, and open the Luma offline events dataset. Under the dataset activity, you can view two batch ingestions. The first ingestion, which has a failed status, is the asynchronous streaming ingestion that we executed from Postman without the loyalty ID field value. Let's open the batch ID. You can view the error code and error description explaining why the batch failed without ingesting data into the dataset. This helps Platform users understand what needs to be changed in the streaming connection for successful batch ingestions in the future. Now, let's open the batch that passed. In the overview window, you can view the number of records ingested and view additional details. Using the preview dataset option, you can view the data ingested into the dataset. In the preview modal, note how you can select different fields of the schema on the left to preview those specific data points. At this point, we were able to successfully stream data into a dataset using the API endpoint. Once you're confident that your data is clean, you can enable your dataset for Real-Time Customer Profile and Identity Service. Now, let's enable Profile for our dataset and save our changes.
In the next successful batch run, data ingested into our dataset will be used for creating real-time customer profiles. I hope I was able to provide you an overview of how to stream data into Experience Platform using the HTTP API endpoint.