Upload schedule data to track live content
You can upload schedule data for past live Streaming Media content to track viewership of that content more easily and accurately. You can track viewership for individual programs, and even for specific topics or program segments.
The following are examples of live content that are supported with schedule data upload:
- FAST (Free Ad Supported TV) platforms
- Local streams
- Live sports
- News or topical programming
Capabilities
Various capabilities are available when using schedule data uploads of past live Streaming Media content. This section describes some of the key capabilities that help to analyze program performance.
These capabilities are available regardless of how you implemented Streaming Media Collection.
- Accurately track program schedules: Identify the start and end times of each individual program in the live stream for the period of time that you want to analyze. With accurate start and end times, the precise running time is reflected and can be analyzed against each viewer session.

  For example, the precise beginning and end times of a live sporting event are often not known until the event is over. Schedule data uploads allow you to get accurate reporting by updating the start and end times after the program finishes.

- Track individual topics or program segments: Create new time-based dimensions for specific topics or program segments (time slots) within a given program. These time-based dimensions allow you to analyze viewership of a program at a more specific level, helping you gather insights about which topics or program segments resonated best.

  For example, when analyzing a live sporting event, such as a soccer match, you can create separate dimensions for the first half, half time, and second half. Tracking specific topics or segments within a program in this way allows for more detailed breakdowns of viewer behavior.

- Build user journeys in Journey Optimizer: Track which programs a person viewed in a given session (or even which topics or program segments the person viewed), then use this data in Adobe Journey Optimizer to build user journeys for customers who watched a certain program or who showed interest in a particular topic.
Understand how schedule data works for Streaming Media
The schedule data functionality for Streaming Media works in the following way:
- Reads schedule program records from the schedule program dataset, filtering by the date of the schedule. Only programs that aired between 24 and 48 hours in the past are processed.

- Reads the media close events from the media dataset, filtering by date and by the XDM path specified in the schedule program records.

- For each media close event, generates as many media schedule start events as there were shows overlapping with the media session.

  Each media schedule start event contains the name and the length of the scheduled program. A new time metric called scheduleTimePlayed contains the number of seconds that the media session overlapped with the scheduled program. The timestamp of the schedule start event is the timestamp of when the show started.

- Writes the new schedule start events to the AEP media dataset.
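The overlap logic described above can be sketched in a few lines. This is a simplified illustration of the documented behavior, not Adobe's actual implementation; the program fields (name, startTimestamp, length) match the example schedule records shown later in this article:

```python
from datetime import datetime, timedelta

def schedule_start_events(session_start, session_end, programs):
    """For one media session (taken from a media close event), emit a
    schedule start event for every scheduled program that overlapped the
    session, with scheduleTimePlayed set to the overlap in seconds."""
    events = []
    for prog in programs:
        prog_start = datetime.fromisoformat(prog["startTimestamp"])
        prog_end = prog_start + timedelta(seconds=prog["length"])
        # Overlap of [session_start, session_end] with [prog_start, prog_end]
        overlap = (min(session_end, prog_end)
                   - max(session_start, prog_start)).total_seconds()
        if overlap > 0:
            events.append({
                "name": prog["name"],
                "length": prog["length"],
                "scheduleTimePlayed": overlap,
                # The event timestamp is when the show started
                "timestamp": prog["startTimestamp"],
            })
    return events

# A session from 00:45 to 01:30 overlaps a 00:30-01:00 show by 900 s
# and a 01:00-02:00 show by 1800 s.
shows = [
    {"name": "Show Name", "startTimestamp": "2025-05-01T00:30:00+00:00", "length": 1800},
    {"name": "Show Name 2", "startTimestamp": "2025-05-01T01:00:00+00:00", "length": 3600},
]
events = schedule_start_events(
    datetime.fromisoformat("2025-05-01T00:45:00+00:00"),
    datetime.fromisoformat("2025-05-01T01:30:00+00:00"),
    shows,
)
```

A session that overlaps two shows produces two schedule start events, one per show, each carrying the portion of the session spent in that show.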
Prerequisites
To upload schedule data of past live content, your Streaming Media environment must meet the following prerequisites:
- Streaming Media Collection must be enabled for tracking on the content for which you want to upload schedule data, as described in Tracking overview.

- Use Streaming Media Collection with Customer Journey Analytics. The ability to upload schedule data is not available with Adobe Analytics.
Create a program schedule dataset in AEP
Before you can push schedule information, you must create a program schedule dataset in Experience Platform:
- Create a schema based on the Media Analytics Scheduled Program XDM class.

  This is the XDM definition of the Media Analytics Scheduled Program class.

- Create a dataset based on the schema that you created.

- Continue with the following section, Push schedule information.
Push schedule information
After you Create a program schedule dataset, you can push schedule information:
- Create a .json file with the schedule information.

  The .json file must contain an array of Schedule Program objects, in accordance with the XDM schema.

- Upload the .json file:
  NOTE: The cURL examples in this section use the following variables:

  - For authentication with Adobe Developer: CUSTOMER_API_KEY and AUTH_TOKEN
  - The organization ID: CUSTOMER_ORG_ID
  - The dataset ID of the record dataset created in the setup: DATASET_ID
  - The batch ID created in the first request and used in the file upload: BATCH_ID
  - The name of the file used to push records: FILE_NAME
  - Create a new batch, then get the batch id from the response.

    Consider the following example of using cURL to create a new AEP batch:

    ```
    curl -i 'https://platform.adobe.io/data/foundation/import/batches' \
      -X POST \
      -H 'Accept: application/json' \
      -H 'x-api-key: <CUSTOMER_API_KEY>' \
      -H 'x-gw-ims-org-id: <CUSTOMER_ORG_ID>' \
      -H 'Content-Type: application/json' \
      -H 'Authorization: Bearer <OAUTH_TOKEN>' \
      --data-raw '{"datasetId":"<DATASET_ID>","inputFormat":{"format":"json","isMultiLineJson":true},"tags":{"test":["2"]}}'
    ```

    The response contains the batch id:

    ```
    HTTP/1.1 201 Created
    {
      "id": "BATCH_ID",
      "imsOrg": "CUSTOMER_ORG_ID",
      "updated": 1749838941763,
      "status": "loading",
      "created": 1749838941763,
      "relatedObjects": [
        {
          "type": "dataSet",
          "id": "DATASET_ID"
        }
      ],
      "version": "1.0.0",
      ............
    }
    ```
  - Push the .json file that contains the program schedule data records, using the batch id.

    To push schedule information, use the AEP batch APIs, as described in Batch ingestion API overview.

    Consider the following example of using cURL to push a file with the schedule records:

    ```
    curl -i 'https://platform.adobe.io/data/foundation/import/batches/<BATCH_ID>/datasets/<DATASET_ID>/files/<FILE_NAME>' \
      -X PUT \
      -H 'x-api-key: <CUSTOMER_API_KEY>' \
      -H 'x-gw-ims-org-id: <CUSTOMER_ORG_ID>' \
      -H 'Content-Type: application/json' \
      -H 'Authorization: Bearer <OAUTH_TOKEN>' \
      --upload-file ./schedule_21_05_2025.json
    ```
  - Complete the batch.

    Consider the following example of using cURL to complete the batch:

    ```
    curl -i 'https://platform.adobe.io/data/foundation/import/batches/<BATCH_ID>?action=COMPLETE' \
      -X POST \
      -H 'x-api-key: <CUSTOMER_API_KEY>' \
      -H 'x-gw-ims-org-id: <CUSTOMER_ORG_ID>' \
      -H 'Content-Type: application/json' \
      -H 'Authorization: Bearer <OAUTH_TOKEN>'
    ```

- Continue with the following section, Log a support ticket with Adobe Customer Care.
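The same three-call flow (create a batch, upload the file, complete the batch) can also be scripted. The following Python sketch is an illustration only: it builds the same requests with the standard library and assumes you supply a valid API key, organization ID, and access token.

```python
import json
import urllib.request

# Base path of the AEP Batch Ingestion API, as used in the cURL examples.
BASE = "https://platform.adobe.io/data/foundation/import"

def auth_headers(api_key: str, org_id: str, token: str) -> dict:
    """Headers shared by all three batch-ingestion calls."""
    return {
        "x-api-key": api_key,
        "x-gw-ims-org-id": org_id,
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

def create_batch_request(dataset_id: str, headers: dict) -> urllib.request.Request:
    """Step 1: POST /batches; the JSON response contains the batch id."""
    body = json.dumps({
        "datasetId": dataset_id,
        "inputFormat": {"format": "json", "isMultiLineJson": True},
    }).encode()
    return urllib.request.Request(f"{BASE}/batches", data=body,
                                  headers=headers, method="POST")

def upload_url(batch_id: str, dataset_id: str, file_name: str) -> str:
    """Step 2: PUT the schedule .json file to this URL."""
    return f"{BASE}/batches/{batch_id}/datasets/{dataset_id}/files/{file_name}"

def complete_url(batch_id: str) -> str:
    """Step 3: POST to this URL to mark the batch complete."""
    return f"{BASE}/batches/{batch_id}?action=COMPLETE"
```

Each request would be sent with `urllib.request.urlopen` (or any HTTP client); error handling and token refresh are omitted from this sketch.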
Log a support ticket with Adobe Customer Care
Log a support ticket with Adobe Customer Care with the following information:
- Media dataset: Specify the dataset ID of the dataset from which the media sessions data is read.

- Schedule dataset: Specify the dataset ID of the dataset to which the schedule records are pushed.

- Output media dataset: Specify the dataset ID of the dataset to which the schedule start events are saved.

  This dataset ID can be the same as the Media dataset ID. If it is different, the dataset must still use the same XDM schema as the Media dataset.

- Organization ID: Specify your organization ID.
Example of a schedule .json file with two records
The following example shows a schedule .json file with two records. Each .json file should contain all scheduled programs for a day.
[
{
"_id": "any_identifier_as_id_1",
"customMetadata": [
{
"name": "Sample value",
"value": "Sample value"
}
],
"defaultMetadata": {
"album": "Sample value",
"artist": "Sample value",
"assetID": "Sample value",
"author": "Sample value",
"cdn": "Sample value",
"dayPart": "Sample value",
"episode": "Sample value",
"feed": "Sample value",
"firstAirDate": "Sample value",
"firstDigitalDate": "Sample value",
"genreList": [
"Sample value"
],
"label": "Sample value",
"network": "Sample value",
"originator": "Sample value",
"publisher": "Sample value",
"rating": "Sample value",
"season": "Sample value",
"show": "Sample value",
"showType": "Sample value",
"station": "Sample value",
"streamFormat": "Sample value"
},
"mediaProgramDetails": {
"length": 1800,
"name": "Show Name",
"startTimestamp": "2025-05-01T00:30:00+00:00"
},
"scheduleDate": "2025-05-01",
"scheduleFilter": {
"filterPath": "xdm.mediaReporting.sessionDetails.channel",
"filterValue": "Channel Name"
}
},
{
"_id": "any_identifier_as_id_2",
"customMetadata": [
{
"name": "Sample value",
"value": "Sample value"
}
],
"defaultMetadata": {
"album": "Sample value",
"artist": "Sample value",
"assetID": "Sample value",
"author": "Sample value",
"cdn": "Sample value",
"dayPart": "Sample value",
"episode": "Sample value",
"feed": "Sample value",
"firstAirDate": "Sample value",
"firstDigitalDate": "Sample value",
"genreList": [
"Sample value"
],
"label": "Sample value",
"network": "Sample value",
"originator": "Sample value",
"publisher": "Sample value",
"rating": "Sample value",
"season": "Sample value",
"show": "Sample value",
"showType": "Sample value",
"station": "Sample value",
"streamFormat": "Sample value"
},
"mediaProgramDetails": {
"length": 3600,
"name": "Show Name 2",
"startTimestamp": "2025-05-01T01:00:00+00:00"
},
"scheduleDate": "2025-05-01",
"scheduleFilter": {
"filterPath": "xdm.mediaReporting.sessionDetails.channel",
"filterValue": "Channel Name"
}
}
]
Understand schedule program fields in the example
- mediaProgramDetails: Contains the minimum information required to create the schedule start event:

  - startTimestamp: The time when the show started.
  - name: The friendly name of the show.
  - length: The number of seconds the show lasted.

  IMPORTANT: If you have multiple schedule data requests, they cannot have overlapping start and end times.

- scheduleDate: The date on which the show aired, in YYYY-MM-DD format. This date is used to filter the schedule dataset and get all the schedules for which Adobe creates schedule start events.

- scheduleFilter: Used to filter all media session close events.

  - filterPath: An XDM path to the field that is used for filtering.
  - filterValue: The value used for filtering.

- customMetadata: Custom metadata that you want to add to the schedule start events. This metadata overwrites the custom metadata present on the session close events.

- defaultMetadata: A specific list of dimensions that can add to or overwrite the default metadata present on the media close calls.

  Consider the following example of a dimension you could create and then report on in Customer Journey Analytics:

  - "Episode name": This dimension could help you learn which episodes in a particular series are performing best.

- Continue with Analyze data in Customer Journey Analytics.
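Because the schedule data cannot contain overlapping start and end times, it can help to check a schedule file before uploading it. The following validator is a hypothetical helper, not part of any Adobe tooling; it checks the required top-level fields from the example above and flags overlapping programs that share the same filter value.

```python
from datetime import datetime, timedelta

# Top-level fields used in the example schedule records above.
REQUIRED = ("_id", "scheduleDate", "mediaProgramDetails", "scheduleFilter")

def validate_schedule(records):
    """Return a list of problems: missing required fields, or two
    programs with the same filterValue whose time ranges overlap."""
    problems = []
    intervals = []
    for rec in records:
        missing = [f for f in REQUIRED if f not in rec]
        if missing:
            problems.append(f"{rec.get('_id', '?')}: missing {missing}")
            continue
        details = rec["mediaProgramDetails"]
        start = datetime.fromisoformat(details["startTimestamp"])
        end = start + timedelta(seconds=details["length"])
        key = rec["scheduleFilter"]["filterValue"]
        intervals.append((key, start, end, rec["_id"]))
    # After sorting by (filterValue, start), only adjacent entries
    # with the same filterValue can overlap.
    intervals.sort()
    for (k1, s1, e1, id1), (k2, s2, e2, id2) in zip(intervals, intervals[1:]):
        if k1 == k2 and s2 < e1:
            problems.append(f"{id1} overlaps {id2}")
    return problems
```

Running the validator against the two example records above returns an empty list, because the first show ends exactly when the second begins.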
Analyze data in Customer Journey Analytics
Within one day of uploading your data file as described in Push schedule information, your data is ready to report on in Customer Journey Analytics.
To report on your past live Streaming Media data in Customer Journey Analytics:
- Create a new project or open an existing project.

- Build out the project by creating any tables or visualizations that you need for analyzing your past live Streaming Media data.

  When building out the project, use the information that you included in the schedule data file and sent to Adobe Customer Care. This includes the matching key, dimensions, and any additional metadata. For more information, see Push schedule information.