# Run and Operate overview
When batch jobs fail or deliver incomplete data, you need to quickly understand what caused the issue. The root cause could be data availability issues, incorrect timing, configuration problems, or system capacity constraints. Without clear visibility, you may spend hours investigating multiple systems before finding the answer.
With Run and Operate tools, you can:
- Inspect your data operations: Get a complete view of job execution status and health across all your workflows.
- Troubleshoot faster: Access detailed diagnostic information and execution history to quickly identify root causes and reduce your mean time to resolution.
- Prevent issues proactively: Analyze job patterns, detect configuration problems before they cause failures, and optimize your data operations.
## Target audiences {#target-audiences}
Run and Operate tools are designed to serve multiple audiences across your organization:
- Data and IT teams: System administrators and data engineers who maintain reliable data pipelines and troubleshoot technical issues.
- Marketing operations: Marketing technologists who inspect data delivery to marketing platforms and resolve activation issues.
- Implementers: Practitioners who validate implementation efficiency and reliability, and who troubleshoot technical issues.
## Prerequisites {#prerequisites}
To access Run and Operate tools, you need the View Job Schedules and View Profile Management access control permissions. Contact your system administrator to ensure you have the appropriate permissions.
## Getting started {#getting-started}
To access the Run and Operate tools from the Experience Platform UI:
1. Log in to your Experience Platform account and select Run and Operate from the left navigation.
2. Select the tool that matches your inspection or troubleshooting needs.
## Available tools {#available-tools}
The following tools help you inspect and optimize your data operations.
### Job schedules {#job-schedules}
With Job Schedules, you can inspect all scheduled batch operations across your organization, per sandbox, including:

- Batch data lake ingestion
- Batch profile ingestion
- Batch segmentation
- Batch destination activation

View job execution status, performance metrics, and execution history to identify patterns and diagnose configuration issues that affect reliability.
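Beyond the UI, the same batch status information can be retrieved programmatically through the Catalog Service API. The sketch below builds such a request; the endpoint and header names follow Experience Platform conventions, but the query parameters shown and the placeholder credentials are illustrative, not a complete reference:

```python
from urllib.parse import urlencode

CATALOG_BATCHES = "https://platform.adobe.io/data/foundation/catalog/batches"

def build_failed_batch_query(sandbox: str, created_after_ms: int):
    """Build the URL and headers for listing failed batches in a sandbox.

    The placeholder credential values are illustrative; a real call needs a
    valid IMS access token, API key, and organization ID.
    """
    params = {
        "status": "failed",               # only failed batch runs
        "createdAfter": created_after_ms, # epoch milliseconds
        "limit": 20,
    }
    headers = {
        "Authorization": "Bearer {ACCESS_TOKEN}",
        "x-api-key": "{API_KEY}",
        "x-gw-ims-org-id": "{ORG_ID}",
        "x-sandbox-name": sandbox,
    }
    return f"{CATALOG_BATCHES}?{urlencode(params)}", headers

url, headers = build_failed_batch_query("prod", 1_700_000_000_000)
```

Filtering by `status` and a time window keeps the response focused on the recent failures you would otherwise drill into through the Job Schedules timeline.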
Job Schedules provides three levels of investigation:
- Inspect job schedules: View all datasets and their scheduled jobs in a timeline to identify patterns and scheduling conflicts across your entire pipeline.
- Identify anti-patterns: Learn to spot and resolve common configuration issues like schedule overlap, dense batch stacking, and excessive batching that impact performance.
- View job details: Drill down into specific datasets and individual job runs to investigate failures, check timing, and verify records processed.
You can also understand dependencies between data processing stages, helping you ensure reliable data flow throughout your Experience Platform workflows.
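Schedule overlap, one of the anti-patterns mentioned above, can be reasoned about as an interval problem: two jobs conflict when one starts before an earlier job has finished. A minimal sketch of that check (the job names and times are hypothetical, not pulled from any real schedule):

```python
from datetime import datetime, timedelta

def find_overlaps(jobs):
    """Return pairs of jobs whose scheduled windows overlap.

    Each job is a (name, start, end) tuple. After sorting by start time,
    each job is compared against the latest end time seen so far, which
    also catches overlaps with a long-running earlier job.
    """
    ordered = sorted(jobs, key=lambda j: j[1])
    overlaps = []
    latest_name, latest_end = None, None
    for name, start, end in ordered:
        if latest_end is not None and start < latest_end:
            overlaps.append((latest_name, name))
        if latest_end is None or end > latest_end:
            latest_name, latest_end = name, end
    return overlaps

t0 = datetime(2024, 1, 1, 2, 0)
jobs = [
    ("profile-ingestion", t0, t0 + timedelta(minutes=45)),
    ("segmentation", t0 + timedelta(minutes=30), t0 + timedelta(minutes=90)),
    ("activation", t0 + timedelta(minutes=95), t0 + timedelta(minutes=120)),
]
print(find_overlaps(jobs))  # [('profile-ingestion', 'segmentation')]
```

The same sorted-interval view is what makes overlap and dense batch stacking easy to spot visually in the Job Schedules timeline.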
### Health checks {#health-checks}
With Health Checks, you can proactively detect schema and identity configuration issues before they impact your business operations. Currently, health checks run daily static scans across your schemas and identity namespaces, surfacing missing best practices, misconfigurations, and patterns that lead to downstream failures.
Health checks currently evaluate five foundational areas:
- Identity field validation: Verify that identity fields have proper length and pattern constraints.
- Identity graph linking rules: Confirm that linking rules are configured to prevent profile collapse.
- People and non-people identity configuration: Validate correct identity type usage across schema classes.
- Custom identity namespace description: Ensure namespace metadata is complete.
- Deprecated identity namespaces: Detect obsolete namespaces for cleanup.
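The identity field validation above is essentially constraint checking: a value must satisfy a length bound and a character pattern. The sketch below illustrates that idea; the default constraints are examples for demonstration, not Experience Platform's actual rules, which a schema declares on its own identity fields:

```python
import re

def validate_identity_value(value: str, max_length: int = 64,
                            pattern: str = r"^[A-Za-z0-9._@-]+$"):
    """Return a list of constraint violations for an identity value.

    The length bound and allowed-character pattern are illustrative
    defaults; real schemas define their own constraints.
    """
    problems = []
    if not value:
        problems.append("empty value")
    elif len(value) > max_length:
        problems.append(f"exceeds max length {max_length}")
    if value and not re.fullmatch(pattern, value):
        problems.append("does not match allowed pattern")
    return problems

print(validate_identity_value("user@example.com"))  # []
print(validate_identity_value("bad value!"))        # ['does not match allowed pattern']
```

Catching an unconstrained or malformed identity field at schema design time is cheaper than debugging the profile fragmentation it causes downstream.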
## Next steps {#next-steps}
Now that you understand the purpose and capabilities of Run and Operate tools, explore the following resources to deepen your knowledge:
- Learn how to use health checks to detect schema and identity configuration issues
- Learn how to inspect job schedules for your batch ingestion and activations
- Learn about batch ingestion to understand how data is ingested into Experience Platform
- Understand how to configure scheduled activations for batch destinations
- Explore dataflow monitoring for destinations