Copy schemas between sandboxes
CREATED FOR:
- Intermediate
- Developer
This video shows how to copy a schema from one sandbox to another in Adobe Experience Platform using the Export/Import Schema API. Build and test your schemas in development sandboxes and then copy them to production. For more information, please visit the schemas documentation.

Transcript
In this video, I’ll show you how to copy a schema from one sandbox to another. We recommend that you build and test your schemas in development sandboxes before using them in a production sandbox. Once you finalize your schemas, you can use the Export/Import schema API to copy them from your development sandbox to your production sandbox. I’ll assume you’re already familiar with the Platform API and how to authenticate. If not, please check out our other content on getting started with the API. You’ll need developer access to both sandboxes and the relevant permission items for the schemas you’re trying to copy over. For this demo, I’ll use Postman to make the API calls. The Export/Import schema API is part of the Schema Registry API collection. In the API reference documentation, there’s a link to the Schema Registry API Postman collection on GitHub.
From here, I can grab the URL of the collection and import it into Postman.
In Postman, first I’ll make sure I’m authenticated.
Then I want to confirm two environment variables that are frequently used with the Platform API but aren’t part of the environment file you download from the Adobe Developer Console. The sandbox name should, at first, be set to the name of the sandbox you’re copying from. The container ID should be set to tenant, since I only need to copy the custom-created aspects of this schema. Now, let’s take a look at the export API.
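If you’d rather script these calls than click through Postman, here’s a minimal Python sketch of the setup every call in this video shares. The base URL and header names are the standard Schema Registry ones; the token, API key, org ID, and sandbox name are placeholders to swap for your own values.

```python
# Common setup for the Schema Registry calls in this video.
# All credential values are placeholders from your own Adobe Developer Console project.
import requests

BASE = "https://platform.adobe.io/data/foundation/schemaregistry"

SANDBOX_NAME = "dev-sandbox"   # placeholder: the sandbox you are copying FROM
CONTAINER_ID = "tenant"        # custom resources live in the tenant container

headers = {
    "Authorization": "Bearer {ACCESS_TOKEN}",  # placeholder
    "x-api-key": "{API_KEY}",                  # placeholder
    "x-gw-ims-org-id": "{ORG_ID}",             # placeholder
    "x-sandbox-name": SANDBOX_NAME,
}
```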
Notice that it needs a path parameter for the resource ID. This is the resource you’re trying to copy. It can be either the URL-encoded $id, which you can grab out of the Platform interface and encode, or the meta:altId value. Since we’re here in Postman with the entire Schema Registry API collection open, our quickest option to get this ID is a call in the schemas folder called “Retrieve a list of schemas within the specified container.” With my container ID set to tenant, which we just confirmed in the environment variables, I’ll make this call and it returns all of the schemas in this sandbox. Now you can see both the unencoded $id, which I would have to URL encode, and the meta:altId. I’ll copy the meta:altId, return to the export API call, paste it in, and attempt to make the call. Now I get a 400 Bad Request error. Why is that? If I look at the response body, it’s because it didn’t like my Accept header value: a version needs to be specified. If we look back at the response from our list call, you’ll see my schema has a version, 1.6. If I go back to my export API call and look at the headers, there are two entries for the Accept header. I want to update the active one by appending ;version=1, which pulls down the latest major version 1 release, which in my case is 1.6. The sketch below shows the equivalent list and export calls in Python.
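Building on the setup sketch above, here’s roughly what those two calls look like scripted; the endpoint paths follow the Schema Registry API reference, and picking the first schema from the list is purely illustrative.

```python
# Continues from the setup sketch above (reuses requests, BASE, CONTAINER_ID, headers).
import urllib.parse

# 1. List schemas in the tenant container; the xed-id Accept header returns IDs only.
list_resp = requests.get(
    f"{BASE}/{CONTAINER_ID}/schemas",
    headers={**headers, "Accept": "application/vnd.adobe.xed-id+json"},
)
schema = list_resp.json()["results"][0]   # illustrative: pick the schema to copy
resource_id = schema["meta:altId"]        # or URL-encode schema["$id"] instead

# 2. Export it. Omitting ";version=1" returns the 400 shown in the video;
# version 1 resolves to the latest 1.x release (1.6 in this demo).
export_resp = requests.get(
    f"{BASE}/rt/export/{urllib.parse.quote(resource_id, safe='')}",
    headers={**headers, "Accept": "application/vnd.adobe.xed-full+json; version=1"},
)
payload = export_resp.json()   # the schema plus all of its sub-entities
```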
Now I can make the export call. If we look at the response body, you’ll see that it pulls in all of the sub-entities, like data types and field groups (previously known as mixins). Just be aware that it doesn’t copy any identity settings you might have had on the fields, or any profile settings.
Now, let’s put this schema in our other sandbox. The first thing we need to do is update our environment variable with the destination sandbox name.
Then, we can just copy the entire body from the export response and paste it into the request body of the import request.
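Scripted, the import step looks roughly like this, reusing the payload captured in the export sketch above. The destination sandbox name is a placeholder, and the Content-Type is assumed to mirror what the export returned.

```python
# Continues from the sketches above (reuses requests, BASE, headers, payload).
import_headers = {
    **headers,
    "x-sandbox-name": "prod-sandbox",  # placeholder: the sandbox you are copying TO
    # Content-Type assumed to mirror the export response's media type.
    "Content-Type": "application/vnd.adobe.xed-full+json; version=1",
}
import_resp = requests.post(f"{BASE}/rt/import", headers=import_headers, json=payload)
import_resp.raise_for_status()   # the new schema keeps the same IDs as the source
```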
Send the call and that’s it. We’ll now see the schema in our list of schemas for this sandbox, and if we open it up in the interface, we’ll see it contains all of the field groups and data types. What’s great about this approach to copying schemas is that, under the hood, all of the entities (the schema, field groups, and data types) have the same IDs in both sandboxes, making it easier to manage subsequent updates with the API.
Just so you’re aware of the limitations: the new schema won’t be enabled for Profile, like it might have been in your source sandbox. Identity fields won’t be labeled like they probably were in the source sandbox, and any custom namespaces used in the identity field settings will need to be recreated. The Export/Import API works great to copy the entire schema, but any iterative updates you make after this point with the API will have to be made with PATCH requests. The good news is that since all of the sub-entities have the same IDs, you just need to switch out the sandbox name and you can make otherwise identical API calls to both sandboxes, as in the sketch below.
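As a hedged illustration of that last point, here’s what one identical update sent to both sandboxes could look like; the JSON Patch operation and the sandbox names are made up for the example.

```python
# Continues from the sketches above (reuses requests, urllib.parse, BASE,
# CONTAINER_ID, headers, resource_id). The patch body is illustrative only.
patch_body = [
    {"op": "replace", "path": "/title", "value": "Loyalty Members v2"}
]
for sandbox in ("dev-sandbox", "prod-sandbox"):   # placeholder sandbox names
    resp = requests.patch(
        f"{BASE}/{CONTAINER_ID}/schemas/{urllib.parse.quote(resource_id, safe='')}",
        headers={**headers, "x-sandbox-name": sandbox,
                 "Content-Type": "application/json"},
        json=patch_body,
    )
    resp.raise_for_status()
```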
So, that’s how to copy a schema between sandboxes. Good luck!