Oracle Object Storage connector
Topics: Sources
Created for: Developer
Adobe Experience Platform provides native connectivity for cloud providers like AWS and Google Cloud Platform, allowing you to bring data from these systems into Experience Platform for use in downstream services and destinations.
Cloud storage sources can bring your data into Experience Platform without the need to manually download, format, or upload it. Ingested data can be formatted as XDM JSON, XDM Parquet, or delimited. Every step of the process is integrated into the sources workflow. Experience Platform allows you to bring in data from Oracle Object Storage through batch ingestion.
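Oracle Object Storage exposes an Amazon S3-compatible API, so data can be staged in a bucket before ingestion using standard S3 tooling. The sketch below is illustrative only, assuming boto3 and placeholder values for the namespace, region, bucket, and customer secret key credentials.

```python
# Illustrative sketch: stage a delimited file in Oracle Object Storage through
# its S3-compatibility endpoint so Experience Platform can later ingest it.
# The namespace, region, bucket, and credentials are placeholder assumptions.
import boto3

NAMESPACE = "my-namespace"  # hypothetical Oracle Cloud Object Storage namespace
REGION = "us-ashburn-1"     # hypothetical region identifier

s3 = boto3.client(
    "s3",
    endpoint_url=f"https://{NAMESPACE}.compat.objectstorage.{REGION}.oraclecloud.com",
    aws_access_key_id="<customer-secret-key-id>",
    aws_secret_access_key="<customer-secret-key>",
    region_name=REGION,
)

# Upload a CSV file; delimited files like this one can be ingested as-is.
s3.upload_file("profiles.csv", "my-bucket", "acme/ingest/profiles.csv")
```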
IP address allow list
A list of IP addresses must be added to an allow list prior to working with source connectors. Failing to add your region-specific IP addresses to your allow list may lead to errors or degraded performance when using sources. See the IP address allow list document for more information.
Naming constraints for files and directories
The following constraints apply when naming your cloud storage files or directories. A minimal validation sketch follows the list.

- Directory and file component names cannot exceed 255 characters.
- Directory and file names cannot end with a forward slash (`/`). If one is provided, it is automatically removed.
- The following reserved URL characters must be properly escaped: `! ' ( ) ; @ & = + $ , % # [ ]`
- The following characters are not allowed: `" \ / : | < > * ?`
- Illegal URL path characters are not allowed. Code points like `\uE000`, while valid in NTFS filenames, are not valid Unicode characters. In addition, some ASCII or Unicode characters are not allowed, such as control characters (0x00 to 0x1F, `\u0081`, and so on). For rules governing Unicode strings in HTTP/1.1, see RFC 2616, Section 2.2: Basic Rules, and RFC 3987.
- The following file names are not allowed: LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, PRN, AUX, NUL, CON, CLOCK$, dot character (.), and two dot characters (..).
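To make these rules concrete, the following is a minimal, unofficial Python check for a single directory or file component, written directly from the constraints above. It is a sketch, not a validator shipped by Experience Platform.

```python
import re

# Reserved names and disallowed characters, copied from the list above.
RESERVED_NAMES = {
    "LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9",
    "COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9",
    "PRN", "AUX", "NUL", "CON", "CLOCK$", ".", "..",
}
DISALLOWED_CHARS = set('"\\/:|<>*?')

def validate_component(name: str) -> list[str]:
    """Return a list of rule violations for one directory or file component."""
    problems = []
    if len(name) > 255:
        problems.append("component exceeds 255 characters")
    if name.upper() in RESERVED_NAMES:
        problems.append(f"'{name}' is a reserved name")
    if DISALLOWED_CHARS & set(name):
        problems.append("contains a disallowed character")
    # Control characters (0x00 to 0x1F) and \u0081 are not allowed.
    if re.search(r"[\x00-\x1f\u0081]", name):
        problems.append("contains a control character")
    return problems

print(validate_component("CLOCK$"))    # ["'CLOCK$' is a reserved name"]
print(validate_component("data.csv"))  # []
```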
Connect Oracle Object Storage to Experience Platform
The documentation below provides information on how to connect Oracle Object Storage to Adobe Experience Platform using APIs or the user interface:
- Using APIs
- Using the UI
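As a preview of the API path, the sketch below shows the first step of that tutorial: creating a base connection through the Flow Service API. The request headers follow the standard Experience Platform API conventions, but the authentication parameters (serviceUrl, accessKeyId, secretAccessKey) and the connection spec ID are assumptions modeled on the source's S3-compatible credentials, so confirm the exact request body in the API guide linked above.

```python
# Hedged sketch: create a base connection for Oracle Object Storage via the
# Flow Service API. All bracketed values are placeholders; the auth parameter
# names are assumptions to be verified against the official API guide.
import requests

BODY = {
    "name": "Oracle Object Storage base connection",
    "description": "Connector for Oracle Object Storage",
    "auth": {
        "specName": "Access Key",
        "params": {
            "serviceUrl": "https://<namespace>.compat.objectstorage.<region>.oraclecloud.com",
            "accessKeyId": "<customer-secret-key-id>",
            "secretAccessKey": "<customer-secret-key>",
        },
    },
    "connectionSpec": {
        # Look up the Oracle Object Storage spec ID via GET /connectionSpecs.
        "id": "<oracle-object-storage-connection-spec-id>",
        "version": "1.0",
    },
}

resp = requests.post(
    "https://platform.adobe.io/data/foundation/flowservice/connections",
    headers={
        "Authorization": "Bearer <access-token>",
        "x-api-key": "<api-key>",
        "x-gw-ims-org-id": "<org-id>",
        "x-sandbox-name": "<sandbox-name>",
        "Content-Type": "application/json",
    },
    json=BODY,
)
resp.raise_for_status()
print(resp.json()["id"])  # base connection ID used in later tutorial steps
```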