Overview of Adobe Experience Platform integrations

This session gives you an overall view of the different ways Adobe Experience Platform can integrate with your ecosystem, and things to consider when planning the integration work.

Transcript

My name’s Eric Neen. I’m going to give you guys an overview today of Adobe Experience Platform from a technical lens. And I have with me Pahit Vonsori, who is one of the product managers over this beast of a thing that we call Experience Platform, so he’ll be assisting me today. I’ll be giving the presentation and he’ll add some color commentary here and there as needed. To jump in, the agenda for today is really: I’m going to talk about the problem statement we were trying to solve when we built this, what the architecture is behind the scenes when we look at Platform, and then the data ingress and egress patterns that you as developers really need to have knowledge of. And then I’ll open it up for Q&A at the end. It should take about 20 minutes, so hopefully that gives us some time to ask some good questions and jump in.

So when I talk about Adobe Experience Platform from a developer standpoint, I think the biggest thing to understand is that when we developed this, a lot of it came from the needs of our own solutions first. So we have, as many of you may know, a lot of different applications that live in our Experience Cloud. There’s the analytics tools, an Audience Manager tool for advertising, we have Advertising Cloud with search and creative, we have the marketing cloud pieces, Campaign, Marketo, Experience Manager, Target, and then you have the whole commerce piece. What each one of these pieces brought to our stack early on, through the acquisitions, was its own profile store. So if you wanted to ask Analytics who they were talking to, they would speak in terms of an ECID, and maybe at some point there was an authentication event and so you could say, well, that cookie ID actually is Eric. Audience Manager had its own view of the world, Target had its own view of the world. And what customers commonly would start to ask is, well, how do I start to see things across these tools? And this is where engineering shows up and says, well, hey, the way we can connect Analytics to Audience Manager is we’ll do some segment sharing. So those things that you’re defining in your Analytics toolkit about what people are doing in that conversion funnel may be of interest to you, you want to go advertise to some of those people via Audience Manager, and so we’ll start sharing segments. And this worked when you had one or two solutions, but over time, what you start to realize is everyone wants segments everywhere. And so these things would start to happen where you would have people asking for that segment from Analytics, not just in Audience Manager, but in Campaign and Marketo and Target. And then Magento shows up and Magento also wants to be able to get segments as well. And these solutions aren’t just saying, I want segments from one; it’s, well, I’m also creating segments, please share them back. And so core services starts to look like a FedEx facility, just shipping data around. And that seems like it’s manageable, except then you have to layer on the reporting side of the business, which is, all right, I also need to report on what’s happening in my Target campaigns. And I want to report on what’s happening in my email campaigns and what’s happening in my lead campaigns and what’s happening at my storefront. And the customer then also shows up and says, oh, by the way, I want to enrich all those profile stores. I have my own warehouse or set of warehouses underneath, and so I’d like to get you data and I want data back from your tools. And so what we started to have was this very monolithic database, if you will, one large database basically running over core services that was trying to ship the same data amongst all of our own solutions, let alone customer solutions. And this is where Experience Platform kind of showed up and we said, all right, this is the legacy problem that you have in the marketing space. You have an analytical system, left brain, and you have an operational system, right brain. And these systems traditionally have always been kept apart because the workloads look so vastly different. But when you look at what marketing teams are typically trying to do, they’re leveraging that same data, albeit it’s just a subset of the overall data that you’d put in an analytical system, and it’s probably a majority of what you might see in an operational system.
And so in our own tooling, we were realizing that we had a lot of complexity. Every time you want to share a segment, there’s a latency introduced. We have to figure out with different engineering teams, how are we going to get this data over there? People have different schema designs and they speak a different language, which means that complexity starts to add to cost. And then you have latency. And if we’re in a business and a world where we’re trying to have everything on demand, you can’t wait for batch. Batch doesn’t work. And we needed to solve that problem. So that’s where Experience Platform was birthed. That was the problem that we had. But we saw that there was a customer problem within that too: customers were looking for the same thing. All the BI and reporting that they’re trying to run on who their customers are is the same thing those marketing teams want to drive their activations on. They want to know who that person is at any point in time. What does Eric buy? Where did Eric go? What is Eric doing? So that we can give Eric the right message at the right time, as well as put him into the right campaign, or stop an email from going out the door, or pull that ad. Because in theory, if we can start to do these things faster, I can start to save money as a marketing team, maybe on acquisition costs or on advertising to people we already know, and I can start funneling that money back into different programs or shifting that money around. And so in a modern system, you typically have both of these things next to each other, but they’re the same system, and we’re all using the same data.

So enter the re-envisioned Experience Cloud, if you will. There’s now a single unifying data structure, and a platform with some services, sitting underneath all of our applications that exist in the cloud. And what that platform starts to deliver is a real-time customer profile, standardized AI and machine learning, and an open ecosystem. This is a really big, important point, I think, to understand: when you looked at our legacy stack and our applications, some of them had APIs, some of them were REST, some of them were SOAP, some of them didn’t have the APIs that a customer wanted. So we would engineer new APIs and they weren’t very extensible. You were given a model out of the box and you had to fit to that, or it was very customized and you would get, from one customer implementation to another, a very bespoke model. So with the platform starting to unify all these profile stores underneath into a singular store, we can start to really unlock some power, not only from the legacy stack, but also from some new applications. You can imagine building on top of this, where we can start to offer services, or microservices really, that power specific functionality now built off of that platform. So what does that mean to developers? What does that mean to marketers that are in the field? What does that mean for us as engineers? I think the biggest thing to look at, when you look at this kind of left-to-right scenario on the architecture diagram, is that everyone has data and everyone wants to get data into the system. And so you have to support the basics of, I need to batch that data in or I’m going to stream that data in. But that system, in this case Adobe Experience Platform, has to support the various minds that are going to come into it. So I need to be able to drop a data architect in and let them architect what the system needs to look like to hold that data. But I also need that data to be serviceable to the data scientists and their teams to start driving some insights on it. And those insights should be flowing directly back into the same system that our real-time customer profile is being consumed from. And so everything in the box in red is Experience Platform. And that’s really where your app devs live, your data engineers, your architects, your data scientists, really the IT function of the marketing teams. And then on top of it, you start to see here the Adobe applications. So think of this as the legacy stack: Analytics, Audience Manager, Campaign, et cetera. And then net new services we can start to build, where we can say, since all of that experience data is now sitting in a singular spot, I can start to do some really powerful things with analysis that you couldn’t do when you were only looking at Analytics. Analytics was only about what was happening on your web or mobile device. Customer Journey Analytics is about what’s happening not just on the site, but also all the things that you may be able to tie back offline. So what is my attribution? Where are my customers converting from my site and then ending up in my call center? Journey Optimizer can start to optimize individual events as they’re happening. So Eric buys from a store, I can send him that transactional email, and at the same time turn around and pull him out of an advertising or marketing communication. So that data is now flowing all through the same system, and it really allows you to start lighting up some really cool capabilities on top of it.
Real-Time CDP is an offering we have as the evolution of Audience Manager, where we can take the things, again, that you build within the real-time customer profile and activate them in real time outbound. And then Offer Management, a reinvention of the offer library that customers typically ask for, to serve offers across any channel. And then all of that obviously feeding back in. You’re also going to have requirements that can’t be met by just the services that we build, so we need to provide the tooling for customers to be able to build their own integrations on top. So from a development standpoint, what becomes really interesting is, okay, cool, Adobe, you’ve built this neat thing. If I’m a data architect or an engineer, or I want to build something on top of your tool, how the heck do I do that? There are basically three things you have to know as a developer. How do you get access to that tool and where are the APIs? Do I have an environment to actually play in if I want to start trying this thing out? And then what is this thing functionally built around, what language is it speaking, quote unquote, so that I understand what services I might have to do translations for, and what’s coming out of this thing when I’m done. So I’ll do a little bit of a deep dive, or I shouldn’t say a deep dive, but a surface dive into each one of these in the next few slides here.

So, access control for developers. Everything you see within the platform is managed via Adobe I/O. As long as you are flagged as a developer within your IMS org, you can come in and actually work with any of these APIs, and this is using all the goodness that we offer within Adobe I/O. So there’s a single environment where you can set up your projects; authentication with that is done via a JWT token, and we have packages and libraries that show you how you can generate that JWT. And then, because you now have that project and that access set up, you can really start to do some cool development against this tool: data management, how you want to put your data into the tool, applications you may want to build on top of it, as well as any data engineering workflows. And so the hub for really starting anything with Experience Platform is Adobe I/O.
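As a rough, hedged sketch of that flow, here is what generating the JWT and exchanging it for an access token against Adobe IMS might look like in Python. The credential values are placeholders from a typical Adobe I/O (Developer Console) project, and the metascope shown is illustrative; check the current Adobe I/O authentication documentation for the exact values your project needs.

```python
# Minimal sketch: exchange a service-account JWT for an IMS access token.
# All credential values and the metascope below are placeholders; pull the
# real ones from your own Adobe I/O (Developer Console) project.
import time

import jwt        # pip install pyjwt[crypto]
import requests   # pip install requests

IMS_HOST = "https://ims-na1.adobelogin.com"
CLIENT_ID = "<api key from your I/O project>"
CLIENT_SECRET = "<client secret>"
ORG_ID = "<IMS org id>@AdobeOrg"
TECH_ACCOUNT_ID = "<technical account id>@techacct.adobe.com"
PRIVATE_KEY = open("private.key").read()   # key pair uploaded to the project

def get_access_token() -> str:
    claims = {
        "exp": int(time.time()) + 60 * 5,           # short-lived JWT
        "iss": ORG_ID,
        "sub": TECH_ACCOUNT_ID,
        "aud": f"{IMS_HOST}/c/{CLIENT_ID}",
        # metascope granting Experience Platform API access (illustrative)
        f"{IMS_HOST}/s/ent_dataservices_sdk": True,
    }
    encoded = jwt.encode(claims, PRIVATE_KEY, algorithm="RS256")
    resp = requests.post(
        f"{IMS_HOST}/ims/exchange/jwt",
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET, "jwt_token": encoded},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The returned access token is what gets passed as the Authorization bearer token on subsequent Platform API calls.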

And as a developer, if you’re coming into the platform wanting to load data or read data out, you have to do development somewhere. So we’ve done this via sandboxing. Adobe Experience Platform lets you spin up sandboxes. I think today’s limit is about 75, but we’re going to grow that even further. And a sandbox is really nothing more than, if you want to think of it this way, a virtual environment or containerization of what you would get in a production setting. Any actions you take within a sandbox are confined to that sandbox. If you create a schema, if you create a dataset, it’s only in that sandbox. If you build a profile and look up the profile, you have to look it up from a specific sandbox. You can’t just ask Adobe Experience Platform to give you Eric. It’ll basically ask, from where do you want to see Eric? Because you may exist in a few places. And all of this, again, is API controlled. So what I’m showing here in the graphic is my own environment that I’m working in right now with my team, where we have a few sandboxes. One of them is prod, and there are four here that are in development. We could have more prods if we wanted, we could have more development ones if we wanted. So you have flexibility in choosing what’s a production sandbox versus development. There’s no real delineation today on the backend for what this means, but there’s a bunch of stuff we’re actually developing to start to lock down a production sandbox and how that workflow would work compared to development. And again, it’s all managed via API. You can spin these up via API, you can reset them via API. You can really build your workflows around these via the API. And then, with sandboxes, as a developer there are things like, I’m done with that thing, I’ll reset it. I’m done with that thing, I want to promote it, I want to take certain components out of it. So we do have reset features within these to completely nuke the environment when you’re done. And there are some features coming that will start to allow you to copy specific flows out. So you may want to say, move all these datasets from sandbox A into sandbox B. Today we can do some of that via API, but there’s a bunch of stuff coming to even allow marketers to do some of this basic development work and do micro-packaging, if you want to think of it that way, to move specific objects between environments. So the sandbox is super important because that’s the environment you work in. Adobe I/O is how you get into the environment and get access to these APIs to start working within the tool. And then the biggest thing: Experience Platform, what’s it built around? We call it XDM System. XDM means nothing more than the Experience Data Model. And there are kind of four ways you can look at this. One, it’s open and extensible. What I mean by that is it’s built on technology that’s been around for a while, but that you don’t see used often within marketing stacks. Everything is built on JSON Schema and JSON-LD, and everything in our environment has a schema-based spec behind it that’s used to represent the data. So what this means is there’s a standard language when we talk about the definition of a person, when we talk about the definition of a purchase event, when we talk about the definition of a call center log. We help define those things based on this architecture, and that’s what you start to see in the catalog and the Schema Registry: schemas that customers bring or that you create.
We provide, from Adobe, kind of a library of standard field groups. A field group is nothing more than a collection of standard fields that might describe who a person is, just like you’d see on jsonschema.org. We’ll provide some of these building blocks, and then you’re allowed to use those building blocks to assemble a schema as you see fit. So for example, I might want to describe a person as having a first and last name, a personal email address, a home address, and also some loyalty information. We’ll give you a bunch of those, what I would call accelerants, to help build that schema and define those pieces consistently, and then you can customize it as you see fit. And then as you bring data into our tool, you’ll see those things show up within our catalog. So that’s where your datasets would exist, batches, streams, as you go down. Why is that important? Because when you start to get more to the marketing side, where marketers would interact, all the segmentation is derived off of what’s in the catalog, which is based on that open, extensible system. So any audience creation or profiling you do, when we talk about accessing profile APIs, when we talk about doing personalization on channels, all of it’s built off of that XDM system. And within the system, that’s also what enables the real-time customer profile. We’ll take the various fragments that you’ll see from these various systems, stitch them together in real time on request, and then also expose them back to developers via API access. So that last part is interesting, because I don’t think we talk about this stuff enough, even amongst our own teams: combining multiple fragments, what does that mean? And I want to talk about that on this slide here.
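To make the schema idea a little more concrete, here is an illustrative, XDM-shaped profile record assembled from standard field groups plus a made-up tenant extension. The `_acmecorp` namespace and its loyalty fields are invented for this example and are not part of any standard field group.

```python
# Illustrative only: an XDM-shaped profile record assembled from standard
# field groups (person, personal email, home address) plus a hypothetical
# tenant extension for loyalty. The "_acmecorp" namespace and its fields
# are invented for this example.
profile_record = {
    "person": {
        "name": {"firstName": "Eric", "lastName": "Neen"},
    },
    "personalEmail": {"address": "eric@example.com"},
    "homeAddress": {"city": "Chicago", "stateProvince": "IL", "country": "US"},
    "_acmecorp": {                      # tenant-specific field group
        "loyalty": {"tier": "gold", "points": 12450},
    },
}
```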

When we talk about the real-time customer profile, which is what Adobe Experience Platform is really all about, the problem you typically have is that customers have a bunch of various fragments. So if you think about our old architecture, Analytics had a fragment of Eric, Target had a fragment of Eric, and the customer at some point is asking, how did Eric interact within both of those tools, because they were both doing different things. That’s what the real-time customer profile is doing: saying, I want each of those fragments of Eric. Those could be coming from our own tools, from your backend tools, from a CRM system. All of those fragments get brought in, and through an identity graph on the backend, you can dynamically serve a view of the customer. And where this is really cool is, I can also start to apply some really powerful governance policies on top, so that I can deliver to my marketer almost that warehouse of data they’ve always asked for, but also have it policed so that they can only use it appropriately, depending on where they’re trying to take it outbound. And that’s where you see this governance, privacy, and consent layer.

The piece before the dynamic profile, and this data access, usage, and enforcement piece post-profile, are the bookends of the governance rules that get put in place on the profile and on how you’re allowed to use the data. So why is this important to know as a developer? Because it really means you are free to bring just about any data into the tool, depending on what your use cases are, as long as we can relate it back to a person. We don’t just want log data that doesn’t have any relevance back to a person. The goal is to build a singular view of who that person is across all of these various channels, both known and unknown, and then be able to take that data and send it out to downstream systems, whether those are marketing systems, back into the enterprise systems, or into custom applications that you can imagine building on top of it.

So how does this differ from what you would typically see in this space? I always like to compare it to RDBMS and data warehousing concepts.

In a data warehouse, you typically have a bunch of different feeds, here CRM, loyalty, and mobile, all of these things coming together through a set of ETL jobs and a bunch of rule sets to say, here’s how we create the joins, here are the rules for which first name and last name we’re going to keep. And I can give you a singular view of who John Smith might be, and I’ll assign him some identifier of 9897. Powerful, works well; the downside is it’s slow. I had to run ETL, I had to run some business rules, and therefore I’m computing in advance who John Smith is. Where our system differs is, I’m not going to do that predetermined view, I’m dynamic. So I’ll take those same three sources, but I replace ETL here with an identity graph and merge policies. You can think of the identity graph as nothing more than those join keys that you’ve used in ETL. A little bit different, obviously, than a join key, but: what are the identifiers of who John might be? And you can think of merge policies as the business rules for how I want to view John. What this allows us to do is say, on request, when a customer wants to ask who John is, they can almost dictate what view they get back. So I might have a CRM view, which is actually looking at all the fragments, but in a certain order: I’m going to take CRM as truth, and I would like the loyalty and mobile pieces to fill in the gaps. I may have an advertising view that’s just focused on the mobile data; I still want to leverage all the identities of John, but I just care about a mobile view of that profile. Or I may have a default view in the system that is just time-based, meaning I don’t really care, or I don’t have a source of truth for the data, I trust all of it: whoever has the most recent copy, whatever system that’s from, I’ll take as truth and then everyone else can fill in the gaps. And that’s the real-time customer profile. So again, the big difference I’d like to highlight here is, in the older systems, RDBMS and warehousing, you typically had large ETL jobs doing these types of computes, which meant latency. With the real-time customer profile, we’re trying to solve this problem in Experience Platform with very low latency requests. On demand, we’ll surface to you who Eric is. When we get into segmentation and some of those concepts, we’re doing the same types of things, saying I want to be able to qualify that event for a set of segments up front, versus having to run segment definitions against a bunch of events.
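Here is a toy sketch of the merge-policy idea just described, not the actual platform implementation: three fragments for the same person, merged either by a fixed source precedence ("CRM is truth, loyalty and mobile fill in") or by recency ("most recent copy wins"). All data values are made up.

```python
# Toy illustration of merge policies (not the actual platform implementation):
# three profile fragments for the same person, merged either by a fixed
# source precedence ("CRM is truth") or by timestamp ("most recent wins").
from datetime import datetime

fragments = {
    "crm":     {"ts": datetime(2021, 3, 1), "data": {"firstName": "Eric", "email": "eric@corp.example"}},
    "loyalty": {"ts": datetime(2021, 5, 9), "data": {"email": "eric@home.example", "tier": "gold"}},
    "mobile":  {"ts": datetime(2021, 6, 2), "data": {"pushToken": "abc123", "email": "eric@phone.example"}},
}

def merge_by_precedence(order):
    """Later sources in `order` only fill gaps left by earlier ones."""
    merged = {}
    for source in order:
        for key, value in fragments[source]["data"].items():
            merged.setdefault(key, value)
    return merged

def merge_by_recency():
    """Whichever fragment is newest wins each attribute it carries."""
    merged = {}
    for source in sorted(fragments, key=lambda s: fragments[s]["ts"]):
        merged.update(fragments[source]["data"])
    return merged

print(merge_by_precedence(["crm", "loyalty", "mobile"]))  # CRM view
print(merge_by_recency())                                 # time-based default view
```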

So, cool, I think, to understand that. But as engineers, as developers, how do you put data into this tool? How do you get the data back out? What’s really happening behind the scenes? This is one of my favorite slides to talk to. I wish I had it animated for you guys so it’s not such an eyesore. But there are two patterns, ingress and egress, for how we talk about putting data in and getting it out. I’ll start on the batch side first. From a batch perspective, there are really three ways you can put data into the tool. If you have an ETL vendor, Informatica is a big one that we’ve worked with before, they can bring data in. SnapLogic, I think, is another one that we’ve worked with. So they can batch data into our system; they’re basically handling all of your data engineering work and then pushing data into our platform. We also have the ability, if you really want to geek out, not that I recommend this, but if you want to write your own Python scripts, if you want to do your own cron jobs, and you really want to chunk files and load them through our gateway to us, we give you a batch data API to do that. The reason I don’t typically recommend this is there are much better ways to do it, either through source connectors, so drop the data onto a blob store, onto an S3 bucket, or, if you have to, an SFTP, and we can pull that data directly in via batch into our system. Anything that comes in via batch always writes to the data lake first. We will do some minor transformation on that, meaning if you bring us a CSV file or a JSON file, it doesn’t need to be mapped one-for-one to the schema. It just needs to be close enough that we can do some last-mile, what I would call ELT. So we’ll load it, do the simple transforms on it, and put it into the right structure for the lake. And then anything in the lake can optionally be flagged to also go to the profile store to build out that view of who Eric is. On the streaming side, it’s a similar concept, just obviously a different set of connectors. So there’s a streaming API, an HTTPS inlet that you can send all your data into.
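For those who do want to geek out, here is a hedged sketch of what that batch API route can look like: create a batch against a dataset, upload a file into it, then mark it complete so Platform picks it up. The endpoint paths mirror the published Batch Ingestion API, but treat them as illustrative and confirm against the current API reference; the headers helper shows the standard Platform headers, including the sandbox scoping discussed earlier.

```python
# Rough sketch of the batch API route: create a batch, upload a file, then
# signal completion so the data is promoted into the data lake. Endpoint
# paths are illustrative; confirm against the current API reference. Note
# that every call is scoped to a sandbox via the x-sandbox-name header.
import requests

PLATFORM = "https://platform.adobe.io"

def platform_headers(access_token: str, sandbox: str = "prod") -> dict:
    return {
        "Authorization": f"Bearer {access_token}",
        "x-api-key": "<api key from your I/O project>",
        "x-gw-ims-org-id": "<IMS org id>@AdobeOrg",
        "x-sandbox-name": sandbox,
    }

def batch_upload(access_token: str, dataset_id: str, file_name: str) -> str:
    headers = platform_headers(access_token)

    # 1. create the batch against the target dataset
    batch = requests.post(
        f"{PLATFORM}/data/foundation/import/batches",
        headers={**headers, "Content-Type": "application/json"},
        json={"datasetId": dataset_id, "inputFormat": {"format": "json"}},
    )
    batch.raise_for_status()
    batch_id = batch.json()["id"]

    # 2. upload the file into the batch (large files should be chunked)
    with open(file_name, "rb") as f:
        requests.put(
            f"{PLATFORM}/data/foundation/import/batches/{batch_id}"
            f"/datasets/{dataset_id}/files/{file_name}",
            headers={**headers, "Content-Type": "application/octet-stream"},
            data=f,
        ).raise_for_status()

    # 3. mark the batch complete so Platform processes it into the lake
    requests.post(
        f"{PLATFORM}/data/foundation/import/batches/{batch_id}?action=COMPLETE",
        headers=headers,
    ).raise_for_status()
    return batch_id
```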

We also have source connectors for you to pull data out of enterprise systems: Kafka Connect, Kinesis, Event Hubs, and a few other vendor platforms out there, same as on the sources side for batch. And then we also have our SDKs. So anything that you see within Adobe Launch, or if you’ve heard of Alloy, all of that data that comes off of web and mobile devices we can also pull in, inclusive of all of our Adobe products today. That comes in through the streaming service and lands on our Kafka pipeline. And on that pipeline we also do that translation layer, the pipeline smarts, where we’ll do that simple mapping job. The big difference on the streaming side is that as the data is coming in, if that data is intended to eventually go to profile, we will actually read it off the pipeline and process it into profile immediately. We won’t first write it to the lake and then push it up to profile. So streaming is really intended for data that needs to be in profile fast, because you want to be able to action off of it in real time, or qualify or disqualify someone in real time; you stream the data in. If it’s more of a batch-based operation, we batch the data in. So again, left brain, right brain; depending on what your use cases are, we can solve it. If it’s marketing actions where I need to do something, streaming probably fits the bill quite well if your systems support it. If it’s more analytical in nature, there’s probably not a big reason to stream; you’d be totally fine with batch. But we’ll support either method. And on the stream side, everything always writes to the data lake as well, we just have a higher latency on that: we’ll micro-batch things off those topics back into the data lake about every 15 minutes. So that’s data coming into the system from the left-hand side. And then, and this is where the cool stuff starts, which is, well, great, I put all this data in, I put the trophy on the shelf, as I like to say; I want to use that thing to actually do something with Adobe. What do I do? There’s a bunch of egress options, and again, depending on what you’re trying to do, we probably have an answer for you. If you’re doing data science, you have the ability to basically bring your models to us via recipes and we can run the scoring jobs for you. We expose a Query Service capability, Spark SQL kind of behind the scenes, that you can connect Power BI or Tableau up to and do reporting off of those datasets in the data lake. Likewise, if you have the Postgres driver installed on your machine, you can just CLI directly in and run queries against the data lake itself for your own discovery. Any data that lands in the data lake is yours, we say to customers. So there’s a data access API where you can come and download that data as much as you want and pull it back into your systems. And then anything sitting in the profile store, we give you a number of ways to get out. There are batch destinations that we support, I’m putting here more of the technical ones, S3, blob storage, SFTP, but we also have marketing-based batch destinations, Braze and other email-type destinations you’d typically take a batch out to, Salesforce, et cetera. And then on the stream side, we have a bunch of streaming destinations.
So as that data is flowing into the profile and it’s qualifying for the segments your marketing team is building, we can basically stream that qualification event, with the associated profile payload that you define, outbound, whether it’s back to some HTTP endpoint, Azure Event Hubs, Kinesis, or some marketing-based destination, Facebook, Google, Pinterest, LinkedIn, et cetera, et cetera. All of that happens in stream. And this is where the evolution of that tech stack starts to show up on the marketing side, because in the marketing world a lot of this was batch before. It was batch, batch, batch. You want something outbound, batch it.
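A quick aside on the Query Service capability mentioned a moment ago: because it speaks the Postgres wire protocol, a plain Postgres client can run SQL against data lake datasets. A minimal sketch, assuming you have pulled the real host, database name, and credentials from the Query Service connection details in your sandbox; the values below are placeholders.

```python
# Sketch: because Query Service speaks the Postgres wire protocol, a plain
# Postgres client can run SQL against data lake datasets. Host, database
# name, user, and password are placeholders; pull the real connection
# details from the Query Service credentials screen in your sandbox.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="<query service host for your org>",   # placeholder host
    port=80,
    dbname="<database name from the credentials screen>",
    user="<IMS org id>@AdobeOrg",
    password="<access token or non-expiring credential>",
    sslmode="require",
)

with conn.cursor() as cur:
    # the table name is whatever your dataset is exposed as in the catalog
    cur.execute("SELECT count(*) FROM my_web_events_dataset")
    print(cur.fetchone())
```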

We support batch, but everywhere we can, when we connect with any external system, we try to stream, because that natively enables you to start to do some really interesting things in the marketing space. But it’s not just about segmentation in this tool. As a developer, if you’re going to build this real-time profile, you probably want to take that profile and use it somewhere. A good example: we did some work with a big customer out in Europe where we actually take that real-time profile and the offers associated with it, and that’s what pops up on the call center screen when the agent picks up the phone. The agent can see the offers that should be given to that person, and if the agent propositions those offers and the customer refuses them, that feedback comes directly back into the platform via stream and disqualifies them from that offer, and in real time, on that page, they are no longer qualified. So if they call back five minutes later, those offers are no longer there on the screen. So that’s a concept of what you’re seeing with near-real-time profile access, where I can go look up Eric and see the attributes of Eric, I can see segments he might be a part of, I can also see offers that he might currently be eligible for. And then there are some things coming with the Edge profile. This is where you’ll see our own solutions start to leverage this: it’s great to have a profile in the hub, but there’s latency there. We need to be able to get down to sub-100-millisecond delivery so that things like Target, onsite, same-page segmentation and personalization can exist. So there’s an Edge profile that we have today, which Target will start reading from here actually within the next month, where we can read that projection of who the person is on the Edge and start to make decisions in real time on the Edge. So again, that ancillary profile store that was sitting in these Edge systems has now actually been moved into the real-time profile. And then you can do some really cool things now that all of our data is in the same place, which is, I want to subscribe to the events happening in Platform to build monitoring jobs. I want to understand if my segmentation job is falling off. I want to get alerting when my pipeline starts to become unhealthy on my own side and it’s not processing into Adobe. So, Adobe I/O Events: all of our events get published back into I/O and you can subscribe to them.
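Subscribing to those I/O Events boils down to registering a webhook against the event types you care about in the Developer Console. Here is a minimal sketch of the receiving side, assuming such a registration exists; the challenge handshake is how I/O verifies the endpoint, and the payload fields checked here are purely illustrative.

```python
# Minimal sketch of a webhook receiver for Adobe I/O Events, assuming this
# endpoint has been registered against Platform event types in the
# Developer Console. I/O verifies the endpoint with a "challenge" request
# that must be echoed back; the payload fields shown are illustrative.
from flask import Flask, request, jsonify   # pip install flask

app = Flask(__name__)

@app.route("/aep-events", methods=["GET", "POST"])
def aep_events():
    if request.method == "GET":
        # endpoint verification handshake
        return jsonify({"challenge": request.args.get("challenge", "")})

    event = request.get_json(force=True)
    # e.g. alert on failed ingestion batches (field names are illustrative)
    if event.get("event", {}).get("status") == "failure":
        print("ingestion failure:", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```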

And Project Firefly, a big internal initiative we’ve had for a few years here: Firefly is basically the compute we provide to customers to build applications, which also natively can tap into I/O, which means it also can natively tap into the things happening within AEP. So you can start to do some interesting things with Firefly in terms of what’s in the data lake, how ingestion jobs are running, what my monitoring process looks like, where I’m sending things to, et cetera, et cetera. And then there’s one last line here that you’ll see, which is this little dotted line coming out of the streaming data collection core service, or DCCS, the Launch service side. And I’ll talk about that here in the next slide, because I think it’s a compelling thing to really understand. So data collection, being kind of the core of why we built Experience Platform, has a long history at Adobe. But just like we had multiple databases for all these different marketing pieces, we also were in the business of data collection for each of them, and all of those came with their own SDK. So you can imagine walking into a client, let alone our own engineering teams, and saying, hey, I want the analytics data, and I need Target to run, and I’d like to have a pixel drop so Audience Manager can figure out what’s happening with that customer so I can build traits. And I’m going to have basically three, four, five client-side calls all asking for the same data, more or less, maybe a 10% difference, and then routing it to all of the Adobe stack anyway. And at the same time, we would sit there and tell customers, but it’s our data, so you can’t have access to it. And I would hear this from engineers; I have friends in the field that do this, and they would always complain to me: Adobe, it’s our data, you just standardized it into a particular format, but we would love to tap into it because we might use it as engineers for debugging, to understand what’s happening on the page. So as part of the Experience Platform build, we also were looking at a lot of our SDKs and saying, there’s got to be a better way to do this. So we did, and there’s a great session from my colleague Joe Curry where they deep dive into the Edge Network and how they really re-envisioned data collection. But of interest, I think, for this group: we standardized everything into basically one SDK for web and one SDK for mobile. And what that allowed us to do is say, any data you now collect on the page, we will route to the Edge, and the Edge will make the decision to route it to one or more Adobe applications. But we’re not just going to route it to our own applications; you also can take that same data and send it anywhere else server-side. And this is really big, I think, for customers, as well as for our own engineering teams to understand: with that one SDK, you can now deploy and route that data to any system, not just an Adobe system. We have customers that route this stuff right back to their own stack: I want to have the data coming from these application submission pages, with the appropriate information, because at some point my application system went offline and I don’t know who I did process, and so I would love to have something there to true up against and say, did we miss something? Adobe typically is asking for some of the same data at some point in time within the marketing world, and so customers want to see it as well.
And all of this, again, is how it’s built; there’s tons of stuff in that other session, again led by Joe Curry, but: one JavaScript library that’s deployed, a single beacon type, there are datastreams, and you can configure server-side destinations. This is inclusive of not just Adobe applications, as I said, but third-party use cases. And you can even build your own destinations on top of this from Launch. So some really cool stuff on the data collection side. And really the last piece I like to always talk about is guardrails. When we talk about bringing data into these tools, what does that mean? So, some guardrails to just be aware of when we talk about bringing data in: how fast does Adobe really process it into this profile before I can start seeing it show back up outbound? On the streaming side, we’re pretty fast. Any data that you stream to us, whether it comes in through the Edge or it comes in through a source connector such as Kafka or Kinesis, we will typically process into the profile in under a minute, and then it’s micro-batched roughly every 15 minutes back into the data lake. On the batch side, things are a little bit slower. And you can see here why we have an API that you can do batch ingestion with, but I don’t necessarily recommend it, depending on your goal: if you go the batch API route, we’re a little bit slower on the processing end, because you’re having to chunk the files and load them through our gateway, and then we reassemble them on our side into larger files. Versus if you use a source connector, we can get much better throughput just because of the cloud infrastructure that’s in place. So you can see the different latencies here on getting things to your profile. This is important to understand, because it does impact what we talk about when you get to the egress side. If things are slow coming into a system, you wouldn’t, or you shouldn’t, expect them to be fast coming out.
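For completeness, here is a hedged sketch of the streaming route those guardrails apply to: posting a single XDM-shaped event to the HTTPS inlet you get when you create a streaming connection. The inlet URL, schema ID, dataset ID, and the field names in the event body are all placeholders.

```python
# Hedged sketch of the streaming route: POST one XDM-shaped event to the
# HTTPS inlet you get when you create a streaming connection. The inlet
# URL, schema id, dataset id, and event fields below are placeholders.
import requests

INLET_URL = "https://<streaming inlet host>/collection/<connection id>"  # placeholder

event = {
    "header": {
        "schemaRef": {
            "id": "https://ns.adobe.com/<tenant>/schemas/<schema id>",
            "contentType": "application/vnd.adobe.xed-full+json;version=1",
        },
        "imsOrgId": "<IMS org id>@AdobeOrg",
        "datasetId": "<dataset id>",
    },
    "body": {
        "xdmEntity": {
            "_id": "evt-0001",
            "timestamp": "2021-06-02T15:04:05Z",
            "eventType": "commerce.purchases",
            "identityMap": {"Email": [{"id": "eric@example.com", "primary": True}]},
        }
    },
}

resp = requests.post(INLET_URL, json=event)
resp.raise_for_status()   # data lands on the pipeline and, if enabled, in profile
```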

But these are some good ideas of the inner workings of the platform, and of the latencies, details, and outputs that you can expect to see.

So that’s it. That’s my big overview of Platform. I’ll open it up for some questions at this point, but just to keep you guys in the loop, make sure you continue your conversation on Experience League. Come find me if you have questions or if you need anything.

We’ll open it up.

NLS asks: is the source connector SDK available today?

Source connector SDK, I’m not sure what you’re referencing there specifically, but there is an SDK to load data via the batch API. So there is a documented set of API calls and some tutorials that walk you through how you can leverage that. We don’t have anything right now to kind of roll your own source connector, but there is some work in progress there. Thanks for asking.

Roger asks, will AAM be phased out in the future? Yes, that is the plan. It’s not going to happen overnight; Google kind of punted with Privacy Sandbox and third-party cookies, which will keep that thing rolling longer. Really, it’s going to be up to customers to move off of it, but as the internet continues to move more and more toward walled gardens, the future will be people-based destinations. So you’ll have to have some authentication event at some point if you want to go market to those people. Roger, still on the AAM front here, had a follow-up question: when you look at Audience Manager, it did a lot of predictive stuff with look-alike modeling and things of that nature. Yes, a lot of that is being worked on by product right now. We actually have something in the works called Segment Match, and what that will functionally allow you to do, even between sandboxes, but also amongst customers if they’re willing to participate, is this: you could be, let’s say, a big CPG customer that makes soft drinks, and you want to use Segment Match to figure out how to better advertise to people going to a big e-commerce store. And so we have Segment Match, where we’ll be able to share those identities between those solutions to better enhance your targeting or basically increase your reach. So you’ll see some more stuff coming along those lines from the identity team to help support a lot of those use cases.

Anything planned for the EU around that? Yes. With Segment Match, everything we do will be stateside first, here in the US, just because that’s where we typically do the testing, but then it will be rolled out internationally. I’m not sure of the exact timelines on that, but I can definitely look into that for you. Ah, Amrish had a question on the near-real-time profile lookup API.

So let’s see what the question here was. Can you explain the part where customers can access the profile attributes? Yeah. If you put all this data into Adobe to build this 360 view of who Eric is, at some point someone’s going to ask, can I see Eric? Not just all the segments, or all the people that live in Chicago, which is where I live, but I want to see specifically Eric and his attributes, or specifically Eric and his events. So we exposed what we call a profile entity API, where if you have the developer rights and you set up that project in Adobe I/O, you can actually make a REST call to our system, and then, leveraging the identity graph, you can ask for any one of the identities of Eric. That could be my email address, that could be my CRM ID, that could be my ECID. And we will return to you all of the attributes of Eric. And then you have the ability as a developer to control what attributes come back in that request. So you can imagine, for a call center screen, you may trust profile to give you segment membership and maybe a few things such as first name, last name, and some contact information, but maybe your SOR or SOT for loyalty points and expiration dates is a different system. So you can start to control what comes back, and you can also, in that request, say, I don’t want to ask for the attributes of Eric, I want to ask for the events of Eric, and then say, I’m only looking for these types of events. Another interesting one we’ve seen in call centers is, I want to see the last five pages Eric viewed on the site, because maybe he’s calling in because his problem was he was trying to open a checking account and he was getting stuck in the application, or he just gave up on the site and called in, and then he finished it over the phone. So, to help give some context. But all those APIs, again, are exposed; they’re in Swagger docs publicly. You guys can come in and play around with them.
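Here is a minimal sketch of that lookup, assuming a project with the standard Platform headers (Authorization, x-api-key, x-gw-ims-org-id, x-sandbox-name) already in hand; the path and parameter names follow the Profile Access API as published, but confirm them against the current Swagger docs before relying on them.

```python
# Hedged sketch of the profile entity lookup just described: ask for one
# person by any known identity and restrict which attributes come back.
# The path and parameter names are illustrative; confirm against the
# current Swagger docs.
import requests

PLATFORM = "https://platform.adobe.io"

def lookup_profile(headers: dict, identity: str, namespace: str = "Email") -> dict:
    """headers: the usual Authorization / x-api-key / x-gw-ims-org-id /
    x-sandbox-name set sketched earlier."""
    resp = requests.get(
        f"{PLATFORM}/data/core/ups/access/entities",
        headers=headers,
        params={
            "schema.name": "_xdm.context.profile",
            "entityId": identity,            # e.g. "eric@example.com"
            "entityIdNS": namespace,         # or "ECID", "CRMID", ...
            # only pull back what the call-center screen actually needs
            "fields": "person.name,personalEmail.address,segmentMembership",
        },
    )
    resp.raise_for_status()
    return resp.json()
```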

Ah, yeah. So Roger, great questions, keep them coming. Roger asked, does the ID graph framework have any integration with the privacy platform? I didn’t really talk about this, but it’s probably something I should have. Yes, everything that you see with GDPR, CCPA, and some of the regulations coming out of a few other markets (I forget the other one in Europe that was being discussed), all of that is supported via Privacy Service. Anything that you submit via Privacy Service, we will actually process the delete for in our system as well. And there are even some pretty large enhancements coming to Privacy Service, not just for GDPR and privacy-related requests, but for delete requests in general. Think of this more from an application engineer perspective: I deleted this record from my source system because it’s just gone stale, and I provided it to Adobe at some point, so I also want to delete it from there. We’re going to start running that stuff through the same service in the future. And that’s part of a lot of the work we’re doing around HIPAA compliance and things of that nature.

So yeah, that’ll be keyed off of the identity graph; that’s basically how you can make those requests, and then we’ll go find it within our system and process it.
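As a hedged sketch of what such a request might look like through the Privacy Service API: the job body below mirrors the documented shape (company context, user IDs, products to include, regulation), but the namespace values and product names are illustrative, and the headers parameter is the same Platform header set sketched earlier.

```python
# Hedged sketch of submitting a delete request through Privacy Service,
# keyed off an identity the graph knows about. The request body mirrors
# the published Privacy Service API, but the "include" product names and
# the identity namespace are illustrative.
import requests

PLATFORM = "https://platform.adobe.io"

def request_delete(headers: dict, email: str) -> dict:
    job = {
        "companyContexts": [{"namespace": "imsOrgID", "value": "<IMS org id>@AdobeOrg"}],
        "users": [{
            "key": "eric-delete-request",
            "action": ["delete"],
            "userIDs": [{"namespace": "email", "value": email, "type": "standard"}],
        }],
        "include": ["profileService", "analytics"],   # products to process
        "regulation": "gdpr",
    }
    resp = requests.post(
        f"{PLATFORM}/data/core/privacy/jobs",
        headers={**headers, "Content-Type": "application/json"},
        json=job,
    )
    resp.raise_for_status()
    return resp.json()
```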

Good question.

All right, thanks guys. Well, I hope this was helpful. There are a few other sessions going on around Experience Platform. As I said, Joe Curry, my good friend, is giving a deep dive on the whole SDK stack and the Edge Network, so I highly encourage that. There are great sessions around Project Firefly if you guys are interested, and I think there’s another session giving some deeper dives into the APIs. But if you have any questions, feel free to reach out to myself or PD directly. And as always, happy architecting. I’ll catch you guys later.
