Data collection highlights and roadmap
What we’ve released and what’s coming up with Adobe data collection
Data collection is all-important at Adobe! You need to be able to trust the data you work with. The past year has been full of releases, and there are great things coming up.
Hey everybody. Welcome to Experience League Live. I am your host Doug Moore, and glad to welcome you to the show today. Glad to have anybody here who is watching live, and anybody who is watching later on, I hope you’ll like it too. We’ve got a great show today. Hopefully you saw in the lobby piece that we’re going to talk about data collection, what’s been going on over the last year or so, and what’s coming up. So lots of great stuff to talk about with data collection today. And before I forget, this is brought to you by Experience League, of course. So if you go to experienceleague.adobe.com, you’ll see our documentation, tutorials, free courses, even where you can chat with your peers on the community, kind of everything self-help, your one-stop shop for that. So let’s dive in. We’ve got some great guests today. We’re going to start off by welcoming Rudy. Rudy Stoppard, how are you doing, Rudy? Thanks for having me on one of these again. Yeah, great to have you here. And we’ll bring in Mitch Rice. Mitch. Oh, Mitch got a bigger reaction from the crowd in the studio audience. Sorry about that, Rudy. Great. Well, welcome, guys. Let’s hear just a couple of minutes about you so our crowd knows a little bit about you. So, Rudy, you are an evangelist, and I’m trying to remember, is it super duper world-renowned evangelist? Something like that. Is that what’s on your card? Something like that. It’s senior evangelist. I spend a lot of my time focusing on data collection and helping our clients understand how to get all of their great data into our various Adobe systems. Nice. Okay. Awesome. And Mitch, you are senior product manager. I am. Right. Product for all of data collection. So most everything that we’re going to be talking about today, I’ve had my fingers in, in some way, shape, or form. All right. That’s awesome. Appreciate that. Now, I just want to double click, as they say in our industry, on you guys a little bit here. Hopefully they saw that, Rudy, that you were looking to adopt an alligator soon as a pet at your place. No, it says that you have, or that you had, a very rational fear of alligators. Absolutely. Based on experience, or only on watching the Steve Irwin show? No, based on experience, long ago. I was, for some reason, in a canoe in the Okefenokee Swamp, for some godforsaken reason. An alligator came off the bank, went under the canoe, and bumped the bottom of the canoe, and I felt the vibration. So like I said, I’m rationally terrified of being close to alligators. I needed some extra clothes in the canoe, I guess, at that point. Yeah, it was something.
Nice, nice. Awesome. And Mitch, we had for you that you’re an avid outdoorsman in general, and hiking is one of those things. And especially, you told me that you’re super interested these days in petroglyphs in southern Utah, and you’ve been able to go see some of those. Yeah, definitely. I’m not sure if folks here have heard of Bears Ears National Monument, but down there, there’s just some incredible stuff, from petroglyphs, pictographs, little huts, and things like that that were built seven, eight hundred years ago. And it’s incredible to be able to go down there and see how people used to live. It’s mind blowing. Yeah, that’s cool. Were you able to gather any stories from those petroglyphs? Any translation of those? To me, seeing some of those petroglyphs reminded me, I think that’s where they got the pictures that people put on the back of the minivan with the mom and dad and the kids. A lot of them are like that. Usually it’ll be handprints of everybody who was living there, or pictures of the people who lived there. In terms of actual stories, most of them, I think, are pretty hard to decipher, but you use your imagination for most of it. Yeah. Early data storage. Exactly. Yeah, that’s data storage. We don’t have the right schema to know what they were trying to tell us. That’s the moral of this story.
Make sure you’ve got the schema set up correctly so you can understand your petroglyphs in the future. Yeah. That’s right. Awesome. Okay. So, hey, everybody on the line here, we’d love to hear your questions. We’re doing this live because we want you to ask questions and try to stump the experts here. You don’t have to try to stump them, but if you have questions and comments and those kinds of things, please do put them in. If you’d like, you can start off by telling us where you’re located. I always like to hear where people are dialing in from. That’s always fun. But go ahead and add comments, add questions, and we’ll answer those along the way. So, we’d love to hear from you in the chat there.
That being said, then, how do you guys want to jump in? Rudy, do you want to start? Yeah. So, when we started talking about doing this session, we wanted to cover some of our favorite highlights of what’s new for data collection, what new features and functionality came into Adobe’s data collection offering. And Mitch is going to give us some sneak peeks at the roadmap of cool stuff that’s coming down the pipe. But I thought I’d start off with a couple of things that I really liked that came to fruition this year. So, I’m sharing my screen here, Doug, if you want to flip over. Yeah, let’s jump over. I think this is probably you right here. Yeah. So, the big thing that came out in the market this year: we had datastreams, which is how, when we’re talking about data collection using the Web SDK and unifying Adobe’s data collection, there’s one call going to the edge servers, and we can syndicate that data out to different destinations, not only different destinations within Adobe itself, but also different non-Adobe destinations via event forwarding. But part of the process that customers used to have to go through was making sure they had all the data formatted in a single data layer on the client side, or formatted in the XDM format. Thanks to the great work of the engineers and the other folks on the product team, they introduced the ability to do Data Prep for data collection. What that allows you to do is say, hey, I’ve already got a fully constructed data layer that I’ve been working with, and I know I need to get it into the XDM format so I can send it to platform and work with Experience Edge and things like that. But we’ve made it easier now. So, I’m logged into the data collection suite of tools here in Adobe Experience Platform, and I’m inside one of the test datastreams that I’ve got set up. In order to leverage this, you can just go over here and edit the mapping. When you edit the mapping, it brings up a nice easy screen where you can select the source field. The source field is what’s on the client side, and the target field is in the schema that you’re working with. What’s really cool is there are a couple of different ways to do this. You may have to create this manually one time, but after you do that, you can import mappings from different properties, so it’s very efficient to repeat this, especially if you’re using the same structured data layer across multiple properties. But my favorite thing here is this Add JSON. So, if I go up here to Add JSON, you can paste in the JSON structure of your data layer. And I’m actually going to paste the exact one that’s in our documentation. So, if you go and look for Data Prep for data collection in the Adobe documentation, you’ll find this exact same snippet of JSON that I just pasted in. When you paste it in here, the preview pane gives you a nice representation of the structure of that JSON, so you can validate it and make sure everything is where you want it to be. Great, that’s perfect. Now I go to next, and it’s really easy for me to go in and add a new mapping.
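For reference, the JSON you paste into Add JSON is simply a sample of your client-side data layer. Here is a small hypothetical example, not the snippet from the documentation that Rudy pastes, with field names made up for illustration:

```javascript
// A hypothetical client-side data layer of the kind you might paste into Add JSON.
// The preview pane renders this structure so you can click to pick source fields
// instead of typing dot notation by hand.
const dataLayer = {
  page: {
    url: "https://www.example.com/products/widget",
    name: "product detail: widget",
    siteSection: "products"
  },
  user: {
    loginStatus: "authenticated",
    loyaltyTier: "gold"
  },
  cart: {
    itemCount: 2,
    total: 79.98
  }
};
```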
And I will go here and select the source field. This is actually going to let me navigate that JSON data structure that I just pasted in, so I can very easily go in and select, all right, I want the URL. Then I’ll go over here on the right-hand side for the target field, and this will pick from the schema. I was joking about the schemas that we needed for the petroglyphs, but this is the schema that I already set up inside platform, and I can select exactly where I want this data to go. So, instead of having to make sure that you formatted all the data on the client side in the XDM format before you send it over, you can leverage Data Prep for data collection to do this mapping exercise. I think this is really powerful and super useful. It’s something we didn’t realize how much we needed before we had it, but it’s definitely been a huge help to customers that already have really complex existing data layers or complex existing implementations. So, kudos to the engineers and the product team on this one. This is a big win, getting this out. Yeah, that’s nice. And especially, like you said and showed, to be able to bring that in, in one little swoop, so that you don’t have to type in every single field that you’re bringing in from your data layer, right? Right. Okay.
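Conceptually, each mapping row pairs a dot-notation path from the pasted data layer with a path in your XDM schema. A hedged illustration based on the hypothetical data layer above, assuming a schema built on the standard XDM ExperienceEvent web and commerce field groups:

```javascript
// Conceptual view of the source-to-target pairs configured in the mapping UI.
// The target paths are illustrative; adjust them to match your own schema.
const mappings = [
  { source: "page.url",   target: "web.webPageDetails.URL" },
  { source: "page.name",  target: "web.webPageDetails.name" },
  { source: "cart.total", target: "commerce.order.priceTotal" }
];
```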
Cool. One question that we got was from Fernanda. She wanted to know whether or not you could potentially use this for both a mobile app and a web property. And the answer is absolutely yes. So, each of the datastreams has a specific datastream ID, and as long as that datastream ID is being used in the Web SDK and the Mobile SDK, that data is routed to the exact same mapping sets that Rudy is showing here with Data Prep, as well as to the exact same upstream solutions. So, tons of flexibility in terms of being able to mix and match with the datastreams to meet the needs of your specific implementation. But yeah, absolutely, these are reusable. That’s great. And you’re right, Mitch, these are absolutely reusable. But let’s say you want to use most of one for web and mobile, and maybe you do want to separate the properties. You could create one of these datastreams and one of these mapping tables for your web property. Then for a mobile property, let’s say you wanted to use most of it, but you’ve got some different things, because we all know that there are differences between mobile and web. Taps and pinching in and zooming out are not the same thing as scrolling and mouse clicks. So, maybe you have a little difference that you want to track. You can do this import mapping, where you go in here, select one of your other pre-existing datastream mappings that you’ve created, and pull all of those in. Then you can remove the ones you don’t want and add a couple of custom ones. One of the things I think is really powerful, and sometimes a little overwhelming to our customers if they don’t approach it right, is that there are so many ways to use all these different features. Don’t get stuck thinking there’s just one way you have to use this. So, like Mitch said, you can use the same one, or you can create multiples with just some slight variations if that’s what your business requires. Great. Hey, I wanted to bring this up. We have this question from Chris, and I don’t know if it fits in here, but how does this kind of fit into maybe doing a GA implementation from launch, from tags, from datastreams, from the edge, and stuff like that? So, I’ve long been a proponent that any work you do on creating a data layer, whether you’ve spent time creating it for GA or for any other system, is not going to go to waste. You’ll be able to leverage this mapping to use it. So even if you were trying to deploy GA4 from Adobe Launch, if you have a data layer that’s meaningful to your business, and I’m not talking about one that says my.data.eVar1, structured like that, but something like web page details and URL, once you’ve got that data point, you can send it to any destination you want. There are GA extensions within tags, which is the client-side tag management. We also have integrations to send that data over to GA using event forwarding.
And so, if you think about it as more of a data distribution system, it’s not a closed-off proprietary system. We’re providing you multiple tools to say, give us the data any way you can, tell us all the different endpoints that you want to send it to, and we’ve got multiple ways to help you do that. You can do it all client side via tags, or you can move some of that to server side with event forwarding. It’s really just how you want to interact and how you want to syndicate that data out. Cool, cool. Yeah, I’m not sure this next question is totally applicable to datastreams and data collection here, but I’m going to pretend that it is for a second, from Boris, about analytics and Customer Journey Analytics. Doug, do you mind, can we talk for one second about GA4 before we go on? Yeah, yeah. From a roadmap perspective, one thing that I just wanted to call out is that we are working on some Google Analytics migration templates, so that if you are using standardized Google Analytics data layers, we’ll be able to map that in automatically on the left-hand side, which is the source data, and then you can start mapping that into XDM. So that’s on the roadmap. It’s probably going to be later on this year, but that is something that we’re working on as well. Oh, nice. I’m looking forward to playing with that. Yeah, thanks for popping that in there. Sorry for being a little bit slow there. No, that’s perfect, yeah.
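Tying this back to the earlier question about reusing one configuration for both web and mobile: the datastream ID is just a value you configure in each SDK. A minimal sketch of the Web SDK side, assuming the standalone alloy.js setup and placeholder IDs:

```javascript
// Minimal Web SDK configure call with placeholder IDs. The Mobile SDK's Edge
// extension can be configured with this same datastream ID, so web and mobile
// share one datastream and its Data Prep mapping; alternatively, a second
// datastream can import the same mappings with a few mobile-specific tweaks,
// as described above.
alloy("configure", {
  datastreamId: "ebebf826-a01f-4458-8cec-ef61de241c93", // placeholder
  orgId: "ADB3LETTERSANDNUMBERS@AdobeOrg"               // placeholder
});
```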
That’s what you guys get with live. With live, it’s like, Doug, you need to shut up for just one second, please, while we give good information. That’s good. Okay. So, following on the heels of this Data Prep for data collection, and I’m stressing that title because when you start looking through the documentation, we’ve got a couple of different things inside the Adobe ecosystem that data prep is used for. If you’re looking for information specifically on this, make sure you’re searching for Data Prep for data collection, and it’ll take you to the docs for this particular feature. It’ll also give you that sample JSON that you can paste in there to test with yourself if you don’t have a data layer of your own ready to go. But following on the heels of this, one of the other things that some of the early adopters ran into was that as they moved to this new way of data collection, from the legacy collection libraries in Launch to using the Web SDK, they had to think about data collection a little differently. They had to spend a little more time organizing and structuring the data they were going to send before they sent it over. Some customers were already doing this, and for some of them it was very much ad hoc: I’m grabbing stuff on the page, I’m throwing it into the analytics extension, and I’m firing it off. The Web SDK requires you to put a little more thought and planning into structuring that data ahead of time. And one of the things we’ve discovered is that oftentimes customers will have multiple objects. When I say objects, I’m talking about small chunks of a data layer. You might have one JSON structure that’s got four or five attributes, you might have another one that’s got seven or eight, or maybe you’ve created multiple XDM objects within tags and you want to send them all at the same time. So they came up with this idea of being able to do a deep merge, or merge objects. And that’s the second thing I want to call out here, Doug and Mitch. I want to go back into tags now, and I’m actually going to pull up my test site. So, I feel like we should go like this. I feel like we should have a sound for when it’s a new tip. Nice. I like it. I’ve got to do the little Bewitched nose twitch. Yeah, that’s right. That’s the sound I conjured up in my mind when you did that. Sorry, go ahead. No worries. So, I’m inside of tags, which is client-side data collection, and I’m in one of my test properties. I went into data elements, and you see we’ve got a long list of data elements here. For example, I’ve got a couple of XDM objects. If I click into this one, you’ll see that I’m using the Web SDK extension, I’ve got the schema selected, and I’ve got a couple of items populated. I could also have just created a data element on my own. I could go in here and say, I’m just going to use Core and reference a JavaScript variable that is already connected to the data layer. I could pick either one of these. Okay. Then what I want to do is create a merged object. So I’ll go in here and add a data element under the Core extension, and go down here and say I want to merge these objects.
And so, now I’ll go over here, and every time you see the cylinder icon inside of tags, you know that it’s going to bring up a menu that lets you pick other data elements. So, I can choose XDM object one, add another, choose XDM object two, and select. And I’m going to save this. What this generates is a new data element called new merge. And actually, I’m going to get rid of the space because I don’t like spaces in my names. What it’ll do is create a complete merge of these two objects. Why this is important is because if I go over here to the rule screen, open up one of these rules, and go to the send event rule action inside of tags, this is where you say, here’s the XDM data that you’re sending, or any kind of free-form structured data. So, I could now go in here and select that merged object. Instead of writing some crazy custom code that would loop through the JavaScript and build a new object, this functionality is readily available. I can just say, okay, I want you to merge these two objects, and I’m going to send them over. Then I can go into that mapping screen, do the mapping, pull those items out, and send that data wherever I want. But the ability, within the interface of tags, to have it do this object merge for me is super useful. There was a time in the past when I liked opening up the console or a JavaScript editor and writing complex JavaScript and maybe some regular expressions. Those days are gone. I don’t enjoy that part of the process anymore, and that’s for smarter people like Mitch and the engineers he works with. But this is a really powerful thing, and I think it got overlooked this year when it came out. People were like, oh, merged objects, and they didn’t stop and think about what this could mean as far as being able to combine different pieces of data, different chunks of data, with multiple data elements or multiple attributes and nodes in there, and combine that into a single object. So, I could send it over here with the Web SDK. And because it’s a data element, I can send that data to anything using any rule action that accepts data elements as a method to send data. So, it’s not just for the Web SDK, but that’s certainly a great primary use case. So, a question: if those two objects that you’re merging have a similar, I’ll say, grandparent field, will it actually put them into the same tree, if you will? That’s my understanding. If they’ve got the same root attribute or root title, then yes, they’ll be merged correctly, or at least so you can reference them correctly using the dot notation. I haven’t done any extensive testing on that exact thing. Mitch, maybe you know. Yeah. So, if, for example, you have an element or an object that’s common to both of those separate objects, like page, if you have a page in XDM one and a page in XDM two, that deep merge will actually take all of the values that are underneath and put them under a single element called page. Nice. Yeah. Great. Makes it super... Thanks. I’m sorry, I cut you off, Rudy. Sorry.
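To make the merge behavior Mitch describes concrete, here is a plain-JavaScript sketch of what a deep merge does with two fragments that share a common node. This illustrates the behavior only; it is not the Core extension's actual implementation:

```javascript
// Two fragments that share a common node (here web.webPageDetails) are combined
// under that node instead of one overwriting the other.
function deepMerge(a, b) {
  const out = { ...a };
  for (const key of Object.keys(b)) {
    const bothObjects =
      out[key] && typeof out[key] === "object" && !Array.isArray(out[key]) &&
      b[key] && typeof b[key] === "object" && !Array.isArray(b[key]);
    out[key] = bothObjects ? deepMerge(out[key], b[key]) : b[key];
  }
  return out;
}

const xdmObject1 = { web: { webPageDetails: { URL: "https://www.example.com/" } } };
const xdmObject2 = {
  web: { webPageDetails: { name: "home" } },
  eventType: "web.webpagedetails.pageViews"
};

console.log(deepMerge(xdmObject1, xdmObject2));
// { web: { webPageDetails: { URL: "https://www.example.com/", name: "home" } },
//   eventType: "web.webpagedetails.pageViews" }
```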
No, no, I’m just saying that makes it super handy, because you don’t have to worry about things like that, any kind of collisions or overwriting. It just adds to it. Yeah, it makes it a much more useful function. So, again, kudos to the dev team for getting that out the door. For the folks who haven’t looked at it, I’d recommend spending some time testing it out and seeing if it could help make your implementations a little more efficient. It might even allow you to remove some of the workarounds you had in place before that feature existed. Yeah. Cool. Well, before we leave data prep, we have a couple of questions from Simon. Let me bring this one up and we can go over it together. Does this mean on the client side you use alloy, or sendEvent, to send the JSON of whatever your data layer or data structure is, and that’ll be picked up in the datastreams mapping? I’m reading it again to make sure I understand.
So, on the client side you use alloy’s sendEvent. When I talk about doing the data mappings, I think the answer is yes, but I just want to clarify why that is. The data is there, but it won’t be picked up automatically, meaning if you want to be able to map attributes in there, you have to go in and create those mapping rules. So either you have to know ahead of time exactly what that dot notation is, so you can type it in, or you need to go in like I did and paste the JSON structure in so that you can do that mapping. So either you go to add new mapping, and if I know I’ve got Rudy.data.item1, and I know that’s exactly what I passed over in that data field via the Web SDK, then I can type that in, as long as I get it right. And I think data might actually have to go first there; Mitch can check me on my syntax. Then I can go into the target field and map it over. So the mapping doesn’t happen automatically. You still have to go in here and specify source fields and destination fields, even if you think they line up pretty well. You just have to make sure that you either correctly type in the dot notation, which I didn’t do the first time, or do that Add JSON like I did. And I would recommend, if you have a JSON structure, absolutely copy and paste it so you don’t make mistakes like I just did there, so that everything works better. Right, so you can just select it instead of typing it. Right, because I have typos all the time. So anytime I can just select items from the fields here, the better. Great, and we’ve got a couple more merging questions here, then. So we’re excited that people are learning about that. When you merge the XDM object and data into one Web SDK send event, what exactly is getting sent? Is it the whole thing now, one big JSON object with both things in it? How is it handled when received? So, it’s handled exactly as one object received. The edge, for lack of a better term, is rather indifferent to the format of the data. What matters is, do you have a mapping set up, or a rule in event forwarding, that uses the right notation to extract the bits of data out of it that you want? And that’s one of those instances where I think it’s really powerful, but you just need to make sure you’re paying attention to how your data layer is structured or how you’ve set up the schema, so that the mapping isn’t such an arduous task. It’ll be easier if you design the data layer with meaningful names, as opposed to item one, item two, item three, like I showed in my example. That doesn’t mean anything to anybody, but if you have better naming conventions, it’ll be easier for you to extract the different data points that you want and then let the edge network send them off, whether it’s direct to platform, or to analytics or target, or over to event forwarding. Right, great. I know we’re going to move on here in just a second, but maybe just a quick answer to a couple more of these. Do we need to have a unique ID or something to tie the data together at the time of merging? Or is it just based on field name? Right, so you don’t have a unique ID when you’re merging objects. You’re not doing visitor stitching or anything like that.
You’re saying, I’ve got these two data points, or maybe just two data elements, and you want to be able to send them combined as a single data object where you’ve got more of a nested structure. The merge object just combines those so that you can send them over to the edge. So it’s less concerned with unique IDs or anything like that. It’s still just the transactional data that’s happening as a user interacts with your site or your mobile app, combined in a way that makes it easier to send over and then to be extracted into the various systems that will feed on that data. Okay, yeah. That’s why we use a shopping cart in the store, so we can go through with one thing instead of having to carry everything separately. That’s a terrible analogy, sorry. No, actually, I think that’s a great analogy. It’s much easier to do this. Does this affect client-side and server-side implementations? Absolutely. In order to leverage event forwarding or RTCDP, which is our server-side functionality, the data has to be sent over via the edge. So any kind of mapping you can do, or any kind of cleaning up of the data before you send it over, will just make it easier when you’re in the event forwarding interface to say, I’m looking for this particular section of the data layer, or this particular item maybe two or three levels deep. I’m looking for a product ID, or I’m looking for the entire contents of the shopping cart, to continue your analogy.
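Going back to the question of what exactly goes over the wire: in tags, the merged data element is simply referenced in the Web SDK send event action, which is roughly equivalent to the following custom-code sketch, assuming the default alloy instance name and a hypothetical data element called mergedXdm:

```javascript
// Rough code equivalent of the send event action described above. The edge
// receives one combined JSON object; Data Prep mappings or event forwarding
// rules then pull out the individual fields they need.
const mergedXdm = _satellite.getVar("mergedXdm"); // hypothetical merged-objects data element
alloy("sendEvent", {
  xdm: mergedXdm
});
```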
You have the capability to grab sections of that data layer, or individual nodes all the way down. So it absolutely has an impact on that, and hopefully the impact is a positive one that allows you to leverage event forwarding to remove a lot of the third-party client code you have running on your websites, and gain some efficiencies there. Oh, thank you. I love that we’re getting so many questions. Everybody, thank you for these great questions. And we’re going to go to you, Mitch, but I can’t move on without putting this one up there, because I love to hear about practical use cases so that we can bring this home for people who want to use this. What are some practical use cases that you guys have seen, including event forwarding, for using this kind of stuff? What have you already seen? Do you mind if I jump in here for a moment? One thing we’ve seen is customers who have separate data layers that they use for target and analytics. This gives them an opportunity to combine those into a single object and send that off to the edge, where they would then map some of these values that are specific to their platform-based implementation. But it doesn’t necessarily have to be an analytics data layer and a target data layer. I’ve seen a variety of scenarios where they’re just looking to combine different data that’s stored in different objects on the page, and they want to treat it as the same object. Yeah, maybe different groups have even put together different parts of the data layer, and then somebody comes along and goes, that’s the same thing I created over here. Yeah. Okay, well. And it’s been an evolution. I think we’re being both proactive and reactive to how clients are trying to use the different tools and technologies we’ve got. Data collection has evolved tremendously, not only in the past 10 years, but just in the last two or three since we released the Web SDK, as we respond to different customer needs by offering things like the mapping or the merge objects and some of the cool stuff that Mitch is going to show. Yeah, cool. Thanks, you guys. Hey, Mitch, let’s jump over to you and talk server API, right? Oh, there it is. Yeah. So I’m going full screen here, so I can’t see the comments or anything like that. If anything comes in, Doug, keep an eye on it and just jump in. So what I wanted to do is take a moment and talk about platform data collection. I think many people have seen this slide, but if you have not, I want to take just a moment and run through it really briefly. Over on the left-hand side here are the methods that customers use to get data into platform. First is the Web SDK. I think a lot of people have heard of that. We’ve also got the Mobile SDK, and then the Server API. And the Server API is actually what I want to talk about today. It doesn’t get nearly as much airtime as the Web SDK or the Mobile SDK, but I think the use cases related to it are incredibly powerful. Each one of these methods of sending data on the left-hand side interacts with the Platform Edge Network. Rudy showed the datastreams UI a little bit, as well as the mapping UI. Basically, anything that’s configured within that datastream provides directions to the Platform Edge Network in terms of how data should be routed.
So if you’ve got requests coming from the Web SDK or the Server API, and you have Analytics and Audience Manager configured, then on each incoming request that contains that specific datastream ID, we’ll route the data to Analytics and Audience Manager. Same thing with the platform solutions as well. We’ve got Real-Time CDP, Adobe Journey Optimizer, as well as Customer Journey Analytics, all of which are built upon platform. And each one of those has basically upgraded much of what you’re used to using in the Experience Cloud solutions.
Down here at the very bottom of the slide, what we have is the ability to forward data on to third-party destinations. It could be Facebook, could be Google, could also be a data warehouse that’s owned and operated by you. But the general idea is that once you’ve sent this data to the Platform Edge Network, there are a variety of ways you can manipulate that data, map that data, and then send it on to either Adobe solutions or to third-party solutions that you might need to send that data to. So like I mentioned, I really want to focus on the Server API today. There are a lot of reasons for that. But first, I’ll give a quick overview of what the Server API is, and then we’ll talk about some of the reasons why you might use it. So first off, the Server API leverages a new data collection endpoint, which is server.adobedc.net. If you’re using the Web SDK or Mobile SDK, those leverage edge.adobedc.net. For the most part, server and edge are basically the same. There are some minor differences in terms of how we process the data and the type of data that we require, but for the most part, everything you can do using the Web SDK, you can also do using the Server API. So the Server API, like we saw on the previous slide, provides the ability to interact with platform solutions, as well as the Experience Cloud solutions like Analytics, Target, and Audience Manager. There are two primary ways of doing that. The first is the interact method. The interact method is primarily used to capture single events. You can think of this as a two-way conversation: for each request that you send to the edge using the interact method, you can also expect a response to that call.
Now, we also have the collect method as well. The collect method allows you to send either single events or batched events to the edge. The primary difference is that it’s more of a one-way conversation. So if you’re just looking to send data to platform or to Analytics, for example, and you’re not looking for any sort of response, then you could make an argument to use the collect method instead of the interact method. But perhaps the most unique thing about the Server API is that it supports authentication. You, as a customer, can choose to say, look, I want all of my traffic to this endpoint to be fully authenticated, and any requests that are not authenticated will be rejected as they arrive at the edge.
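A minimal sketch of those two calls from a server-side environment, assuming Node 18+ running as an ES module (top-level await, global fetch). The endpoint paths and payload shapes follow the Edge Network Server API documentation as I understand it, and the datastream ID is a placeholder, so verify the exact request format against the current docs:

```javascript
const DATASTREAM_ID = "ebebf826-a01f-4458-8cec-ef61de241c93"; // placeholder

// interact: a two-way conversation -- one event in, a response (personalization,
// segments, and other handles) back out.
const interactResponse = await fetch(
  `https://server.adobedc.net/ee/v2/interact?dataStreamId=${DATASTREAM_ID}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      event: {
        xdm: {
          eventType: "web.webpagedetails.pageViews",
          web: { webPageDetails: { URL: "https://www.example.com/" } },
          timestamp: new Date().toISOString()
        }
      }
    })
  }
);
console.log(await interactResponse.json()); // handles returned by the edge

// collect: a one-way conversation -- single or batched events, no response payload used.
await fetch(`https://server.adobedc.net/ee/v2/collect?dataStreamId=${DATASTREAM_ID}`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    events: [
      { xdm: { eventType: "web.webpagedetails.pageViews", timestamp: new Date().toISOString() } }
    ]
  })
});
```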
So we’ll talk about why that’s important here in just a moment. But what I want to do right now is just kind of give you a quick overview of what a server API implementation might look like. So here in the bottom left hand corner, we’ve got an end user device. And this end user device could be a laptop like is pictured here, could be an IoT device, could be a set top box, pretty much anything that is connected to the internet could potentially use the server API.
So what typically happens is as that end user device requests content from a customer owned server or a server that you own, basically the first step there would be to set a cookie on that device to make sure that we’re able to consistently identify that device.
The next thing that would happen is that that device would send that first-party ID back to the customer server, along with some sort of site interaction. That site interaction could be loading a page, could be clicking on content. If you have, say, a thermostat, it could be raising or lowering the temperature by a few degrees. The idea is that you have full control over how specific interactions with your server trigger a request to the Edge Network. So in this particular case, we would send that first-party ID along with a request payload to the Edge Network. Now, depending on your datastream configuration, you may have things set up to interact with platform, Analytics, Target, Audience Manager. Anything that’s configured within that datastream configuration determines how we route the traffic once it arrives at the Edge Network. Once we have sent that data to an Adobe solution and received a response, we then return that response back to the customer server. In this case, it could be segments, could be personalization, could be a variety of different things, but the general idea is that the customer server acts as an intermediary between the Adobe Edge Network and the end-user device. Once those segments and personalization have been returned to the customer server, that server then has the ability to customize the experience for the end-user device in a variety of different ways. So this becomes really, really powerful, in that you’re no longer limited to just mobile devices or web-based applications in terms of being able to personalize experiences and gather data and analytics around what’s being captured. So again, I can’t see anything. Doug, any questions? Oh yeah, no questions yet. We’re just loving it. We’re learning new things. We’ve long had customers who have various other devices, whether it’s smart devices or some kind of internal kiosk, or some other set of data that for whatever reason doesn’t lend itself well to what we’ve in the past considered traditional data collection methods. So now, having access to the Server API, which gives you pretty much the exact same functionality as if this data were flowing through a client-side website, opens things up. Again, however customers have data and however they need to get it to us, we have an avenue where they can take advantage by leveraging the Server API here. Yeah, exactly. And we’ve seen customers use this in a variety of scenarios. For example, one customer I worked with recently was using this to capture data from an IVR, a telephone system, so that they could track the specific paths that customers were taking as they mashed the buttons on their phone, trying to get to the right place. They were then able to start using that analytics data to optimize their IVR experience. Cool. Nice. Let me just run through a quick overview of why a customer might use the Server API. First and foremost, it is fast. The primary reason that we have customers using the Server API is that you can start sending requests as soon as a page starts to load, without having to load any JavaScript-based tag managers or SDKs. That’s huge.
Additionally, there’s a lot of flexibility, and we’ve alluded to that quite a bit in terms of being able to tag pretty much anything. The third thing is secure, authenticated communication. This is huge as well if you have any sort of PII or PHI that you’re looking to send to Real-Time CDP. It allows customers to ensure that any of that data, one, is not traveling over the public internet, but rather is being secured and sent directly from a server that’s owned by the customer to a server that’s owned by Adobe. So there’s no real risk in that scenario of traffic being sniffed. The other thing that’s really important is that the team that built the Server API also owns the Web SDK. This matters because hybrid implementations are something we hear about all the time, and they’ve designed the Server API in a way that supports using it alongside the Web SDK. What that means is that you could potentially use the Server API to retrieve personalization and segments at the very top of the page, and then use the Web SDK to capture page interactions and any other analytics data that you might need. So there’s tons and tons of flexibility in terms of being able to match the Server API with the other SDKs, to ensure that you’ve got as much security as possible, that you’ve got the fastest experience possible, and that you’re able to tag a wide variety of different applications and devices. So what I want to do is run through a quick demo of what the Server API might look like if you were to instrument it locally. I have a site here, and the first thing I want to do is run through a quick refresh of this site. What you’ll notice, and what I’d like you to take a look at, is that the first thing we’re going to do is write a first-party cookie. That first-party cookie is useful for identifying this specific device.
And then what we’re going to take a look at after that is the network tab. So we’ll take a look at this first. I’m going to do a quick refresh. And what we’ll see is... oh great, there we go. Okay. So the first thing that we have is a first-party cookie. We have my first-party ID that’s been written to a cookie here. We also have some other cookies that we are writing to this device as well. This is an encoded version of the ECID. We also have a region that indicates which region I’m currently interacting with in terms of the Platform Edge Network. So that’s the first thing I wanted to show you. The second thing is that typically, in order to interact with the Edge Network, you would fire an interact call directly from the webpage or from the device making that request. I’m going to do a quick refresh here, but what you’ll see is that there is no interact call. All of this communication is happening directly between the server and the Platform Edge Network, which then pushes any changes down to the end-user device. The other thing I wanted to show really briefly is that we also have personalized content being sent by Target. Basically, what’s occurring is, I do a refresh here, the server makes a request to the Edge Network, it gets personalized content, and then it pushes that content back down to this end-user device.
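The server-side flow in this part of the demo, with the customer server acting as the intermediary, might look something like the following Express sketch. The FPID cookie handling, the identityMap namespace, the payload fields, and the renderProductPage helper are illustrative assumptions, not a production recipe:

```javascript
import express from "express";
import { randomUUID } from "crypto";

const app = express();
const DATASTREAM_ID = "ebebf826-a01f-4458-8cec-ef61de241c93"; // placeholder

// Hypothetical rendering helper -- in a real app this would inject the returned
// propositions into your HTML template.
function renderProductPage(productId, edgeHandles) {
  return `<html><body>Product ${productId}: ${JSON.stringify(edgeHandles.handle ?? [])}</body></html>`;
}

app.get("/products/:id", async (req, res) => {
  // 1. Set (or reuse) a first-party device ID cookie so the device is consistently identified.
  const fpid = req.headers.cookie?.match(/FPID=([^;]+)/)?.[1] ?? randomUUID();
  res.cookie("FPID", fpid, { httpOnly: true, sameSite: "lax" });

  // 2. Forward the interaction to the Edge Network from the server.
  const edgeResponse = await fetch(
    `https://server.adobedc.net/ee/v2/interact?dataStreamId=${DATASTREAM_ID}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        event: {
          xdm: {
            identityMap: { FPID: [{ id: fpid, primary: true }] },
            eventType: "web.webpagedetails.pageViews",
            web: { webPageDetails: { URL: `https://www.example.com/products/${req.params.id}` } },
            timestamp: new Date().toISOString()
          }
        }
      })
    }
  );

  // 3. Use whatever comes back (Target propositions, segments, and so on) to shape the page.
  const handles = await edgeResponse.json();
  res.send(renderProductPage(req.params.id, handles));
});

app.listen(3000);
```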
So one thing that I want to do is I’m going to uncomment some code. And this code right now is the code that will generate a client-side request. I’m going to do a quick reset of my server.
And what we’re going to see now is that we start seeing interact calls flowing, using the Web SDK, directly from the browser as well. The idea in this particular scenario is that we’ve instrumented that hybrid use case, where we have Target content being pulled down by the server, and then we have specific page interactions being captured using the Web SDK and sent that way. So another quick refresh of the page, and we see we’ve still got the modified Target content, and now we’re starting to see interact calls that contain information about this specific page load. If I were to, for example, click on get in touch, then because there were some changes in the page, we’re going to see another call being sent to the Platform Edge Network. So I click here, and we see that there are additional calls being made.
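The client-side half of that hybrid setup is ordinary Web SDK code. A hypothetical sketch of the kind of interaction tracking shown in the demo, assuming alloy has already been configured with the same datastream ID the server uses, and that the button has an id of "get-in-touch":

```javascript
// Track the "get in touch" click with the Web SDK while the server continues to
// handle the initial personalization fetch.
document.querySelector("#get-in-touch")?.addEventListener("click", () => {
  alloy("sendEvent", {
    xdm: {
      eventType: "web.webinteraction.linkClicks",
      web: {
        webInteraction: { name: "get in touch", type: "other", linkClicks: { value: 1 } }
      }
    }
  });
});
```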
Nice. Hybrid.
Yeah, truly. So one thing that I want to show finally is what I have in terms of a configuration. What we can see here, for this configuration for my Docker demo, is that I’ve got Target enabled, I’ve got event forwarding enabled, and I also have Adobe Analytics enabled. One thing I want to show briefly is that I’ve got event forwarding sending data on to this webhook. So what I’m going to do, and I should have shown this a moment ago when we didn’t have the client-side call being sent, is hit refresh here, and that initial request that was made to retrieve that Target content is now forwarded on to this webhook site. And you can see how fast that was occurring. It’s literally seconds from the time it hits the edge to when it’s forwarded on to a third party. So this stuff is all just lightning fast. Nice.
The final thing that I wanted to share is that this is all integrated as well with the AEP debugger as well as assurance. So when I click on refresh here, you can see that these events are showing up in the debugger as well. So a lot of the tools that you have been using for quite some time in order to instrument both the mobile SDK and the web SDK are also available for the server API as well. Nice.
All right, Doug, I think that’s all I’ve got. No, that’s awesome. That is great, and it’s going to give people a lot of options as far as sending data. And we’ve had a couple of questions, and I told you guys to ask all the questions, and that’s great. We may not have a chance on this call to answer all of them. If we don’t answer your question, come back to this page a little bit later on, maybe tomorrow, and I will have set up a page on the community where we can get answers to all of those questions. Okay, so you will get the answers to your questions. For example, I know there’s a question here about the first-party ID, and we’re not going to have time to go into exactly how you set that up in this one, but we’ll give you some more information on what you need to do that. There’s a question about elaborating on segments and personalization. And am I right in saying, you guys, that the segmentation and the personalization that’s happening from, say, Target or other applications like that is all the same? It’s just that how you’re getting that information to and from the page is now at a server level, as opposed to in the browser at a client-side level. Great. Yeah. Okay. Yep. Okay. I can be taught. I can be taught. What I think we’re seeing a lot is that customers, instead of trying to use the VEC and push all of their content directly back to the page, increasingly with Target and offer decisioning, are basically just setting a feature flag. Instead of actually pushing specific content using Target, they’re indicating on the server, oh, I should present this entire experience as opposed to that experience. It then becomes something where they have a great deal more control over the actual content, it can be developed in the environment they’re used to, and Target just becomes this decisioning and reporting engine.
Yeah. Now, we’re getting kind of close to the hour here, and it’s not that the power is going to shut off at the hour, but I know that we want to keep this relatively not crazy long for everybody. So I’d like to, if we can, Mitch, I know people are probably sitting on the edge of their seats waiting to understand what’s coming up, and I’d like to get to that. But you did mention Assurance. And so, Rudy, I don’t know if you want to share anything, but I would like you to give us, like, one minute on what Assurance is. We can get some more information, maybe links to how to set that up and how to use it, later on. But we mentioned Assurance, so I’d love to have you at least talk about that. Yeah, so go ahead and share my screen, because I’m a visual talker here. Okay, how about over here like that? Is that right? That’s perfect. So just like Mitch showed you using the Experience Platform Debugger, you have the ability over here on the edge to actually see different things that are happening. So if I connect here, and this is using the standard Web SDK implementation, once that session starts and I go over here and refresh my browser and start navigating to different parts, we’ve got complete visibility into what’s going on in event forwarding or on the server side. But what’s really cool is, in the past you’d say, this is great, and then I close the session and all my data is gone. So with Assurance, now, within the Experience Platform data collection interface, if I go over here to Assurance, it refreshes for me, and you can see here’s what I did earlier this morning, and here’s the one that I just closed out. These are saved in your Adobe interface. I can go back into this web debugger and look at all those events and things that were happening, even though they’re no longer sitting in that debugger browser extension. So, very powerful. And there are actually some very new features that have just been released, the release notes are coming out, that I’d love to talk about in more detail. We don’t have time today, but there are some really cool features coming out with Assurance. So definitely go hit the documentation and look at the community, because we’ll put some more information out there. But this is huge, to be able to go in and say, I know there was that one data field I was looking at, but I can’t remember where it was. And you can even name these sessions. I even named mine so that I could go back and find it easier.
So it’s just another tool in the tool bag for how you can not only send data, but also get that warm fuzzy of, okay, I see the data moving to the edge, I can see the Server API stuff like Mitch just showed me, so I feel good that what I’ve configured and set up is working properly. That’s what this is here for, to give you that assurance. Nice. I feel so assured. As you should. Yeah. And we have a request for another session on Assurance, maybe going into the details. We’ll talk about that in another episode. Yes. So awesome. Thank you for that request. All right. Thanks, Rudy. Yeah, let’s jump back to you, Mitch, and tell us a little bit about what’s coming up. All right. We’re going to focus primarily on H1, and we’re just going to fly through this. The first thing I’ve got is that we’re working on a read-only fetch endpoint. Previously I talked about the interact endpoint, which was that two-way conversation, and the collect endpoint, which was really focused on a one-way conversation of sending data to Analytics and CJA. The read-only fetch endpoint is still a two-way conversation, but its primary goal is to retrieve content from the edge without recording an event. This is something that customers have been asking for for a long time.
Things have often been instrumented top of page, bottom of page. This endpoint is going to enable a lot of those use cases, and it’s something we’re really looking forward to. Without increasing server calls? Correct. Yep. So the goal here is to be able to interact directly with Target, Offer Decisioning, and Audience Manager, with no server calls being sent to Analytics. That’s something we anticipate is going to be available in April for web, and in H2 for mobile. Next up, we’ve got configuration overrides. In the past, for example, you were able to dynamically swap out report suites. The goal here is to do that same thing for your datastream. So swapping out report suites, Target properties, datasets, things like that. You will now have control to make all of those sorts of changes on the fly. We’re also working on IP obfuscation. We have a lot of requests for this, especially from our customers who are based in Europe, to either partially or fully obfuscate the IP address, and we anticipate that’s going to be available shortly as well. Client hint obfuscation is something else that we’re working on. I’m not sure if folks are aware, but over the course of this past year, Chrome made a pretty significant change in terms of how user agents are captured. User agents are now client hints, and they’re far more granular than they’ve ever been in the past. One request we’ve had, again from a number of our customers in Europe, is to fully obfuscate those client hints so that we’re not capturing any of that data. So that’s something else that we’re working on.
The other thing that we’re working on is geo, device, and carrier lookups based on IP addresses and client hints. The goal here is that when those values are available, we want to make sure we’re able to provide as much information as we can around geo, device, and mobile carrier.
And then one thing that’s been kind of a big ask for at least a couple of months now is being able to access the identity map object within Data Prep for data collection. So that’s something else that we’re working on. Specifically for the Web SDK, Rudy mentioned a lot of the tools that have been built into tags recently, in terms of the deep merge and things like that. The Web SDK team is actually working on a number of improvements that are going to really ease the transition from an existing analytics implementation to one where you can start sending data directly to the edge with very little effort. So look for that in April as well.
For the Web SDK, we’re also looking to release activity map in the next couple of months. One thing to note though is that this is the raw data only for activity map. And so when it comes to the actual overlays that are available in the Chrome extension, those are not going to be available until a little bit later on this year. But in terms of the raw activity map data, that’s coming right away. And then we’ve also been working on AJO web channel support. And so this is really going to facilitate being able to send messages using AJO via the Web SDK.
And then finally, for the Mobile SDK, we’re working on media analytics for iOS. I’ll just note that in H2, we’ve also got media analytics on the roadmap for the Web SDK as well as for Android. And then the final item on H1 is that we’re working on Flutter support for AEP Optimize, which is the personalization tooling built on Adobe platform, as well as for Journey Optimizer messaging. Doug, I think that’s all I have time to cover today. That is nice. Okay. Yeah. That’s a good list for us to look forward to this year. We needed that applause sound effect around the migration improvements. There we go. It was a little late, but still nice. There we go. Nice. Yeah. Well, you know me, I’m a little bit late. That’s about right. It’s all good. Yes. Okay. Well, we’re closing out the show here. I don’t know, I can’t remember if one of us decided that we did have a... Okay. Hang on.
Oh yeah, the unrelated cool tip. Can’t leave without that. So what’s our quick unrelated cool tip today, Rudy? Can you go full screen? Because again, I’m a show-and-tell person. Oh gosh, let’s see, full screen for you here. Okay, I don’t think I have my magic button for that. I do a little 3D printing at the house. And so if you’re trying to make something like a little stone statue, try, let me move it right there, there it is, textured spray paint. If you’re careful and do a nice light coat, not only will it fill in the layer lines, so it doesn’t even look printed, but it gives a nice effect and seems more like a real sculpted object. So, wow.
So there’s my tip: textured spray paint on 3D prints.
Home Depot for those? Just Home Depot. Awesome. Picked one up at Home Depot. I think it was like a limestone; the color was just nice and light. If you go too heavy, it fills in the detail lines and you lose detail in the print, but with just a couple of light coats, it looks like it’s carved out of stone. So I love it. Very cool. Hey, well, thanks to Rudy and Mitch for so much great information today on data collection, things we’ve done and things that are coming up. Appreciate that. Thank you, everybody, for being with us today for this episode of Experience League Live, and look for upcoming events as well. You can always go to experienceleague.adobe.com and get the replays if you’ve missed some Experience League Live events in the past, and if you’d like to learn more about any of our products, go ahead and check that out also. That’s the other thing that’s on experienceleague.adobe.com: live event replays. So great to have everybody here, and we will see you next time. Thanks for having us. Okay, see ya. Thanks everybody.
Watch the video above to view this live stream event, where Adobe data collection experts gave a recap on important recent updates, as well as a glimpse into upcoming roadmap items.
Have questions about it? Continue the discussion on this topic on the Adobe Experience League Community post.