Real-time CDP Data Readiness
Learn how to prepare your organization, from a data readiness perspective, for a streamlined Real-time CDP launch.
- Understand real-time CDP data readiness
- Learn how to align on business use cases as an organization
- Review key resources and actions for real-time CDP readiness
Transcript
Can you see my screen, Katie? Yeah, looks like we’ve got the agenda up right now. Awesome. Well, thank you everyone for joining. Before we get started, I thought we could take a quick look at what we’re going to discuss today. I designed this session to be a thought starter as you get Adobe’s Real-time Customer Data Platform set up in your organization. We’re going to talk about data readiness and why it matters, and then I’m going to share some key considerations and resources to get started. As Katie mentioned, we’ll save some time at the end for Q&A. I hope we’ll have a discussion then, and hopefully this thought starter really gets those questions going.

I wanted to note that while this content is geared toward customers who are thinking about licensing Real-time CDP or are in the early stages of implementation, it will also be helpful for those of you who are live with the solution. You may be thinking about activating new use cases, which require new data sources along with management and governance of that data. So my hope is that this serves a broad audience. The other thing that’s important to note is that Adobe’s Real-time CDP is built on top of Adobe Experience Platform, so some of what we’ll talk about today are capabilities of the platform itself. Keep that in mind, because I’ll sometimes alternate between referring to the platform, which is the foundational layer, and Real-time CDP, which is built natively on top of it.

Okay, I think we’re all set, so let’s jump in. Before getting into any best practices, considerations, or recommendations, I wanted to give some context as to why we’re even talking about this today. As you know, I’m a customer success manager, and what we’ve heard from many of our customers is that data quality, data integrity, and data management are of the utmost importance, but organizations can experience a lot of challenges when it comes to these concepts. Gartner published a study that found that many businesses are taking a reactive approach to data quality problems, meaning they’re creating band-aid solutions that are time consuming and fail to find and fix the underlying causes. So that’s the challenge. Now the question becomes: how can organizations become more proactive and avoid these costly issues in the first place?

Well, it all starts with prioritizing the right data. When you think about a customer data platform, you might think of a unified profile or a segmentation service. You might even think about person and account matching if the nature of your business is B2B. The one thing all of these concepts have in common is that they rely on data. So prioritizing the right data to bring into your customer data platform is the most fundamental step during implementation. This theme of data prioritization is really the crux of our discussion today.

So the question now becomes: how do you know what data to prioritize? We’ve all heard the phrase, “it’s not the destination, it’s the journey.” And while that may be true in life, in the case of implementing a customer data platform, it really is about the destination. What are you looking to achieve? One way to think about this is like going on vacation. You’re going to pack differently depending on what kind of vacation you’re going on and what activities you’ll do once you’re there.
So whether you’re going to the beach or skiing, going to a big city or the mountains, you need to pack for the right destination. You don’t want to be this person (thank you, Adobe Firefly) on the beach in ski gear. Similarly, with a customer data platform, you don’t want to end up with a bunch of data that’s completely irrelevant to, say, the email marketing use case you’re looking to activate, or site personalization. You want to ask yourself: what customer experience use cases are you going to be activating? Different use cases require different data sets. You’ll need data for segmentation, and you’ll need data for reporting. This is the first step in answering the data prioritization question. So again, you don’t want to be this person on this beautiful beach in ski gear. Instead, you want to be either this family on the left, hat and sunglasses on, towel in hand at the beach, or our friend on the right with all the right gear to hit the slopes.

So it starts with use cases. This is the number one prerequisite to a successful customer data platform implementation: defining your segmentation, activation, and reporting use cases. Maybe you’re going to be doing onsite personalization for unknown visitors, or perhaps you’re looking to upsell a new loyalty member to a higher tier of your loyalty program. Defining these customer experience use cases is how you’ll determine what data you need to prioritize and ingest into CDP. So I think we all get it: use cases, use cases, use cases. I cannot stress that one enough.

Only after defining your business use cases can you start to think about things like data modeling, sorting, and stitching. What you see here on the slide is our recommended approach for designing your data model. The first step is identifying the applicable data sources that are required to carry out your business use cases, and these will vary from organization to organization. Once you’ve identified those data sources, you’ll want to create a high-level entity relationship diagram to help guide the process of mapping your data to Experience Data Model (XDM) schemas. Then, before constructing those schemas, use a top-down approach by sorting your data tables. Doing this will streamline the process of XDM schema creation. One thing I’ll add as a best practice: make your schemas as simple as possible and only add new fields when absolutely necessary. On point four, regarding data stitching, it’s important to have a clear idea of how you’re going to connect data across sources. This could be a customer ID, ECID, or another identifier. This is a critical piece of data accessibility and usability. Finally, a data minimization strategy is vital when it comes to ensuring you prioritize the right data sets for your business use cases. This includes setting things like expirations, or time to live, which we’ll get into later in this presentation.
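To make those data-modeling steps a little more concrete, here is a minimal, purely illustrative Python sketch of steps one through four: inventorying your sources and finding a shared identity field for stitching. The table names, fields, and identity columns are hypothetical; in the platform itself this work lands in XDM schemas and identity namespaces, not in code like this.

```python
from dataclasses import dataclass, field

@dataclass
class SourceTable:
    """One source table identified for a business use case (step 1)."""
    name: str
    fields: list[str] = field(default_factory=list)

# Hypothetical inventory driven by an email-personalization use case.
crm = SourceTable("crm_customers", ["customer_id", "email", "loyalty_tier"])
web = SourceTable("web_events", ["ecid", "customer_id", "page_url", "event_time"])

def stitching_keys(tables: list[SourceTable]) -> set[str]:
    """Fields shared by every table that could connect profiles (step 4)."""
    common = set(tables[0].fields)
    for t in tables[1:]:
        common &= set(t.fields)
    return common

print(stitching_keys([crm, web]))  # {'customer_id'} -> cross-source identity
```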
All right, so you’ve defined your use cases and you’ve confirmed data accessibility, which means identifying your sources and designing your data model for ingestion into the platform. Let’s talk a little bit about ingestion cadence and how that relates to data integrity. As you can see here on the screen, there are two ways to ingest data into the platform: streaming or batch. Streaming data can be collected via tags or one of the many pre-built source connectors that Adobe makes available. Streaming data collected via tags is forwarded to the Edge Network, which we’ll talk about in just a minute, so we’ll put a pin in that. Streaming ingestion allows you to ingest data from real-time messaging systems, other first-party systems, and partners, and that data is placed on the Experience Platform pipeline for consumption by other systems in real time, which is really the main point here: in real time. So while streaming ingestion is useful for immediate data processing use cases, batch, on the other hand, is helpful for processing and analyzing large amounts of data periodically.

As you start to think about which ingestion cadence you’ll use for which source, it’s important to tie this back to use cases. Do you need your data in real time? An example of this would be behavioral data for same-page or next-page personalization. For other use cases, perhaps an email campaign, batch is typically recommended. The golden question is: at what frequency do you need to ingest your data to ensure it’s, one, accurate and, two, relevant? Let’s think about this from a use case perspective and look at the three different segmentation options you have depending on what you’re trying to accomplish. Say you’re a retailer who wants to personalize an experience for someone scrolling on a page; let’s say you want to recommend a pair of jeans that would go perfectly with the top they’re viewing. That segmentation and delivery has to happen in milliseconds to be meaningful, so edge, as you can see here, is the right frequency for that use case. On the flip side, let’s say you’re sending out a personalized email. That might only require information refreshed by the hour, in which case batch would be the right segmentation and delivery method. The key takeaway here is that it’s really about flexibility. Not everything needs to be in real time. Think about what you’re trying to accomplish from a business perspective and then pick the speed.
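For the streaming path described above, a minimal sketch of posting a single event to Experience Platform’s streaming ingestion HTTP endpoint might look like the following. It assumes you have already created a streaming connection (inlet); the inlet ID, IMS org, dataset ID, and schema URL are placeholders, and the payload shape should be verified against Adobe’s current streaming ingestion documentation.

```python
import requests

# Placeholders: substitute your own streaming connection (inlet) ID, IMS org,
# dataset, and XDM schema URL. Verify the payload shape against current docs.
INLET_ID = "YOUR_INLET_ID"
ENDPOINT = f"https://dcs.adobedc.net/collection/{INLET_ID}"

schema_ref = {
    "id": "https://ns.adobe.com/YOUR_TENANT/schemas/YOUR_SCHEMA_ID",
    "contentType": "application/vnd.adobe.xed-full+json;version=1",
}

event = {
    "header": {
        "schemaRef": schema_ref,
        "imsOrgId": "YOUR_IMS_ORG@AdobeOrg",
        "datasetId": "YOUR_DATASET_ID",
        "source": {"name": "website"},
    },
    "body": {
        "xdmMeta": {"schemaRef": schema_ref},
        "xdmEntity": {
            "eventType": "web.webpagedetails.pageViews",
            "timestamp": "2024-01-01T12:00:00Z",
            "identityMap": {"ECID": [{"id": "03872710750112278710"}]},
        },
    },
}

# Authenticated inlets additionally require an Authorization: Bearer header.
resp = requests.post(ENDPOINT, json=event, timeout=10)
print(resp.status_code, resp.text)
```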
All right, so far we’ve talked about data accessibility, ingestion, and segmentation. I want to now shift to modeling and management of the data you’re bringing into Real-time CDP. Adobe sets default guardrails for CDP, and it’s critical that you model your data with these guardrails in mind for optimal system performance. There are default limits set on the four services here: schemas, identities, the profile, and audiences. I would recommend two actions on this note. First, if you’ve licensed Real-time CDP, review your contractual limits, which means checking how many profiles you’re licensed for annually, and take a look at the license usage dashboard I’ve linked on the slide so you can monitor your usage as you start to ingest data. Second, familiarize yourself with Adobe’s default guardrails on these four services. These change from time to time, so please check the documentation I’ve linked here regularly for updates. The point, again, is to prioritize your data and only bring in what’s needed for your use cases, and all of this relies on accomplishing that step of identifying your customer experience use cases.

Okay, now we’re getting into data management best practices. At this point, you know your license limits and you’ve reviewed Adobe’s default guardrails. You might be wondering how to manage your data to comply with those default limits and with your organization’s policies, which is obviously a critical piece of the puzzle as well. There are six important capabilities to help you adopt data management best practices. As more data is ingested into the system over time, it becomes increasingly important to manage your data stores so that data is used as expected, updated when incorrect data needs correcting, and deleted when organizational policies require it to be removed from the system. I’m not going to go in depth into each of these, but there are three I want to highlight: data prep, pseudonymous profile expiration, and event expiration.

The concept of data prep ties directly back to use cases: it’s the process of minimizing your data during ingestion and prioritizing ingesting only what’s required. Pseudonymous profile expiration and event expiration are really complementary features. Let’s say you have a website and you get a lot of anonymous visitors. You collect data via a first-party cookie, but if that cookie isn’t matched to a known profile, it’s usually not relevant for very long. Similarly, let’s say you’re collecting event data for visitors landing on your site. You can very quickly collect a lot of data; think about how many clicks you typically make on a site, each of which could be collected as an event. This is actually one of the biggest reasons customers go over their license usage, which we just reviewed on the previous slide: collecting all of those events and storing them in the system for long periods of time. The event expiration capability allows you to expire that event data when it’s no longer needed or relevant. So for both of those concepts, pseudonymous profile expiration and event expiration, ask yourself as a marketing organization: when does data stop being useful, and do you really need it in your customer data platform?

Key takeaways from this slide: review the capabilities and incorporate them into your data minimization strategy. We always recommend setting up time to live at the very least, and that can be considered in the early implementation stage. Last thing on this point: if you have more advanced data preparation or manipulation needs, we do have premium offerings such as Data Distiller. So if you’re looking at all this and you think you may have a use case that requires something more complex or robust, we do have offerings to serve those needs.
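In the product, pseudonymous profile expiration and event expiration are settings you configure rather than code you write, but a small illustrative sketch shows the policy those two capabilities encode. The day counts and record shapes here are hypothetical.

```python
from datetime import datetime, timedelta, timezone

EVENT_TTL_DAYS = 30     # hypothetical policy: expire events after 30 days
PROFILE_TTL_DAYS = 14   # hypothetical: expire unmatched pseudonymous profiles

now = datetime.now(timezone.utc)

events = [
    {"ecid": "a1", "type": "pageView", "ts": now - timedelta(days=3)},
    {"ecid": "b2", "type": "click",    "ts": now - timedelta(days=45)},
]
profiles = [
    # Matched to a known customer: kept regardless of the pseudonymous TTL.
    {"ecid": "a1", "customer_id": "c-100", "last_seen": now - timedelta(days=20)},
    # Cookie never matched to a known profile: expires on the shorter clock.
    {"ecid": "b2", "customer_id": None,    "last_seen": now - timedelta(days=20)},
]

fresh_events = [e for e in events
                if now - e["ts"] <= timedelta(days=EVENT_TTL_DAYS)]
live_profiles = [p for p in profiles
                 if p["customer_id"] is not None
                 or now - p["last_seen"] <= timedelta(days=PROFILE_TTL_DAYS)]

print(len(fresh_events), "event kept;", len(live_profiles), "profile kept")
```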
Okay, so we’ve covered a lot: the importance of defining your business use cases, the concept of data accessibility, ingestion cadence, profile guardrails, data minimization, and data management best practices. The next topic, and a really important one, is data governance. How do you manage and govern your data once it’s in your system? And how do you govern that data when it’s going to be used for activation, or segmentation for that matter? There are two flavors of data governance, if you will: data usage labeling and enforcement, or DULE, and access controls. DULE is Adobe’s patented framework to help you scale your governance practices and enforce data usage policies downstream. Under access controls, we have attribute-based access controls and role-based access controls. In the center and on the right of the slide, you can see those two types of access controls. So let’s look at a sample use case for each of these, and then we’ll revisit this slide to highlight some key takeaways.

Starting with data governance: you have some sources on the left, you’re ingesting some data into your CDP, and you’re activating it to some destinations for your business use cases, a very generic setup. Let’s see how the DULE framework streamlines the process of applying labels and policies to ensure the correct use of that data and enforce it. You have a data set, let’s say prospect profiles, and you want to limit the use of that data set. You label it; let’s just call it “third party,” a fictitious label. This label reflects privacy-related or contractual conditions that need to be considered when that data is used. Next you’re building out a segment; let’s say you want to target potential vacation package purchasers. As you can see here, you put a label on that segment, third party, and the approved marketing action that goes with that label is onsite targeting. So imagine you want to deliver that onsite targeting experience for those vacation package prospects. The point of DULE, as we’ll see, is that any other use of that data is restricted and you won’t be able to activate it.

So let’s see it in action. You go to activate that segment: you try to activate it to social media, and you also map it to your onsite personalization engine. When you map it to social media, it’s blocked, because that was not one of the approved uses of the data. However, when you map it via the personalization engine to your onsite experience, it’s activated, because that is an approved use of the data. This illustrates how DULE streamlines and simplifies the process of managing and labeling your data, putting policies around it, and activating it.

That’s DULE. Now let’s take a look at an access control example. We have the same architecture, and this time you want to apply access controls to restrict the activation of that data and what’s done with it. In a similar setup, you have some data coming in and two roles: one is an internal marketer and one is a third-party agency. You put a label, C9 (data science), on the role that corresponds to the internal marketer. Now say you have a high-interest CD offer that also has the label C9. Note that both roles have permission to view and edit the segments. But the key point is that once you go to activate that offer in an email campaign, role two won’t be able to activate it, because that label does not correspond to that role, whereas role one does have the label that matches the segment you’re looking to activate. So that’s our simple example of access controls.

Key takeaways here: DULE simplifies and streamlines the process of setting labels and policies and enforcing them so your data is used correctly. It enables you to scale by automating workflows and really gives your compliance teams peace of mind. Attribute-based and role-based access controls are part of an access control management strategy and should be used in parallel with DULE to govern user access to your data.
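Real-time CDP evaluates DULE policies natively at activation time, so you don’t write this enforcement yourself, but a toy sketch captures the label-to-policy-to-enforcement chain from the walkthrough above. One simplification to note: real DULE policies are typically expressed as restrictions on marketing actions, while this allow-list model just reproduces the same outcome; all names are fictitious.

```python
# Fictitious labels and actions mirroring the walkthrough above.
DATASET_LABELS = {"prospect_profiles": {"third_party"}}

# Policy: data labeled "third_party" is approved only for onsite targeting.
ALLOWED_ACTIONS = {"third_party": {"onsite_targeting"}}

def can_activate(dataset: str, marketing_action: str) -> bool:
    """True only if every label on the dataset permits the marketing action."""
    return all(marketing_action in ALLOWED_ACTIONS.get(label, set())
               for label in DATASET_LABELS.get(dataset, set()))

print(can_activate("prospect_profiles", "onsite_targeting"))  # True  -> activates
print(can_activate("prospect_profiles", "social_media"))      # False -> blocked
```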
I’ve included some links to both DULE and attribute-based and role-based access control here. Okay, I also wanted to share some resources to get started. Again, the goal of this presentation was to be a thought starter and get you thinking about the key considerations when implementing CDP. I sprinkled some resources throughout this deck, but I wanted to provide some additional resources at the end as well. First, I’ve included a video and a white paper on Adobe’s Center of Excellence model. We’ve covered a lot of ground today with topics like use case documentation, data integrity, data management, and governance. A center of excellence is a team that coordinates these efforts: they bring in the necessary stakeholders from across the business and establish processes around the topics we’ve covered. Creating a center of excellence to facilitate the usage of your customer data platform will allow you to maximize your investment, so I’d recommend reviewing the video and the white paper to learn more about this model and how to get started.

Another thing on the people and process note, as it relates specifically to implementation and data readiness: in addition to a project manager and involvement from marketing teams, which is critical for defining and documenting use cases and for organizational alignment, we typically recommend these four roles from IT to support implementation. Again, this is all in the spirit of having the right team in place to stand up your customer data platform and govern its use, and that’s where the center of excellence comes into play.

All right, I’ve also included links to all of the topics we’ve covered. These are great resources on Experience League that go in depth on each topic. I picked out the most comprehensive guide for each because I didn’t want to overwhelm you with too many resources. I hope this content was helpful. I saw there was some conversation in the chat, so maybe we can take a quick look at that and have some Q&A. Again, I hope this was helpful in getting you thinking about some key considerations, the process of documenting use cases, and how all of these puzzle pieces fit into a successful CDP implementation. Because there is a lot: things like the default guardrails, data management, DULE. My hope is that this sets the stage for how all of these topics are related and what you need to start considering in the early stage of implementing a CDP. I really appreciate you taking the time. Let’s open it up to some questions, because I did see some good conversation going in the chat. Katie, would you like to facilitate the Q&A portion?

Sure. I think most of the questions have been addressed so far, but Sukanya, is there anything you think we should speak to that might be beneficial for the larger audience? I didn’t see it come through in the chat. Sukanya is muted, it looks like, automatically. All right, one second. Sorry about that. Give us just one moment, folks, while we work on getting her unmuted. If there are any other questions you want us to speak to while we’re on the line, feel free to filter those in as well. Just one moment and I will get this adjusted. Thanks, Katie. All right, I think I’m just a couple of clicks away. Okay, you should be unmuted. Yeah, hi Katie, how are you? Thanks.
Okay, so the last question I was typing, I can actually answer here. The question was whether the data captured from an XDM event will be available or can be used for personalization in AJO. If you’re thinking about personalizing an AJO journey, that’s done with profile attributes. Event data we generally use to define who gets into the journey and when they exit the journey, so no, I don’t think we can use event data for personalization. Any other questions? We’ll give it another moment. Megan, I think you did a great job of addressing those in the chat as they came through.

While we’re waiting to see if any other questions come in, I just want to note that there is a feedback form link that I’m posting in the chat and the Q&A. If you could take a moment before you roll out to give us feedback on this session, it really is valuable in helping us shape future sessions. All right, I’m not seeing anything else come through, which is fine. I know that was a lot to take in over the last few minutes. As you think through today’s content, if any questions come up, feel free to reach out to us or your Adobe account team and we’d be happy to walk you through any final points you’d like more details on. So thanks again, Carolyn, for taking us through today’s content, and thank you everyone for your attendance. Again, please submit some feedback through that form if you’ve got a moment, and we look forward to seeing you at upcoming sessions.

Sorry to interrupt you. Yeah, there’s a question about whether or not we’ll be sending out the presentation. Yes, we will. All of the links will be made available to you, as well as the recording. And again, please reach out if you have any questions, whether it’s to your Adobe team, like Katie said, or via the survey. We’re here to answer any questions. Yeah, and just note that the follow-up will be coming through in about three to four days. We take some time to process and get the recording posted to Experience League, and then we’ll send that out with the PDF, and you can follow up with us from there or reach out to your account team. So I would say look for it early to mid next week.

It seems like one more question has come through: what are the limitations for sending data to custom destinations? Okay, so I’d need to understand what destination we’re talking about here, because there may be some limitations depending on the destination. We have out-of-the-box connectors for most of the common destinations, but sometimes we encounter new destinations where we don’t have an out-of-the-box connector. There we actually build a custom solution, maybe through an API or another method. But you’d have to be specific about exactly which destination you’re talking about, and I’d encourage you to check in our documentation which out-of-the-box connectors are available for different destinations. I think most of them are covered there, and if not, we can definitely build something custom for different destinations as well.

Thank you for that. Let’s see, it looks like there is one more question: any recommendations on which use cases should be migrated from AAM to CDP? Sukanya, I can take that one. We do have some documentation on use cases that you can continue running in Audience Manager and some that you can run in parallel in CDP.
I would be happy to share that out as well. Katie, what’s the best way to share that with Leigh? Just add it into the presentation appendix and we can put that with the PDF when it goes out next week. Great, so we’ll include that in the PDF. I’ll share that out so you can take a look. That’s a great question. All right, we’ll hang out for just another couple of minutes if anything else comes through. Again, feel free to post it in the chat. Thanks everyone for joining. All right, it looks like that’s about it for the chat. So again, thank you all for your attendance. If additional questions come through, let us know, and outside of that, have a great rest of your day.

Katie, we got one more, probably the last one. We’ll take it, go ahead. For the TTL, I thought there would be a default value; what is it in the current system, sort of a baseline policy? No, for TTL we don’t have any default, because it totally varies depending on the use cases and customer needs. Some customers want their web interaction data for only a seven-day look-back, right? So it depends on the look-back window: what do you want to keep and what do you want to purge? There’s also a limitation that comes with your license, so lots of things will define your TTL window. There is nothing default. The data architect generally collects the requirements from the customer side, and then we create a ticket for engineering to set up the TTL.

Okay, next question: someone would be interested in use cases in general for RTCDP. So yeah, Carolyn, can you share some RTCDP use cases? Yeah, thanks for that question, Shane. I’ll include that as well: Audience Manager versus CDP use cases, and general CDP use cases. Like Katie mentioned, I’ll create an appendix section and we’ll share those resources after this call. If you have any other requests for subsequent content, please feel free to add them to the chat pod and I’ll be sure to include that. Awesome. I think we’re going to go ahead and end the meeting, but feel free to follow up if any additional questions do arise. Thank you. Thanks everyone. Have a great rest of your day. Thank you. Bye.