Adobe Analytics Rockstars
Four “Rockstar” customers will each present their best Adobe Analytics tip or trick. Who will dazzle you beyond belief? Watch and vote in real time.
So now let’s move along to the next part of the program, where we’re going to hear from some of our biggest Adobe Analytics rock stars. They’re going to give us some solid gold tips straight from their experience. The rock stars are super users who are innovating in major ways at their jobs, and they’re going to share the new and exciting things they’re doing to get the most out of our tools. You’re going to see four rock star tips. Now, listen carefully: at the end, you’re going to get to vote for your favorite Adobe Analytics tips and tricks from our rock stars. So as you listen, obviously you want to learn, but also think about which one you find the most useful and most insightful, and then vote for your favorites. So let me introduce them one at a time. First up is Selin Kan. She’s a digital optimization manager with Commonwealth Bank of Australia. So take it away, Selin.
Hello everyone, I’m Selin Kan. I’m a digital optimization manager at Commonwealth Bank of Australia. Today I’m going to present two of my Analytics tips; I hope you find them helpful.
So the first one is calculating a six-week average of sales for a rolling sales week.
And what is this? We have a business problem. If you are here listening to this, I’m sure you know that numbers are important, and especially for a sales business it’s really important to identify and address issues as soon as possible. So an organization should compare daily numbers and compare weekly performance to recent history to find any problems. That’s the ideal scenario, and of course we had some problems. First, our reports compare data on a week-to-week basis, with comparison only possible at the end of the week: the previous week has seven days but this week has only three days, so we can’t compare. Secondly, comparing the current week to only a single previous week is potentially misleading, because the previous week might have unusual numbers, for example low numbers because of a public holiday. And lastly, the predefined week in Analytics is not compatible with our sales week: it starts on a Sunday in Analytics, but our sales week starts on a Saturday.
But we have a solution. I have created a formula that calculates the average sales for the last six rolling sales weeks, which provides a more reliable baseline for comparison. And once you have this formula, you can visualize it in different ways as you like. For example, as you see here, I can compare daily sales numbers, or you can compare your products or any other metrics that you like.
And how do we create this formula?
I’m sorry if it’s too technical and I’m sorry if it’s not technical enough, but here are the steps. First, create your time segments.
Create seven segments: one for the current week and six for the previous six weeks.
I think this is the key step: keep the start date of the week rolling weekly and the last day rolling daily. And if your sales week is different from the default Analytics week, you can change the start day of the week at this step. Then create a previous-weeks segment that combines the previous six weeks.
Then create an average-number-of-sales formula, dividing the sales metric by six. This is the number of weeks that you include; for example, if you are after a three-week rolling formula, then divide by three.
Then display the previous-weeks segment and the average number of sales in a table, and you are good to go: visualize the table as you like. And that’s it.
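For reference, here is a minimal TypeScript sketch of the rolling-date logic those seven segments encode. The Saturday week start comes from the talk; the function names and date handling are illustrative, not Adobe’s actual segment definitions.

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const SATURDAY = 6; // Date.getDay(): 0 = Sunday ... 6 = Saturday

// Most recent Saturday on or before `d`: the start of its sales week.
function salesWeekStart(d: Date): Date {
  const daysBack = (d.getDay() - SATURDAY + 7) % 7;
  const start = new Date(d.getTime() - daysBack * MS_PER_DAY);
  start.setHours(0, 0, 0, 0);
  return start;
}

// Segment date ranges: index 0 is the current (partial) week, whose end
// rolls daily; indexes 1..6 are the six full previous weeks behind the average.
function rollingSalesWeeks(today: Date): { start: Date; end: Date }[] {
  const currentStart = salesWeekStart(today);
  return Array.from({ length: 7 }, (_, i) => {
    const start = new Date(currentStart.getTime() - i * 7 * MS_PER_DAY);
    const end = i === 0 ? today : new Date(start.getTime() + 6 * MS_PER_DAY);
    return { start, end };
  });
}

// The calculated metric is then: sales within weeks 1..6, divided by 6.
```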
What are the key takeaways? The problem for us was a lack of timely monitoring and meaningful comparison for sales numbers. The solution is redefining the sales week and creating a formula capable of calculating the average sales for the last six weeks. And the takeaway for us is making use of time segments and empowering our business with timely insights.
And here’s my second tip: an automatically calculated number of days that a campaign has been live.
So if you have ever run a campaign, or you are running an experiment, or there’s any period you have to track your sales or numbers for, then this might be for you.
So ideally we’d like to automate our reports completely so that we can get insights in less time.
But of course there are problems again. It’s possible to automate most of the data points, but built-in formulas do not directly calculate the number of days that a campaign has been running.
Rolling dates are helpful in that sense: they keep the dates of your report up to date. But it’s not possible to visualize the day count through these rolling dates alone.
Again, there’s a solution.
So an existing Analytics formula was manipulated to automate the calculation of the number of days the campaign has been running, which enabled us to automate the campaign reports fully. You can see on the slide there are some uplift numbers and unique visitors, which you can just visualize. And the last one is 37 days; that’s how long my campaign has been live. Tomorrow it will be 38: it updates itself automatically.
And how do we create it? This is really simple.
There are only four steps. First, select a date range with the first day fixed and the last day rolling; this is for your campaign report, not a segment. Secondly, create a metric using the raw count formula. Literally create a segment and apply the raw count formula to it, that’s it.
Thirdly, create a table with days as the dimension and your raw count metric as the metric. Because you created the report with rolling dates, the days appearing in your table are exactly the days that your campaign has been live, so the count of these rows gives you the number of days your campaign has been live.
And that’s it. Then visualize your number and schedule your report. You are done.
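For reference, the arithmetic that the raw-count table reproduces can be sketched in a few lines of TypeScript; the +1 convention (launch day counts as day 1) is an assumption consistent with the “37 today, 38 tomorrow” example.

```typescript
// With a fixed first day and a rolling last day, each date row in the
// table is one live day, so counting rows equals days live.
function daysLive(campaignStart: Date, today: Date = new Date()): number {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  const start = new Date(campaignStart); start.setHours(0, 0, 0, 0);
  const now = new Date(today);           now.setHours(0, 0, 0, 0);
  // +1 so the launch day itself counts as day 1.
  return Math.floor((now.getTime() - start.getTime()) / MS_PER_DAY) + 1;
}
```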
So what are the key takeaways here? The problem for us was a limitation in automating campaign reporting. Whenever you don’t completely automate reports, your stakeholders will come to you and ask questions, and I’m sure you have a lot of other things to do, so it’s hard.
The solution is easy: manipulate an existing formula to calculate the number of days the campaign has been running for.
And the takeaway is to fully automate campaign reports with simple formulas and save yourself time to focus on finding more insights instead of just maintaining reports.
Those are my tips for today. I hope you enjoyed them. Thank you so much for listening.

All right, terrific stuff, Selin. Don’t forget, you’ll all get a chance to vote for your favorite at the end; if Selin’s your favorite, just keep that in mind. Now we’re going to go on to our next rockstar tip, and this is Leo Lau. Leo is Senior Manager of Digital Analytics for AIA. On to you, Leo.

Hi, hello everyone. Good day. This is Leo Lau from AIA. I’m the Senior Manager of Digital Performance Analytics, and I look after all the digital analytics for the company, especially Adobe Analytics, which I will talk a little bit about today: how we use it and how we implement it on our corporate websites, portals, and apps. Okay, so let’s get started.

The first thing I want to cover today is utilizing Adobe Launch to consolidate click tracking. Mouse click tracking is something very common on our websites alongside page view tracking, because there are a lot of different types of clicks that we are implementing. You see some examples now: we may have a carousel slide where we want to track which slide the user clicked on, we may have common generic button clicks where we want to capture every single button the user clicks on, and another common thing is file download clicks, where we want to know which file the visitor has clicked and downloaded. These are simple and easy to do using Adobe Launch: for those slides, those buttons, and those file download links, we create a rule with a click event on the corresponding CSS selector so that we can capture the interaction.

But what if we have a slightly more complicated scenario: a slide that embeds a download link, which in turn embeds a button for the user to actually click to download. If we implement these as three individual rules, we will have two problems. One, it will be three server calls, because we have three rules firing off tracking; three server calls to Adobe Analytics, and at the end a call means money, so we pay three times for this one single user click. Secondly, there are three separate hits at the back end. If we are only building a simple Workspace project with aggregated numbers of downloads, button clicks, et cetera, it may not be a big problem. But if we want to do some deeper analysis and look into the data feed, there will be three separate hits, and when we start looking at that raw data it may be a bit of a problem to have three separate lines of hits coming from one single user click.

So what we want is to have only one single hit sent to Adobe Analytics, no matter what combination of clicks it is, whether it is a button within a file download, or a file download within a slide. Adobe Launch’s rule order is one of the answers for how we do that. You see that we originally have three separate rules, for the slide click, the button click, and the download click, each of which sets the variables and also sends the beacon to Adobe Analytics. When we want to consolidate everything, we separate the set-variables step and the send-beacon step into two sets of rules. The first set of rules uses rule order 50; in this set of rules, we only set up the variables in Adobe Analytics.
So we set up all the eVars and all the props with all the data we want to send over to Adobe, but we are not sending the data yet. Then at the end, we have one single rule with rule order 100, which does not set any variables but only sends the data. Which means when the user clicks, depending on the situation, up to all three rule-order-50 rules will be executed first, and at the end the data will be sent to Adobe Analytics in one go by this rule-order-100 rule.

So it sounds simple, but actually it is not, because unfortunately, if we set up the rules like this, we still have three hits sent over to Adobe Analytics. You see in the log that the button click rule fired and a beacon was sent, the download link rule fired and a beacon was sent, and then the slide click rule fired and a beacon was sent again. Why does this happen? If you look at the rule with order 100, you see that we simply put all three click events together into this rule as the triggering events. But remember how we set up the example HTML: we have a slide embedding a download link, which embeds a button. So when the user clicks the button, it is a click on the button itself; the button click rule fires, and the send-beacon rule fires because of the button click event in this order-100 rule. Then the event bubbles up: it starts from the button and, once everything there is done, it bubbles up to the download link, where its order-50 rule triggers and the order-100 rule triggers again, and then it bubbles up to the slide. So at the end, we still have three sets of variables and three hits sent. It is not what we wanted.

So in this case we want only one single event instead of three in this rule 100. How do we do that? We have to consolidate all types of clicks into one single type of event, and that is the “any element” click. Which means no matter what element the user clicks, it will trigger these rules. But that again is not what we wanted, because we want some rules to respond to the button click and some rules to respond to the file download. So instead of having all this control in the event by specifying the CSS selector, we do it through custom code. We evaluate whether the element the user clicked is enclosed in a button, for this example. If it is, then we set an indicator in the data layer to say: okay, we have a click on some element we want to capture, it is true. And then we return true from this custom condition, so this rule will trigger. You can think of the download link and the slide rules in just the same way, with a different selector in the if statement: it determines whether this particular element is within a download link or within a slide. No matter how deep it is, as long as it is within a slide, the evaluation will be true and it will set the indicator to true. So at the end, all these order-50 rules are evaluated, and if any one of them is true, we should do the tracking. In the rule with order 100, we don’t evaluate or compare what element it came from; we simply look at this indicator to see if any order-50 rule said “I want tracking”. Then the order-100 rule will send out the data. Of course, because we are not using the element directly, we have been using the indicator to say we want to do the tracking.
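For reference, a minimal sketch of what such an order-50 custom condition and the order-100 send condition might look like. Launch does expose the click event to custom conditions, but the `digitalData.trackClick` indicator and all names here are illustrative assumptions, not AIA’s actual implementation.

```typescript
// Order-50 rule condition (button example): fire only if the clicked
// element sits inside a <button>, however deeply nested.
// `closest(...)` walks up the DOM, mirroring the event bubbling.
function buttonClickCondition(event: { target: HTMLElement }): boolean {
  if (event.target.closest("button")) {
    const dd = ((window as any).digitalData ??= {});
    dd.trackClick = true; // hypothetical indicator: "please send a beacon"
    return true;          // this rule's set-variables actions will run
  }
  return false;
}

// Order-100 rule condition: no element matching at all, just check the
// indicator set by any order-50 rule. Its only action is Send Beacon.
function sendBeaconCondition(): boolean {
  return ((window as any).digitalData ?? {}).trackClick === true;
}

// After the beacon is sent (e.g. in a custom-code action that follows
// Send Beacon), clear the indicator so the next click starts clean.
function clearIndicator(): void {
  ((window as any).digitalData ?? {}).trackClick = false;
}
```

The download-link and slide conditions would be identical except for the selector passed to `closest()`.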
So once we finish the tracking and finish sending the data to Adobe Analytics, we have to clear this indicator, so that next time on the same page, if the user clicks on something else, we have a clean indicator to start with and we are not sending repeated hits to Adobe Analytics. By combining all of this, here is the result: if I click on that example, the button click, the file download, and the slide rules, all three order-50 rules, execute first, and then only one single send-beacon rule with order 100 executes at the end.

The good thing about this setup is that because the order-100 rule has no specification of what kind of element we are trying to monitor, and just looks at the indicator to decide whether to send, we can actually have more than three click types. We can have five, ten, or more different types of click tracking: we simply create another order-50 rule that listens to the any-element click, evaluates what element the user clicked on, and if it is something we want to capture and track, sets up the Adobe Analytics variables, the eVars and props and so on, and sets the indicator. The order-100 rule picks it up and sends the data to Adobe Analytics, and it can correctly describe what the user did in one single click that happens to match multiple events. One minor issue with this implementation: because we are listening to the click event on any element, if you turn on the debugger, you will probably see a lot of tracking messages in the console about rules that did not fire because their conditions were not met. But that is something we can live with; we get the big benefit of correct information in Adobe Analytics.

The next thing I want to discuss today is using one single dimension for complex data, and of course we need classifications to support it. Think about this situation: if you are in a multinational company, it is very common to have public websites that are similar across multiple countries, with similar functions on all of them. Let’s take something very simple, a lead generation form where you want to collect some visitor information so your sales representatives can contact the prospect, but maybe different markets collect different information. Or even if you are in one single market with one simple business, you may have similar features across your website or portal. Think about running an online shop: you may have a function for visitors to filter products on your website to find the products they want to buy. At the same time, after users log in, you have a function for them to look at all their past orders and filter them. You may also have retail stores in place, with a feature for customers to find all the shop addresses, and another set of filtering options for them to find the nearby shops. So to implement tracking on the lead generation form and the filtering options, we want to capture all this data, and the simple and direct implementation is to have one eVar or one prop for each piece of this information.
However, if we do it that simply and directly, we have four eVars for the lead generation form, because we have in total four different input fields for the user to fill in, and then eight more for the filtering options. So that is 12 already, just for the lead generation form across countries and the filtering options for three different filters. On the other hand, each eVar can store up to 255 characters. Think about that: if we only store the gender, male as M or female as F, in a single eVar, we are using only one character and wasting 254. And consider that in a multinational company we may have tens or even hundreds of websites, portals, and apps we want to capture, all with different functions and different data, so maintaining a standardized set of custom dimensions across all markets and all functions may be a little difficult to achieve.

So what we are going to do is combine similar data, or the data in one single function, into one single custom dimension, and of course we need some format. Here is an example using name-value pairs. For the lead generation form, you see I have names and values in pairs: age equals the value the user selected in the lead gen form, then marital status, then shopping preference, with a vertical bar separating the pairs. Same for the filtering options: in one single custom dimension, we have multiple data points. We can keep only the data the user actually entered; you see on top, for the lead gen form, if the user entered only three fields, why bother sending all those empty ones when we can send just the three with data? Same for the filtering. But when we do it this way, we need to be aware that we still have the 255-character limit on this custom dimension, and the equals signs and vertical bars separating the name-value pairs count toward it too.

So now we are putting all this data into one single custom dimension, and we save a lot of eVars and props in Adobe Analytics, but our users may not be very happy, because when they drag this combined dimension into a report, it will be very difficult for them, say, to report on the lead gen form and look only at the gender male, or at some certain income level. So we also break the data out of this combined dimension using classifications. The good thing about classifications is that they are not counted against the eVar and prop limits, and so far I have not seen any hard limit on how many classifications we can have on one single custom dimension. The next thing we need to do is break the data into those classifications. Of course, we are not going to use the importer to upload the data manually; we use the Classification Rule Builder with regular expressions to extract the data from this combined custom dimension. So at the end, when end users do their reporting, they have new dimensions they can logically use in their Analytics Workspace. Like here, you see I have classifications, for example for age and gender, all coming from the same custom dimension but broken down into four classifications.
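For reference, a minimal TypeScript sketch of the packing, and of the kind of regular expression the Classification Rule Builder could use to unpack one field; the field names are illustrative.

```typescript
const MAX_LEN = 255; // the custom dimension's character limit

// Pack only the fields the user actually filled in, e.g.
// "age=25|marital=single|pref=electronics"
function packPairs(fields: Record<string, string>): string {
  const packed = Object.entries(fields)
    .filter(([, v]) => v !== "")       // skip empty values
    .map(([k, v]) => `${k}=${v}`)
    .join("|");                        // vertical-bar separator
  // The '=' and '|' characters count toward the limit too; a real
  // implementation would drop whole pairs rather than truncate mid-pair.
  return packed.length <= MAX_LEN ? packed : packed.slice(0, MAX_LEN);
}

// The Rule Builder would use a regex like this to pull one field back
// out as its own classification column (name must be regex-safe).
function extractField(packed: string, name: string): string | null {
  const m = packed.match(new RegExp(`(?:^|\\|)${name}=([^|]*)`));
  return m ? m[1] : null;
}

// packPairs({ age: "25", marital: "single", pref: "" })
//   -> "age=25|marital=single"
// extractField("age=25|marital=single", "marital") -> "single"
```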
So users in Analytics Workspace will actually see these four classifications as four individual dimensions, and they can use them as if they were ordinary custom dimensions, dragging them in to report on a certain age group or on gender. This helps us reduce a lot of custom dimensions, so we can have one standardized configuration across multiple markets and reuse one single custom dimension for multiple purposes. The same goes for the similar filtering options: we can use one dimension across multiple applications, pages, and locations to capture similar data, while still providing full visibility through the classifications for end users to report on the data as the users filled it in. So those are the two tips I want to share today, and I hope you find them useful and helpful in your future analytics projects. Thank you.

Thank you, Leo. Wow, that was a lot of great information and a lot of good stuff. So now we’re gonna move on to rockstar tip number three. The next person is Adobe Analytics rockstar Scott Meads. He’s a digital insights analyst with Origin Energy. He’s been pushing the limits of Adobe Experience Cloud’s marketing automation capabilities, and he wants to show you how. Welcome, Scott.
Hey guys, my name’s Scott, and I’m gonna run you through a couple of my tips and tricks for Adobe Analytics.
So my first tip is “It’s Classified”: offline data in an online world. I’m gonna show you how you can give insights that shift the dial by using your offline data in conjunction with your online data.
So imagine your organization has launched a new sales form and the marketing team’s running an email campaign to generate leads. It’s looking good. Click-through rates are high, open rates are high, and the form completion rate’s sitting at about 10%. You’re getting ready to send out a report saying the campaign was a massive success, but you don’t have the whole picture.
What actually happened is half of those applications got declined, and each declined application is a cost to the business. So suddenly this campaign’s gone from looking great to costing the business a fair bit of money, and you don’t even realize it.
The challenge of merging offline and online data is felt across industries, from credit card applications that need offline review to retail where customers might buy something online and then return it in store.
To give insights that can shift the dial, we need to see the whole picture of the customer journey from start to finish, because our customers don’t just live online. This is where Classification Importer comes in. It’s an underrated feature of Adobe Analytics that lets you bring in data from any external system and join it with your online data, whether it’s your sales team, your CRM, your call center, wherever you have offline data available.
To give you an idea of how it looks in the background: we have a unique key, such as a transaction ID, an event for that key, and the status, which in this case could be approved, declined, or pending. The first time I set this up, merging the offline data enabled us to find a marketing placement that was costing thousands of dollars a month in declined applications, resulting in a saving not just of that processing cost, but also the ability to redistribute that marketing spend into more effective channels. Let’s take a look at how it looks in Workspace. This is a simple breakdown without classifications, just application completes by marketing channel; you will all have seen this report before. Now let’s add in some classifications.
We can see now application completes broken down by channel, but also by the outcome of that application. And while it looked like Direct was doing well for us, generating a lot of the traffic, in fact, only 60% of those applications become paying customers, with the other 40% having issues or getting rejected.
We can also quickly compare this now across channels.
So with a simple calculated metric, we can look at not just app start to complete by channel, but app complete to successful conversion by channel. And straight away, we can see referring domains is actually doing pretty well for us, but there’s an untapped opportunity in the email space that has both the highest app start to complete rate, as well as the highest app to successful conversion rate.
It’s also interesting to see that despite vanity URLs performing quite well compared to Direct, they perform the worst at actually converting leads into paying customers.
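For reference, the calculated metric behind this comparison is a simple ratio; here is a minimal sketch, with illustrative field names, of the two rates per channel (the example numbers echo the 10% completion and 60% conversion figures from the talk).

```typescript
// The real thing is a calculated metric in Analytics; this just
// illustrates the arithmetic being compared across channels.
interface ChannelStats {
  appStarts: number;
  appCompletes: number;
  successfulConversions: number; // from the uploaded offline outcomes
}

function rates(s: ChannelStats) {
  return {
    startToComplete: s.appCompletes / s.appStarts,
    completeToConversion: s.successfulConversions / s.appCompletes,
  };
}

// rates({ appStarts: 1000, appCompletes: 100, successfulConversions: 60 })
//   -> { startToComplete: 0.1, completeToConversion: 0.6 }
```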
So now we understand the benefit, let’s talk a little bit about how we set this up. Before you get started, you need a couple of things: firstly, an eVar key, something unique like a transaction ID or customer ID, and some CRM data in a CSV file you wanna upload. You can also do this through FTP or API if you wanna set up a set-and-forget solution.
So in terms of setting this up, you jump into your admin tools, choose the eVar you wanna classify against, in this case transaction ID, and add the columns you want to match from your offline data; it can be one or it can be several. Analytics will then generate an Excel file for you to copy and paste your offline data into. You simply fill that out, upload it back in, and you’re done: those classifications will appear in Workspace.
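For a set-and-forget pipeline, the upload file is essentially the key column plus one column per classification; here is a minimal sketch of generating those rows from CRM records. The record shape and column names are assumptions, and the actual template comes from Analytics itself.

```typescript
interface CrmOutcome {
  transactionId: string; // the eVar key captured online
  outcome: "Approved" | "Declined" | "Pending";
  declineReason?: string;
}

// Build tab-separated rows: Key, then one column per classification.
function toClassificationRows(records: CrmOutcome[]): string {
  const header = ["Key", "Outcome", "Decline Reason"].join("\t");
  const rows = records.map((r) =>
    [r.transactionId, r.outcome, r.declineReason ?? ""].join("\t")
  );
  return [header, ...rows].join("\n");
}
```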
Classifications provide answers to questions you couldn’t otherwise answer confidently. They let you answer things like which channel generates the most leads, and not just the most leads, but the most high-value leads.
And it’s not limited to one vertical; almost every vertical has a use case for classifications. And you can also go backwards: for example, you could upload a key against your customers that says high, medium, or low lifetime value, and start to segment and understand the online behavior of those different customers, treat them differently onsite, or understand which customers look like they might become high-value customers.
Online data is fantastic, but it’s not everything, and our customers don’t live exclusively online. Classifications is a simple way to give yourself a 360 degree view of what your customers are doing, and drive impactful insights.
So tip two is product string: more than just for products. The product string is a feature usually used for e-commerce implementations and tracking buying behavior online, but I’m gonna show you how you can get inside your customers’ heads by understanding what they see when they’re interacting with your site and with your brand.
Tracking page content is difficult, and I’m not talking about tracking the page views, but tracking what the customer actually saw on a dynamically generated page.
Like many companies, we have a dashboard where customers can see each of their products, as well as personalized content generated from Adobe Target or from our CRM.
And the question is, how do you capture and report on which content which customers see at which point in time? Because this content is unique to that customer, and it could change every time they come back to the website.
To understand a bit more about the problem, let’s dive into how page views work for a quick refresher. A page view has two main parts: events, which are things that are happening, for example the page view or the customer starting a form, and eVars, which are the context about those things, such as which page they viewed or which form they started. Normally this works nicely for a general page, but it gets messy with complex pages, where you have multiple content blocks, each with their own events and their own context, or eVars, giving information about those events.
We can’t tell from this which content goes with which block. For example, is the first block the 30-day bill due or the 7-day bill due? Is it electricity or gas? Does that product apply to both? We can’t tell.
But when you add in some product string, a bit of magic happens. Now within a single page view, you can have multiple content blocks, each with their own events and context that will group together nicely in a way that makes sense and enables you to report on them as if they were individual pages.
Let’s take a look at how it looks in Workspace. Page views don’t say much by themselves; this is just a basic report without the product string, and all we can see is that we had X thousand page views on the page. But when we add in the product string, suddenly we can go much, much deeper. We see, for each card or content block on the page, an impression and a click. And we can even go down to the next level and ask: of those clicks, which CTA was clicked, or even which CTA did that customer see? This way you can split test different CTAs, either through Target or another means, and understand which customers are seeing them and which are actually clicking through.
So let’s talk a bit about the how. The key step is defining what you actually wanna capture: what are your blocks, and what do you wanna know about those blocks? For example, we could give each block a name, an ID, and an impression event when it’s loaded. The product string allows you to track an impression against each block and what that block is, so it could be which product they’re seeing, or which flight the customer is booked on at that time, as well as the CTA shown there. Then you can look at your final reporting and say: this customer saw these blocks, they saw these CTAs within these blocks, and this is how they interacted with those blocks.
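For reference, a minimal sketch of assembling such a product string for content blocks, using the standard `category;name;quantity;price;events;merchandising-eVars` entry format; the specific event and eVar numbers are illustrative assumptions, not from the talk.

```typescript
interface ContentBlock {
  id: string;   // e.g. "dashboard-card-3"
  name: string; // e.g. "electricity-bill-due"
  cta: string;  // the CTA shown in the block
}

// One page view, many blocks: each block becomes one "product" entry,
// joined by commas. The impression event (event10 here) must also be
// listed in the hit's events variable; multiple merchandising eVars
// are separated by "|". Values must avoid the ; , | delimiters.
function buildProductsString(blocks: ContentBlock[]): string {
  return blocks
    .map((b) => `;${b.name};;;event10=1;eVar30=${b.id}|eVar31=${b.cta}`)
    .join(",");
}

// buildProductsString([{ id: "c1", name: "bill-due", cta: "Pay now" }])
//   -> ";bill-due;;;event10=1;eVar30=c1|eVar31=Pay now"
```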
So to summarize, when you wanna capture details of different page sections and how users are interacting, think about product string.
So many valuable questions you can use it to answer are things like: how does the content shown influence customer engagement? Not just based on which pages they visit, but which content they saw on that page that then drives their future behavior. It’s a really good way to understand how different content is performing. For example, if you have static content, a targeted experience, and potentially a CRM-targeted experience as well, how does each of those influence behavior? Which ones perform best, both on that page and, via segments, further down the funnel? So the product string is a really powerful way to get inside the heads of your customers and understand what they’re seeing and how they’re interacting with your brand.
Those are my top two tips. Hopefully you enjoyed them and you’re already thinking about how you can implement these back in your own business.

Thank you, Scott, for those great tips. Now we move on to the fourth and final rockstar tip. This one is from our friends at HDFC Bank: Vipul Bilab, who leads the website and chatbot initiatives, and Pratik Bansal, who leads analytics and personalization activities. Together, they’re doing exciting things, and now I’ll hand it over to them to tell you more.
Hello everyone. My name is Pratik Bansal, and I look after digital analytics and personalization here at HDFC Bank. Today, we are going to talk about the secret sauce for running an omnichannel campaign. I have with me my colleague, Vipul. Vipul, over to you.

Hi, I’m Vipul Bilab. I’m the product head for website and chatbot at HDFC Bank. HDFC Bank is one of India’s largest private sector banks. With over 5,600 branches spread across 2,900 cities and towns, we serve about 60 million retail customers. In addition to this, our digital platforms get about 75 million visits a month on average.

So let me start with the business problem we tried to solve using Adobe Analytics. While we run campaigns all year long, our biggest campaign, called Festive Treats, is usually executed around October and November, the peak Diwali period, when customers are looking to buy new things. This is a high-intensity, 360-degree campaign with huge spends to drive awareness, consideration, and conversion across various product lines for a period of about 45 days, with all traffic driven to a campaign microsite that has been extensively personalized using customer propensity and intent. While campaign execution happens across channels, it becomes imperative to have a unified view of how each campaign is contributing to traffic, engagement, conversion, and finally ROI, across channels, campaigns, products, and so on.

To give you a view of the complexity: there are more than 10 delivery channels, each needing to keep its pulse on the engagement levels of the customers it is bringing in, and to keep track of how it is driving conversions and, where it isn’t driving conversions, how it is assisting or influencing them at the end of it all. These channels need to regularly check, and scale up and down their efforts, according to the conversions being delivered across the different segments and campaigns. Next, we have the creative team, which also needs a view of the performance of their creatives across different product lines, with an overlay of customer segments, channels, and the various creative formats being used. They also need to do this evaluation of performance on a day-to-day basis across the different journey stages the customer is in, which enables them to get actionable insights and apply those learnings across channels, segments, and journey stages to continuously tweak the communication on an ongoing basis. We also have product teams, who are more concerned about leads and conversions for their respective products and, in terms of conversions, how many are happening from which channels. This gives them a high-level view of how their products and campaigns are performing and, accordingly, how they can tweak the various product offerings to help push conversions. In the end, there is also the data analytics team, who want to understand how their predictive models are performing across various product lines, and how they can ingest these customer intents back into their models so as to optimize them further and deliver better results.

With so many stakeholders at play, it was very important to have a simple yet unified view of the complete campaign, which can be broken down as per the requirements of the various channel managers.
The view should be simple enough for any CXO or senior executive to look at and get a pulse of what’s happening in the campaign, and also allow a fair bit of complexity for each of the channel managers to dig deeper into. So how did we really solve this? Let me ask my colleague, Prateek, to take you through it. Over to you, Prateek.

Thanks, Vipul. Well, the problem statement was loud and clear, along with the complexities that came with it. Let me talk to you about three very simple things that we did before rolling out the campaign.

First is data accuracy. Accuracy of data in Adobe Analytics is paramount when we’re looking to generate reports and create actionable insights. Before we started this journey, the stakeholders Vipul mentioned were either tagging their campaigns wrongly or doing no tracking at all. This was resulting in a lot of data loss and required a lot of manual intervention to arrive at results. With the end objective in mind, we sat down with each and every stakeholder, be it a channel manager or a brand manager, to understand the business-critical data points required for their decision making. Once we had all the data points in hand, we worked with the IT and technology teams to get Adobe Analytics implemented throughout the journey. Along with the implementation, another key task was to create a standardized campaign tagging nomenclature, which would capture all the external parameters, for example channel, creative, segment, journey stage, et cetera. And once the nomenclature was prepared, it was ensured that all the campaign managers followed it religiously. The end result was a simple yet detailed view in Workspace, which could be used to identify every channel, every creative, and every segment, along with the various data points available as part of the application journeys, and which could be broken down further with the various data points and metrics available in Adobe Analytics.
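A standardized nomenclature like this is easiest to keep “religiously followed” when it can be generated and validated by code; here is a minimal sketch, with an illustrative delimiter and field order rather than HDFC Bank’s actual scheme.

```typescript
interface CampaignTag {
  channel: string;      // e.g. "email", "display", "sms"
  campaign: string;     // e.g. "festive-treats"
  creative: string;     // e.g. "card-offer-banner-v2"
  segment: string;      // e.g. "high-propensity-loans"
  journeyStage: string; // e.g. "consideration"
}

// Build the tracking code appended to landing URLs, e.g. ?cid=...
function buildTrackingCode(t: CampaignTag): string {
  return [t.channel, t.campaign, t.creative, t.segment, t.journeyStage].join(":");
}

// Parse it back, so every campaign manager's tags can be validated
// against the agreed nomenclature before a campaign goes live.
function parseTrackingCode(code: string): CampaignTag | null {
  const parts = code.split(":");
  if (parts.length !== 5 || parts.some((p) => p === "")) {
    return null; // malformed tag: reject rather than lose data
  }
  const [channel, campaign, creative, segment, journeyStage] = parts;
  return { channel, campaign, creative, segment, journeyStage };
}
```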
Second was the identification of customer journeys. In a banking scenario, where the lead time of a customer conversion varies from 10 to 60 days depending on the product, it is very important to understand the customer journeys and how customers are moving from one stage to another, for example from exploration to consideration to application submission. To understand this, we took the help of the flow diagram in Adobe Analytics Workspace to map the customer journeys in terms of the different content curated for each of the stages, for example top, middle, or bottom of the funnel. Another objective fulfilled here was understanding how the different channels push customers across these stages. The end objective was very clear: to understand customer behavior across these stages, which allowed us to create hypotheses for sending targeted communication with the objective of pushing the customer further down the funnel towards lead submission. It also helped us optimize the performance of our channels, lowering cost and eventually improving customer experience by not sending irrelevant communication, which lowered the unsubscribe rates.

Third, and the most important tip I would like to talk about, is mapping the offline conversions. While more and more application journeys are getting digitized as we speak, there are many products which have an offline leg attached post lead submission. In such cases, digital analytics would give us visibility from campaign to lead submission only, and we would not be able to calculate the ROI of these campaigns. To solve this, we used the Data Insertion API in Adobe Analytics to upload the conversion data from our CRM into Adobe Analytics. Along with the conversion event, we also uploaded a lot of customer data points, for example customer type, business value, call center status, and application type. The objective of uploading all this data was to allow us to create more actionable insights from the complete dataset.
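For reference, a minimal sketch of such an upload. The XML tag names follow Adobe’s classic 1.4 Data Insertion API examples, but the endpoint namespace, report suite ID, and the eVar/event numbers are placeholders to verify against your own configuration and current documentation.

```typescript
// Upload one offline conversion for a transaction captured online.
async function uploadOfflineConversion(txnId: string, customerType: string) {
  const xml = `<?xml version="1.0" encoding="UTF-8"?>
<request>
  <sc_xml_ver>1.0</sc_xml_ver>              <!-- version tag per the classic examples -->
  <reportSuiteID>examplersid</reportSuiteID><!-- placeholder report suite -->
  <visitorID>${txnId}</visitorID>
  <pageURL>offline://crm/conversion</pageURL>
  <events>event20</events>                  <!-- assumed offline-conversion event -->
  <evar10>${txnId}</evar10>                 <!-- transaction key captured online -->
  <evar11>${customerType}</evar11>          <!-- CRM data point, e.g. "existing" -->
</request>`;

  // Data insertion endpoint; replace "example" with your tracking server.
  const res = await fetch("https://example.sc.omtrdc.net/b/ss//6", {
    method: "POST",
    headers: { "Content-Type": "text/xml" },
    body: xml,
  });
  return res.ok;
}
```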
Once the data was uploaded, it completed our visibility of the campaigns from landing to lead submission to business generation. We could measure the impact of every channel, creative, and creative adaptation for that matter, and calculate the ROI while breaking the data down with the various cuts available in Workspace, making Adobe Analytics a single source of truth for all our campaign performance. So these are the three things we did as part of setting up the whole omnichannel campaign. We had the ingredients ready, and it was time to start rolling out the campaign. Let’s hear from Vipul how we went about it. Over to you, Vipul.

Thanks, Prateek. So once the campaign started, it was time to start looking at reports and generating actionable insights using the data captured. Let me give two examples of how different out-of-the-box features helped us. The first one is channel attribution. Besides traditional last-touch attribution, we used the other out-of-the-box and custom attribution models available to quantify the impact of each and every channel on the customer journey. This helped us prepare the right mix of media spends to maximize conversions. Within each channel, we understood which campaigns, creatives, and nudges were working better for which products or demographics. This also helped us identify which channels are better at creating awareness and consideration versus the channels driving more conversions, which would eventually be used to tweak the next round of campaigns. Remarketing, which constitutes a major part of our campaigns, was completely redesigned using the insights from here. An important thing to note is that attribution can be run across multiple dimensions and across different stages of the customer journey, using both digital and offline data.

Next, we delved deeper into assisted conversions, that is, understanding how the channels complement each other to help the final conversion: what is the channel overlap for each conversion, and what is the impact of each channel within that? It essentially allowed us to give due importance to each channel within the journey and to create multiple customized channel-wise communication plans for each product separately, taking into account the top paths users take before converting. We also used the time-lag report to understand the time taken for conversion, which we then mapped against the lead time of the product to understand customer behavior and pain points in a better sense, and to improve the product offering along the way. While the three tips allowed us to set the basics right, these features helped generate clear insights for the campaign, which would then be used recursively to optimize the next round, and so on and so forth. There were many key metrics we were able to improve using these insights; let me tell you about a few of them.

In terms of business impact, some of the bigger outcomes we were able to create were getting 4,000-plus cohorts as part of the campaign and using them on an ongoing basis. In terms of traffic and engagement, we were able to generate 15% more traffic while keeping the bounce rate the same as the previous year, and we were able to engage customers about 12% more than the previous year. Besides traffic and engagement, in terms of pure business numbers, we were able to maintain our loan numbers even though we were spending a lot less than the previous year, and we also increased our card spends by over 10%. So while we were able to achieve our business targets and also look at traffic and engagement in a different way altogether, let me hand it over to Prateek for a few parting words on how we were able to leverage this over a period of time. Over to you, Prateek.

Thanks, Vipul. So you can see how we were able to successfully manage and deliver an omnichannel campaign with the help of Adobe Analytics, all the while sitting at home because of the ongoing pandemic. The key was to bring all these teams together on a common platform that empowers them to have data-driven conversations, so much so that every channel and campaign manager’s day started with looking at the previous day’s report and optimizing the campaign on a regular basis. The objective shifted from looking only at the end conversion to looking at the different metrics available in their analytics dashboards. Even after the Festive Treats campaign ended, this data-driven approach was being used in all the campaigns rolled out on a daily basis. And that is how Adobe Analytics transformed our journey from post-facto decision-making to real-time, data-driven decision-making for all our campaigns. That sums up our presentation. We have all learned a great deal from this analytics journey, and we hope we were able to share some of our learnings with you. That’s all from our side. Thank you.