AEM GEMs - Mastering Cache Efficiency for Optimal Page Performance

In a world where speed and efficiency are paramount, mastering the art of cache optimization is crucial for delivering a superior user experience. This session is designed to provide you with actionable strategies and techniques to enhance your cache efficiency. Join us to explore practical methods for implementing effective caching strategies, optimizing cache hit ratios, and fine-tuning your cache configurations.

Transcript

All right, Jaya, feel free to start. Yeah, just a second.

So hello, everyone. Welcome to our session. So in today’s digital world, speed and performance are everything. This session will equip you with practical strategies to optimize your caching, improve cache hit ratios, and fine-tune configurations for a faster, more efficient experience.

So here is the agenda. We will be covering the important topics listed here along with demos. We also have an exclusive Q&A slot at the end. And we will be monitoring the Teams chat during the session.

So what you will learn in this session, we will be mainly covering concepts around cache hit ratio and cache coverage metrics, how to interpret and use them to measure cache performance, what do you need for efficient CDN log analysis, and what are the priorities when caching. We will later dive into dispatcher and CDN configurations followed by BYO CDN specifics.

So today's session's focus is on AEM as a Cloud Service, but many approaches are also applicable for AMS and on-premise deployments. I also want to mention that in the future, we expect this to be covered by Sites Optimizer. But for the time being, we need to handle it ourselves efficiently.

So as Gaurav mentioned at the beginning, today's presenters: it's me, Chaiyashwati Ndrakumar, a senior manager in the AEM Cloud Service group in the JAPAC region; Jörg Hoh, a lead site reliability engineer based out of India; and Shubham Srivastava, a cloud subject matter expert in the JAPAC region.

So with this, I'm now handing it over to Jörg to kick us off with the caching concepts.

Thank you, Chaiyashwati. Thank you.

Let me share the screen. So caching concepts.

I think most of you are already familiar with the details of how caching works.

But I think it’s useful to come back to a few very basic concepts when it comes to caching.

So here I have a cache and an origin. The cache is, of course, the cache, and the origin is the system which the cache is shielding. In this session, when we mention the cache, we mostly think of the CDN, with the origin being your AEM system. But you can also reinterpret it in a different way: the dispatcher is a cache and the origin is your AEM publish or author instance. So the concept is a little bit more generic, but over the course of this session, when we talk about the cache, we will normally mean the CDN.

OK, so the first thing is: when we have a cache, we have, of course, the case of a cache hit. That means someone sends a request to the cache, the response is in the cache, and it's returned immediately. The origin does not know anything about it and is not loaded with this request. That's the main case, the use case we want to have. Mostly, everything should be cached and served out of the cache.

But of course, we also have cache misses. In the cache miss case, the request comes to the CDN, and the CDN finds that the requested file is either not present in the CDN cache or is stale and cannot be reused. Either way, the CDN decides it needs to go back to the origin and request the content again.

The most important thing here is that a cache miss could also have been a cache hit under slightly different circumstances. For example, if the file had not been stale on the CDN, it would have been a cache hit. But here, because of circumstances, it's a miss.

And thirdly, we have to think about a third aspect of caching. This is a cache pass. We call it a pass because it's passing the cache: the CDN cannot handle this request; it's not allowed to. Therefore, a pass will never be a cache hit. That's the big difference between a cache miss and a pass.

Okay, so these are the three types: hit, miss, and pass. So let's get to the next thing.

Now, when we talk about metrics, we often talk about the cache hit ratio. That means: how many requests are served from the cache, with the baseline normally being everything you can cache. From a mathematical point of view, it is the number of hits divided by the number of hits plus misses.

And then we have another aspect of it, the cache coverage, which is hits plus misses divided by hits plus misses plus passes. So what does it mean? Is this really helpful? Let's see. I have here an example.
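To make the two formulas concrete, here is a minimal sketch that computes both metrics; the function and variable names are my own, not from the session:

```python
def cache_metrics(hits: int, misses: int, passes: int) -> tuple[float, float]:
    """Compute cache hit ratio and cache coverage as fractions (0.0-1.0).

    Hit ratio ignores passes: hits / (hits + misses).
    Coverage measures how much of *all* traffic is cacheable:
    (hits + misses) / (hits + misses + passes).
    """
    cacheable = hits + misses
    total = cacheable + passes
    hit_ratio = hits / cacheable if cacheable else 0.0
    coverage = cacheable / total if total else 0.0
    return hit_ratio, coverage

# The session's first example: 100 hits, 0 misses, 9,900 passes.
ratio, coverage = cache_metrics(hits=100, misses=0, passes=9_900)
print(f"hit ratio {ratio:.0%}, coverage {coverage:.0%}")  # hit ratio 100%, coverage 1%
```

This makes the session's point visible at a glance: a perfect hit ratio can coexist with near-zero coverage.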

For example, we have 100 hits and 9,900 passes.

So what does it mean? Quite simply, it means we have a cache hit ratio of 100%, which sounds great.

That's because everything which could have been cached was cached: we don't have any misses, only hits. But we have many, many more passes. In the end, the 100% cache hit ratio does not mean much.

The cache hit ratio is good, but the cache coverage is very bad.

It means even in this case, I would say we have effectively no caching. This 1% coverage doesn't matter that much.

That means the cache hit ratio per se is not the most important metric. You always need to look at the cache coverage as well.

You need to look at the totality of requests.

And that means the cache coverage is definitely as important as the cache hit ratio.

OK, then let’s talk about another example or another aspect of it, meaning what problems can a cache solve? It can solve a lot of issues, but it cannot solve all of them.

An example here.

For example, we have this good case. We have 9,900 hits and 100 misses and 0 passes.

I think that’s quite excellent, right? You have 99% cache hit ratio and 100% cache coverage. That means we can cache everything. Every request could be cached under some happy circumstances.

So technically, everyone would say, yes, this is great.

But, well, what if we annotate it like this: let's say every hit takes 10 milliseconds and every miss takes 30 seconds.

So it means every cache miss has an impact on the end user which is requesting this.

And this problem cannot be hidden by the cache.

So if you translate this into AEM terms, if you have a great cache hit ratio and great cache coverage, but your AEM is very, very slow to render every cache miss, then you still have a problem. You have a problem because every cache miss will be directly affecting your end users. And I think 30 seconds on the critical path to render a page, that’s definitely a problem.

And also, independently of the direct end-user impact, a request taking 30 seconds to render is quite a load on your AEM publish instances.

So these metrics are important, and it's really important to keep them at high values: a high cache hit ratio and a high cache coverage.

Technically, you still have the problem on the cache miss. But if your cache hit ratio were lower, you would have many more cache misses; and if your cache coverage were lower, you would have passes, and there is a chance that these pass requests are also slow. That means a high cache hit ratio and a high cache coverage are necessary.

And then, of course, you need to spend a little bit of time making sure your cache misses are good, meaning that they do not take 30 seconds.

So yes: mind your cache misses.

Okay, now let’s switch over to the log analysis part. Now that we have the theoretical background, we know about cache misses and cache coverage.

Let’s see what you can do within AEM as a cloud service to do a lot of that stuff on your own.

In AEM as a cloud service, you need two ingredients for this process. You need to have the data and you need to have the right tools.

The data is there. You can download the CDN logs, which are essential for this. You can download them via Cloud Manager or via the Adobe I/O (aio) CLI, or you can set up log forwarding.

Log forwarding was traditionally available only for Splunk, but recently we enabled log forwarding to a number of additional targets, including an ELK stack and also plain Blob storage, so the logs are streamed there as well.

We have dashboards created and made publicly available.

They are available in two flavors.

One flavor is the Splunk dashboard, and I'm personally very familiar with Splunk and a big fan of it, but for this demo I set up the other option: the ELK stack (Elasticsearch, Logstash, and Kibana), which is provided by Adobe in a GitHub repository, so you can simply download it. So let's just go over to the demo part.

Here, where we have the ELK. This is the documentation from this GitHub repository for the ELK stack. As I said, we have the same for Splunk, but I’m focusing here on the ELK stack.

Setup is described here. It’s straightforward, and when I set it up initially, I think what took the longest time was downloading the logs and preparing that stuff, but everything else is straightforward.

Now let me just switch to the setup I have here. Normally, you would just start with the dashboards. We have provided three sample dashboards, which are very basic; they just show what data is there and what you can do with it by example. Normally, I would assume you would create more tailored dashboards for your use case.

I ingested the logs of the WKND site (wknd.site) here. That means we have a little bit of data, and the paths might feel familiar. It's not very complex and there is not very much traffic, so basically there is no rocket science here, nothing spectacular to expect.

It means also it’s something we can just simply go through without it taking too much load or too much stuff to process on the background. So let us first go through the WAF dashboard.

Basically, it's displaying the details of what the CDN did with regard to the WAF.

That means we have here, in this one day, 70,000 requests, of which 715 are blocked. We can go through here and do the normal things: go into the details, see the requests over time, and then we have the WAF flags. That means every request is flagged with something.

These various flags describe how the WAF categorized a request. The majority are flagged as datacenter traffic; those were considered valid and okay and were therefore passed on. The rest have been rejected for some reason. As you can see, we have a few with no user agent (no UA), which might have been blocked, and a few flagged as private files, SANS, or abnormal, and maybe even some attempted backdoor attacks. This is for everything where we have the WAF feature enabled.

Here triggered rules.

It’s pretty much something similar. The flag here we can see, we can go into the details and we can look into each individual request.

Then let's go into the next dashboard, which is the CDN traffic dashboard. This is a little bit more interesting, especially if you're looking into cache optimization. You see the number of requests, the error rate, the non-cached requests, and the origin request rate, that means the requests which have been passed on to origin.

Blocked request, triggered rules, RPS request methods, that means various breakdowns.

I think the most interesting dashboard here is really the cache ratio dashboard.

Here you see: okay, WKND has around 84% hits. It has a lot of pass requests which constantly go through, and that's probably something to investigate. We have a few misses and even a few errors.

And the 715: you might recognize that number. This is the number of rejected requests from above. That means an error is not necessarily an internal server error; it can also be a request which has been rejected by the WAF.

Of course, here are various ways to display it. Here, how is that going for the HTML requests, for images, and then we have here top misses, top path URLs.

For JavaScript and CSS we of course have a good, high cache hit ratio, and we have a few misses.

Well, even 22 passes which is interesting.

When we come to the JSON request analysis, we can also see two-thirds passes, which is not good; only one-third are hits. That's definitely something we want to look at. When we go into the top path URLs, we can see: okay, here it is, the CSRF token.json is hit very frequently.

Then when we go here also, we can see, okay, here. Of course, no cache header, no cache header. That’s kind of expected. That will always be a cache pass.

This one, /home/users: my guess would be this is the anonymous user being read, for whatever reason. Anyway, let's look into that a little bit deeper. We can go into the Data Explorer and do whatever we want for ad hoc analysis. Say we want to add a filter and remove this CSRF request, /libs/granite/csrf/token.json.

Yes, here added to the filter and here we are. That means now we could even drill down further and try to remove these. Let’s see what’s left.

Technically, this is a very simple approach. I'm not an expert in the ELK stack; I'm a fan and user of Splunk. That means I cannot give you much help with these dashboards and how to build them, but I think much more is possible than the simple drill-down shown here.

Okay, so let’s finish with this demo.

Let’s continue with the next step.

The next thing is, as I mentioned here, this is all Docker.

This is great for a demo and for ad hoc analysis. But please, if you set this up on a more long-term and permanent basis, set it up properly. This here is a demo.

The next thing is these dashboards, you should use them as starting points.

These are very, very simple things. I think the first thing you would want to do is you want to create per hostname or per tenant, per domain, you want to create dedicated dashboards. Then, of course, you might need to create more specifics because the drill down here was just per JSON. You might have different versions of JSON. You might have no JSON at all.

For that, I would definitely suggest you to create more specific dashboards. But I think these are great starting points, especially if you’re not familiar with the data feed and the fields we have, it’s definitely a good way to actually start and to dig into the data which the CDN logs provide to you.

Having said that, I would like to hand over to Shubham, who will tell you more about how you can configure your dispatcher and your CDN to make all of that work.

Hello, everyone. Now we are going to discuss the setup which can improve your application caching. For caching, you need to plan for all these important components, whether it's images, clientlibs, core component images, or HTML pages.

To implement this, you may have to tweak your dispatcher configuration and set proper cache control header. We will have a quick walkthrough of the configuration later in this session.

As per best practice, it's recommended to cache assets and clientlibs for a longer period, and HTML pages for at least five minutes, if not more. Moving to the next slide.

Here is a quick look at how to set up caching based on the Cache-Control header. First, we have max-age, which specifies for how long the resource is to be cached at the browser, CDN, or any proxy.

Second, we have s-maxage, which overrides max-age at the CDN; the CDN will cache the resource for the value specified with s-maxage.

Then we have stale-while-revalidate, which means that while the CDN is revalidating the resource at the backend, checking whether it has been updated or not, the CDN will keep serving the stale resource.

And then there is stale-if-error, which means stale content will be served in case there are errors at the origin, maybe due to connectivity or any other issue.

Then we have Surrogate-Control, which is Fastly-proprietary and mostly used by customers who bring their own CDN on top of our Fastly and do not want to cache anything at Fastly. So let's say we have max-age as 300, s-maxage as 600, and Surrogate-Control as 0. It means the resource will be cached in the browser for five minutes and at any intermediate CDN for 10 minutes, and since we specified Surrogate-Control as 0, at the Cloud Service Fastly it will be cached for zero seconds, which means it will not be cached.
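As a concrete sketch, the combination just described would appear in the response headers like this (values taken from the example):

```text
Cache-Control: max-age=300, s-maxage=600
Surrogate-Control: max-age=0
```

The browser honors max-age (five minutes), any shared cache in between honors s-maxage (ten minutes), and Fastly itself honors Surrogate-Control (no caching).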

Moving to the next slide. Here we are going to discuss how we can set up these Cache-Control headers using <Directory> or <LocationMatch> tags. You can specify the caching for assets and core images.

Here you can see that we are caching immutable URLs for the core image component for 30 days.

We use max-age with background refresh via stale-while-revalidate to avoid misses at the CDN. Also, you can see that we are able to use a regular expression to match various file extensions. You can add a missing relevant extension, or a missing path in case your assets lie somewhere other than the content paths shown.
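A minimal sketch of such a rule in a dispatcher vhost file; the path regex and the 30-day/12-hour values are illustrative and should be adapted to where your assets actually live:

```apacheconf
<LocationMatch "^/content/.*\.(jpe?g|png|gif|webp|svg)$">
   # 30 days at browser/CDN, with a 12-hour background-refresh window
   # after expiry so clients rarely see a full CDN miss.
   Header set Cache-Control "max-age=2592000,stale-while-revalidate=43200" "expr=%{REQUEST_STATUS} < 400"
</LocationMatch>
```

The trailing `"expr=%{REQUEST_STATUS} < 400"` condition ensures only successful responses receive the long TTL, so error pages are not cached for 30 days.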

Moving to the next slide.

Here you can see the use of max-age and the Surrogate-Control header. As an example, we are setting up browser caching for five minutes and CDN caching for 12 hours.

HTML caching actually depends on the use case. Maybe a business user wants HTML changes to be visible as soon as possible after replication. So you may have to decide the timing for this, but the recommendation is to cache it for at least five minutes.
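A sketch of the HTML rule described above (five minutes in the browser via max-age, 12 hours at the CDN via s-maxage; both values are illustrative and should be tuned to your publishing cadence):

```apacheconf
<LocationMatch "\.html$">
   # Short browser TTL keeps editorial changes visible quickly;
   # the longer s-maxage lets the CDN absorb most of the traffic.
   Header set Cache-Control "max-age=300,s-maxage=43200" "expr=%{REQUEST_STATUS} < 400"
</LocationMatch>
```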

Moving to the next slide.

As we are all aware, specifically in AEM as a Cloud Service we are using versioned clientlibs, which means with every code deployment a new version gets created for the clientlibs. Hence, these can be cached for a longer period. For offerings other than Cloud Service, it is highly recommended to use versioned clientlibs as well.

Moving to the next slide. In this section, before jumping to the demo, we will see how and where these headers need to be set. First, we need to look at the dispatcher module in the code base. Then we need to check the available_vhosts folder in the conf.d directory.

Then there are a few immutable files which cannot be changed or deleted. You can simply copy the default one and create a custom file. And then, using <LocationMatch> sections as discussed in the previous slides, you can specify the caching.

Then coming to the CDN configuration. With the latest releases, we have exposed functionality for our Cloud Service customers to configure the Fastly CDN and set up traffic filter rules.

You can modify requests and responses, apply 301 and 302 client-side redirects, and declare origin selectors to reverse-proxy requests to non-AEM backends.

Using traffic filter rules is highly recommended to protect the edge as well as to limit the traffic reaching the origin. It can also help in preventing DoS and DDoS attacks.

Using traffic filter rules, you can configure the CDN to strip off campaign or other unwanted query params before they reach the origin. In this configuration, we are displaying an example of how you can unset query parameters before the request reaches the origin.

This is the format for the YAML configuration. It needs to be deployed using a config pipeline in Cloud Manager.

Here is the syntax. Kind should be CDN, version should be 1. And then we define the environment types, that is, whether you want this configuration pushed to dev, stage, or prod. Currently, as you see, we are pushing it to all three environments.

Then we are unsetting all query parameters except those needed. So if you see here, the property tier is publish. What are we doing? We are unsetting everything other than these query parameters, so search and campaignID will be passed to the origin; all other query parameters will be stripped off.
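The rule just described could look roughly like this; the exact property and action names should be verified against the AEM as a Cloud Service CDN configuration documentation, so treat this as a sketch rather than copy-paste syntax:

```yaml
kind: "CDN"
version: "1"
metadata:
  envTypes: ["dev", "stage", "prod"]
data:
  requestTransformations:
    rules:
      - name: strip-unwanted-query-params
        when: { reqProperty: tier, equals: publish }
        actions:
          # Unset every query parameter except the allow-listed ones,
          # so only "search" and "campaignID" reach the origin.
          - type: unset
            queryParamMatch: "^(?!(search|campaignID)$).*$"
```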

These are the recommended rule sets: rate limit requests to origin at an average of 100 requests per second per IP.

Rate limit requests at an average of 500 rps per IP over a 10-second time window. And if you want to block a few countries, you can set up traffic filter rules using the two-letter country codes to block requests from those countries as well.
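A sketch of what such traffic filter rules might look like in the same YAML file; again, property names and the exact rate-limit schema are assumptions to be checked against the official CDN configuration docs, and "XX"/"YY" are placeholder country codes:

```yaml
kind: "CDN"
version: "1"
metadata:
  envTypes: ["dev", "stage", "prod"]
data:
  trafficFilters:
    rules:
      # ~100 req/s per client IP averaged over a 10 s window;
      # offenders are blocked for 300 s (5 minutes).
      - name: rate-limit-origin-per-ip
        when: { reqProperty: tier, equals: publish }
        rateLimit:
          limit: 100
          window: 10
          penalty: 300
          groupBy:
            - { reqProperty: clientIp }
        action: log   # switch to block once thresholds are validated
      # Block traffic by two-letter country code.
      - name: block-selected-countries
        when: { reqProperty: clientCountry, in: ["XX", "YY"] }
        action: block
```

Starting with `action: log` lets you observe what a rule would catch before enforcing it.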

These are some debugging and best practice guides you can refer to: for creating regular expressions you can use this URL, or to view the DDoS dashboard and see how to set it up, you can click here. We are going to share this deck after the meeting.

Then we have the BYO CDN specifics, and we will discuss how to set this up. There are five configuration steps which need to be done if you are bringing Akamai, CloudFront, or any other CDN on top of our Fastly. Without them, there are chances that your X-Forwarded-Host header is not getting passed, or there may be some issue in your virtual host configuration. So these are the required steps, and we are going to discuss them in detail in our demo.

Then coming back to demo. So we have set up an environment where we are going to show the configuration where it needs to be done.

So I have logged into one of my server.

As we discussed earlier, we have to go inside the conf.d directory.

Inside this we will see two folders, available_vhosts and enabled_vhosts. First we go to available_vhosts. Here there are default out-of-the-box files which, as I mentioned, are immutable; you cannot delete or change them. So what we have done is create a copy of the AEM publish vhost and save it as custom.vhost. Here in the custom vhost I have created the configuration: I have specified the server name and server alias for the domain for which I am configuring this virtual host file.

And at the end I have added these Cache-Control headers using <LocationMatch> sections.

So using these tags I have specified caching for HTML as 200 seconds.

And for my Fastly I am setting it to 3600 seconds. Then for clientlibs I am setting it to 43200, which means 12 hours.

Similarly, for /content/dam I have again set the Cache-Control headers to 12 hours.

The same kind of configuration you can do at your end. And once this is done in available_vhosts, we need to go to enabled_vhosts and create a softlink using the ln -s command.

Once this is done, you will see that a softlink is created here: custom.vhost is now pointing to the file, and that's it. Once this is set, you can see that the Cache-Control headers are being applied.
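The copy-and-enable steps just shown can be sketched end to end; the temp sandbox below stands in for the dispatcher's conf.d directory so the commands are safe to run anywhere, and `default.vhost` is a stand-in name for the out-of-the-box file:

```shell
# Create a sandbox that mimics the dispatcher conf.d layout.
conf_d="$(mktemp -d)/conf.d"
mkdir -p "$conf_d/available_vhosts" "$conf_d/enabled_vhosts"

# 1. Copy the immutable default vhost to a custom, editable file.
touch "$conf_d/available_vhosts/default.vhost"
cp "$conf_d/available_vhosts/default.vhost" "$conf_d/available_vhosts/custom.vhost"

# 2. Enable it with a relative symlink, as on the real server.
( cd "$conf_d/enabled_vhosts" && ln -s ../available_vhosts/custom.vhost custom.vhost )

ls -l "$conf_d/enabled_vhosts"
```

The relative link target matters: it keeps the link valid regardless of where the dispatcher configuration root is mounted.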

If you open this URL.

And if you see this configuration in the network tab.

So here, for HTML, I can see that max-age is 200. If I go to CSS, I can see it's set for 30 days.

And for images you can verify it's set for 12 hours. So these are the configurations being picked up from the dispatcher configuration.

Now setting up the traffic filter rules.

So right now what we have done is we have cloned my git repo.

And to clone your git repo you have to go to your environments at cloud manager.

From here you can access your repo. Just run this command git clone and then username and password.

And here you can see that we have created one config folder.

So you need to create one config folder and inside this config folder there will be a yml file which we need to create.

This is the yml file. Here what I have done.

Same as we discussed in the deck. The environment type I have specified as dev, stage, and prod. These are the traffic filter rules: I am setting up a rate limit, so if I am getting 100 requests per second from a particular IP for 10 continuous seconds, I block those hits for 5 minutes. This penalty you can increase or decrease based on your use case.

Similarly, for specific countries, as mentioned in the deck.

Here we can see that we have blocked a few countries as well, so that any request originating from these countries will be blocked. Right now the action I have configured is log; instead of log, you can allow or block the request.

Then this configuration is for setting or stripping off your query parameters.

And finally, this configuration is required in case you are bringing your own CDN. With the help of this authentication section, we are setting up the X-AEM-Edge-Key. Once everything is set up, we need to deploy this configuration using a config pipeline. So you go to Cloud Manager and, similar to this, create a config pipeline.

Once this is triggered, the relevant configuration which you have made in your cdn.yml will be pushed to that specific Fastly.

So as you can see my configuration that we are setting up this variable.

This variable you need to define at environment level. So if we go inside this environment, this is the configuration.

And let's say for dev I want to use this key. What I will do is enter the value here. The value should come from running the openssl rand command to generate a 32-byte key; just copy the output and paste it here.
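The key generation step is just this one command (hex-encoded here, so 32 random bytes become a 64-character string to paste into the secret environment variable):

```shell
# Generate a 32-byte random key, hex-encoded, for use as the
# CDN edge-authentication key; store it as a secret in Cloud Manager.
openssl rand -hex 32
```

`openssl rand -base64 32` is an alternative if you prefer base64 encoding.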

Select service type as publish and this should be your secret.

Once this is added, so this key will get deployed to fastly. So coming back to my demo. So let’s say we have a browser where we are making a hit for this domain. Then we have our BYO CDN. It can be Akamai or any CDN based on your company’s requirement. Then we have fastly and then we have a dispatcher publish or the origin servers as we call.

So if we are making this hit from browser, it will reach to Akamai or any CDN.

At Akamai, as I mentioned, these five points need to be implemented. So at Akamai you will do this configuration: you will point your CDN to the Adobe CDN ingress. Then you will set up a Host header.

Then SNI needs to be set to the Adobe CDN ingress, that is, this URL.

Then we need to set up the X-Forwarded-Host header. Let's say you are hitting the domain abc.com; then abc.com is your X-Forwarded-Host value, and you need to allow it at your CDN. And then you have to pass the X-AEM-Edge-Key. This key is used in two ways. One is to protect Fastly, so that Fastly will accept traffic only from your own CDN. If someone hits Fastly directly, or from some other CDN provider which is not configured properly, or with some malicious kind of attack, Fastly will block that traffic. Secondly, once this key is validated, Fastly will authenticate the request and add the X-Forwarded-* headers, that is, X-Forwarded-Host and X-Forwarded-For, and send those headers on to the dispatcher/publish here.

Once we receive the request at the dispatcher, based on this X-Forwarded-Host header the correct virtual host gets picked up. And based on that vhost's rewrite rules and <LocationMatch> sections, the Cache-Control headers get applied for that domain, and the request is served accordingly.

So this is the basic layman's architecture of how things reach the origin in the case of BYO CDN.

And coming back to the deck.

The impact of this caching is that we have seen a good improvement in Core Web Vitals. We have seen customers who were running at a 5% cache hit ratio; when we worked closely with them to implement all these best practices and recommendations, they reached a 95% cache hit ratio. And for those customers we have seen an improvement in Core Web Vitals by 10 points. Of course, this Core Web Vitals improvement also depends on the project and the traffic.

Now I am passing the baton back to Chaya to wrap this meeting up. Thank you.

Thank you, Shubham. We have now come to the end of this session. If you still have any questions, feel free to drop them in the Q&A pod. We have also set up an exclusive support group to help our customers improve their caching strategy. You can reach out to us directly by writing to aemcs-optima@adobe.com. One of our engineers will connect with you and help you improve your strategy.

We can now give five minutes for people to drop their questions.

The ones we will not be able to answer in this session will be posted to the contextual thread.

I will re-share the link afterwards in the general chat.

There as well, even post session, you can still post your questions there and we will get back to you with an answer.

The first one is: are there any references for CDN config with Edge Delivery Services? No; for Edge Delivery Services the CDN settings are definitely a little bit different, because you don't have a dispatcher and all that. Honestly, I'm not an Edge Delivery expert, but I think this is handled in a very different way than it is for Cloud Service. And just to mention, Shubham showed the origin selectors; that means we can definitely handle AEM Cloud Service and Edge Delivery Services behind the same setup. But I think the settings themselves are made directly on the Edge Delivery service.

Maybe if the answer was not clear enough, maybe you can just try to be a little bit more specific what you’re interested in and set up a new question.

Yes, or post your additional context to the contextual thread after.

Let's go to the next question: how can we enhance the caching mechanism to better handle sensitive or personalized content? I think personalized content is challenging, because when you really go down to personalization, that means individual content for an individual person, and I don't know if the CDN is the right thing to handle that.

If you're talking more about campaigns, the situation is definitely different. If you just target content to personas or groups, I think this is possible.

Sensitive content is a different thing because that requires authentication.

And I know that there are things in the making, but they are definitely not yet at a point where we can talk about them.

Something is going on, but I don't think it's ready to be discussed yet. But we do plan to have a solution for this as well.

Thank you, Jörg. Then let's go to the next question. Can the <LocationMatch> directives that set Cache-Control be placed in a separate file for reuse across multiple vhosts? Yes, that can be done. We can create a separate file and, using the Include directive, include that file in all the virtual hosts.

Thank you, Shubham. In the case of a custom CDN, will the out-of-the-box Fastly CDN honor any Cache-Control settings defined in a vhost config, or will it just pass them to the custom CDN and act as a pass-through for all requests and responses? OK, so in this case, yes, Fastly will honor them. If we set Cache-Control using max-age or s-maxage, Fastly will respect s-maxage and cache for that particular time. If you want Fastly to act as a pass-through, then, as I mentioned in my deck, using Surrogate-Control you can specify max-age as zero so that Fastly doesn't cache anything.

Thank you.

All right, and another question: what is a hit-pass, or how do we categorize it? Honestly, I don't know. I think it is probably a mix between the two.

I would have to look it up for you, Stephanie. I will add it to the responses.

All right, thank you. Then let's take one more question: how is CDN performance calculated on the Cloud Manager dashboard? Is it hits over hits plus misses plus passes? Chaya, do you know? I'm not very sure, but we will get back to you with the answer. So we'll take that question offline.

Okay. We're at the top of the hour, so we're slowly closing this session. As mentioned, please complete the ending poll to rate this session and to propose future topics. Also, the recording, the questions, and the slides will be shared in the contextual thread. I'm posting it to the general chat just now.

This is the link for the contextual thread. I would like to thank my team for the presentation and demo, and the audience as well; thanks a lot for joining. Apologies for the audio issues we had. Have a great day, and we look forward to seeing you at the next AEM GEMs. Thank you. Bye bye.

Thank you. Bye.

Thank you.
