Latest Adobe Experience Manager Headless Innovations for Developers

The Adobe Experience Manager headless product offering continues to evolve and get better, with APIs and other innovations that give developers more tools to get things done.

In this session, we’ll show how you can use some of these advancements, including how to manage headless content via API and import it from other sources, and how to build advanced GraphQL queries that leverage the CDN for optimal delivery.

Transcript

Good morning, good afternoon, good evening, depending upon where in the world you’re coming from. I think we’re going to go ahead and get started here.

Welcome to Latest AEM Headless Innovations for Developers. My name is Sean Steimer. I’m a senior cloud architect with the AEM engineering team, and I’m presenting here today with Jabran Asghar. Jabran, you want to introduce yourself real fast, then we’ll get going. Yeah, thank you, Sean. Welcome, everyone. I’m Jabran Asghar, senior software engineer at Adobe, member of the AEM engineering team. Today we are very excited to share the latest enhancements with all of you. A quick note: there is a dedicated discussion channel for this session where you can post questions, and our team would be happy to provide answers during the session. So let’s get started. Sean, over to you. Yeah, great. Thank you. And yeah, as Jabran said, please feel free to use the chat and the Q&A pod, and we’ll make sure that we get your questions answered as we go, or at the end if the question calls for a longer answer. So in terms of the agenda, we’re going to start by talking about API-first authoring. One of the common challenges we see customers dealing with is the need to automate content ingestion type operations, and so we’ll cover APIs that are available to help you do this, as well as some tools and techniques that will help make these types of things easier. From there, I’m going to turn it over to Jabran. He’s going to talk about advanced modeling and performant queries. When dealing with a lot of content, performance can easily become a challenge when you’re talking about a headless CMS, and obviously that’s a really important thing. So we’ll talk about key enhancements that Adobe has been working on and also some best practices for how you can start using those enhancements. And from there, we’ll give you next steps. We’re going to talk about how you can start using and taking advantage of these things today, as well as in the future.

So API first authoring. I think that a lot of people, myself included, often when we think of a headless CMS, we think primarily in terms of content delivery, right? So basically, a headless CMS is something where the content is served via APIs, almost always JSON, to some sort of a decoupled application that then takes that content and renders it to the end user in whatever context that is, whether that’s a mobile app or a single page app or some other type of experience.

But I think there’s another aspect to headless CMS that is often overlooked or at least a little bit diminished, and that is the content management aspect. And so in my view, a true headless CMS has these two distinct API services, the content management APIs as well as the content delivery APIs. And so when I talk about API first authoring, what I’m really doing is putting a focus and shining a light on those content management APIs. And there’s a lot of use cases that these come up in.

And I’ll go into a little bit of detail of these use cases in a little bit, but things like upstream systems that are synchronizing content to the CMS, migration tools, developer automation is one that I’ve heard more recently from some customers I’ve talked to, and really any other API client scenario where you want to manage the content headlessly without an author interacting with a user interface or manually inputting things into the system.

So in terms of the headless APIs that AEM makes available, there’s three I want to highlight here. The first one is the content models API. This is a new API that we’ve been working on and it’s currently in the pre-release channel, the cloud service pre-release channel, and will be GA’d sometime early next year.

It’s based upon OpenAPI standards and allows you to do full CRUD operations on content fragment models. If you think about how AEM manages headless content, the content fragment models basically make up the schema of your content. So being able to create, read, and update those schemas via an API is obviously a key piece to managing the end-to-end experience through an API. Secondly, we have the Content Fragments API, an existing API that has been around since 6.x, based upon the Assets HTTP API that you may also be familiar with. Again, it has full support for create, read, update, and delete operations, so you can manage that content headlessly, based upon the schemas that you manage through the Content Models API. The last one I’ll highlight is the Persisted Queries API. Jabran is going to go into more detail later about persisted queries and how they’re key to getting the best performance you can out of your headless system. The Persisted Queries API allows you to create new queries and make changes to existing queries headlessly. So often, when we’re talking about that developer automation type scenario, you might want to be able to manage those persisted queries as well. So those are the three APIs I want to highlight. I do also want to call out, though, that there are a lot of other APIs available in AEM, things like the Assets HTTP API, Sling Models, the Sling POST servlet, and any number of other things. So depending upon your scenario and your use cases, don’t neglect those other APIs. Obviously, they’re still valid and can still be used depending upon just what you’re trying to do.
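For example, here’s a minimal sketch of reading headless content through the existing Assets HTTP API against a local AEM SDK; the host, credentials, and content path are assumptions based on the WKND Shared sample content.

```sh
# Read a content fragment as JSON via the Assets HTTP API (local AEM SDK).
curl -u admin:admin \
  "http://localhost:4502/api/assets/wknd-shared/en/adventures/bali-surf-camp.json"
```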

So now I want to talk a little bit about some of the use cases where these APIs are applicable and go through how you might use them. The first one would be upstream systems providing content. The challenge here is there’s often a need to keep content in sync between AEM and some sort of upstream system of record. This could be scenarios where maybe the upstream system doesn’t have good user-facing APIs, and so you want to synchronize all of the content into AEM for it to be delivered from a single platform. Or things like where your marketers need to augment, reference, or make some minor tweaks to content that comes from the upstream system, so you want to sync it into AEM, where they’ll make those types of marketing changes. These integrations are almost always event-based: some sort of event happens in the upstream system, whether it’s an update or whatever, and you have an API client that’s listening for those events, either a webhook or some sort of event processing system like a Kafka message queue, or whatever the case may be. Oftentimes those events are lightweight, so you have to make API calls back into the upstream system to get the full context of all the content you need in order to push or update that content in AEM. And then, lastly, the third bullet there is just making those API calls into AEM to make those updates.

The second use case I’ll highlight is headless content migrations. Migrating data in bulk from another CMS can be difficult and time-consuming, so we provide APIs that allow you to automate these migration scenarios. You’ll obviously need to start with an export from the source system, whether that’s using its APIs; a lot of source systems provide JSON, XML, or some other flat-file export, if you will. You’ll then want to go through a mapping exercise, mapping the data from the source system into AEM so that all your references are maintained and everything comes over correctly. One thing I’d call out here: if you’re doing this, you’ll often want to think about pre-migrating assets. Think of the asset migration as a pre-step, or almost like a separate project from the headless content migration, because you’ll need to work through things like how you set up your asset schema, your folder structure, and all those things that are ancillary or tangential to the content migration. So migrate the assets first in bulk, using something like the bulk asset importer, then go through your mapping exercise, and then use our APIs to bring the content over into AEM.

So with that, I’m going to get into a little bit of a demo. And the way I’m going to do this demo is, I’m actually going to use the, let me get my browser here.

Oh, it looks like I closed everything. So we’re going to have to open some things up here, but that’s okay. Hopefully I still have localhost 3000 running.

Okay, that’s exactly as expected. So I’m going to use the WKND demo app. But what I’ve actually done is I’ve deleted all of the data out of it that makes the app run, right? And so I’m going to use those APIs to migrate in the data to get the app running again.

Let me open up my AEM system.

Apologies here for the delay. I had all these tabs open and must have accidentally closed them.

But if you just bear with me for a second or two, and actually, yeah, let me make some of these things larger so you can see them. Here, wow, that is loading.

Okay, here we go.

All right, so yeah, as I said, these are the WKND Shared models, and you’ll notice that there are two models missing. If you’re familiar with the WKND Shared content, there’s the author model and there’s the adventure model. I’ve deleted those, and I’ve also deleted all the fragments that go along with those models.

So the first thing I want to do is, I have this Postman collection, and I will publish a link to it on the event page so that you can take a look at all the API calls and use them for yourself later. But the first thing I’m going to do is create the author model, right? This is using that models API I talked about, and it just defines the fields, where the model gets stored, and different metadata on the fields, like the max length and the root path for content references.

And when I run this request, you’ll notice that in the response headers I get this Location header, right? That is now the location I can use to retrieve the model I’ve just created, and the last part of the URL is the model ID, which will come into play later. Just to show real fast, if I take this model ID and put that in, you can see that we can then use the GET part of the API to grab the model as well. One thing I’ll point out in the examples here: I also have this data types API, and this is useful for figuring out what metadata belongs to all of the different fields. So if I go back to my create model call, you can see that for a content reference field, for example, if I scroll down here, I think this one’s a content reference. Yep, content reference. You can see these are all the different metadata that are available for a content reference field. So as you’re trying to figure out how to use these APIs to create and update your models, using this data types API is a pretty useful technique for knowing what metadata exists and how you would set up your API calls.
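A hypothetical sketch of that create-then-read pattern is below; since the models API was still in the pre-release channel at the time of this session, the endpoint path and payload shown here are assumptions, not the final API surface.

```sh
# Create a model and print the response headers; the 201 response includes a
# Location header whose last path segment is the new model's ID.
curl -i -X POST -u admin:admin \
  -H "Content-Type: application/json" \
  "http://localhost:4502/adobe/sites/cf/models" \
  --data '{ "title": "Author" }'   # field definitions omitted for brevity

# Read the model back using the URL returned in the Location header, e.g.:
#   Location: .../adobe/sites/cf/models/<modelId>
curl -u admin:admin "http://localhost:4502/adobe/sites/cf/models/<modelId>"
```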

The next thing I’m gonna do is create the adventure model. In the adventure model, we have a reference to the author model called adventure lead, and that reference constraint, the items metadata, is based upon that model ID. So when I created the author model, I have this test in Postman that grabs the Location header, splits it apart, and then sets a collection variable to the model ID, so that when I create the adventure model, that constraint gets created correctly. So I can go ahead and run that, and again we get a 201, created successfully. Now if I go here, my models are created. And if I go back to the app and refresh it now, I don’t get an error. It’s able to execute the persisted query this app is using, but there’s no data anymore, right? Because I’ve also deleted the data. So now let’s go and create our authors and our adventures as well.
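The Postman test the speaker describes might look something like this sketch; the collection variable name is hypothetical.

```javascript
// Postman "Tests" tab: capture the created model's ID from the Location header.
const location = pm.response.headers.get("Location");
const modelId = location.split("/").pop(); // last path segment is the model ID
// Store it so the adventure model's reference constraint can reuse it.
pm.collectionVariables.set("authorModelId", modelId);
```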

So I have one example here of creating an author using the fragments API, right? In this case, we’re posting to /api/assets, and the folder name is just the folder path where we want it created, without /content/dam. So it ends up being created at /content/dam/wknd-shared/en/contributors, and the name is ian-provo. And in terms of the body, we’re just specifying all of the fields that were in the model, as well as the values and the value types we need to get that fragment created successfully.
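A hedged sketch of that request against a local AEM SDK; the model path and element names are assumptions based on the WKND Shared author model.

```sh
# Create a content fragment via the Assets HTTP API. The folder path after
# /api/assets/ is relative to /content/dam.
curl -X POST -u admin:admin \
  -H "Content-Type: application/json" \
  "http://localhost:4502/api/assets/wknd-shared/en/contributors/ian-provo" \
  --data '{
    "properties": {
      "cq:model": "/conf/wknd-shared/settings/dam/cfm/models/author",
      "title": "Ian Provo",
      "elements": {
        "firstName": { "value": "Ian",   ":type": "string" },
        "lastName":  { "value": "Provo", ":type": "string" }
      }
    }
  }'
```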

And I can go ahead and run that, and now we have the author Ian Provo. Rather than go through Postman and use that to create all of the rest of the fragments one by one, I’m actually gonna use a tool, something we’re working on, that allows us to import content fragments from a CSV file. So I have this spreadsheet I prepared, where I have all of the adventures set up, as well as all of the other authors, with all of the different data, fields, and metadata set up here in this CSV file. And I can, let me go here, sorry about that, I can use this tool; it’s a CLI tool, and it allows me to import all of this data from that CSV file. It’s just gonna call those same APIs I was talking about, but it’s intended to make these types of migration scenarios a lot easier, quicker, and more straightforward.
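To give a feel for it, a hypothetical CSV layout for such an import might look like this; the columns are illustrative, not the tool’s actual schema.

```csv
firstName,lastName,occupation,profilePicture
Ian,Provo,"Filmmaker, Guide",/content/dam/wknd-shared/en/contributors/ian-provo.jpg
Stacey,Roswells,Photographer,/content/dam/wknd-shared/en/contributors/stacey-roswells.jpg
```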

So if I run that, it’ll take 10 seconds or so to go through and validate that all the data is correct. And now we’ve created all of our different contributors, I’m sorry, all of the different authors, and all of our different adventures. I’m just gonna open up DevTools here so I can bypass the browser cache. And now you can see that all of our adventures and adventure references are created there. The last thing I need to do is update the persisted queries. To show that, I’m gonna open up the GraphiQL tool, just to show you what the persisted queries are before I make these updates via API. So we’re gonna update the adventure-by-slug query; you’ll notice this doesn’t have that adventure lead in it. And then we’re gonna create a new query that allows us to retrieve the adventures filtered by the author, and we’re gonna use that for a new view we’ve created that lets us click on the author’s name from the adventure and see the adventures that they’re leading. So if I go back to Postman, I have this persisted query set up, and there are a couple of things I wanna highlight here. First of all, when you pass the query, you do have to encode your line breaks as \n. At least the way Postman works, you can’t have hard returns in the POST body. So it’ll be easier to see what I’m actually doing once I post this and create the query. Basically, all of this adventure lead data is being added into the query. But I’m also updating the cache-control headers. And this is something that you can do through GraphiQL, through the UI, right? So it’s easy enough to do through the UI. But if you are using these APIs to create and manage your queries, know that both the Cache-Control and Surrogate-Control headers can also be managed through the Persisted Queries API.
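Outside of Postman, the same update could be sketched with curl against a local AEM SDK; the query body and field names are assumptions, and note the line breaks encoded as \n.

```sh
# Create or update a persisted query. Hard returns are not allowed in the
# body, so line breaks inside the query are written as \n escapes.
curl -X PUT -u admin:admin \
  -H "Content-Type: application/json" \
  "http://localhost:4502/graphql/persist.json/wknd-shared/adventure-by-slug" \
  --data 'query adventureBySlug($slug: String!) {\n  adventureList(filter: { slug: { _expressions: [{ value: $slug }] } }) {\n    items { _path title }\n  }\n}'
```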

So if I go ahead and send that request, I’m just gonna refresh GraphiQL here so that I can see my updated query. And now, I clicked on the wrong one, I apologize about that. The adventure lead is here. And you can also see, if I click on headers, those Cache-Control and Surrogate-Control headers I set are all there and will be served when this persisted query gets served. So then I can go in here into one of my adventures, and you can see that the author information, along with their profile picture, is showing up. And the last thing I wanna do is create a new persisted query. For this one, I’m creating it to allow us to filter adventures by author, based upon the first name and last name of the adventure lead. So if I do that, and that query gets created, I can just go here and refresh GraphiQL.

And now we see our adventures by author query got created. And if I click on Jacob Wester here, now I can see the adventures that he is the leader of based upon our data.

So all of that to say that we have these APIs that let you manage your models, your fragments, and your persisted queries. And I’m excited to see what the community does with these APIs and how you use them to make your headless solutions and AEM better overall. With that, I will turn it over to Jabran, who’s gonna talk to us about advanced modeling and performant GraphQL queries. Jabran.

Thank you, Sean. So let’s get to know the newest AEM enhancements for GraphQL, starting with a quick refresher to set the context. AEM has multiple tenants, also known as multiple configurations. If you look at the diagram here, each tenant, for example global, tenant zero, and so on, has a set of assigned resources; for this discussion, that’s content fragment models, content fragments, and persisted queries. Additionally, each tenant has an assigned endpoint, which provides access to these resources, for example via GraphQL queries. Now, the global tenant is special: it has access to all content fragment models from all tenants, and also all content fragments. Persisted queries, however, are not shared between tenants, so the global tenant doesn’t pick up persisted queries from all the other tenants. And each tenant has its own endpoint. So now, having this context set, let’s move on to modeling.

AEM now supports nested configurations. This means you can have sub-configurations under a root configuration and define or categorize models the way it suits your scenario. For example, here you see a WKND configuration, let’s call it the root configuration, which has a global event model; but then there are also three nested configurations, easy, busy, and funny, which have their own content fragment models. When AEM generates the GraphQL schema, all of these models will be available at the same level, accessible via the endpoint of WKND, that is, the root configuration. So this feature enables better categorization and organization of your models, while still providing access to those models under a single endpoint.

Now let’s discuss some of the UI enhancements for GraphQL. You already had a glimpse in Sean’s demo: we have introduced endpoint selection in GraphiQL so that you can work with multiple tenants, and also the ability to develop and manage the life cycle of persisted queries. For persisted queries, you can now add new ones, manage cache headers for them, and publish or unpublish those queries. We shall look at these enhancements shortly, in the demo later in this session.

Now let’s explore the filtering and sorting enhancements. A quick note that all queries here are using the latest WKND Shared project content. First, generic filtering with variables. For example, in the example that you see, given a list query for articles, you want to filter articles based on a given value for the property slug. The value for the slug variable is defined by a separate JSON document, which is passed along with the query itself. Then let’s look at a more powerful example, where we are passing the filter itself as a variable. Note the filter type here is ArticleModelFilter. In this example, we get the same result as the previous one, but if you look closely, the whole filter is being passed as a variable. This enables you to define filtering fields and their values dynamically, and opens up a whole new set of possibilities and flexibility for a variety of use cases. Next, you can also specify sorting criteria, and you accordingly get the ordered results for your GraphQL query.
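As a sketch of the two filtering styles just described; the field names and values are assumptions based on the WKND Shared article model.

```graphql
# Style 1: a single variable supplies one field's value.
query articleBySlug($slug: String!) {
  articleList(filter: { slug: { _expressions: [{ value: $slug }] } }) {
    items { _path title slug }
  }
}
# variables: { "slug": "alaskan-adventures" }

# Style 2: the whole filter is passed as a variable of the generated filter type.
query articlesByFilter($filter: ArticleModelFilter) {
  articleList(filter: $filter) {
    items { _path title }
  }
}
# variables:
# { "filter": { "slug": { "_expressions": [{ "value": "alaskan-adventures" }] } } }
```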

Now something about pagination. You get support for two types of pagination for GraphQL results. First, offset-based pagination: looking at the example, you can specify a starting offset and then the number of results that you want to get after that offset. The second type is cursor-based pagination, following the GraphQL Cursor Connections specification. AEM auto-creates a special Paginated type for each content fragment model, and using that, you can specify the number of results that you want to get after a given cursor. Note that currently we support only forward pagination. With cursor pagination, you also have the possibility to get the start and end cursor of the page being returned as part of pageInfo, as well as the information on whether there is a next page available. And then, of course, you can combine all these features for more complex queries, but more on that when discussing best practices.
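A minimal sketch of both styles, assuming the WKND Shared article model; the cursor value is illustrative.

```graphql
# Offset-based: skip the first 10 results, return the next 5.
query offsetExample {
  articleList(offset: 10, limit: 5) {
    items { _path title }
  }
}

# Cursor-based (forward-only), via the auto-generated Paginated type.
query cursorExample {
  articlePaginated(first: 5, after: "c2FtcGxlLWN1cnNvcg==") {
    edges { cursor node { _path title } }
    pageInfo { startCursor endCursor hasNextPage }
  }
}
```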

Now let’s look at persisted queries, with a quick refresher on the Persisted Queries API. The API is accessible over the HTTP GET method, and basically it is used as one, two, three: you create the persisted query, you get the short path that you can use, and then you execute the query via a GET request.

The additional flexibility comes with the ability to pass parameters to a persisted query. For example, considering the last query that we discussed, we can pass the parameter slug via the URL using a GET request. Normally we’d use a question mark for passing parameters in a conventional GET request, but in the case of a persisted query, we use a semicolon to mark the start of the parameters, and also as the separator between multiple parameters. Additionally, you should encode the whole parameter string. The encoding of parameters is currently required due to a known issue, and we are working on improving that. The parameters passed this way become a suffix to the persisted query URL, and the result can therefore be cached at the CDN level. Now, this is powerful, as you can also pass a complete filter as a parameter, though with a word of caution, as only reusable requests benefit from caching.
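Here’s a hedged sketch of that execution pattern; the host, configuration name, and query name are assumptions.

```sh
# Execute a persisted query; parameters follow a semicolon rather than a
# question mark, and the name=value string is URL-encoded ('=' becomes %3D),
# e.g. via encodeURIComponent("slug=bali-surf-camp") in JavaScript.
curl "https://publish-p1234-e5678.adobeaemcloud.com/graphql/execute.json/wknd-shared/adventure-by-slug;slug%3Dbali-surf-camp"
```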

Let’s now discuss some of the best practices around all these features to get the most performant results. First, path-based queries. You can leverage the repository structure to narrow down the scope of the fragments being returned. Note the trailing slash at the end of the filter value, which is required for the best performance; without it, the STARTS_WITH operator might match many other unintended paths as well. Next, filter on top-level fields, as JCR-level filtering applies only to top-level filters, and that gives the best performance. If you must filter based on nested fragment fields, you can still leverage JCR filtering by providing another filter on a top-level field, combined with a logical AND. Note that if nested fragments are involved, expressions combined with an OR cannot be optimized.
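A sketch of both recommendations, with paths and field names assumed from the WKND Shared content.

```graphql
# Path-based narrowing; note the trailing slash on the value, so that
# STARTS_WITH cannot match sibling paths such as .../adventures-archive.
query adventuresUnderPath {
  adventureList(filter: {
    _path: {
      _expressions: [{
        value: "/content/dam/wknd-shared/en/adventures/"
        _operator: STARTS_WITH
      }]
    }
  }) {
    items { _path title }
  }
}

# A nested-fragment filter combined with a top-level filter via logical AND,
# so JCR-level filtering can still reduce the result set.
query articlesByTitleAndAuthor($title: String!, $lastName: String!) {
  articleList(filter: {
    _logOp: AND
    title: { _expressions: [{ value: $title }] }
    authorFragment: { lastName: { _expressions: [{ value: $lastName }] } }
  }) {
    items { _path title }
  }
}
```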

Then there are certain fields which are generated dynamically as part of the GraphQL JSON results. These are mainly text fields, for which the values in different formats (for example HTML, Markdown, plain text, and JSON) are always generated on the fly. So it’s best to avoid filtering on such fields. But if you must, then at least consider adding a top-level filter expression, for example based on _path, combined with a logical AND operator, to reduce the initial size of the result set. Then comes pagination, which helps split a large result set into multiple pages. Cursor-based pagination might perform better in cases where you need to iterate over the whole result set; when reading only the first few pages, offset-based pagination might be sufficient. Please do note that explicit sorting and filtering might add some performance overhead, so while using these together, also consider the earlier recommendations about filtering.
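For instance, a hedged sketch combining explicit sorting with offset pagination; the field names are assumed from the WKND Shared adventure model.

```graphql
# Sorting is specified as a string of "<field> <ASC|DESC>" clauses; combined
# with pagination it adds overhead, so keep the filtering recommendations in mind.
query cheapestAdventuresFirst {
  adventureList(sort: "price ASC", offset: 0, limit: 5) {
    items { title price }
  }
}
```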

Now we come again to our favorite persisted queries. What makes persisted queries so appealing is their ability to be cached, so favor their use whenever possible. When combined with variable-based filtering, and on top of that filter-as-a-variable filtering, these are quite powerful. Being cached, the results reach your clients faster, and you also avoid putting extra load on the origin. Combined with a caching strategy, persisted queries can better serve your use cases. There are different cache headers that you can tune for each persisted query, and you can also set defaults for all of these headers for persisted queries at a global level, via Cloud environment variables. Apart from max-age, which is the normal expiration of results in a browser or CDN cache, the stale-while-revalidate and stale-if-error headers ensure continued delivery of the available content, even in case of cache expiry or an error at the origin. So utilize and fine-tune these headers according to your particular requirements.
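As an illustration, a persisted query response might carry headers along these lines; the values are illustrative, not recommendations.

```
Cache-Control: max-age=300, stale-while-revalidate=86400, stale-if-error=86400
Surrogate-Control: max-age=600, stale-while-revalidate=86400, stale-if-error=86400
```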

And yes, having all these features, you might find some more suitable than others for your specific scenario. So you can combine them, and you should where it makes sense, to optimize results for your unique use case.

Let me now switch to a quick demo of some of the features that we just discussed.

So here I am on a Cloud instance with two configurations already defined. One configuration is wknd-shared, which is based on the standard WKND content. And then we have a nested configuration, events, defined, which has its own set of models, like the one event model here, along with the other standard models. Then we have two endpoints defined: a wknd-shared endpoint and a global endpoint. Switching to the GraphiQL tool, here we have the possibility to select the different endpoints which are defined, and accordingly, if there are persisted queries associated with those endpoints, those are visible and we can manage them. You also see that some of the queries here are already published, and two queries here are not published. So let’s start with one example, which is based on a path query. Here we see an example which retrieves an adventure fragment based on a given path. Having this query and passing these parameters, we get one item at the moment; only one result, if we see. And this is related to the slash, the importance of the trailing slash that we want to include here. So if I remove this slash and execute the query again, then this query will also match some other paths which are not intended. For example, here it is matching bali-surf-camp-1 as well as simply the bali-surf-camp path. So just consider: you have two slightly different paths, you are using a partial name, and at the end of the day, both paths can have, let’s say, 1,000 fragments. By taking care of this little thing, you can simply cut your result set in half, and that could be a big performance gain for a particular scenario that you may have. Then let’s check the filtering. Here is a query which returns an article based on the title from the root fragment and the author’s first and last name from the nested fragment. How do we do it here? We have three parameters defined for the query: title, first name, and last name. We are using a logical AND operator together with the title from the root fragment, and the first name and last name from the nested fragment, and then passing these parameters together with values. So we get the results based on this combination. If we had not used an AND operator at the top, we would have to traverse many more records, because JCR filtering works only at the top level. So it is important that you use, wherever possible, at least one logical AND operation at the top level so that you can reduce the set that you want to retrieve. Now let’s make it a little more interesting. Here is a similar query to the one we saw before, but with a little twist: now we are matching the title from the root fragment, combined with an AND, against the last name or first name from the nested fragment. That means this will return results where either the last name or the first name is matched from the nested fragment. So we should have two results here. The first result is coming from Stacey, based on the first name, and the second result is coming from the last name. What we have done here is use the logical AND operator on the top-level fragment, plus a logical OR in the nested fragment.

So one of the differentiators here is using an operator in the parameter. Note that we have a FloatOperator type parameter where we can pass the operator that we want to use. For example, for this query, when we execute it, we get the items which are priced less than 1,200; here we are using the LOWER operator, passed into the expression. And switching it to the other one, we will have the other result set satisfying the condition. Then let’s move on to the interesting case that we had: passing a simple variable as a parameter, and passing the whole filter as a parameter. Here we see a very simple query, just so that the emphasis is on the filtering itself. We are filtering over the slug value, and the value is alaskan-adventures, let’s say. Next, we want to use the same query, with the same results, but passing the filter as a parameter. So I have this new query where we are actually passing the whole filter as a parameter. If you compare it with the previous one, it’s exactly the same, only the filter has been reorganized as a JSON object. So we have the filter, and the slug, and the expression and operator and everything. And instead of specifying the values here, we are specifying that we are using the filter as a parameter, and we see the ArticleModelFilter type for this type of parameter. I added _path to the output just to show that we get the results here; actually, this is only to differentiate the result, but we have the same execution and the same thing, just passing the filter as a parameter. Then quickly checking on pagination. Here is a query where we are just getting five results after a given cursor. Executing that, we see that each edge item has its own cursor, and we see the titles here, and then the pageInfo object, which specifies the start cursor, which is here, and the end cursor, which is here. So you can simply paginate using the cursors which are available, and iterate over the whole result set as needed. And offset-based pagination works very similarly: based on offset zero and asking for two results, we simply get the first two results of the whole result set. So, checking on time, I will quickly go through one more thing. It’s about passing a variable or passing a filter to a persisted query, because that is the case which would be quite interesting. Here I am using the same query, basically, the query that we saved here for the variable, the simple query, where we passed the value as slug. What we need to do is pass on these parameters, but after encoding. So here I have an example: encodeURIComponent, a JavaScript function, which I used to encode slug equals value into this parameter. And later on, I will also use filter equals the whole filter value, so that I can use it the same way. Based on these two suffixes, I created these GET URLs, so that after the name of the query, we are passing this string, encoded as it is. This is basically semicolon, slug equals alaskan-adventures. And executing that, we get the same result that we get when doing it as a POST query. Similarly, going over the filter: here I’m using the filter-as-a-variable persisted query and passing on the whole filter as a parameter. So here it’s semicolon, filter equals, and then the slug and expression and value, et cetera, everything together, encoded.
And then we execute it, and we get the results, with the small change that we adjusted the query a little bit to alias the results in the output. But this thing is now cached. So if I go to publish and execute the same query, the first result you see is 292 milliseconds. If I execute it a second time, it’s 72 milliseconds. That means after the first request it was cached, and if you look at the headers, it was served from the cache. So using persisted queries is highly recommended, and given the flexibility they provide, along with all of the syntax options, they can add quite a lot of value to your use cases. So now, with that, switching back to the presentation: what’s next? Before that, let’s quickly review what we have covered in this session, keeping AEM at the core as a headless CMS. Today you got an overview of how you can create models and content fragments using an API-first authoring approach, and then how to optimize headless content delivery for your applications and clients using the latest AEM headless enhancements, including modeling, filtering, pagination, the power of persisted queries, and CDN optimization, and on top of that, the best practices. So where to go now? And yes, you read it right: you can now register for a free AEM headless trial. You will find the link for registration on the next slide so that you can go ahead with that. And feel free to ask questions on the dedicated channel; we shall be happy to provide feedback there. Most of the features regarding pagination, filtering, and sorting will be available via the pre-release channel, so you can enable the pre-release channel for your AEM Cloud environment or local SDK. Documentation is available on experienceleague.adobe.com explaining how you can do that. And some of the performance enhancements, in addition to what I just mentioned, will be available in the January 2023 release, so do keep a check on the AEM release schedule for that.

So here are the channel and the trial registration link. So please go ahead to register there. And with that, we thank you very much for your time and attention. We hope you enjoyed the session and it was valuable for you.

And yes, of course, we welcome your feedback on the channel. Feel free to drop any questions, your feedback, and so on.

With that, Sean, do you see we have some questions that we can follow up on? Yeah. Jabran, let’s go back to the previous slide and show the QR code again for the discussion channel.

Our intention is to post follow-up links, example queries, things of that nature. I see a question about how these things will be available. We’re gonna post all that stuff to that discussion channel. So it’ll all be available there.

The other questions I saw, I think most of them got answered in the chat by Gilles and Stefan. So unless there’s something else that I missed, I think we’re good on questions. Or if anyone has others, obviously feel free to continue to post them.

Yeah, I just see one question at the top: where in JCR are the queries saved? They are saved under the configuration; there is a special node for that under /conf. So you go to /conf/<tenant>, and there you will see these queries under the GraphQL node.

And yes, I think most of the other questions are already answered as far as I see for now.

Okay.

So with that, I guess- Yeah, well, I think we’re good. Thank you everyone. And we look forward to following up on the event page on any further questions that come up.

Yeah, have a good time, everyone. Thank you.
