All right, the recording is started, so handing it over to you. Thank you. So, welcome to our GEMs session today. As mentioned in the slides, a colleague helped prepare this and wanted to be here, but he cannot make it today, so it is only going to be the three of us talking about this. RDEs have been around for a little over one and a half years now, and I am not 100% sure what the audience looks like: how many of you already know what RDEs are, are maybe using them, or are completely new to the topic. So I am going to spend a couple of minutes introducing where we are coming from and what RDEs are, which has been presented before. Please stay with me; if you already know this, it is only a couple of minutes, and then we go into more advanced topics and new features.

Initially, after we released AEM as a Cloud Service, we started to look at the development velocity, meaning what it is like to develop for an AEM Cloud Service application. What we made available at the time was the SDK, which is based on the well-known Quickstart technology, in other words the way we delivered AEM before the cloud. That has a lot of benefits: it deploys really fast, you get feedback directly, and you can see everything on disk. And it used to be the real thing that also ran on the server, so there was nice parity with what you were developing against. With the move to the cloud, we lost some of that parity. In the first place, there are certain integrations in the cloud that you cannot get locally. And then there are features we implement differently in the cloud behind the scenes, which are approximated on the local SDK but are not exactly the same, which can cause problems here and there; in general it is not that nice to develop against. In the second place, maintaining a local SDK is also somewhat problematic, because it can be time consuming to keep updating it to match the cloud versions. And if you really want to set it up like the cloud, with a real author and publish, a dispatcher, the front end and so on, it is not that easy to set up and keep in sync with the cloud.

The alternative you had, or rather what you were forced to do, was to deploy to dev environments. That also has some upsides: the pipeline builds your code, the code scanning is done for you, the deploys are all orchestrated, and you can test there. That is good, but at the same time it takes too long: the feedback cycle goes up to roughly 20 to 30 minutes, and because the pipeline is really geared towards deploying, you do not necessarily get the same visibility or insight into your running instances that you are used to with the local SDK. In other words, the dev environment really serves more of a testing or integration purpose and does not lend itself to daily development and quickly trying things out. So we decided to create a new environment type, and the idea was to make it available to mostly everybody: if you have a credit or license for a solution like Assets or Sites, you already have this environment available to you.
There is also an option to buy more in case you have more use cases, more developers and things like that. But the main point is that these environments support changes at runtime: configurations, bundles, scripts and other things like the web tier config can be manipulated pretty much directly, and that, as has hopefully been validated by now, makes it easier and quicker to validate new features and experiment with settings. This is made possible via a CLI, integrated into our aio plugin. With that CLI you can talk to these Rapid Development Environments, the RDEs we are talking about today. Again, the goal is to get your deploy-and-feedback cycle down from 20 minutes to a couple of seconds or minutes, depending on what you deploy. It is not as good as the SDK, both in terms of turnaround time and insights, but it is getting closer, and that is going to be part of what we talk about today: we of course continue to make it better over time, especially when it comes to the insights and the usability.

Let me quickly zoom into what an RDE actually is. It is really just a normal AEM Cloud Service environment, similar to a dev environment. The big difference is that inside it we have only one author and one publish, so there is no scaling out, which also means you can potentially have some downtime, for example if the author crashes or you have to reset the environment. So use cases like load testing or failover testing are not what it is meant for; it really is meant as a replacement for the local SDK for testing and development. Other than that, the biggest difference is that the author is not running on MongoDB but on the segment store. We do that so that, behind the scenes, it can be mutable. Just to be clear, that does not mean your code can get confused in that sense: from the code's perspective everything is still immutable at runtime. Installing bundles or changing things that you cannot change on a dev environment is only possible if you come via the install command route. That is really what we provide: the install command, which you use to deploy bundles, configurations, content packages, all of these things. You can think of it a little bit like getting your local install folder from the SDK back; probably most people are familiar with the approach of just having an install folder and dropping things in, or maybe it is more similar to having the Maven plugin configured to install directly to a running instance. Behind the scenes it is actually a very similar tool: we upload the artifacts you give us to file storage in the cloud, from there they get picked up, and we process them a little. We run some of the important analyzers that also run in the pipeline, quickly, so you can potentially get feedback even before it hits the instance — for example, if we can detect that your bundle would not resolve, you get that feedback fast. We also do some light security checks. And if everything works, we install it into the running instances behind the scenes.
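As a rough sketch of that flow (not shown verbatim in the session; the file names are placeholders, and the type names should be verified against `aio aem rde install --help` in the aio CLI RDE plugin):

```sh
# Deploy different artifact types to the selected RDE.
# The type is usually auto-detected; -t / --type can force it (names are assumptions).
aio aem rde install all/target/myproject.all-1.0.0-SNAPSHOT.zip      # content package ("all")
aio aem rde install core/target/myproject.core-1.0.0-SNAPSHOT.jar    # OSGi bundle
aio aem rde install com.example.MyService.cfg.json -t osgi-config    # OSGi configuration
aio aem rde install dispatcher/src -t dispatcher-config              # dispatcher / web tier config

# Start over from a clean state when needed.
aio aem rde reset
```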
And that’s that’s the way you get it deployed. So that that really comes down to the command being the, the main entry point to everything. And from from other than that it should behave like a normal environment minus the the scaling and the availability of. Right. So as I said, I don’t want to make it too long here because this is already known. So for the rest of the talk now, we’re going to hopefully have more new features. The first part is going to be used in setting. It’s going to talk about the AOC, where we had quite some improvements in the last couple of weeks and months, both in terms of usability and things like that, but also in the integrations capabilities with other systems, as is going to show. Then we have a pretty recent somewhat exciting feature, the idea logging, which really makes it nice to gain insights into your running instances during development. I think it’s really good. And then Natalia is going to take over, introduce the front end support that we now have for ADC, meaning that we can also support the front end pipeline packages that you normally deploy with a front end pipeline to ADC that’s available now. And then she’s going to touch on two things which technically more and better. Right now we have the configuration pipeline support and we have a UI refresh for the developer console. So now technically the developer console is not really ADC as such. It’s available as you may or may not know in all environments. So it’s going to come to more than this refresh, it’s going to come to more than just Audi’s. But, but obviously it’s meant for developers so I think for the content and purposes from the flow for development, it’s it’s good to have it in here. And then she’s going to conclude we’re showing that you can use this when you use Crosswalk as well for its delivery services if you want to. It’s going mentioned this time for Q&A afterwards. So that’s that’s good. But I also want to take the opportunity and mention that we really are open to feedback and want to hear from you, especially as you can hear. We have those beta features. We have others in the pipeline too, where we actually are working together with customers and outside developers directly. So there will be some point as to how to contact us in the Q&A. So yeah, if you have ideas or want to collaborate on something, we are more than happy to talk even after the game session. Yeah, that’s it for me for now. And without further ado, let me give it over to you. Yeah. Thank you, Carl. So I will first show some AOC I commands. This will be a mixture of things that have already in there and also several improvements we’ve made on that front. So I’ll start with the log in, which is not an idea specific command and so how you can log in, I’m using the no open option to log in, which allows you to copy the log in the URL and paste it into an incognito window, which can be helpful if there are logging issues. And once logging is complete, we’ll see it reports back as logging successful. So for the log in topic, we are working on further improvements at the moment. This was not ready for today’s session but will come in the near future, hopefully. And that is a way to manage multiple logins, which is especially useful for people who work for different and lines of work in different organizations so they can manage different logins without, and that allows them to switch more quickly between environments. 
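A hedged sketch of that login step (the --no-open behavior as described above; check `aio login --help` for the exact flag semantics):

```sh
# Print the login URL instead of opening the default browser,
# so it can be pasted into an incognito window if there are login issues.
aio login --no-open

# The CLI reports back once the login has completed successfully.
```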
Next, let's quickly check the help functionality of the CLI. A colleague of ours has worked a lot on the CLI and has also improved the output, putting a bit more structure into it. You can get the overall help for the RDE plugin, where you see a list of commands, but you can also get help for individual commands, for example for the install command, which details all the different flags that can be used. Okay, with that we proceed to the setup: there is a new `aio aem rde setup` command. You can choose whether the setup information should be stored locally or globally, and then you can select from a list of environments, which you can filter by type and by strings. Okay, so then we look at installing something. On the right-hand side there is a finished Maven execution of the WKND project, and we will now install the `all` package of that project; that is generally the best way to get bootstrapped with a project. Once everything has been installed with the `all` package, it is possible to install individual artifacts like bundles, OSGi configs, individual content packages and so on. We can also check the status of the system, and we see, for example, that the environment says it is ready but the instances are still deploying; it also lists all the installed bundles and OSGi configurations. And with the history command, we can get a summary of the changes that have previously been deployed as well.

The next little goodie we have built in is the possibility to get JSON output from pretty much all of the commands. We can check in the help whether JSON output is available for a specific command, and for example for status we can just output JSON, which is useful especially when using RDEs in CI/CD kinds of scenarios, where handling JSON output is more reliable than trying to parse the textual output.

Now, to illustrate, we have set up the system and we are going to implement a small change: I want to add "Journalist" to the occupations of each of those blog authors. In order to do that, I go into the code and put everything into a TreeSet, in order to get it sorted and to have the ability to add an additional occupation. Right, that is the change. We could now check whether the Journalist occupation has been added for a writer, because somebody may already have it in their list of occupations, and then build this change. But I will only build the core bundle, which is a little faster than building the entire project, and then I will deploy just that bundle, again with the install command. For most artifacts, the install command is clever enough to determine what type of artifact it is; if that is not the case, we get an error message informing us that the type needs to be specified. For an OSGi bundle it is determined automatically. Applying an update takes some time; if you install a big content package with multiple artifacts inside, it can take a lot longer, whereas for a bundle it is a bit faster. So I am refreshing this page on the author now and checking whether the Journalist occupation has been added, and indeed, there it is. Same thing on the publish: I can go into the Western Australia article and see that its author is now also a journalist, and there it is. Okay, so this concludes the chapter on improvements to the aio CLI itself.
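To recap the workflow just demoed as a rough sketch (the Maven module and file names are placeholders, and the exact JSON flag should be checked against the plugin's help):

```sh
# One-time environment selection (stored locally or globally):
aio aem rde setup

# Build only the core bundle instead of the whole project, then deploy it:
mvn -pl core clean package
aio aem rde install core/target/mysite.core-1.0.0-SNAPSHOT.jar

# Inspect the environment and the deployment history:
aio aem rde status
aio aem rde history

# JSON output for CI/CD scripts, where parsing the textual output would be brittle:
aio aem rde status --json
```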
That brings us to logging with RDEs, which is ultimately also just a CLI command, but we put it into a separate chapter because it is a fairly big improvement. I will now check the log messages that we added in the previous step, when we added the Journalist occupation. Looking at the help of the logs command, we can see that we can select all kinds of different log messages, we can even specify our own log format, and we can choose whether we want to tail the logs of the author or of the publish instance. I will first log some general AEM messages on debug level, just to show that there is output and that we can arbitrarily choose a log level for a Java package. We could even select multiple Java packages, as it is possible to repeat the debug, info and other level flags, which allows adding multiple packages to the logger. Now I am tailing the WKND logs, and when I reload the page we see our log messages written out. It might be worth investigating why it is logged twice; it seems that particular code is executed twice when we load the page. We can do the same for the publish instance by specifying publish as the service option. Here we need a cache killer in the URL, because publish requests go through the dispatcher and would typically be cached. And that is the logs command. For now, this is only available for RDEs; we are thinking about whether and how we can maybe extend it to other AEM as a Cloud Service environments as well, but we hope that on RDEs it is already a very helpful addition.
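A hedged recap of the logs command demoed above (flag names as recalled from the plugin; verify with `aio aem rde logs --help`):

```sh
# Tail the author logs with a debug-level logger for one Java package
# (WKND core here); the level flags can be repeated to add more packages.
aio aem rde logs -d com.adobe.aem.guides.wknd.core

# Tail the publish instance instead; the package name below is a placeholder.
# Add a cache-buster query parameter to the requested page so the dispatcher
# cache does not hide the traffic.
aio aem rde logs -s publish -i org.apache.sling.engine
```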
Thank you very much. This concludes my part of the presentation, and at this point I will hand over to Natalia.

Thank you, Julian. Awesome. Okay, then let's continue with the front-end pipeline support. What is the issue? Well, on RDEs we do not have pipelines. So, as you can imagine, this has been integrated into the aio CLI as well; let's go through how to try the front-end part. For the example I am bringing here, you can see that I am creating a new site from a template that you can just download from our public repository and then import into AEM. So we create, from the sample site template, a site called "my site". We created it, we reload, and there it is. Then we can download the theme sources and have a look at how the template looks; you can see it is quite nice. For this demo, what I am actually going to do is make it a bit worse, because I am going to change a few things. So we have the code locally. The first thing we need to do is run npm install, because we need the node dependencies, so we wait a little. The next thing we need to do is run npm run build, which builds all the distribution files, the CSS and JavaScript. These are actually the two steps that the front-end pipeline runs for regular environments; for RDEs you run them locally, and you need to take care that you have at least the dist folder and the package.json. To deploy the front-end code, you just need to point the install command to that folder and it gets picked up from there. But before pushing our content, let's make another modification: I am going to change the color of the background. To do that, we go to the variables and set a really ugly background color, like this green, just to make sure we can see that our change was pushed. Then we run the npm build command again and regenerate the dist folder, and we can check there that it was updated.

We run the install command, and this is pretty straightforward, because all it needs is the folder with the dist folder and the package.json inside; that gets uploaded, and it is done. Okay, so we wait some seconds... there you go, it is done, and you can check the hash of the theme that was deployed. Now when we reload the page we can see the very ugly background that we set, and if we inspect the network calls and filter by that hash, we can see that the page is pointing to our static files with the same hash. All right, that was it for the front end; that is fully available for all of you who want to use it.

We have more, because we also have the configuration pipeline support. This feature is still in early access, but as with all the early-access features we are presenting here, you can just engage with Adobe and we can arrange access so that you can try it out. So what is the configuration pipeline on an RDE, and how are we supposed to use it? Here I have installed just the website, and I have on my laptop a file that I created using the configuration pipeline syntax, which blocks a path. All I need to do is run an install pointing to that folder, and after a moment it is done and complete. If we reload the page now, you can see that it is blocked. Now let's say we got it wrong and want to make a change, so that we deploy our website on a different path, because that path is blocked; we did block it on purpose. So we change the configuration for our project. Earlier we deployed the content package, but now, instead of deploying the entire content package, we are just going to deploy the configuration that we modified, with the path switched to a slightly longer one. Okay, there you go, let me copy that. We just run the install command again, pointing to the configuration folder with our new config, and then we wait a little. This is really, really fast, because it is just a small configuration file that we upload and apply, and that is actually the advantage of working this way with RDEs: we do not need to apply the entire content package, we can just apply small pieces. Excited about the changes, we reload: the site is still working as expected, and if we go to the other path, it behaves as configured. And that wraps up the configuration pipeline part.
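A rough sketch of the two install variants just shown, the front-end theme and the configuration pipeline file (paths are placeholders; the "frontend" type name is an assumption, and the exact type for configuration files should be taken from the install command's help):

```sh
# Front end: build locally, then upload the theme folder containing dist/ and package.json.
npm install
npm run build
aio aem rde install . -t frontend

# Configuration pipeline (early access): apply just the small config file
# instead of redeploying a whole content package. If the type is not
# auto-detected, pass the config type listed in `aio aem rde install --help`.
aio aem rde install ./config
```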
The next point is the AEM Developer Console, or rather its UI refresh, which, as Carl mentioned, is also in early access mode like the config pipeline. It does not only affect RDEs; it also affects dev, stage and prod. But given that you will probably be iterating quickly on RDEs, it becomes especially useful there, to see what you have really deployed and to inspect the bundles, the components, everything. So let's do a quick overview of how this looks. In the new UI we can search for the bundle, the one that we installed. The imported packages are clickable, and that is actually one of the nice new things: you can navigate between all the components, which is something we were missing in the old console. You can also go to the configuration that we deployed in the previous examples, and as you can see, check its properties. We can go to the OSGi components as well, and again we have a list to search; if we click on one of them, it gives us the details and all the references. If we go to the services and click on any of them, then for a reference we can click on the bundle associated with that service. We can also inspect the importing bundle, click on it, and navigate through the bundles that way. Still, if you need all of this information and want to process it yourself, you can download it, the bundles or any other kind of data, for example the entire bundle list, which is quite a long file. Then we go to the repository browser; this is not new, but it is linked here as always, and you can inspect all the content on the instance. In this case I selected the author, and here you can see the template that we deployed.

All right, finally, last but not least, Crosswalk on RDEs. This is mainly to double-check and show you that you can also use Crosswalk with RDEs, which becomes pretty interesting in the sense that if you need to make a change, you can do it quickly, and we saw some examples following the tutorials. So, Crosswalk on RDEs, how to do it: I have been following the tutorials that you can find publicly on the AEM documentation site and configured the GitHub repository from the template referenced there. So let's go: we can just create a site from a template that, again, we can download from the repository pointed to in the documentation; we select it and create the site. Then we set a title, in this case a simple getting-started one, and the site name, which is important because it needs to be referenced in the repository. So let's go to the GitHub repository; this is publicly available, so you can check the example, with the name of the site in there. It is just a small modification of a couple of files, also pointing to my AEM instance. Now let's quickly publish all the content of the imported template that we uploaded, so that it is fully available. It is published, and if we go to the repository again, to the main branch, and click preview, there you go: this is what we deployed, and we can check that it is pointing to our RDE, where we deployed this site. Even more, we can edit the content by using Edge Delivery Services and the Universal Editor. So let's say we want to update the site heading with a small change; we can see that it is loaded live from AEM. Now I am publishing — what happens? There are some configurations that we need to deploy in order to have the Universal Editor available, and for our RDE that is simple, because you do not need to run any pipeline; you just need the two required configurations. So all we need to do, and we are going to do it here, is copy them and install them.
Then we just run a couple of commands: first the one for the token authentication configuration; it goes through the configuration settings, uploads the configuration, and now it is applying it. All right, that was quick, and we do the same for the other configuration, which is also pretty fast. There you go. We can check the status of the environment just to double-check that the configuration is now there, and we have it. Okay, that is cool. So you can see that you can also take advantage of Crosswalk; it is available on RDEs and you can use it. With that, I hand it over to Carl for some final words before closing.

Yes, thank you. So that was a quick rundown of what you can do with RDEs and what is new. Again, if you tried them out previously, let's say before the middle of this year, it might be worthwhile to retry; there have been lots of improvements, especially in the CLI in terms of usability, and the logging feature is pretty great. We had a question in the Q&A already, jumping ahead a little bit, asking about support for attaching a debugger, and we do not have that as a feature for now. All I can say is that this logging of course does not replace being able to attach a debugger, but in lots of cases it is really very convenient, because you can just dynamically create the logger with the filter you want and at the level you want, and that makes it really nice to see what is going on in the instance. Before we go over to the Q&A, I want to point out one more time, as I promised before, that we are really interested in your feedback. I know it is sometimes not that convenient to reach somebody, which is unfortunate, but as many of you hopefully know, we are working closely with customers nowadays, often via Slack. So if you have a Slack connection to us, feel free to reach out to us there. Regardless of whether you do or not, we also try to be available on Discord: if you go to the AEM Live Discord server, there is a channel where I am around, and you can ask us questions or start discussions. Again, we are really interested in feedback, but also in ideas and potentially things we can work on together. So that is the invite: come to Discord if that is the only channel available to you and try to reach us there. That is it, I think; we can open it up for Q&A.

Great, thanks a lot. Before going to the questions, I just want to remind you that we are going to post the recording of this webinar in the contextual thread; I have put the link in the general chat. It will probably be available by Friday or earlier. Now let's go to the Q&A and see if we have questions; the team has done a great job answering questions already. So, a question from Siva: is there any way we can debug an application deployed on an RDE using remote debugging from IntelliJ? Yes, I can follow up on that. The short answer is that, unfortunately, right now at least, you cannot. I answered a similar question in the Q&A pod already: we keep thinking about making it somehow possible to attach to the process, but for now it is not possible, and whether it happens later on, I am not sure. So unfortunately that is not an option right now.
But as I mentioned just a moment ago, I do think it is worthwhile to look into that logging feature, because, as you could see, you can dynamically create a logger for, let's say, your package or the package you are interested in, at the level that you want, so at least you can now quickly get information out. If need be, you can do a poor man's version where you augment your code, update the bundle, and then listen for the extra information you want in the logger. I fully understand it is not the same as having the ability to attach a debugger, but for right now, that is the best we can do.

Thanks, Carl. And with that, we have answered all questions in the Q&A tab, so please post your questions now; this is the time to get answers on any kind of question regarding Rapid Development Environments. In the meantime, I would like to ask you to please complete the ending poll, which is very brief and anonymous; you can rate our session there and suggest topics for future webinars. All right, we will wait another moment, maybe some more questions appear.

Yes, and let me take the moment to reiterate, because the question was asked a couple of times: the licensing model is such that when you have a solution like Assets or Sites, you get one RDE credit for free for each, also for the sandboxes. So you can directly go and use one or two, or even more, depending on how many solution licenses you have. If you want more, that is possible; I cannot talk about the pricing, you have to reach out to your account manager and ask, but it is possible, and we do have a number of customers that have bought additional RDE credits. Thank you, Carl.

Yeah, there was one question I gave an answer to which might have been incomplete. That was the question whether it is possible to use the plugin to set environment variables. I said that is not possible, but that it is a good idea, and I was just reminded by a colleague that it might be possible using another plugin, which you may have installed already: if you have our plugin and the Cloud Manager plugin, the Cloud Manager CLI, I think you could use that one to set environment variables. It is also worthwhile to mention that the aio CLI plugin for RDEs itself is open source on GitHub, so if you want to collaborate, that is another way to contact us: you can find it on GitHub, create an issue there, and we can see what we can do. Thank you.

And just for your information, the webinar recordings are archived and you can find them on adobe.com/go/gems. The next question is: can you tail CDN and dispatcher logs? I can probably take that question. I think the answer is no, because the RDE logging covers just the AEM process itself. For those logs, I guess you can still use the Cloud Manager plugin to tail them; I think that is something you can do, because you can also get those logs for RDEs, perhaps with a little delay. But the RDE plugin's logs command is just the AEM logs of the container, the process itself.
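Going back to the environment-variable point for a moment, a loosely hedged sketch of the separate Cloud Manager plugin mentioned above; both the command and flag names here are assumptions from memory and should be verified against that plugin's own help:

```sh
# Assumption: the Cloud Manager CLI plugin is installed alongside the RDE plugin.
aio plugins:install @adobe/aio-cli-plugin-cloudmanager

# Assumption: variables are set per environment id; the command/flag names may differ,
# so check `aio cloudmanager --help` before relying on this.
aio cloudmanager:set-environment-variables <ENVIRONMENT_ID> --variable MY_FEATURE_FLAG on
```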
I think that CDN and dispatcher log question is a good example of what I was hoping for, in the sense that it is potentially a good idea to have as an addition to the logging feature, to be able to also look at those logs. So thanks, I am going to try to remember it, but feel free to take it further by talking with us on Discord; I think it is interesting. Sorry, go ahead. No, I was just saying that this does come up, and I am taking notes so that we can also consider it as a feature. I just wanted to mention that to interact or post general questions after the webinar, you can do that on the contextual thread; you will find the link in the general chat. And please post your incoming questions in the Q&A tab, not in the general chat.

All right, the next question is: are RDEs a good fit to run as ephemeral environments when working in trunk-based development? I think I can answer that. That sounds very much like running a continuous integration and continuous deployment sort of pipeline, and we have in fact several customers that use RDEs exactly for that purpose. They have integrated RDEs with an automated deployment of their build artifacts, and then they run integration tests against those environments and decide, for example, whether a pull request passes or not. Does that answer the question? Please follow up if it did not. Let's go to the next question in the meantime: how will dynamic pages work with this new approach? To be honest, I do not understand exactly what the question is here; please elaborate on it. Yeah, I am not sure I understand that either. Well, in the meantime, I can add something to the previous question: there is some documentation, and there are some flags that are useful for these purposes. What some of our customers do is take an RDE, deploy all the branch code that they have, run some tests on it, and then finally reset that RDE, which means they clean it up and start over the next time they use it. That is a good practice: at least your RDE is ready for the next time you run another test of the application. Actually, RDEs did not have that goal originally; they were meant for local-development-style use by engineers, but they have been widely adopted and used in such pipelines, and from what I hear from customers that works pretty well. Thanks, Natalia.

The next question is: what is the best way to write some code specific to RDEs? Is there an environment variable that I can use to know which environment (dev, stage, prod, RDE) the code is running in? In the first place, there is the run mode, which you can use. If you have something in your package that you only want on an RDE, you can put it into the RDE run mode. The RDE run mode is somewhat special in that regard, in that it overlays the dev run mode; so if you install a package on an RDE, it will pick up both the dev and the RDE run modes. But if you want, let's say, a bundle in your package that should only be deployed on RDEs, you can put it into the RDE run mode. That is mostly the way I would look at this. I am not 100% sure whether there is an easy way to differentiate it at runtime inside your code; Julian, maybe you have a better insight into whether something is available there. But at a minimum you could use that mechanism to build something. Yeah, I would have answered exactly the same; I am not aware of another way to find out at runtime that you are running on an RDE, so the best approach is probably to deploy a configuration for each of the targets that can inform the code about that. Yeah. Thank you.
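A small sketch of that run-mode mechanism (project, PID and value are made up): an OSGi configuration that only exists in the RDE run mode, so code reading it can tell it is running on an RDE, following the overlay behavior described above.

```sh
# Hypothetical example: create an OSGi config only under config.rde in the
# ui.config module; per the answer above, a package installed on an RDE picks
# up the dev and rde run modes, so this config is applied there and absent elsewhere.
mkdir -p ui.config/src/main/content/jcr_root/apps/mysite/osgiconfig/config.rde
cat > ui.config/src/main/content/jcr_root/apps/mysite/osgiconfig/config.rde/com.example.mysite.EnvironmentInfo.cfg.json <<'EOF'
{
  "environmentType": "rde"
}
EOF
```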
Another friendly reminder to please complete the ending poll; it is only a few questions. Thank you. I have pointed to it in the general chat. We have a couple of minutes left, so please post your questions. The next question is: does it support multiple project repositories? I can take this one. RDEs require no repositories; you only deploy artifacts that you have on your local computer, whether you have built them with Maven or not, and whether that code is versioned does not matter. You can also download artifacts, for example from Maven Central, and deploy them; it is even possible to install from a URL in that case. But there is no direct interaction between your repositories and RDEs. Right, so the only catch with multiple projects would be that you have to deploy each of them every time, and you probably want to reset in between. Thank you, Carl and Julian.

The next question is: is there a way to specifically deploy dispatcher configurations to an RDE besides installing the all package? I can probably take this one. There is actually an install type for that; let me go to that slide so we can see it. Right here, when we say install, you can see that there is a type of install for the dispatcher configuration, and with it you can just upload the dispatcher configuration. Yeah, and for convenience you can even point the install command to the source folder of your dispatcher config, and it can automatically zip up the contents and deploy them. Thank you.

All right, we do not have any more questions; we have answered all of them, great job, team, so we will slowly close the session. I would like to thank Julian, Carl and Natalia for this great session and for answering all the questions, and the audience: thanks a lot for joining and thanks for your attention. We will announce the upcoming GEMs session soon; at the end of September... I am sorry, it is going to be on October 9th, on AEM Sites and APIs, and you are going to see an announcement coming out shortly. Okay, and with that, I wish everyone a great day or evening. Thank you. Bye bye.