Front-end code pipeline
Learn how to use the new Front-end code pipeline recently introduced in Cloud Manager
Continue the conversation in Experience League Communities.
Transcript
Okay, let's get started. Hi everyone and welcome. My name is Vlad Bailescu. I've been with Adobe for almost seven years, working as a software engineer on the Sites side, working on HTL and the Core Components and anything that enables people to develop faster with AEM. And together with me is Ivar, who's a cloud software engineer. And we're going to talk to you about something our teams have built: a way to deploy front-end code easily with AEM as a Cloud Service. Something that's quick and simple to do. So I'm going to be talking about the way things work currently and what changes with the new front-end code pipeline. I'm going to show you a small demo, and then Ivar will take you through some of the technical details and tell you a bit about how everything was implemented in the back. And then we'll have some time to answer your questions. So right now, if you want to use AEM as a Cloud Service, you have to set up your program and your environment, and you have to create a project or get one created from the archetype. And then, if you want to do any kind of development, you need to install the local SDK, create your templates and your content policies, customize how your website looks, and then everything needs to be integrated back into the client libs and deployed to AEM as a Cloud Service, which means pushing to the Cloud Manager Git repository and waiting for some time, usually over 45 minutes, to get your change live. And then, whenever you need to do one small change, for example, you need to change the look and feel a bit, you have to go through all of this again and again and again. And if you are a customer and you want to create a website, you need to set up AEM as a Cloud Service, but then you need to hire a backend developer that knows all the tools that AEM uses, like Maven and Java and OSGi and HTL. And they will be the ones that are going to be doing your code deployments for you. 
And once you have some sort of website running, if you have a front-end developer that knows how to make your site all jazzy and nice, then yeah, they will be working with stuff like HTML and CSS and JavaScript, and they will need to send all their changes back to the backend developer to integrate everything into client libs. And then he will do the deployment again, and hopefully you'll have your website updated. So our idea was: what if you could just create and edit your website as you start using AEM as a Cloud Service? You don't need to do any kind of deployment; you just start and create a website. And then the front-end developer can see what you've been working on and what kind of content you've typed in, and she can make some local changes and see them live. And then, when she's happy with the results, she can push the update directly to AEM as a Cloud Service. Your website is updated and everything looks good. And that led us to the new front-end code pipeline, which is a modern solution. It's based on the JAMstack model, and it deploys and serves static files from a CDN. In order to get this working and be able to create a website without having to deploy a project, you would need to seed your website from a site template. Then the build pipeline is going to be fast and very loosely coupled to AEM. It will use Cloud Manager, but for the beta and for our internal testing we've been using GitHub Actions, and basically it can be anything, because it's just a front-end build. And deployment is just about running the build pipeline, collecting all the output in a tar.gz file and uploading it somewhere, where it is then processed, and everything that's in there is made available on the CDN over a static domain as immutable files. And AEM is also updated to reference your files from the new location. So if you look at the general model, AEM is serving HTML through the CDN. 
Once you have your front-end module sources, you can just use those to have a live preview of how things would change if you change something in the front-end module. But you don't need to have AEM running, you don't need an SDK; you can just do a live preview on top of your existing live instance. And when you're happy, you just push your changes into a Git repository, and then the pipeline will build your front-end module. It will collect all the outputs and put them somewhere; from there they're going to be picked up by the front-end code deployment and made available somewhere in a blob store. AEM will also be notified about that, so that it can start referencing them, and then the files will be made available through the CDN under those static URLs. When you have your JavaScript, your CSS, fonts and whatever static images you want to include in your front-end module, you will have them all served from the CDN. And this would take just a few minutes. So yeah, let me just show you a demo. In order to not have something happen like it did for Facebook today, I recorded it. But I'll walk you through it. So when you have your AEM as a Cloud Service instance, you can just go and create a site based on a template, and you can import a template. We have one available, the standard site template; it's on GitHub. And then you type in a name for your website, you click on Create, and your website is created, just like that. Then you can navigate your website. It's a full-fledged multi-language website with some pages. You can edit your content, and it has some basic styling. Nothing too fancy, but it still offers some options. It uses the Core Components and the Style System. You can already start putting in and scaffolding some content, even if you don't have any kind of programming knowledge. And it looks kind of decent, like a website. And when you want to customize it, you can just download the theme sources, which is basically just a zip file containing a front-end module. It's just a Node.js project. 
You can just npm install and then npm run build, and it will generate all the files needed to make your website look nice. It will generate CSS, JavaScript, and whatever other resources you have defined, or the template creator has defined. And if we load this into an IDE, we can see that it's just JavaScript and CSS, well, Sass in this case. You can also have a live preview. You can just run npm run live, and it will build your website, or rather your front-end module. It will open a proxy to your original, live website. And basically it will replace whatever JavaScript, CSS, or resources you might have in your original website with those coming from your local build. So if you pass the page that you want to see to the proxy, you can just navigate there. It will show you the website, and if you want to make any kind of change, for example, you can change the background to something more jazzy, like pink or something. Once you've saved, it will detect that something has changed. It will rebuild the files, it will trigger Browsersync, and you can already see in your proxy that the color has changed, and I think it looks great. So yeah, we should just commit this, add it to Git and see how we can get it live. And getting it live is just a few steps. You just add it to Git, you commit it, you push it. And once it's in there, you can go and trigger your front-end build pipeline. For now, I can't yet show you the Cloud Manager pipeline, because that's not yet publicly available. But for the beta, like I said, we use GitHub Actions, which are basically running the front-end build, and it's just about running npm audit and npm install, well, actually npm ci, because that's not touching the package-lock.json file, and it leads to more repeatable builds. But then, at the end, it will run npm run build, and when that's finished, it will collect everything that's in the dist folder. 
It will create a tar file out of it and upload it somewhere, and from there it's going to get picked up, unpacked, and yeah, basically it will be made available to AEM. So AEM, instead of referencing the default CSS and JavaScript theme, will now reference our new CSS, the one with the nice pink background. And yeah, that's it for the demo. I'm passing over to Ivar, so he will tell you about the technical implementation and give you more details about how that works. Well, hello everyone. Thank you a lot for the wonderful demo. It's great to see the front-end code pipeline in action. So far we have covered what the front-end code pipeline is and what it delivers to the customer. So now we can talk a bit about the technical details of how we enabled this feature. And we can start from the end user, the customer, here. Yeah, they will make a request to AEM over a custom domain, or an author- or publish-specific one. So it goes through the CDN network and reaches the AEM instance, which will serve the dynamic content, the HTML files. As Vlad mentioned, we are following the JAMstack model here, and we are serving static files from a separate domain. In this case, it's the one with the static prefix here; we call it the static domain. So the browser will automatically go and try to fetch those files. So again, it will go through the CDN network here and reach the Azure Blob Storage where we actually host those files. We call it the workspace here, and it hosts the CSS and JavaScript files that we can reach. So now the task is, when the front-end developer makes a code change, we want to get it to the workspace. And the way we can do it is, the developer will execute a deployment pipeline, much as we just did with the GitHub Actions, that will run the npm build, generate a tar file and upload it to yet another Azure Blob Storage that we call the repository. So now the task is to get the tar files from the repository to the workspace and make them available, so we can reference them individually. 
And to do this, we introduced a front-end code deployer component (FCDC) that talks with the repository and workspace blob storages and identifies the packages that we have in the repository and want to get to the workspace. So it synchronizes the data between the two and uploads files to the workspace. Once FCDC does that and uploads these files to the workspace, they become available over the static domain. And the last step here is to notify AEM about the new availability, so it can start referencing these files. We can go to the next slide and talk a bit more about what FCDC does internally. So FCDC is running a reconciliation loop continuously. And the first thing it does is reach out to the Kubernetes API server and fetch configurations and secrets. These resources encapsulate all the information that FCDC needs to identify which front-end code pipelines to enable, which AEM instance to talk to, and so forth. So it grabs all this data, and then the next step is to identify what packages we need to get from the repository to the workspace. And the way to do it is first to go to the workspace and find all the packages that we have installed there. So that's the first step here: we are fetching the state file from the workspace, which has all the metadata information that we need. Then we do a similar operation on the repository side, where we fetch the packages that we have installed there. Remember, we have the tar files there, the source package files. So then we have all this information in memory, and FCDC has internal logic that decides which packages we need to upload to the workspace. And then we process each of these packages concurrently: we go ahead and download them from the repository, we unpack them locally, and we upload them to the workspace. So yeah, those are the steps here. Once we upload those individual files to the workspace, they are already available over the network. 
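The reconciliation loop described here can be sketched roughly as below. Every function name in this sketch (fetchConfig, fetchWorkspaceState, and so on) is a hypothetical stand-in for the real Kubernetes and Azure Blob Storage calls, not FCDC's actual API:

```javascript
// Hedged sketch of one pass of the FCDC reconciliation loop.
async function reconcileOnce(deps) {
  const config = await deps.fetchConfig();                   // configs + secrets from the Kubernetes API server
  const installed = await deps.fetchWorkspaceState(config);  // workspace state file: what is already deployed
  const available = await deps.fetchRepositoryState(config); // tar packages sitting in the repository
  const pending = available.filter((pkg) => !installed.includes(pkg));
  // Download, unpack and upload each missing package concurrently.
  await Promise.all(pending.map((pkg) => deps.deployPackage(pkg)));
  if (pending.length > 0) {
    await deps.notifyAEM(pending); // POST the processed packages so AEM re-references them
  }
  return pending;
}

// Tiny in-memory stand-ins so the sketch can run on its own.
const deployed = [];
reconcileOnce({
  fetchConfig: async () => ({}),
  fetchWorkspaceState: async () => ['theme-v1'],
  fetchRepositoryState: async () => ['theme-v1', 'theme-v2'],
  deployPackage: async (pkg) => deployed.push(pkg),
  notifyAEM: async () => {},
}).then((pending) => console.log(pending)); // logs [ 'theme-v2' ]
```

In the real component, the "state file" comparison carries richer metadata than a simple name check, but the shape of the loop (fetch state, diff, deploy concurrently, notify) is the same.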
We update a state file in the workspace; that's some metadata information we maintain so it's easy for the component to identify what we have installed there. And then the final step, again, is to notify AEM about these changes. And we do this through a POST endpoint. So we call a POST endpoint against AEM with all the packages that we processed just now. And at that point, the job of FCDC is done, and it will run this same reconciliation loop again. And if there are any changes that need to be made, it will perform those. Yeah, so it repeats these steps. The system here, as we can see, is quite decoupled. We have two separate Azure Blob storages, one for hosting the source files, the tarred versions, and one for hosting the workspace, the target files. And the component that handles the data synchronization between them is also separate; that's the FCDC component. So we achieve this decoupled system here that is separate from AEM, and that was the target we wanted to achieve. If we go to the next slide here: yeah, so when designing FCDC, we had some considerations that we wanted to implement within the system. First of all, we wanted a single FCDC instance to handle multiple deployment pipelines. FCDC achieves this by using dynamic configurations. The first step in the reconciliation loop is to fetch resources from the Kubernetes API server, so we can dynamically add a new pipeline that we want to enable, or remove an existing one. And that will happen almost instantaneously. The next thing was, we wanted to have an advanced filtering mechanism to target the packages that we actually want to expose to the internet and upload to the workspace. By default, FCDC will identify the latest versions of the packages from the repository, the source Azure Blob Storage, and upload them to the workspace. However, if your source packages are following semantic versioning, you can use semantic version queries to target packages. 
For example, you can say you want to upload the 0.x versions of a certain package, or use some range query for targeting packages. Another consideration was that we want to version our deployments, and we want them to be immutable. So when we run the deployment pipeline, we want to make sure that the package tar files that we generate and upload to the Azure Blob storages are not overwritten, and that they get to the workspace eventually. To do this, we adopted a unique version naming strategy. Here we combine the package name and the package version with the timestamp at which it was generated and the commit ID, and that gives us a unique name. Now, since we upload the same package to the workspace, we would be exposing the same name in the workspace, and we want to avoid this, because it contains some sensitive information, such as the timestamp and the versions of the packages that you work with. So to obfuscate that information, we take a hash of that name, a SHA-256 hash, and use that instead. In the URL here that you can see, css/theme.css is the path to your file within your package, and the prefix is the hash of the package name, the one that we actually use. And that's what you will see if you inspect the HTML content that AEM serves. And then finally, we also wanted to make sure that performance and memory consumption were in line. One of the goals of the front-end code pipeline is to accelerate the deployment of front-end code changes to production, so FCDC needs to be performant from that aspect. It can process 1000 unique packages within five minutes. So what that means is, if you suddenly upload 1000 unique packages into your repository and you want them available right away, it will take less than five minutes for this component to get all these changes into the workspace and make them available over the static domain. And then finally, so far we have been speaking about the repository and the workspace, the two Azure Blob storages. 
We are only uploading files to them, right? The deployment pipeline is pushing tar files constantly to the repository, and the FCDC component is constantly moving files to the workspace. At some point, we want to clean these containers up from outdated packages or packages that are not referenced anymore. We have separate systems in place that will do a periodic cleanup of these containers, so the memory consumption of those Azure Blob storages is kept in check. And that is it for the design considerations for FCDC and for the system that enables the front-end code pipeline. Overall, the target of the pipeline is to accelerate the deployment of code to production, and we enable this through this system. I hope everyone enjoyed the presentation. Thank you everyone for attending and for your attention. And if there are any questions, we'll be happy to answer them. There were a couple of questions; let's go a bit through them. I saw that Hyman has already answered a few of them. People are asking if this could be used for an on-prem setup. Yeah, but you'd have to create the same kind of architecture yourself. At the moment there are no clear plans to add support for an on-prem implementation, but we've seen partners and customers building similar stuff. And now you have the option to have this running for you automatically on AEM as a Cloud Service. Another question, from Lamar: this delivery model covers static SPAs; will the front-end pipeline support a workflow where we are seeking to achieve server-side rendering with something like Next.js? At the moment, this covers static assets, front-end assets. I can't really comment right now on the future, but yeah, it's going to be interesting. Megan asks if there is documentation to implement this process in self-hosted AEM. You would have to figure out how you want to do that based on the stack that you have and the deployment and the infrastructure that you have there. 
What we've built here actually plays nice with the infrastructure that Adobe has in the cloud. Another question, from Mir: does this mean we are getting away from client libs? We could, for some of the websites. You could still run some of the front-end stuff from client libs, if you have some base client libs that you want to use. For example, the standard site template uses the Core Components. Those Core Components already come with a base client lib package that offers some basic styling, and then whatever is done in the site template, the theme that you can customize, the front-end theme, can add on top of that. Similarly, if you want to create your own site template at some point, or use one that's created by an Adobe partner, you could still have some client libs that get deployed classically, but then you would have just the framework part, some components, some client libs, and for each website you would have just a small, customizable package that changes the overall look and feel. More like a theme. Another attendee is wondering if this is an official Adobe development or a third-party tool, and he would like to test it today: is it possible, and what are the requirements? Well, it's an Adobe development. It's built on top of AEM as a Cloud Service, so you would need to have an AEM as a Cloud Service program, and you would need to join the beta program. It's still not generally available right now. You'll need to reach out to your account manager and see how you can join the beta program, or reach out to us directly. Join the beta program, and you can try it out. Another question, from Lamar: if you are using a single deployment model with AMS on AWS or Azure, Cloud Manager will be available, and by extension the front-end code pipeline will be available; is this correct? Right now it's only enabled for AEM as a Cloud Service, and I can't really comment on plans to extend it further to AMS. Another question, from Matias. 
How is this integrated into the backend AEM code deployment pipeline? QA teams will typically want to test in stage the combined product, the backend plus the frontend, which is then rolled out. Well, right now, when you do a front-end deployment, it's made available on an environment on both author and publish. And you can decide later on if you only want to make it available on author, for example, or if you only want to make it available on stage. It's still going to be a Cloud Manager pipeline. It will still run in two steps, for the stage and prod environments. It will first deploy to stage, and you can make it wait for manual approval, or you can have your custom tests, front-end tests, and then allow it to pass on to production. And for production, for example, if you are still unsure that you want to take these changes that quickly to the publish node, you can just configure it to only update your authors. You can have another check there, and then, when you think it's okay, you can publish the change and have it live. Another question, from Dale: is this pipeline assuming hosting via AEM and the AEM CDN for the public-facing pages you'd be deploying? It could work with different pages as well, but it works better if you're using AEM, of course, because it's tightly integrated, so you get the updates automatically when a deployment goes through. It also works with bring-your-own-CDN, in case you want to use your own CDN with AEM as a Cloud Service. So you can configure the deployment to use your own domain name or a custom prefix there. You can reference the files and load them from your own CDN if you want. And that CDN will just defer to Fastly, but in the end it will be served from your own CDN. And in the question and answer tab, I see from Yuval: are there any links to the configuration? We don't have the feature available yet for everyone. 
So there are no public links yet for documentation, but when it becomes generally available, you will have documentation that describes how it works. Until then, the beta program will give you more information. Cool. I guess that's it. We also have a dedicated forum as well; if you feel the need to gather more information, send us a message there. Thank you, everyone.