Adobe Experience Manager Champion Office Hours - Sites Focus

Focus on AEM Sites.

Transcript
We are good to go. So I'm going to pass it over. There we go. I'm going to pass it over to you, Jessica, and I'll let you take it from here.

Yeah. Hi, everyone. Welcome to the first ever AEM office hours presented by the 2022 AEM Champions. So first, I'm just going to go over the agenda really quick: a brief touch point on what the Adobe Champion program is. Then we'll get into our panelist introductions, and then we'll start by answering some pre-submitted questions before going into live questions. You can ask those through the chat or out loud, if you like. And then we'll wrap it up once those questions are through.

So, a brief overview of the Champion program: the Experience Manager Champion program recognizes practitioners that play key roles as thought and industry leaders and product influencers. We do this by sharing our technical expertise, best practices, and strategies with the broader customer and partner community, kind of like what we're doing today, and we get to collaborate with Adobe product leaders to support the future vision of AEM. The benefits are knowledge sharing and networking with AEM users and developers from around the world. I think Robert said it earlier, but the panelists today are actually from all over — the U.S., U.K., Qatar — which is really exciting. We also collaborate with Adobe product leaders to provide feedback for future visions of Experience Manager and get some sneak peeks. I can attest to this; there have been some wonderful conversations with the Adobe product leaders, so that's been a really great benefit. And then we engage in exclusive speaking, content creation, and personal branding opportunities to showcase our expertise. Today is a great example of that.
Our first ever office hours, where you guys get to ask questions and we get to answer them. I'll start by introducing the panelists for today. Greg, if you want to introduce yourself, we can just go down the row.

All right, everyone, I'm Greg Demers. I'm a product owner for web content at zero price, managing a team of content authors, basically, and working with the rest of the champions here to advance Experience Manager Sites and all the solutions Experience Manager has to offer for all of you. Nice to join and nice to see everyone. Looking forward to the session and future sessions.

I'm Brett Ryschbach, SVP of the Adobe practice at Bounteous, which is a full-service partner of Adobe. I've done a lot of the hands-on implementation of Adobe platforms — AEM as well as connected Adobe Experience Cloud projects. I've done a lot of the hands-on work myself, but now I manage the global team that does that work for Bounteous.

Meghadesh. Hi, this is Meghadesh. I am from Qatar Airways, where I am a senior technical architect owning the Adobe technologies within Qatar Airways as a brand, managing all the B2C websites, right from loyalty to trade portals to corporate travel. I also manage the Adobe stack — the Adobe Analytics and Adobe Target ecosystems — together with that.

I guess that's my cue. So, Rami El Gamal — you can see the name under the picture. I am a senior solution architect with a lot of focus on the Adobe stack, anywhere from AEM to Audience Manager, Target, Analytics, AEP, even Adobe Commerce. Anything under that umbrella. I run a little consulting agency called Crony Consulting, and I am here to answer your questions.

Martin. Yes, I'm Martin. I'm CTO of XIO in the UK. We're an IBM company; we were acquired by IBM about seven years ago now.
And so, yeah, I've previously been an AEM developer and am still very focused on hands-on development, solution and technical architecture. Similar to Brett, I'm also managing our team of developers and architects.

Great. Thank you. And then this is just me, the voice you've been hearing. My name is Jessica. I'm a senior web designer at Insight and part of the 2022 AEM Champions, and I'll be monitoring today's session. So we'll start off with the questions that were submitted and then dive into any live questions you want to ask. Let me go ahead and ask that first question. This one was just asking for a high-level overview of GraphQL with experience fragments.

All right. GraphQL and experience fragments. Since we got this one yesterday, we had a little bit of time to think on it. It's interesting, because GraphQL is largely meant for structured data and hard types, right? And an experience fragment really isn't that. An experience fragment is a bit of a mixture of data and presentation. One experience fragment might be a hero, a teaser, and a couple of value propositions; another might be a video, an image, and a text block. So there's really no consistent structure to experience fragments. However, there are cases. It's interesting because this question came in as more of a straight-up topic — it didn't have a specific question — so you'd want to dig into what use cases you'd need it for. We definitely thought through a few use cases around finding experience fragments. If you need to use GraphQL to find an experience fragment, one way to do it would be to store some of the metadata of your experience fragments in content fragments.
So you could then query the content fragments, and the content fragment itself can have a reference field which points to the experience fragment. It'd be a two-step process: you call GraphQL, fetch the relevant content fragment that references the experience fragment, and then you have a direct URL to pull that experience fragment. That would be one way of doing it.

Another way applies if what you actually want is pieces of an experience fragment. Let's say it's not really the experience I want; the reason I want to get at experience fragments is because I've got, say, store locations buried in them. Say I make an experience fragment for every single one of my 20 store locations, and now I want to query my store information. What you could do there is abstract your location data into a content fragment. Then in your experience fragment, instead of authoring the data directly in there, have a component that references the data and pulls it from that content fragment. So when you want the data, you just pull your location data from GraphQL, but your components can still use that same data, and you're not duplicating your content. So those are a couple of ways we've structured it. I don't know if anyone has other thoughts on that.

Yeah, I was going to say we thought about it in very similar ways: the differentiation between the data — which is what GraphQL and content fragments are designed for, the structured, searchable stuff — and the experience fragment, which, being experience based, is going to be much less structured. And you do have to think about the data structure to make it so that you can query it.
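As an aside, the two-step lookup just described — query a content fragment model that carries a reference to the experience fragment, then fetch the fragment by path — can be sketched client-side. The model name (`storeList`), the `xfReference` field, and the endpoint are hypothetical, not a real schema; only the filter syntax follows AEM's headless GraphQL conventions.

```typescript
// Shape of a (mocked) AEM GraphQL response for a content fragment model
// that carries an xfReference field pointing at an experience fragment.
interface StoreFragment {
  name: string;
  city: string;
  xfReference: string; // path to the experience fragment
}

interface GraphQLResponse {
  data: { storeList: { items: StoreFragment[] } };
}

// Step 1 would be a POST of this query to the site's GraphQL endpoint,
// e.g. /content/_cq_graphql/my-site/endpoint.json (hypothetical).
const query = `
  query {
    storeList(filter: { city: { _expressions: [{ value: "Doha" }] } }) {
      items { name city xfReference }
    }
  }`;

// Step 2: given the response, resolve the experience fragment URLs to fetch.
function xfUrls(res: GraphQLResponse, host: string): string[] {
  return res.data.storeList.items.map(
    (item) => `${host}${item.xfReference}.html`
  );
}

// Mocked response standing in for the real GraphQL call:
const mocked: GraphQLResponse = {
  data: {
    storeList: {
      items: [
        {
          name: "Airport Store",
          city: "Doha",
          xfReference: "/content/experience-fragments/stores/doha",
        },
      ],
    },
  },
};

console.log(xfUrls(mocked, "https://example.com"));
// → ["https://example.com/content/experience-fragments/stores/doha.html"]
```

The second fetch is a plain HTML (or `.model.json`) request, which is exactly the "direct URL" step the panel describes.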
You've got to have it in that more structured form. And it does make you think, as you were saying with your store locator, about how you store the data. If you want to be able to access it via GraphQL and through experience fragments, you almost have to flip it over and say, OK, we're going to store it in a structured way where we can query it, and then use it in different ways. That might be directly through GraphQL for one front end, or it might be manually authored within site pages or within experience fragments the other way around.

Yeah, I think that — OK, go ahead. OK, just one more point I want to add: when you are using GraphQL-based content fragment data and injecting it into something like an experience fragment, you also have to be a little careful about how you are flushing that experience fragment whenever there's a change in the content. If a content fragment is only served via GraphQL, AEM already handles this: whenever you change the content, it will automatically start flushing its references. But once you put a content fragment into an experience fragment, your data is already baked into the HTML. So then you have to find a way to explicitly flush the cache for those experience fragments one by one. That is a customization — a heavy customization — and definitely not the right way to go when you're using it with websites. You add overhead whenever there's a change in the content fragment, and you have to find an explicit way to refresh that cache. So that's one of my thoughts.

Yeah, that makes perfect sense. And to flip the question a little, I would challenge the business case here, because the true question is why, right? And I've seen it; I've had full sites.
So think pre-content fragments, back when experience fragments were still fresh and exciting. I've had clients where, because they wanted the authorability within an application — they still drag and drop and make it all pretty — they wanted to inject it into a different application running under something else, whether a single page application or a typical Java application with JSPs on the front end, etc. And because they wanted that mix and match, what we ended up doing was creating experience fragments. The application would make a call to them — at that point a typical HTTPS call — pull in the HTML, and render it within the scope of the application. If that's what we're trying to get to, then it's not GraphQL, right? You're doing one for one. The moment you start treating the HTML of an experience fragment as the data, for lack of a better expression, it's not pure. It's a little dirty now, because you have all this HTML structure in it.

So I think understanding the why will make a lot more difference than the how, because truly, just like everybody said, if it's about structured data, you need to drop down a level in between. Another piece of advice: the more layers you add in between — and this is not just AEM, that's architecture in general — if the data is processed multiple times on the way to its destination, guaranteed, by the time it gets there, it's a lot more work to deal with. One of the great points that came up is: if you want the content fragment rendered as HTML, you can use the node or resource directly from the content fragment within AEM to render that experience fragment component. But if you're using it outside with a different application, use an API via GraphQL straight against the content fragment. My main point here is: don't double hop.
So make sure — and I think caching is a very, very real issue too, as soon as you start hopping between applications — don't go content fragment, to experience fragment, then make a call to get the HTML to share it into, say, a single page application. Just doing that, you have three layers of content, three entry points, three intersections essentially where the data could get corrupted one way or another. So I would challenge the why first; I think that's the theme of my point here.

Yeah, and I guess the big advantage — the reason you would ever go for GraphQL anyway — is that it's a query language: your end application can make a specific query for the data it wants and effectively filter out the data it doesn't need, unlike a plain JSON feed. Previously, your JSON store locator would be a feed of all of the stores, and then you'd have to do all your filtering client side. Whereas with GraphQL, the beauty is you can say, OK, give me all of the stores that match these criteria, and get just the data you want. If you're adding all those extra hops in the way, you're potentially removing all of the benefit of it being GraphQL in the end anyway, because you're pre-filtering the data in a way you have no control over at the front end. Yeah, that makes sense. We're always thinking about schemas and types, I guess that's the bottom line. Yeah — structured, borderline relational, I'm going to call it. Sorry, Jessica, back to you.

No, you're fine. Thanks. I was just making sure I had all your talking points. So the second question was: how do you publish context-aware configurations? I see that if the config fields are a collection type, they're stored as children of the node but are not published when publishing through the editor. Yeah, so there are two parts to that question.
Question number one: how do you publish it? The simple fact is you can use — whether you're on-prem, on AMS, or on AEM as a Cloud Service — a replication agent, right? You can even use a custom workflow. We're thinking of things that live under /conf. So typically what I've seen in the past is one of two things. You can do a tree publication of that node and everything underneath it, and then it goes into your publish instance. There is no out-of-the-box mechanism for publishing context-aware configurations — it's not like a cloud configuration where you can go through the interface and select it, or a Target configuration, etc. That's one way of doing it.

The second part opens a bigger topic: we need to define what is content — something you publish — versus what is code — something you want in your code base because it's consistent between environments. My personal opinion from experience is that because you're moving that same exact configuration between tiers, it should not be content. It's something that should be in your code base and maintained there, so that as you go through the process, you have consistency.

Look at editable templates, for example. A lot of the time an editable template sits right on that line between black and white. It's truly a gray zone: you want the flexibility of people going in and authoring the templates, but at the same time you have a minimum of three tiers, right? Your dev, your stage, and your prod. Typically you want to test in the lower tiers, except if it's truly something you need right away. But then as you go up, you have to follow the same steps manually, which is a place where errors can happen. And you have to regression test every single time.
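If you do treat context-aware configurations as code, as suggested above, one common shape for that is simply including the relevant /conf subtree in the content package's `filter.xml`, so every deployment carries the identical configuration through dev, stage, and prod. A minimal sketch — the `/conf/my-site` paths are hypothetical, though `sling:configs` is the conventional location for context-aware configs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- filter.xml of the code content package; /conf/my-site is a placeholder -->
<workspaceFilter version="1.0">
    <!-- context-aware configurations travel with the code -->
    <filter root="/conf/my-site/sling:configs"/>
    <!-- optionally lock down editable template structures and policies too -->
    <filter root="/conf/my-site/settings/wcm/templates"/>
    <filter root="/conf/my-site/settings/wcm/policies"/>
</workspaceFilter>
```

Whether to include the template and policy filters is exactly the gray-zone trade-off the panel describes: doing so removes instantaneous authoring of templates in exchange for consistency across tiers.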
So again, it depends on the frequency of change and on how impactful the change is. My personal opinion: I would put it in the code base and push it up that way. But let me know your thoughts as well.

Yeah, I definitely agree. On code versus content, there's a decision to make around which of these things you're ever going to change without doing some kind of delivery or deployment. There are things you genuinely need to change instantaneously — I'm going to go change this option and push it out. But most of the time, even for a lot of those things under /conf, you're going to want much more control over the process of making those changes and delivering them, so building them into your code base — with environment-specific variations if you need them — makes sense.

And then the editable templates thing as well. It's rare to have a setup with such experienced template-editor-type authors that you get real benefit from leaving it uncontrolled, because the danger is that it's really easy to go into an editable template, change something, save it, publish it, and destroy all of your pages, because it is instantaneously made live. Unless you've got really top-tier authors who actually need and want that level of control, and who really understand the dangers of what they could do, you don't want to give it to them. And a lot of the time we find that even if they're given it, they don't use it.
What they actually tend to do is make those changes in a lower environment, test them in real detail, and then repeat them in the higher environment. So they end up effectively delivering the code themselves: they write down, OK, this is what we did, and then redo it in the other environment. You're effectively doing a per-environment release cycle, just manually. And as you said, there's the danger that someone misses a step, or one little policy change doesn't quite get done right in one environment, and then you spend ages investigating why it worked in pre-prod and not in prod — because there's this one tiny thing that got missed. "It works on my local."

I'll flip it on its head one final way here. You said separating code and content. Well, if it is something that actually is content — something you do want your authors to be able to modify — another option you have in ACS Commons is a feature called Shared Component Properties. The way those work is that on your components, you can have not only your normal component properties but also shared component properties. So you can have a shared property that's specific to a component across an entire site but can still differ between sites, similar to context-aware configs. And then there's also the concept of a global config, which works across all components if you want to use the same property everywhere for whatever reason. Those are handled in the page authoring experience, so it's treated as full-on content. And then when you publish it, you publish the homepage to get it out, which then flushes the cache — because if you're changing this type of configuration,
there are probably some things on your site that then need a cache flush to actually show that the value has been updated. So that's just another option out there, depending on your use case.

Yeah, just adding to that point: whenever you are designing a website and its authoring, you also have to understand your authors' maturity level. If your authors are well advanced, know the product, and understand what they're doing, then accordingly you can give them more advanced features. But if your authoring community is scattered across the globe, you have to consider what level of complexity you want to bring in. So be clear about that when designing a website; you should take it into consideration as well. It's very critical.

Yeah, I think that's a key distinction. From my end — I know we have one more question coming up — I see it a lot, what you were saying about content authors having a good idea of what they want to do on a page. It's good to have that global configuration Brett was talking about, because I've seen content authors try to find creative ways to use certain components, and then you might end up breaking the whole templating reference, like Martin mentioned. Good points there by everyone.

Perfect. So we actually had another question come through the form, and then I saw a question in the chat. I'm going to do the form question first and that one after. I'm also going to paste the questions I'm asking into the chat, so if anyone missed one, they can reread it. This is the question.
A disk usage report on a client's author indicates they have almost two terabytes of data in /var/replication, while their actual content size in /content is around 0.5 TB. Currently, none of the agents on author have pending queues. I would like some insight into how to interpret the data in /var/replication and what the possible reason is for this data not being cleared. And I'm going to paste that same question in here.

Awesome. I can start. I've seen this happen in the past, actually a couple of times. So /var, by nature — a good way of positioning it is that it's a bit of a garbage collector, because a lot of things that have happened, or are mid-process, will stay there. Which means that even if you look at your replication queue — because eventually your replication queue is going to time out, somebody can clear it, a lot of things can happen — that does not mean /var/replication is actually being cleared.

The things I would look at: you've already looked through your distribution — I'm assuming this is either AMS or on-prem, just because we're talking about replication rather than distribution. If your queues are all clear, my next step would be to look under /var/replication and see exactly what's taking up the majority of that space. The second thing: are we running all the cleanup jobs? There is online and offline maintenance — is it being done? A lot of the time those go in and clean up the loose ends. And again, in all honesty, without going in and walking through the nodes to see what's in there and whether it's connected to anything else: look at your queues, even your workflow queues, not just your replication queues, because you can have a whole bunch of workflows just sitting there doing nothing.
Once those are cleared, make sure you're doing your compaction and your online and offline maintenance. If that doesn't work, that's when you have to put the gloves on and go through those nodes and see where they're coming from, assuming you have access to CRXDE. And again, it depends. Last but not least, if you're on AMS or AEM as a Cloud Service, I would file a ticket, because it could be something that requires the SkyOps team to take a look as well. And I'll pass it on to the folks on the call.

I have one insight on that. One time we faced this kind of problem and identified that you shouldn't focus only on the replication agents. There are multiple workflows running — for example, rendition workflows, which are not replication, right? You uploaded heavy images, they're going through the rendition workflows, and those may still be running and occupying your space for processing. So you should also look at which jobs are running. You can use the JMX console to understand workflow maintenance. Go through your queues — which are currently running and which are archived — and there you can clearly see that something is not going via a replication agent but is being processed as a job, generating the renditions. So if the image sizes are heavy, you can consider a mechanism to offload that processing so it doesn't disturb your author and impact performance.

OK, great. This is a question from the chat: has anyone experienced issues with using experience fragments in a single page application with Target? The problem seems to be that when the XF is injected from Target, the XF is not rendered because it is not part of the model JSON, which AEM uses to render content. Go ahead, Rami.
Sorry — I have actually read into this recently, and that is expected. If you look at the flow of the request: you request AEM, and AEM generates your model.json, because everything eventually needs to be passed to your front end, your SPA application. Let's assume it's React in this case. You read in the JSON, you find the right resource types, you pull the information from there, and eventually you render the page. All of that is happening client side, though. You make one request to AEM to pull in your data, and everything else happens client side.

When Target comes in, it's almost a timing issue. If Target fires early — before that model.json, before the DOM is rendered — Target is lost. Target won't be able to render anything; it does not inject anything into that model.json. I've had to play around with this, and honestly I haven't seen much written about it. A lot of the time we actually had to delay the asynchronous call for Launch to make sure Target comes in a little later. You need the page to be painted; if the page is not painted, Target can't do anything, and its content will never be part of that model.json. So you need to delay Target. Either increase the performance of the JSON so the page is painted earlier, or — which would be my recommendation — have the Target script run after the fact. That's honestly it in a nutshell.

One more possible solution I can think of: when you export an experience fragment to Target, you can export it as JSON itself rather than as experience (HTML) content, instead of trying to resolve it via the .model.json — there is out-of-the-box functionality to export it as JSON.
Once you export that JSON, it is immediately rendered from Target, so you don't need the extra step of writing into the model JSON and then getting the content. That's one solution you can consider.

Yeah, and I guess the other end of the spectrum there is doing a tighter coupling. The default way of doing it is Target over the top, where Target doesn't really care what renders the page; it just effectively tries to stick some content in there and modify the DOM on top of whatever you've done. And, as Rami said, you end up with the issue that, particularly with a client-side-rendered React app, it only works if it happens at the right point. It has to happen after the React render; otherwise the React render just writes over the top of it again, and you've effectively got the two things fighting over the DOM. Whereas if you have a tighter coupling, the React app is effectively Target-aware: it understands that there may be targeted content that's going to replace some piece of content, and handles it itself.

Right. I mean, you can go quite far with that. We've done it in some very specialized cases where we wanted to fill in very complex components on the front end that Target just couldn't handle completely from its side. Take, for instance, a carousel where we want to modify not only the contents of the slides but the number of slides. That's completely rewriting the HTML of the carousel, and you don't want to put all that JavaScript and CSS into Target.
We've had cases where, instead of waiting for Target to inject its content, the application itself asks: hey Target, do you have any personalization information for me? And you pull it in that way. It's not always the best way, and you're definitely doing a much tighter coupling of your application and your personalization. But in particular situations, where you have a high-profile experience that you know you want to personalize, it's another approach — you don't have to worry about coordinating whether this is done painting before that thing acts. And it may even help avoid some flicker issues. So that's just another option.

Yeah, and if it was your hero carousel at the top of the front page — something you always want to personalize, effectively per user — it might make sense to do an almost server-side, much tighter coupling: at the point of sending the content out, make it already personalized, rather than doing it after the fact client side, where you get all the flicker issues. There are always going to be cases where the user sees the wrong content for a period of time in the browser, or the timing doesn't quite work, and so on; you're fighting the two things against each other. If it's really important that those users get the correct content, then with a much tighter coupling you've got much more control and a guarantee that they're going to see the right version.

Sure. I'd be curious how many people are actually using server-side Target; I haven't seen that as much, though it would be an option.
The way we've done it, we actually still did it client side, because if you think about it, your React app is still getting JSON and rendering it. So you have an opportunity in your code that says: OK, here's my JSON from AEM, my default CMS content that I want to render; however, I'm going to quickly make a call over to Target and see if it's got anything different for me. And if you structure that JSON the same way — because, as was just said, you can get JSON out of Target as well — you can have the exact same component render both default content from AEM and targeted content from Target. Then it's just one single render, because you normally have that little pause on any single page app anyway, where you've got a blank page and now you've got to get the content and render it.

Yeah, definitely. We don't believe in blank pages. That never happens. But yeah, I've done one server side with Target. It wasn't a SPA; it was server-side, typical HTL. And there are a lot of considerations there. The problem with personalization is that it's the opposite of caching; it's the opposite of making things perform. You can't have it both ways. So a lot of the time when you're doing server-side personalization, know that you're either going to have to do something like a Sling Dynamic Include for that specific portion of the page to make sure you're getting fresh content — and then deal with what the CDN does from a caching perspective — or do it client side. The path of least resistance, at least from experience, has been client side. So, to Brett's point, intercepting. What would be interesting, though, is using the SPA editor, where it automatically picks up that JSON, and trying to intercept that to alter it.
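The client-side pattern just described — render once, from either the default AEM model or a Target override shaped the same way — can be sketched roughly as below. The component type strings, the model shape, and the `getTargetOffer` stub are all hypothetical stand-ins (a real implementation would call the Adobe Target client library), not a documented API:

```typescript
// Sketch: prefer Target content over default AEM content when both are
// shaped the same way, so a single component render handles either source.
interface HeroModel {
  ":type": string;    // resource type, as in AEM's model JSON
  title: string;
  imagePath: string;
}

// Stand-in for the real Target call; returns null when no activity qualifies.
function getTargetOffer(mbox: string): HeroModel | null {
  // mocked: pretend Target returned a personalized hero for this mbox
  if (mbox === "homepage-hero") {
    return {
      ":type": "mysite/components/hero",
      title: "Welcome back!",
      imagePath: "/content/dam/hero-returning.jpg",
    };
  }
  return null;
}

// Default content as it would arrive in AEM's model JSON.
const aemDefault: HeroModel = {
  ":type": "mysite/components/hero",
  title: "Welcome",
  imagePath: "/content/dam/hero-default.jpg",
};

// One decision point, one render: the Target offer wins if present and it
// matches the component type the page expects; otherwise fall back to AEM.
function resolveHero(defaultModel: HeroModel, mbox: string): HeroModel {
  const offer = getTargetOffer(mbox);
  return offer && offer[":type"] === defaultModel[":type"] ? offer : defaultModel;
}

console.log(resolveHero(aemDefault, "homepage-hero").title); // "Welcome back!"
console.log(resolveHero(aemDefault, "other-mbox").title);    // "Welcome"
```

Because the decision happens before the first render, there is no DOM fight between React and Target and no flicker, which is the whole point of the approach.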
I have not done it before, but now I want to look into it and see if it’s actually doable. If you have a separate SPA, you’re still in control, right, because you can make multiple requests, eventually combine them into one JSON, and spit it out to your application. So it’s just something to keep in mind as you’re going through the process, for sure. I think we’re good, Jessica. Yeah. So we got another question in the chat: what is the best way to store AEM site web form submission data in AEM if the client is not interested in spending money on an external database? If we store the form submissions inside AEM, how can we ensure that the content is synced among the publishers? Who wants to jump on that one? I’m pretty sure there are a lot of thoughts happening right now. I’ll start and then we’ll go around. So the best way to store form data in AEM is not to store form data in AEM. I’m slowing my words as I’m going through it. Sorry, I took the easy answer — the Adobe line: don’t store your form data in AEM. I’ll go through the why and the challenges that you’re going to face as you go through this process, and it’s on multiple levels, right? And then we can figure out sort of the hacky way of doing it. It gets more challenging as you go from on-prem, where you’re in control, to AMS, where now you’re on the cloud within sort of the Adobe world, and then to AEM as a Cloud Service, which makes this close to impossible, just because the containers function a little differently. So, AEM is designed for people to enter content and render it out, right? You have that idea of drag-and-dropping components, which gives you that unstructured data in the JCR.
Now, to store form data within AEM — and we do something very similar when it comes to content syncing — you have structured data in AEM that can be pulled out and put back in. So it’s easy in the sense that you have the same source: when you submit it, it will go and be stored in AEM. However, even something like reverse replication, just to give you a hint, is no longer a valid approach for taking content from publish back to author. So from a product perspective, Adobe doesn’t want that, right? You should be submitting elsewhere, and that’s why my initial answer was that it’s not the right approach at all. However, let’s say this is the only way. Now we have to consider that you might be storing PII data in AEM as well, and that elevates everything. Instead of AEM being a marketer’s tool for fast time to market, pushing content out to the www, now you have PII data, and the security on AEM is going to be totally different. People having access to those instances is going to be totally different. You have to consider all of that. Then, once you’ve gotten past all of the security issues — and I’m hopping over a whole bunch of stuff now — let’s look at what you would have done. Sure, you’re going to submit a form; it’s going to go to a servlet of some sort; that servlet is probably going to take that information and create JCR content. Now, if you are on a static underlying set of instances — so this could be AMS or on-prem, which would be the easiest — you have the ability to move that content between them, because you can simply fire off a whole bunch of calls to the rest of the publishers directly. Again, you’re going to be going through a lot of red tape here with PII, because every time you make that request, you’re taking information that you shouldn’t be taking and moving it between different instances.
And assuming you do that, even with AMS you still have a static set of IPs on your publish instances, and on-premise the same way, of course. Moving to Cloud Service, things change quite drastically, because you’re not in control of the IPs. There’s actually no static IP unless you do something like an egress, and that takes you through a whole bunch of other issues. Containers get destroyed and rebuilt. So, due to this complexity of implementation and PII, my strong recommendation is to go back and say: buy a cheap database. It’s not expensive. Have a singular source of truth and do it that way. Like I said, there are ways — I would just be very careful. I can’t find a clean implementation for this. I really can’t. I’ll pass it on to the rest of the team as well. It kind of comes back to the reason why all of the reverse replication things were deprecated and then basically removed: even when it was a supported way of doing things, it never really worked properly anyway. You would always get cases where stuff was submitted on one publisher, reverse-replicated to the author, forward-replicated back out to the publishers, but didn’t end up on one of them, or ended up out of sync, and you’ve got this constant battle of trying to make these processes synchronize your data between the different servers. Effectively, it’s all solved by not doing it and saying: actually, what we need is a system that’s designed to receive that data and store it securely. So just have the form submission go to a dedicated endpoint that will take that data and stick it in a database that is PII-secure and locked down, and then all of the security issues come down to: OK, we just have to secure that one endpoint and that one system.
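A minimal sketch of that dedicated-endpoint pattern: the form posts to a small service that validates the payload and hands it to a locked-down data store, never touching the JCR. Everything here — the field names and the in-memory array standing in for a real database — is a hypothetical illustration, not an Adobe API or a production design.

```javascript
// Sketch: a dedicated form-intake handler that keeps submissions out of AEM.
// A real implementation would sit behind HTTPS, auth, rate limiting, and a
// proper database; the in-memory array below just stands in for that
// PII-secured external store.

const store = []; // stand-in for an external database

function handleFormSubmission(body) {
  // Validate before persisting anything; reject junk early.
  if (!body || typeof body.email !== "string" || !body.email.includes("@")) {
    return { status: 400, error: "invalid email" };
  }
  const record = {
    email: body.email,
    message: String(body.message || "").slice(0, 2000), // crude size cap
    receivedAt: new Date().toISOString(),
  };
  store.push(record); // in reality: INSERT into the external database
  return { status: 201, id: store.length - 1 };
}

const ok = handleFormSubmission({ email: "a@example.com", message: "hi" });
const bad = handleFormSubmission({ email: "not-an-email" });
```

The point of the pattern is scope: only this one endpoint and its database ever see PII, so only they need the heavyweight security review.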
And we don’t have to worry about the fact that we might have PII data, or someone trying to do, you know, crazy injection attacks on the system. Because you’ve got to think: if someone is able to submit data that gets stored in your JCR, and you’re then going to ship that back onto your author instance and out to all of your publish instances, what happens if something executes that takes that content and does something with it that you don’t expect? Effectively, you’re opening your whole cluster up to the potential security and PII implications, all that sort of stuff. All of the complexity and all of the dangers basically end up saying it’s going to cost more in the long run than just having a dedicated database to store that data in. Is that the prevalent approach you’re seeing in the industry in general — mostly external databases with clients, and then creating APIs? What do you see as the norm? I mean, there are two main solutions that we usually see. Number one is just kick out an email, because the data is, like, a contact-us form or whatever. It doesn’t need to be in any sort of permanent state, kept for all time. They just want to get it out to somebody who can action it. So if it’s, hey, we don’t need to store it, then don’t store it. Just pass it through and send out the email. If you do need to store it, then it’s reasonable to believe you want to use it in some way. And so even though it’s like, well, I don’t want to invest in a database — I’m not even sure you want to invest in a database, because if you do, it’s not just the database. Now you’ve got to build a bunch of interfaces to that database to get at that data and use it in some sort of business fashion. Do you have a CRM?
Do you have some sort of other platform that has interfaces already? A lot of times we’ll just plug this data straight over to something like Salesforce, for instance — a CRM, not Marketo or anything like that — because it already has forms. If you’re trying to get leads, like if you’re gating a white paper or anything like that, you probably want to do more with that data anyway. So those are the solutions that we’ve seen. I think it’s tempting for a business to say, well, we won’t put PII data in; we just want this simple form. But three years from now, nobody’s going to remember that decision was made. So you just can’t give them this option, because there’s no way to prevent them from doing bad things. So, yeah, I know it’s terrible to have the answer of “you can’t do it,” but sometimes that’s actually very refreshing, right? There’s no debate to be had here. You’re not supposed to do it, and doing so would be a violation of security practices. In one instance we had a requirement like this, and as Brett mentioned, we looked at sending an email and then configuring a CRM to watch for that email and create a case. I mean, many CRMs have a mechanism where they listen to a mailbox: you send an email to a certain email box, and it will just take it and create a record, right? So you don’t need an external database, and if you already have some kind of CRM, you can make that connection from the back end — from AEM, you can just send an email. That would be the approach, rather than creating overhead and doing firefighting on a daily basis, which makes everyone’s lives miserable. There’s a follow-up question in the chat, which I think you guys are touching on here: is there an open-source CRM that we can connect to and store data in? But I feel like that’s kind of been answered. Yeah.
I mean, the short answer is yes. But keep in mind — let me put it this way: if the concept is that we’re trying to lower or minimize costs, you do not need a fully functional CRM. What you need is a secure database. I know one of the answers was MongoDB. What you truly need is tabulated storage outside of AEM that’s accessible from AEM, because I’m guessing at some point you want to re-render that information as well. I’m not sure if that’s true or not — whether you want to render it back to the same person who submitted the form, or to a business user. Now, if this is purely meant for automation — and that’s a really good point that came up — if this is just a lead submission because you want somebody to pick up the phone and make a phone call afterward, and I’ve seen this happen in automotive quite a bit, then yeah, SMTP. Just send that information in an email. You don’t need to persist it. If you do need to persist it, you do not need a CRM; what you need is a database for persistence. That’s it. Hence, I think MongoDB would be a good example. It’s different, but it’s functional. Yeah, sorry for not being clear there. It was more or less: are there systems that exist today? We’re not seeing customers go buy a new CRM just so they can submit their AEM forms, right? It’s literally, hey, we have this thing already; can we just pump some extra data into it? That’s a way you can double up without actually adding extra costs. Yeah, because it makes sense to do an audit of what systems you have that could contain the sort of data we’re going to be storing. Then, is there a way we can effectively add this to that existing data store, rather than thinking, if we don’t already have an actual CRM, that we need to get one just for this?
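The "don’t persist it, just email it" pattern mentioned a couple of times above can be sketched very simply: turn the form submission into a plain-text message addressed to an SMTP relay or a CRM’s email-to-case mailbox. The mailbox address and field names below are hypothetical, and the actual send (via nodemailer, a mail gateway, or AEM’s own mail service) is deliberately left out.

```javascript
// Sketch: convert a form submission into an email message object for a
// CRM's email-to-case mailbox, instead of persisting anything in AEM.
// The mailbox and form fields are illustrative only.

function formToEmail(form, mailbox) {
  // One "key: value" line per submitted field.
  const lines = Object.entries(form).map(([k, v]) => `${k}: ${v}`);
  return {
    to: mailbox,
    subject: `New form submission: ${form.formName || "contact"}`,
    text: lines.join("\n"),
  };
}

const msg = formToEmail(
  { formName: "test-drive", name: "Pat", phone: "555-0100" },
  "leads@example.com" // e.g. a CRM email-to-case address
);
```

If the CRM creates a record per inbound email, this gives the business a workflow without any new database, which is exactly the cost argument made above.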
Actually, you may already have some other system that could be extended, or could just have this data added into it, because it’s already doing similar kinds of tasks. All right. We don’t have any more questions in the chat. If there are any, feel free to post them now. We have, give or take, I think, five extra minutes to spare before the end — we only have about 10 minutes. But I guess for the panelists: are there any questions you commonly see in your day-to-day, or current issues you’d like to address? I want to go back to the GraphQL one, but I want to expand on it a little bit. So here is a question that I get all the time. Keep in mind that the folks you talk to are not the engineers and not the developers. They’re really the business owners, the marketers, the people who want to simplify their lives and have fast time to market. The question I get is: which way should we go? Do we go 100% headless and have AEM just do GraphQL on top of content fragments? And everybody smiles, because everybody goes through that process. Or do we go with the SPA Editor, which is sort of a hybrid? You get React or Next.js or Angular — the all-hip frameworks for front-end developers — but still have the ability to drag and drop components. Or do we stick to the typical Java, Sling, and HTL stack? And in all honesty, a lot of the time my answer is: eh, they all work. It really depends on what you need it for. So I’ll start really quickly and then open it up, because I’m pretty sure there are a lot of thoughts here. If you are templating your site very strictly — I’ve seen this in healthcare; healthcare is the biggest case I’ve seen for this — it’s: I want to change content quickly because there are legal implications, so it has to be done quickly. However, this page isn’t going to change how it looks for the next five to 10 years.
In that case it makes sense to look into the rapid development of having a React application or whatever, because it’s a templated page, and then use content fragments that replicate the structure of the page. And we’re good, right? You have the fast time to market when it comes to content, not necessarily design. Going into the single-page-application question, SPA Editor versus HTL, I honestly look at the staff. I’ve had clients come in with a ton of React developers. They know it, they’re good with it, they just speak SPA. In which case I said, well, then let’s do the SPA Editor with AEM, because you still need that fast time to market as well as the ability to change the design of the page — so, you know, teaser, hero, carousel versus carousel, teaser, hero. If you want the ability to move things back and forth, I think that’s a valid approach. There are things you have to consider, like how you’re going to deal with SEO, and that comes with its own set of questions for single-page applications in general. And last but not least, when you’re looking at HTL, to me that’s sort of the legacy option, the most stable of the bunch when it comes to development, especially for folks like us who have done this for a while. Again, it comes down to the comfort zone. You have a ton of Java developers who have dealt with JSPs — not that we do JSPs anymore, just to clarify — but they’re comfortable with this whole tagging within HTML: the basic concept of JSPs, and the ability to have a true MVC, where you strictly have a model which is Java-based, a view which is HTML, and then your Sass or CSS. That’s honestly the best answer I can give a client when they ask me that question. But I’m pretty sure you’ve all had similar experiences with your clients, so I’ll open it up — let me know your thoughts.
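For the fully headless option mentioned above — AEM doing GraphQL on top of content fragments — AEM as a Cloud Service exposes persisted GraphQL queries over GET at `/graphql/execute.json/<config>/<query-name>`. The sketch below only constructs that URL; the project name, query name, and `slug` parameter are hypothetical, so treat this as an illustration of the pattern rather than a drop-in call for any real endpoint.

```javascript
// Sketch: building the URL for a hypothetical AEM persisted GraphQL query.
// Persisted-query variables are appended as ;name=value pairs.

function persistedQueryUrl(host, config, queryName, params = {}) {
  const vars = Object.entries(params)
    .map(([k, v]) => `;${k}=${encodeURIComponent(v)}`)
    .join("");
  return `${host}/graphql/execute.json/${config}/${queryName}${vars}`;
}

const url = persistedQueryUrl(
  "https://publish.example.com",
  "my-project",       // hypothetical GraphQL endpoint configuration
  "article-by-slug",  // hypothetical persisted query name
  { slug: "hello-world" }
);
// The headless front end would then do something like:
//   const data = await (await fetch(url)).json();
```

Because persisted queries are GET requests, they can be cached at the CDN, which is part of why this model suits the strictly templated, rarely redesigned sites described above.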
Yeah, I think my experiences are very similar. We have cases where one of the reasons for going down the SPA Editor route is that, for specific sites, they still want that complete flexibility, so that every single page can be completely different. They can have complete control over the content, so they don’t want to be locked down, but they have existing sets of componentry already built for other uses. So you get almost a mixture of having both fully headless and hybrid headless, being able to use the same components in both, and then going down that React route with the SPA Editor makes sense. For the places where you need flexibility of content, you can give it. But if you’ve got much more locked-down things, like in healthcare, where all the structure is always going to be the same and it’s really only bits of content you’re going to be changing, then it maybe makes sense to manage that through content fragments. But you can still use the same componentry and the same libraries, so you’re not rebuilding things twice. You can use all the same styling and keep it consistent across the board, but you’ve got the flexibility to do the authoring the way you need to for each use case. One thought on the SPA-versus-HTL judgment, which I want to take up: one thing I normally consider is whether you’re building very interactive websites, where you have two-way interactions — you click on something, and a response comes in. There you can go for a SPA, based on the skill set, as Rami mentioned, if you have 10 of those developers. Then, within HTL itself, there are multiple ways to do it.
Right — the SPA Editor, which has its own learning curve, or you can go the content-fragment-based way. If it’s a one-way interaction where you just hit the page and load an informational site that, as Rami said, isn’t going to change for five to 10 years, then a content-fragment-based approach is the best way. But if it is changing, while still being a one-way interaction, then you can look at an HTL-based website. Yeah, and what I’d add is that I question it whenever somebody says some piece of technology is just fundamentally better. It’s just not true, I’m sorry. It’s so funny being in this industry for close to 20 years, where you just see the pendulum swing back and forth. And we’re on this kick right now where headless is the next level — that’s just the mood of the industry at the moment. But it has to be better for something. It’s got to deliver something. Technology exists for a purpose and a value; it’s not value in and of itself. It’s the Legos, but you’re looking for the end result. What are you building with those Legos? Because if you just have a couple of Legos and you put them together, who cares if it doesn’t change? So, in my opinion, the React-plus-GraphQL approach is really good for applications, as Meghadesh was saying, where you’re interacting back and forth, and what happens in the application is based on your state, on what you’ve done so far. It’s not a navigational page-to-page thing, but rather: what have you done, and what do you still have left to do? It’s very business-logic driven, and I think that’s a really good fit. Having that more in the hands of developers makes a lot of sense, because you have a lot of business-logic code.
The SPA Editor actually sits in this weird middle area where it gives a much better authoring experience to your authors, but it reduces your ability to drive the path through the application based on state; it’s a little more page-driven. Where we’ve seen it be very useful is in wizard-style guided applications. You get the benefits of a quick-loading application that reacts to your interactions, but still with full authorability. And then there’s still the traditional approach for marketing sites. Don’t let somebody bully you into saying that a traditional website delivered from AEM, or any CMS, is a bad thing or can’t be performant. AEM is actually quite performant. I’ve worked on other CMSs, and of all of them, it renders uncached content pretty quickly. But even beyond that, if you’re doing a marketing site, it’s not even relevant, because 95% of your traffic is hitting the CDN. It’s sometimes going to be even faster than your SPAs, where there’s a little bit of a white page while you’re fetching the content. So I’m not saying it’s better or worse — it’s your use case. I’ve actually joked with a guy in the industry that we should do a podcast on the benefits of the monolith, because it’s just so counterculture right now. I’m not saying monoliths are always better, but there are some benefits to server-side rendered pages that give you everything you need along with that full authoring capability. So I usually tell people it’s not an either-or when you’re choosing headless versus headful. It’s a both-and, because you’re going to have use cases that fit each one. And definitely, people should drop the myth of traditional versus the new way. Both work, and both have their own use cases. We shouldn’t always think we have to turn headless into headful or headful into headless.
It’s a clear segregation that we have to understand. Yeah, there’s got to be some reason, other than it just being the shiny new thing, to make the leap and say we should go headless. There’s got to be some reason you can articulate: it would be better to do a headless implementation of this site because of X, Y, Z — rather than just saying, well, we want to do it in React because React is shiny and new. There should be some reason you can articulate to make that decision over doing something else. Then at least you can say, well, we did it this way because it has these benefits. And there might be some negatives — it’s never going to be as instantaneous as if you rendered it server side and really forward-cached it — but you’re weighing those potentially small negatives against the massive benefit of the super-interactivity, or the control you get over the way the application interacts with the user. All right, well, we’re just at time, so I’m going to share my screen again really quickly. Just some last talking points. Thanks so much, panelists, champions, for answering all those questions. Real quick: if you’re interested in learning more about AEM or the program, there are additional resources and a QR code to scan. I know Robert will be posting the recording on the Champion Office Hours page, and I think the slides will be shared. I could be wrong about that, but that way you’ll be able to have these resources. And real quick, a lot of us will actually be at Adobe Summit. So if you’re coming to Adobe Summit, we’ll be there. Come check us out at the lounge. This is kind of a screenshot of the layout, but AEM will be at lounge number two, paired with Adobe Workfront as well.
You can see a little map here, but I think we’re just at time. So let me pull back my screen. Thank you so much, everyone, for joining. We really appreciate it. This was our first Office Hours, so mark one for the books — hopefully we’ll have a lot more, and more panelists as well. Thank you all. That was great. Nice to see everyone. And if you have any more questions, let us know. Thank you.