Extending Adobe Commerce using Adobe I/O
Learn how Adobe Commerce enables developers and merchants to enrich commerce experience with custom business logic, immersive content, and integrations with other points of interaction.
Transcript
Thank you everyone and welcome to the extensibility session for Adobe Commerce. My name is Nishant Kapoor. I’m Director of Product Management for Commerce at Adobe. And today I’m joined by one of my colleagues, Igor Miniailo, who is the Chief Architect and has graciously taken time out from his paternity leave to join us and share his wisdom with us. So I’ll kick off with just a few words about the developer ecosystem. As you all know, Magento has always been about developers, and we continue to see that at Adobe Commerce as well, with our extensions marketplace continuing to grow and thrive with 4,000-plus extensions and integrations. The contributions have been pretty good this year, with over 12,000 contributions from over 400 individual contributors and 70-plus partners. And Magento remains very, very popular with merchants due to its flexibility. We continue to hear from our partners that flexibility is the number one reason merchants actually pick Magento and Adobe Commerce. So we want to continue to thrive across all these dimensions. And this session is specifically about how we better the future around customizations and extensibility of commerce. Because in addition to hearing from our SI partners, community, and merchants that flexibility is the reason they choose Adobe Commerce and Magento, it is also a pretty big source of concern when it comes to upgrades: in-process customizations make the core very different from the core that’s actually shipped out of the box, and that makes upgrades really, really difficult. So in this session, Igor and I will cover what we are thinking in terms of extensibility, and we’ll talk about an alternate way to customize and extend the out-of-the-box functionality that’s part of Adobe Commerce today. So from an extensibility standpoint, we are looking at four key areas in the short term in 2022. And the first one begins with experience.
So how do we continue to allow the same kind of flexibility on the storefront as well as the back office without actually touching the core itself? How do you extend storefronts? How do you extend back office UIs, and how do those extensions merge into the core without changing or writing any PHP code? The second area is APIs. Inevitably, when you are extending your storefront, you will need APIs to support that functionality. So we are looking at ways to extend and build extension points on our API layer that would allow you to integrate with other APIs, third-party APIs, or even first-party Adobe APIs from other applications, without writing much code. We are also working on introducing ways to extend our GraphQL and REST APIs that we ship out of the box, and to extend them outside of the core. The third area is middleware. More often than not, I think the majority of our customers have a need to take commerce data and synchronize it with some other third-party system, whether it’s an ERP or PIM or CRM. And these integrations currently do not follow a standard framework. Some extensions or integrations are written in PHP, some in different languages, and they run in a variety of ways, some with the core, some outside of the core. There’s no standardization. So this is an area we are looking at where we will make commerce back office data available as events, and that data will be available for integration developers to build standard commerce integrations to synchronize data with other systems. And then the fourth area is developer experience. We want to make sure we provide you all with a centralized developer experience, not just for commerce, but for everything that we have to offer at Adobe, so you are not doing something different for commerce, and you have access to more applications and more APIs beyond commerce. So let’s talk more about storefront extensibility. This is an area we will be exploring in 2022.
So how do you build custom UIs and integrate them with your PWA seamlessly? How do we support internal and external integrations seamlessly without actually changing the core PWA application? How do we extend APIs to support those custom UIs? And then what do the developer experience and the CI/CD pipeline look like around this extensibility? So this is the core area that we are exploring right now, and we expect to share something with you in 2022. The next area is the API platform. This is an area where we have made a lot of progress; actually, we are about to kick off a preview program, and if this is something that interests you, please feel free to contact me and I can get you enrolled in the Early Access program. What we are doing is building an API gateway, which is going to act as a reverse proxy across all the services, whether it’s the monolith, or SaaS services that we are building, or customizations that you are going to create in I/O, or any other Adobe or third-party service. So it will allow you to seamlessly integrate different APIs at this layer. It will allow you to bring your own API, so if you are using a commerce version, you can actually preserve your APIs and then create and integrate with additional APIs that we have to offer. And this solution is going to be compatible with open source and on-prem as well. Igor is going to double-click on this and talk about it in much more detail, and we have a demo to show you as well. Once again, if you are interested in this, please feel free to contact me; you can see my email at the bottom of the screen. All right, Igor? Sure. Thank you, Nishant. Let me talk more about the idea of the API gateway, as you mentioned. So as you guys know, GraphQL is a query language on top of a schema, and the schema describes the data from different sources. One of the main benefits of GraphQL is that we can query for all data in a single request to one schema.
As that schema grows, it may become preferable to break it up into separate modules or microservices that can be developed independently. This is especially true for Magento, because, as you know, one of the biggest problems is that the monolith just keeps growing and we can’t stop this growth, right? We may also want to integrate the schemas we own with third-party schemas, allowing them to mash up with external data, which is also very cool, especially taking into account that the majority of merchants already have some integrations in place with third-party services. And it’s usually also a good idea to treat GraphQL as a thin API and routing layer, which means that you keep the business logic, permissions, and all other similar concerns out of the GraphQL schema. It’s supposed to be as thin as possible. Working with GraphQL, we used to consider it a tool to make communication between a client, or a head, and the headless server more efficient. It saves a lot of work. It makes a client agnostic to whether it works with a monolith or with microservices. That’s actually why we are saying that the current integration with the Magento monolith is supposed to be compatible with the future e-commerce services. It also helps us address the slow mobile networking issue between the server and the client by decreasing the communication between them, for example in comparison with REST, where you have to issue a lot of queries to retrieve all the data needed to render some specific page. So the idea we are pursuing is to provide third-party developers the experience of basically having one graph, using all the data from all the services they need, where the execution burden is on the API gateway, not on the services themselves, which is a scalable and distributed way of doing things. So let me tell you in a couple of words how the GraphQL mesh works. Initially, we collect API schema specifications from all the services.
As you may see here, those API schema specifications may not only be GraphQL: they may be Swagger or OpenAPI, gRPC, GraphQL, or even SQL. Then we convert the API of each service to a GraphQL schema, and this is actually the responsibility of the GraphQL mesh. And then we apply custom schema transformations and schema extensions on top, so that we may even combine different services, make them hierarchical, and so on and so forth. So aside from just wrapping the service API with a GraphQL layer, GraphQL mesh allows you to easily extend and modify the converted schema, along with a fully typed SDK. And then this resulting ubiquitous schema is provided to all the consumers. The positive side effect of that is we get full separation between the presentation layer and the business logic layer. And the dev teams that might be working on the services, having chosen a specific protocol to work with, like gRPC, may not even know that their service is being used through GraphQL, which is pretty cool, and no additional infrastructure needs to be added. So this method actually brings GraphQL schema management closer to the developer and allows them to modify and manage the GraphQL schema according to the application’s needs. So let’s go to the next slide and I will show you a quick example of how this approach might be incorporated in Magento. Because in Magento, and not only in Magento but in commerce specifically, the graph is so interlinked and there are so many mutual relationships between different domains. The catalog and product entities are a really great example: the product entity is probably the biggest aggregate root we have in the system, embracing concerns relating to different domains.
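The pipeline described here — collect each service’s API specification in whatever format it comes in, convert each one to a common GraphQL-style schema, then merge them into a single graph — can be sketched in miniature. The following Python sketch is purely illustrative: the service names, spec shapes, and conversion logic are invented stand-ins for what a real mesh (such as GraphQL Mesh) does with OpenAPI and gRPC sources.

```python
# Illustrative sketch of the mesh pipeline: each service describes its API in
# its own format; the gateway converts every description to a common
# GraphQL-like schema and merges them into one graph. All names are invented.

def from_openapi(spec):
    # Pretend conversion: each OpenAPI path becomes a top-level query field.
    return {path.strip("/").replace("/", "_"): op["returns"]
            for path, op in spec["paths"].items()}

def from_grpc(spec):
    # Pretend conversion: each gRPC method becomes a top-level query field.
    return {m["name"]: m["returns"] for m in spec["methods"]}

def build_mesh(sources):
    # Merge every converted schema; later sources may extend or override.
    mesh = {}
    for convert, spec in sources:
        mesh.update(convert(spec))
    return mesh

catalog_openapi = {"paths": {"/products": {"returns": "ProductList"}}}
pricing_grpc = {"methods": [{"name": "productPrices", "returns": "PriceList"}]}

schema = build_mesh([(from_openapi, catalog_openapi),
                     (from_grpc, pricing_grpc)])
print(schema)  # {'products': 'ProductList', 'productPrices': 'PriceList'}
```

The point of the sketch is the shape of the pipeline, not the conversion details: each source format gets its own adapter, and the gateway owns the merged result, so individual services never need to know GraphQL exists.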
Like here on this slide, you may see there are two services: the catalog, with the product representation from the catalog perspective, and the product representation from the pricing perspective. Potentially, you may also consider the product representation from the inventory perspective, and so on and so forth. For simplicity, and taking into account that currently we are working on the pricing design, I put catalog and pricing here. So you may see there are two different schemas. And besides just merging the schemas, you may notice here that even types are being merged. The product type is represented in both services, and those services don’t know anything about one another, while all the knowledge about how to couple those services actually belongs to the API gateway, where the schema stitching process takes place. And you may see in the resulting product object there are attributes that belong to both domains, the catalog domain and the pricing domain. On your right, you may see a request submitted to the gateway schema that selects fields from multiple sub-schemas. The gateway fetches the resource that was requested, so the product query returns what is known as the original object. And the original object comes back with the fields requested by the user, plus those fields which will be necessary later on for other sub-schemas. So we initially request the data from the catalog service, and then we need to refine this with additional data belonging to pricing. This is how the additional sub-query for the merged object is issued to the pricing service. Then we get this data and merge it all together. The interesting part here is the automated query execution plan, which is a pretty great feature, because here we have only one additional service.
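The type-merging flow above — fetch the original object from catalog, notice that some requested fields are owned by pricing, issue a keyed sub-query, and merge — can be simulated with a few lines of Python. Everything here (the sku key, the field names, the in-memory "services") is a hypothetical stand-in for the real stitched schemas:

```python
# Sketch of type merging at the gateway: the Product type exists in both the
# catalog and pricing services, and neither knows about the other. The gateway
# fetches the original object from catalog, then issues a sub-query to pricing
# keyed by sku and merges the fields. All data is illustrative.

CATALOG = {"sku-1": {"sku": "sku-1", "name": "Mug", "description": "Blue mug"}}
PRICING = {"sku-1": {"sku": "sku-1", "price": 9.99, "currency": "USD"}}

def catalog_service(sku):
    return CATALOG[sku]

def pricing_service(sku):
    return PRICING[sku]

PRICING_FIELDS = {"price", "currency"}  # fields owned by the pricing domain

def gateway_product(sku, fields):
    product = dict(catalog_service(sku))       # original object from catalog
    if PRICING_FIELDS & set(fields):           # any pricing-owned field asked?
        product.update(pricing_service(sku))   # keyed sub-query, then merge
    return {f: product[f] for f in fields}     # project requested fields only

print(gateway_product("sku-1", ["name", "price"]))
# {'name': 'Mug', 'price': 9.99}
```

Note that the sku acts as the stitching key: it is fetched from catalog even if the client never asked for it, exactly as the transcript describes fields "which will be necessary later on for other sub-schemas."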
But if you just imagine that we need to refine data from both the pricing service and the inventory service, the automated query execution plan will help us do it simultaneously, making the requests to those services at the same time, which is pretty cool. And once again, the type extension is defined at the level of the gateway, not at the level of the services, so that those services can be fully isolated and we don’t have release dependencies between the teams working on these specific domains. It’s pretty decoupled, and the only coupling is done at the level of the API gateway. So here we may proceed to the next slide. Here I just wanted to emphasize that with this approach, the implementation language, whether we are doing this in PHP or Java, is not really important anymore; it becomes kind of an implementation detail. And the protocol the API initially uses, whether it’s REST or gRPC, may not be important as well, because we can wrap it so that the real call over the wire is done over the protocol the initial service provides, like gRPC, for example, because we know that gRPC is extremely fast and it’s pretty natural for service-to-service communication. So here we come to API first, emphasizing the fact that the API is very, very crucial, especially taking into account that we will end up having separate services as part of the composable architecture, and the release cycles of those services are going to be independent as well. We are going away from the big bang releases of the monolith, which you guys have been used to until now, and which I believe most of you don’t really like, just as we don’t. Independent release processes for services are definitely a big relief for both you and us.
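The concurrent execution plan mentioned above — once the original object is in hand, the pricing and inventory sub-queries are independent and can be issued simultaneously — can be sketched with asyncio. The services here are faked with sleeps; the field names and latencies are invented for illustration:

```python
# Sketch of an automated query execution plan: after the original object comes
# back from catalog, the pricing and inventory sub-queries do not depend on
# each other, so the gateway issues them concurrently. Services are simulated
# with asyncio sleeps; all names and values are illustrative.
import asyncio

async def fetch_pricing(sku):
    await asyncio.sleep(0.05)          # simulated network latency
    return {"price": 9.99}

async def fetch_inventory(sku):
    await asyncio.sleep(0.05)          # simulated network latency
    return {"in_stock": 42}

async def resolve(sku):
    product = {"sku": sku}                          # original object (catalog)
    pricing, inventory = await asyncio.gather(      # independent sub-queries
        fetch_pricing(sku), fetch_inventory(sku))   # issued concurrently
    return {**product, **pricing, **inventory}      # merged result

print(asyncio.run(resolve("sku-1")))
# {'sku': 'sku-1', 'price': 9.99, 'in_stock': 42}
```

With `asyncio.gather`, the two simulated 50 ms calls overlap rather than add up, which is the practical benefit of letting the gateway plan sub-queries instead of resolving them one by one.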
But the important part here is to guarantee the consistency of the APIs, so that a release of one service doesn’t break dependencies for other services which rely on it. And that’s why the APIs are open, the APIs are distributed, and the APIs are public. We may even say that we are building a kind of distributed public data graph, and the community may contribute to the data graph, especially taking into account that with Adobe I/O we provide replaceability, customization, and extension of this graph. The matter of the implementation of the services, and what specific language is being used, is not that important right now. And in general, I would not even be afraid to say that this gets closer to the semantic web, because it all defines the same schema for e-commerce. And now let us proceed to the demo. Hello and welcome to the demo of the next generation of the API platform for Adobe Commerce. Headless commerce has moved from a buzzword in the industry to an established strategy. Completely decoupling the experience layer from the underlying commerce operations provides limitless flexibility and the ability to make the ideal technology decisions independently for backend and frontend. What’s more, with the extreme growth of connected devices, a commerce platform that fully embraces headless allows for purpose-built commerce UIs that can be embedded transparently into the larger customer experience. Whether it’s a progressive web app, experience-driven commerce via Adobe Experience Manager, or any custom UI, the new API platform provides maximum flexibility for your storefront via our state-of-the-art headless architecture and comprehensive coverage of GraphQL APIs, allowing you to choose the best solution for your needs. This demo showcases a few capabilities of our next generation API platform that brings unmatched flexibility, scalability, and maintainability for commerce developers.
Before we look at the demo, let’s look at the architecture and process flow for the API platform. A key component of the new architecture is a multi-tenant GraphQL service that serves a merchant-specific schema mesh stored in a database and in memory. This service is deployed at the adobe.io gateway and has an entry point at commerce.adobe.io/api. All storefront API traffic for all merchants will flow through this entry point. In addition to providing enhanced security, an API gateway will insulate commerce developers from how the commerce functionality is partitioned into the commerce monolith, distributed microservices, and other Adobe Experience Cloud services. It will also insulate commerce developers from the problem of determining the locations of all services and then doing multiple integrations. The next key component of the architecture is a schema management service. This service is responsible for asynchronously creating and storing a GraphQL schema mesh across the commerce monolith, commerce SaaS services, customizations and extensions running on adobe.io, and other Adobe Experience Cloud services. Decoupling the schema creation from schema serving allows the gateway GraphQL service to be lightweight and scale efficiently. For the demo, we are using the ChromeiQL plugin. In this example, we have created a GraphQL schema mesh for a merchant using three sources: the commerce monolith, SaaS live search, and custom services running on adobe.io. The top three queries are coming from adobe.io, the next one from live search, and the remaining schema from the commerce monolith. Let’s see the service in action. I will make a request to the GraphQL service to retrieve store configuration data from the commerce monolith. I can expand the request to get the category data from the same source. Further, I can expand the request to perform a search action on the live search SaaS service and get data back from custom services running on adobe.io, all using a single API.
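The routing behavior shown in the demo — one query whose top-level fields are owned by different sources, with the gateway dispatching each field to its owner — can be sketched as a simple field-to-source map. The field names and responses below are invented for illustration and are not the real Adobe Commerce schema:

```python
# Sketch of the gateway's single entry point: one query mixes top-level fields
# owned by different sources (commerce monolith, live search SaaS, adobe.io
# apps), and the gateway routes each field to its owner. The field map and the
# fake responses are illustrative only.

FIELD_SOURCE = {
    "storeConfig": "monolith",
    "categories": "monolith",
    "productSearch": "live_search",
    "customData": "adobe_io",
}

# Stand-ins for real network calls to each backing service.
SOURCES = {
    "monolith":    lambda f: {"from": "monolith", "field": f},
    "live_search": lambda f: {"from": "live_search", "field": f},
    "adobe_io":    lambda f: {"from": "adobe_io", "field": f},
}

def execute(query_fields):
    # Dispatch every requested top-level field to the source that owns it.
    return {f: SOURCES[FIELD_SOURCE[f]](f) for f in query_fields}

result = execute(["storeConfig", "productSearch", "customData"])
print(sorted({v["from"] for v in result.values()}))
# ['adobe_io', 'live_search', 'monolith']
```

From the client’s point of view there is only one endpoint and one schema; the per-field ownership table is an internal detail of the gateway, which is the insulation the demo narration describes.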
The service provides unlimited flexibility to create a GraphQL mesh with any Adobe or third-party services and acts as a single entry point across all those services. I hope you enjoyed the demo and are excited about the new commerce API platform. Thank you, Igor. I have a couple more slides to cover. The next point that we discussed at the beginning of the session was middleware. This is where we are going to make commerce data, the back office data, available in adobe.io. From there, integration developers can come in and build Project Firefly or Adobe apps as integration apps to sync data between commerce and third-party systems. It will also allow you to forward that data to any existing middleware that you have already invested in, or forward it to other third-party EAIs, so you can preserve the investments that you have already made. This would standardize the way that data is kept in sync between Adobe Commerce and any third-party systems. So this is something that we are going to be working on in 2022. The last piece, but by no means the least, is the developer experience. We are working on centralizing all developer experience on the Adobe Developer Console. This would be the place where you go to manage your APIs, manage your projects, manage your keys, give different developers access to different projects, download SDKs, and have your CI/CD. So look for commerce integration into the Adobe Developer Console. That is all we have for you in this session. Let me see if there are any questions. If you have any questions, please feel free to put them in the Q&A section under chat. Yeah, I think there is a question from Gary. Igor, you may be able to answer this question: how is security integrated in the API gateway? So it’s actually a pretty broad question, because there are different aspects of security. Right now we are specifically talking about the storefront API gateway, and we are not talking about the store management API.
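The middleware idea described here — back office changes surface as events, and an integration app consumes each event and forwards it to the appropriate third-party system or existing middleware — can be sketched as a small routing loop. The event types, targets, and shapes below are hypothetical; the transcript does not specify the actual event format:

```python
# Illustrative sketch of an event-based integration app: commerce back office
# changes arrive as events, and the app routes each one to the third-party
# system (ERP, CRM, existing EAI) that should receive it. Event types, route
# names, and payload shapes are all invented for this example.

ROUTES = {
    "order.created": "erp",
    "customer.created": "crm",
}

delivered = []  # record of forwarded events; stands in for real HTTP calls

def forward(event):
    # Unmapped event types go to a dead-letter route instead of being lost.
    target = ROUTES.get(event["type"], "dead_letter")
    delivered.append((target, event["id"]))
    return target

forward({"type": "order.created", "id": 101})
forward({"type": "customer.created", "id": 102})
print(delivered)  # [('erp', 101), ('crm', 102)]
```

The value of the pattern is that every integration consumes the same event stream in the same way, which is the standardization the transcript contrasts with today’s ad hoc PHP and non-PHP integrations.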
And for the storefront API, the majority of APIs are accessible to unregistered users. Right now we are in the middle of the process of getting PCI compliant for our storefront API gateway. And taking into account that this is a multi-tenant service, we need to maintain separation between different merchants, between different instances, especially since the eventual GraphQL schema is going to be kind of a snowflake; it’s going to be pretty unique for each of the merchants. All right. The next question is from David: how does this impact on-prem merchants? Will this be available only to those on Adobe’s cloud offering? So I can take this. This solution is actually built with the intent that it’s compatible with all customers, so whether it’s on-prem or cloud, it doesn’t matter. The only concern there would be that if you have an on-prem installation, the API SLAs may not be predictable. That’s something that we won’t be able to control: depending on where you are hosting your commerce instance, it may be a longer hop from the gateway to access the URLs, whereas if the application is hosted on Adobe infrastructure, we will ensure that the gateway and the actual commerce instance are at least in the same region, if not very closely deployed. So that is the concern; outside of that, the solution will be compatible for all customers. Okay. So we have maybe time for one more question, Igor. Go ahead. Yeah. I will just read the question, for the recording: how would the GraphQL performance response time be ensured, especially for inventory? So I don’t understand the part specifically about inventory, but in general, internally, all the customization will take place inside Adobe infrastructure, so we can measure the time for our serverless function invocation and how much time we need to launch the specific I/O Runtime function; we can measure that time and we can provide some guarantees regarding it.
As soon as we are dealing with a mashup, when we are integrating schema coming from external sources, it may be harder, especially if the schema on that side is being updated frequently. So, taking into account that those sources are external, it may introduce some delays. Okay. All right. So we have a couple more sessions. I highly encourage you to join the hackathon on Wednesday, where Chris will be talking specifically about GraphQL and what’s coming next, and then Valerie will be talking about some of the use cases for commerce that you can solve using Firefly. So I highly encourage you to join these sessions on Wednesday. Sorry, probably just two more cents to add to the earlier question regarding the inventory domain. I believe the main essence of the question is related to real-time inventory queries which may need to be issued from the front end to make sure that we are not overselling. If I understood this correctly, there are several options for this request. We can do it through Adobe infrastructure, but for this specific case, we still may make the request directly to the service. For example, if you have an integration with an ERP, you can check with the ERP directly from your PWA application or from your AEM application. All right, I think we are out of time. Thank you, everyone. If you have any questions, please feel free to reach out to either me or Igor. Thank you. Thank you, guys. Bye-bye. Bye.