Screens Cloud Service

Learn about the differences between delivering content for the digital signage channel versus delivering a website and how Screens as a Cloud Service offers a channel-specific delivery mechanism for addressing the unique requirements of digital signage.

Continue the conversation in Experience League Communities.

Transcript
Hi, welcome to this session. My name is Jim Stoklosa and I'm the Senior Product Manager for AEM Screens. This session that I'm excited to share with you is called Screens Cloud: Creating Hybrid Experiences Using Web Content. Just a little bit on my background before we get started. I've been with Adobe for five years; I was brought on to help lead the Screens Cloud product. Prior to joining Adobe, I spent 15 years in the digital signage industry in a variety of areas, so I'm very familiar with the software side as well as the hardware side, and in addition the experience side, working with various integrators and design agencies.

To start with, Screens Cloud is a solution that enables an amazing variety of digital signage use cases. It can be used to provide scheduled content and dynamic, data-triggered content, and it typically involves looping images and videos, usually in full-screen mode. There's a variety of use cases that Screens Cloud out of the box can be used for and is a great solution for. But what we have found is that more complicated experiences can occasionally be required. We're going to talk about those specifically here in a minute. A lot of developers would prefer more of a hybrid approach that utilizes both Screens and Sites. In many past deployments, we have seen situations where developers focus almost exclusively on using Sites to try to enable this content. They unintentionally bypass a lot of the really amazing things that Screens has to offer. In this session, what we're going to do is take a deep dive into the features and functions that Screens enables to really allow for this hybrid approach.

Some of the potential use cases that we see that involve Sites or web content are interactive product catalogs, and dispensing or ticketing kiosks that rely heavily on a touch interface and are typically delivered on a wide variety of device types, from tablets all the way up to large-screen displays with touch interfaces on them. Additionally, other use cases that could potentially use Sites content are corporate or employee communication experiences, where a variety of different types of information is put on the screen at the same time. In hospitality, or even in retail, we see wayfinding applications where consumers are looking for assistance in how to navigate a particular location. A lot of this is animated, and Sites is a great way to build out that content and then deliver it through Screens. Then finally, digital menu boards. This is a use case that in many cases incorporates dynamic pricing. It can incorporate data triggers for things like out-of-inventory situations for particular food items, and just a wide variety of different types of experiences that can leverage the large-format displays typically used in a digital menu board deployment.

Some of the issues that we typically see are that the CMS functionality built for web delivery and the CMS functionality built for digital signage are very different. There's a separation of concerns there that can oftentimes be a disadvantage if you're unaware of the things that Screens can bring to the table. Additionally, the web is based on an open architecture and follows a standard specification, and there's a decoupling of the back-end CMS and the browser applications that are actually rendering those experiences. Digital signage is more of a closed architecture.
We have fixed devices and fixed locations, and digital signage uses a different type of technology to loop those experiences 24 hours a day, seven days a week. Then finally, there's an issue around what content should be sent to the player and what content should be kept in the cloud. For digital signage, we are sometimes encumbered by restrictions on both bandwidth and connectivity as well as local storage. All of these are potential issues that you face when you're building out a digital signage deployment.

But why would we want to use Sites content? I mentioned some of those use cases. To dive a little bit deeper, it usually involves compound layouts, where I'm providing a variety of elements within the experience at the same time, and/or touch kiosks. With that interactivity, I'm looking at layout structures, and potentially I'm able to leverage layout structures that have already been used on my web page. I've already introduced that level of familiarity with the consumer who's navigating to my web page; now they want an in-venue experience. Maintaining that level of consistency is very important. Also shared media: in many cases, we find that our customers are actually sharing the same media, videos, images, and even text-based content from their web page and delivering that out to their in-venue experiences. They can use a variety of mechanisms for that. Experience Fragments, Content Fragments, and custom components can all be leveraged from the Sites deployment and then brought over to the Screens deployment.

What are some of the differences between web and digital signage? I mentioned fixed devices; that is a hallmark of digital signage. We typically know the exact location of a device. Additionally, we know other properties about that device as well. We know the size, which never changes, and the orientation of that screen, which also never changes. There may be other attributes relevant to the specific location where that device is, such as the size of the location or things that are available at that location that are very specific to that device. That information can all be captured and is made available to the platform.

Other differences include the download payload and the frequency of download. This is important again because of bandwidth constraints and considerations, and also the frequency, because we have constant communication provided that the Internet connection is stable. We are constantly pinging on predefined intervals, typically of one minute, between the player device and the Screens Cloud service. We've also decoupled the connection dependency for digital signage through Screens Cloud, which means that all of the elements, all of the assets and instructions, everything that's required, is ported down to the player. In this way, we are able to eliminate that dependency on a network connection virtually forever, so that the player can continue to deliver experiences flawlessly 24 hours a day, seven days a week, even in the case where a connection is never re-established. If there's no connection, you cannot update the content on that player, but at least the player will continue to deliver experiences.
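As a rough illustration of the fixed device properties and the roughly one-minute ping cadence described above, here is a minimal TypeScript sketch. The interface fields, endpoint URL, and function names are illustrative assumptions, not the actual Screens data model or API.

```typescript
// Hypothetical shape of the fixed metadata a signage player exposes to the platform.
// Field names are illustrative, not the actual Screens schema.
interface PlayerMetadata {
  playerId: string;
  location: { store: string; city: string };
  resolution: { width: number; height: number }; // fixed, never changes
  orientation: "landscape" | "portrait";          // fixed, never changes
  attributes: Record<string, string>;             // e.g. store size, on-site amenities
}

// The session mentions a predefined ping interval, typically one minute.
const PING_INTERVAL_MS = 60_000;

async function pingCloud(meta: PlayerMetadata): Promise<void> {
  // Placeholder endpoint; a real player talks to the Screens services provider.
  await fetch("https://screens-cloud.example.com/ping", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ playerId: meta.playerId, timestamp: Date.now() }),
  });
}

function startHeartbeat(meta: PlayerMetadata): void {
  setInterval(() => {
    // Failed pings are logged and retried on the next interval; playback is unaffected,
    // which mirrors the decoupled, offline-tolerant behavior described above.
    pingCloud(meta).catch((err) => console.warn("Ping failed, retrying next interval:", err));
  }, PING_INTERVAL_MS);
}
```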
Another important difference is response speed. This means that the player needs to respond very quickly to environmental conditions. This could be touch-based, so we want very app-like performance. But beyond that, we want the ability for the player to independently communicate through an API layer to a back-end service that can provide data about an out-of-inventory condition, for example, or some other trigger event where the experience author wants content to either start or stop being played. Then finally, a major difference is that consumers of experiences in public spaces, in-venue experiences, are expecting broadcast-quality, gapless playback. This is very important because it really enhances the experience. Any jittery playback, or playback where there is a blank black screen in between two assets, can be very jarring to the consumer.

The major areas of differentiation are that we want to focus on the authoring piece, where we want to seamlessly blend AEM Sites and use the Screens Cloud service for the delivery of those experiences. We want to be able to package up and distribute those packages efficiently, and optimize those downloads so we take advantage of the bandwidth that's available to us and don't use more than we need. On the playback piece, we want to be able to very quickly scale up and provide a player that is independent and completely decoupled from the back-end.

If you take a deeper look at an example of a typical web page, where we see a variety of components on the screen typically making up a web page, you'll immediately notice a couple of things. First of all, this experience is targeted to a consumer that is typically going to be fairly close to the screen. The expected viewing distance for this content is probably going to be no more than maybe 12-18 inches. We would also expect a higher level of dwell time, meaning that the experience is typically going to be consumed over a matter of minutes, if not more, as opposed to some other experiences. Finally, the consumer knows the device. It's their device. It could be their tablet or their laptop or their phone, but they know their device and it's very comfortable for them. They're probably in a very comfortable environment when they're consuming this experience. You'll notice that the fonts tend to be much smaller, there's a lot of information on this screen, and it's going to take a while to consume all of it.

Now, let's take a look at something like a digital menu board. It has more of a horizontal focus versus the vertical focus of the web. It utilizes the same kinds of components, however. We have the highlighted banner, maybe a hero banner, lots of images, lots of text. But you'll notice that we use a lot of contrast here. The consumer is typically further away from the screen. The average viewing distance here might be measured in feet. We would also expect that our dwell time would probably be less. We would not expect a consumer to stand here for the same amount of time that they would spend experiencing a web page. In many cases, it's in seconds, or at most one minute, probably not beyond that. The consumer doesn't know this device. Whether it's a device that's hanging on a wall or a tablet, it's something that they're not familiar with. All of these considerations build up as we look at the various areas where we might want to utilize Sites content, delivered through Screens, for digital signage experiences.
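To make the out-of-inventory data trigger mentioned earlier more concrete, here is a hedged TypeScript sketch of a player-side poller that checks a back-end service and shows or hides a menu item. The endpoint, payload shape, and show/hide helpers are illustrative assumptions, not part of the Screens API.

```typescript
// Illustrative inventory payload; the real back-end contract is project-specific.
interface InventoryStatus {
  sku: string;
  inStock: boolean;
}

// Placeholder hooks; in a real channel these might toggle a targeted offer or component.
function showItem(sku: string): void { console.log(`show ${sku}`); }
function hideItem(sku: string): void { console.log(`hide ${sku}`); }

// Poll a hypothetical back-end service and react to out-of-inventory conditions.
async function checkInventory(endpoint: string, skus: string[]): Promise<void> {
  const res = await fetch(`${endpoint}?skus=${skus.join(",")}`);
  const statuses: InventoryStatus[] = await res.json();
  for (const status of statuses) {
    if (status.inStock) {
      showItem(status.sku);
    } else {
      hideItem(status.sku);
    }
  }
}

// Check every 30 seconds so the board reacts with app-like responsiveness.
setInterval(() => {
  checkInventory("https://backend.example.com/inventory", ["burger", "fries"])
    .catch((err) => console.warn("Inventory check failed:", err));
}, 30_000);
```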
One of the things that really makes this hybrid approach easier is the fact that our authoring platform shares a common UI with Sites. You'll notice, for example, here I have an image of all the components. These are the out-of-the-box components that are available for Screens. Custom components are definitely available and encouraged; however, there are some considerations that we're going to talk about in a minute. As far as the actual edit mode and the other modes that are available, you'll notice targeting, for example. This is where I would utilize data triggers and the ability to create a custom offer in an experience. It's a common interface, one where I can manage both Screens out-of-the-box content, perhaps an idle state or attract state channel that has some looping full-screen content in it, and an engaged state, which might actually be based on a Sites page or a variety of Sites pages. This could be an Experience Fragment, for example. I can manage all of that within this interface.

Now let's talk about how we're going to get these experiences out to the player. We know that players in the field don't have a guaranteed stable Internet connection. In many cases, we find that player hardware devices could be on shared public Wi-Fi where the bandwidth is suspect and the signal strength is very weak. Additionally, we have customers that are using their own 4G or 5G cell modem to connect to a tower with a data plan on a mobile carrier. But even in those cases, a stable connection can't be taken for granted; in many cases, it simply isn't reliable for periods of time. Satellite connections can often be very expensive and are typically only utilized on cruise ships or planes or places like that.

What is the solution around this? We are not dependent on any type of streaming technology. All of the HTML, the CSS and JavaScript, all the assets, all the instructions, everything is ported down to the player. Now, if you're using standard out-of-the-box Screens functionality and all of the channel objects that we provide inside Screens, we're handling offline mode for you. That's done automatically out of the box. However, with Sites content, custom handlers are required. If you're doing a single-page application or a custom component, it's very important that you take offline mode into consideration. In the same way that we talked about porting all of the elements of the experience down to the player device, it's important to note that this will not be done automatically if you're using some of these other pieces of content. That is something you want to take into consideration.

This is handled through offline configs and something called Smart Sync. It allows us to create a manifest and download the full experience: all the content and all the instructions that are going to be played by the device come down via a manifest file, and that manifest points to content that is resident on the device itself. We're not dependent on temporary web cache. If we lose a connection, all of that content is still there to be executed on, including in the case of interactive experiences where someone is constantly navigating through page after page; this content is all there to be rendered and available. Additionally, when it comes to updating the experience, Smart Sync allows the player to download only the content that it needs. If the player already has content listed in the manifest, it's not going to download that content again, thereby avoiding unnecessary bandwidth usage. The manifest file contains entries for all of those assets as well as the instructions; those are essentially the building blocks for defining the experience that's going to be played on the device.
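The incremental download behavior described above can be pictured with a small sketch: compare the entries in a newly fetched manifest against what is already stored locally and fetch only what is missing or changed. The manifest shape and local store here are simplified assumptions, not the actual Smart Sync V3 format.

```typescript
// Simplified stand-in for a manifest entry; the real Smart Sync format is richer.
interface ManifestEntry {
  path: string; // where the asset lives on the player
  hash: string; // content hash used to detect changes
  url: string;  // remote location to download from
}

// Hypothetical record of already-downloaded assets, keyed by path.
const localAssets = new Map<string, string>(); // path -> hash

// Download only the entries the player does not already have, or whose hash changed.
async function syncManifest(manifest: ManifestEntry[]): Promise<void> {
  const missing = manifest.filter(
    (entry) => localAssets.get(entry.path) !== entry.hash
  );
  for (const entry of missing) {
    const res = await fetch(entry.url);
    const data = await res.arrayBuffer();
    // A real player writes the bytes to its local file system through the firmware
    // layer; here we only record that the asset is now present.
    localAssets.set(entry.path, entry.hash);
    console.log(`Downloaded ${entry.path} (${data.byteLength} bytes)`);
  }
  console.log(`Sync complete: ${missing.length} of ${manifest.length} entries fetched`);
}
```

Assets that already match the manifest are skipped entirely, which is the bandwidth saving described above.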
Graphically, what that looks like is that all of the Screens objects, the display object and the channel object, utilize an offline content configuration. Traditionally, when we first introduced the product many years ago, we were utilizing Content Sync exclusively throughout. We found that that had some issues. One of those is the fact that we would continue to download assets that the player already had, thereby using up a lot of unnecessary bandwidth. A few years ago, we transitioned to Screens Cloud, and we've been optimizing it ever since. With our V3 manifest, we've provided a solution where we have reduced redundancies and replication, and we've really optimized and reduced the amount of time it takes to actually create the packages that are sent down to the player. It's an amazing solution that we built with Screens Cloud.

As we go through and think about the experiences that we're building, and the fact that we're enabling offline mode, we want to start talking about the delivery mechanism, and specifically what we've done with Screens Cloud. We've actually broken it out into the services provider and the content provider. The content provider talking to the player utilizes the traditional elements that you would normally associate with AEM. Again, a channel is a page; those are essentially the same element. All those assets, including Experience Fragments, Content Fragments, Live Copy, and multi-site management, are available from the content provider. But now what we've done is enable the Screens services provider with Screens Cloud, and all of the object management, the registration process, the player itself, that player application, the assignments, the scheduling, all of that occurs through a separate layer, an API layer that I'm going to show you here in a second, thus optimizing how we're communicating down to the player device. You see that the player actually has these two entities in the cloud that it's communicating with, that being the content provider as well as the services provider.

The services provider takes care of player registration, player management, and monitoring. It allows us to easily create and manage those registration codes. We're able to auto-populate metadata, for example, something like the name of the player. Typically, people like to employ a particular nomenclature, a naming convention for how they name their players. What this allows us to do is actually write that to a JSON file on the player itself and communicate that back to AEM, so we're able to act upon it. Additional elements like those attributes that we talked about earlier, potentially things like the size of the store, can also be incorporated here. We've allowed for bulk registration and bulk assignment to a display to make it easier for our customers to deploy hundreds, thousands, or tens of thousands of devices. In many cases, we have customers that are deploying many hundreds of devices per day, for example, on a project. Auto-registration and auto-assignment are very important, as is the ability to actively monitor that device connection and the overall player health. We want to be able to capture and react to playback events and download events, and look for any anomalies. Screens Cloud allows us to do that because we're able to really track everything that the player is doing, and we're able to communicate it in a bidirectional fashion.
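As a rough example of the auto-populated metadata and naming convention described above, written to a JSON file on the player and reported back to AEM, here is an illustrative TypeScript sketch; the field names and naming scheme are assumptions, not the actual Screens Cloud configuration schema.

```typescript
// Illustrative registration record; real Screens Cloud metadata fields may differ.
interface PlayerRegistration {
  registrationCode: string;
  playerName: string; // generated from a naming convention
  attributes: Record<string, string | number>;
}

// Example naming convention: <region>-<store>-<position>, e.g. "emea-0421-menu-left".
function buildPlayerName(region: string, storeId: string, position: string): string {
  return `${region}-${storeId}-${position}`.toLowerCase();
}

const registration: PlayerRegistration = {
  registrationCode: "ABC123",
  playerName: buildPlayerName("EMEA", "0421", "menu-left"),
  attributes: { storeSizeSqm: 180, driveThrough: "yes" },
};

// In practice this would be written to a JSON config file on the device and
// reported back to AEM through the services provider, so authors can act on it.
console.log(JSON.stringify(registration, null, 2));
```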
On the content side, this is a traditional layout for an author-publish scenario, where we have a dispatcher layer as well as a load balancer, and then potentially a CDN in between the device groupings. This is fully deployable and can be scaled to any number of devices. Typically, a device group can handle as many as 10,000 or even more player devices. In this way, we're able to really scale up the solution and provide for an almost unlimited number of players on the content side.

As far as Screens Cloud and the overall architecture overview, what you'll see here on the right-hand side is that the players connect to both the API gateway I mentioned, which leverages Adobe I/O, and an AEM instance. This could be AEM as a Cloud Service, but it can also be a Managed Services deployment. Additionally, we're utilizing microservices. We have a variety of microservices you see depicted here. What that allows us to do is monitor, track, and log in real time all of the events that are occurring. If we see an event that is out of bounds, for example, or cannot be executed on, or we have a download problem, or we have a blank screen event, all of those things can not only be tracked, but we can trace back what potentially led up to that scenario, because we are writing all of those events and are able to run traceability on them. We also do alerting, and we have various metrics that we can look at, like the download speed, for example. We know that in certain cases we may have players that historically have been able to download very quickly, and others that cannot. We can modify that download activity to accommodate for that. If downloads occur in the middle of the night, for example, in the early morning hours, and that content needs to be playing by 9 AM for any given player in any given location, what this allows us to do is monitor those download speeds and increase the download window if we know that the player is going to take a long time to actually download that content.

In terms of the player functions themselves, we talked about Screens Cloud and the registration service, as well as the device service where we're managing the ping and the configurations, that JSON file that I talked about. We have a Screens UI and device control where we can look at an inventory list and see the devices in real time, how they're reacting, and whether there are any issues with them. We're able to look at bulk offline update services. That's what I was just referring to when I was talking about modifying the download activity. It's not one-size-fits-all: certain players can take more time, certain players can take less time. We can also do a lot of network management and throttling with this particular service, so that not all players are hitting the back-end at exactly the same time trying to download content and causing a lot of failures. Downloads can be leveled out so that different players are downloading at different times. The email alerting is very important because it helps our network administrators understand exactly what's going on in the field: tracking what all those downloads are, whether they are occurring, and whether we are properly using all of the technologies available for Smart Sync and Content Sync.
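One simple way to picture the download leveling described above is to give each player a deterministic offset inside a nightly download window, so the fleet does not hit the back-end at the same moment. The window bounds and hashing scheme below are assumptions for illustration, not how Screens Cloud actually schedules downloads.

```typescript
// Spread players across a nightly download window (here, four hours starting at 01:00
// local time) so they don't all hit the back-end simultaneously. Bounds are illustrative.
const WINDOW_START_HOUR = 1;
const WINDOW_LENGTH_MS = 4 * 60 * 60 * 1000;

// Deterministic hash of the player name -> stable offset within the window.
function hashToOffset(playerId: string): number {
  let hash = 0;
  for (const ch of playerId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % WINDOW_LENGTH_MS;
}

// Compute the next scheduled download start for this player.
function nextDownloadTime(playerId: string, now: Date = new Date()): Date {
  const windowStart = new Date(now);
  windowStart.setHours(WINDOW_START_HOUR, 0, 0, 0);
  if (windowStart <= now) {
    windowStart.setDate(windowStart.getDate() + 1); // today's window has begun; use tomorrow's
  }
  return new Date(windowStart.getTime() + hashToOffset(playerId));
}

console.log(nextDownloadTime("emea-0421-menu-left").toISOString());
```

Players known to download slowly could be given an earlier slot or a longer window, along the lines of the per-player adjustments mentioned above.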
On the player side, we're utilizing AEM components in an iframe. The player is basically a shell that utilizes a firmware service, essentially localhost and local storage, and it allows us to do things that a normal browser would not normally do: take a remote screenshot, access the local file system, do all of that logging that we talked about, and clean up assets that are no longer necessary or being utilized. We have a package manager there, and we are able to track all the metadata and preferences for that player, potentially things like a transition effect that should always be applied.

To summarize, the key takeaways that we want to leave you with are that we fully endorse a hybrid approach to experience creation, as well as distribution and management, that uses the power of Sites blended with the power of Screens. You can do 100 percent Screens for many applications, but for those applications where you'd like to incorporate web content via a Sites page, we absolutely support that and leverage various aspects to make sure that those experiences are displayed properly. The other takeaway is to use offline handlers to ensure proper downloads. This cannot be overemphasized. This is our ability to make sure that we have all the assets and instructions for our experiences, so that we can deliver those experiences regardless of the connected state of the device. Finally, we want to leverage Screens Cloud for distribution and playback. All of the monitoring, all the alerting, everything that Screens Cloud is able to accomplish can be brought to bear on content that has been simply uploaded to Assets and placed in a Screens channel, or content that is built inside of Sites. It could be interactive content, for example, that leverages the SPA Editor, and all of this content can be made available for distribution and playback through Screens Cloud.

That concludes my session today. I'd like to remind you to use Experience League and continue the conversation there. You're able to access the Developers Live session replays. Hopefully, you found this valuable and can refer back to it. There's also a wealth of information in courses, tutorials, and documentation, and you can connect with Adobe experts and other developers in product-specific communities. Don't forget about our giveaways in the Learn, Earn, and Win with Experience League game. Please take advantage of that. There are some really great prizes there, and we appreciate everything you do for the Adobe community. Thank you. Have a good day.
