Learn how to compare performance and functionality across voice-assisted devices to help prioritize your product roadmap.
Hi, I’m Justin Grover. I’m a product manager for Adobe Analytics. Today we’re going to look at the differences in capabilities of the devices that people use to interact with your voice skill. With the explosion of devices that can interface with voice systems, each one has different capabilities that can be used in different ways. In Adobe Analytics we can list out those capabilities. We can show which devices have an audio player, a microphone, or a touch screen, and look at the differences in user behavior between them.

Here I have the device capabilities listed out by unique visitors, and I can look at the intent for each of the devices with these capabilities. One of the things we want to understand is how people use devices with different capabilities in different ways. So I’m going to look at the intent and the number of visitors for each intent, and then compare that with two segments I created earlier: first, devices that have a screen, and second, devices without a screen.

As you can see, devices with a screen are much more likely to trigger the “list items” intent, whereas devices without a screen are much more likely to be used to add an item. One possible explanation is that users like to see their shopping list while they’re shopping in the store, and use the voice device that sits on their countertop at home to add items to the list.

We can use this information to start optimizing the “list items” intent, for example by including product pictures and other helpful in-store hints that would help a user find the items on their list. This gives us some clear next steps that we can start to incorporate into our product roadmap.
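The comparison shown in the video can also be sketched in code. Below is a minimal, hypothetical illustration of the idea: counting unique visitors per intent within two capability segments (with and without a screen). The visitor records, intent names, and `has_screen` flag are assumptions for illustration only, not actual Adobe Analytics data or API calls.

```python
from collections import defaultdict

# Hypothetical visit records: (visitor_id, intent, has_screen).
# These values are illustrative only, not real analytics data.
visits = [
    ("v1", "list items", True),
    ("v2", "list items", True),
    ("v3", "add item",   False),
    ("v4", "add item",   False),
    ("v5", "add item",   False),
    ("v6", "list items", False),
]

def unique_visitors_by_intent(records, screen):
    """Count unique visitors per intent for one capability segment."""
    visitors_per_intent = defaultdict(set)
    for visitor, intent, has_screen in records:
        if has_screen == screen:
            visitors_per_intent[intent].add(visitor)
    return {intent: len(vs) for intent, vs in visitors_per_intent.items()}

with_screen = unique_visitors_by_intent(visits, screen=True)
without_screen = unique_visitors_by_intent(visits, screen=False)
print(with_screen)     # {'list items': 2}
print(without_screen)  # {'add item': 3, 'list items': 1}
```

With real data, the same breakdown would come from applying the two capability segments to an intent report, but the logic is the same: split visitors by a device capability, then compare intent usage across the splits.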