Alexa Multi-Modal with Alexa Presentation Language
Okay, hey guys, Mark Phillips with SimpliSpoken back again for another SimpliSpoken on Voice Daily Flash Briefing. Today we'll turn our attention to something a lot of people have started to think about, with so many screens showing up on smart speakers and other voice devices: how can you build a multimodal solution? Amazon has gone at that in an interesting way with a capability they call the Alexa Presentation Language, or APL.

Developers can use APL to do something genuinely useful in a multimodal voice experience. It doesn't replace the voice experience; you should still develop in a voice-first way, where you never require a screen to present to. But if a screen is available and you want to augment the experience for the user, you can use APL.

What APL basically does is let you define how a screen will look in a file called an APL document, and then separately define the data that goes to the screen. So there's layout, the way the screen looks, and there's data, what the screen shows within that layout. For example, you could lay out three images and some text on the screen, then pass data to the device dynamically at runtime to update what the screen says.

It's an interesting technology, and again, a great way to augment the voice experience. We've used it, we like it, and we think it's a promising way to go at it. So once again, this is SimpliSpoken on Voice Daily Flash Briefing. Please do hop over to the skills store and give us a good review, and we'll talk again tomorrow. Thanks, guys.
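The layout-versus-data split described above can be sketched as a minimal skill directive. This is a hedged illustration, not a full skill: the `Alexa.Presentation.APL.RenderDocument` directive type and the document's `mainTemplate`/`datasources` shape follow Amazon's published APL format, but the version string, field values, image URLs, and the `slide` data-source name are all hypothetical.

```python
# Layout: an APL document describing HOW the screen looks.
# Three images and some text, with values bound to a data source
# via ${payload...} data-binding expressions.
apl_document = {
    "type": "APL",
    "version": "1.4",  # illustrative version
    "mainTemplate": {
        "parameters": ["payload"],  # data source bound at runtime
        "items": [{
            "type": "Container",
            "items": [
                {"type": "Text", "text": "${payload.slide.title}"},
                {"type": "Container", "direction": "row", "items": [
                    {"type": "Image", "source": "${payload.slide.images[0]}"},
                    {"type": "Image", "source": "${payload.slide.images[1]}"},
                    {"type": "Image", "source": "${payload.slide.images[2]}"},
                ]},
            ],
        }],
    },
}

# Data: WHAT the screen shows, passed dynamically at runtime.
# Updating this dict changes the screen without touching the layout.
data_sources = {
    "slide": {
        "title": "Today's picks",  # hypothetical content
        "images": [
            "https://example.com/a.png",
            "https://example.com/b.png",
            "https://example.com/c.png",
        ],
    },
}

# The skill response carries both in a RenderDocument directive.
directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "slideToken",
    "document": apl_document,
    "datasources": data_sources,
}
```

The key design point is the separation: the document is static and describes structure, while the data sources are swapped in per request, so the same layout can render different content on every invocation.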