So, hi everyone. Thank you for having us here today. We are going to talk about Alexa skill design patterns and principles, but not only that, because we made it in time to prepare a Drupal contribution about Alexa that we think might be very interesting for the community, so we are also going to cover that during this talk. I'm Alessandra Petromili, Chief Experience Officer at Evidence Italia, which means I'm also a user experience designer. And here is my colleague Rafael, who is a software engineer and, of course, also a Drupalista, as I am.

But first of all, let's start by talking about Alexa. I guess probably every one of you already knows what it is, but just to summarize: it is Amazon's virtual assistant. You can ask Alexa any kind of question, you can tell your intelligent home appliances to switch on or switch off, and so on. And, of course, you can also use any application that has been made for Alexa. Those applications are called skills, and of course they don't live on the device; they run on the Amazon cloud. But I don't want to go into too many technicalities, because I actually want to talk to you about design.

So, what I want to do is introduce you to designing with Alexa. Regarding challenges, what I found quite difficult is that vocal interfaces create a series of expectations that raise the level of quality our users expect. Think about when you use a graphical user interface and something goes wrong: you usually think it's your fault. But when the same happens with a vocal user interface, you usually think it's the system's fault. Or, at least, that's what happens on average. And the reason why is that we all think we are quite expert at having a conversation.
Of course, when it comes to speaking with each other and using this kind of vocal communication, we all learn to speak when we are around two or three years old, so it makes sense that we think we are experts at it. So, what should we do in order to create a skill that also has a good retention rate? Because here we have another problem: if our application does not meet the criteria of reliability, utility, and ease of use, what happens is that one week after downloading the skill, only 6% of users are still using it. These are data coming from Voice Labs, from 2018, so quite recent. Why does that happen? Because most of the time we are not able to create an application that our user finds really useful.

So, how do we try to do it? In the skills we have designed and developed so far, we always start from the user, of course, and we follow the process you see here. We start with the goal definition. To do that, we use a framework called Customer Development Interviews, developed by Cindy Alvarez, whom you may know. It's a very good tool that helps us understand what our customer really needs, and what the final customer needs too. I really suggest you dive deep into it, because it's really useful.

After that, of course, you have to define the personas. But since we also have a system involved here, you need to define a persona for the system as well. You might think that's really simple, but actually it's not, because when you are designing a skill you also have to think about which kind of language the skill should use: which kind of words, whether those words should be very simple, or, if your target user is, I don't know, a librarian, maybe you want to use a different kind of language. And you also have to define the use cases in which your skill is going to be used.
This is really important, because often the context of your product is not suitable for a voice interaction. Often you need to combine voice and graphical interfaces, or maybe the context your user is in is simply not suitable for voice. For instance, if a client asks you for a skill to be used in a shop, maybe that's not really the best place, because it could be noisy. Or, for instance, we are working on something related to emergencies, and in that case we really needed to study the context and see in which cases the user is actually able to be close to the device, and whether voice is really something useful there.

Then you have to define the script. The script is the real interface of the system. And here there is a very big difference from designing graphical user interfaces: when we create a graphical user interface, we always look for consistency, but when we create a voice user interface, we have to avoid repetition; the two are actually opposites. For instance, if you talk with a friend of yours and he always repeats the same sentences and the same phrases, you find it very boring. And the same happens with a voice user interface. So you should create a dialogue between Alexa and the user that is entertaining, and a dialogue that helps the user reach the goal in different ways; basically, you have to define different paths to reach the same goal.

So you start from the dialogues, but these are not enough, because after the dialogue you actually have to create the conversation flow. Here is an example of a conversation flow: a diagram that, starting from your dialogue, helps you understand where the conversation could actually fail.
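A conversation flow like the one just described can be sketched as a small state machine, where every state maps expected user inputs to the next state and carries a fallback branch for everything else. This is a hypothetical illustration (the state and input names are made up), not the actual tooling we use:

```python
# A minimal conversation-flow sketch: each state maps recognized user
# inputs to the next state, with a "fallback" branch for anything else.
FLOW = {
    "start": {"order pizza": "choose_pizza", "fallback": "clarify_start"},
    "choose_pizza": {"margherita": "confirm", "marinara": "confirm",
                     "fallback": "list_pizzas"},
    "list_pizzas": {"margherita": "confirm", "marinara": "confirm",
                    "fallback": "clarify_start"},
}

def next_state(state, user_input):
    """Return the next state, falling back when the input is unexpected."""
    transitions = FLOW.get(state, {})
    return transitions.get(user_input, transitions.get("fallback", "start"))
```

Writing the flow down like this makes every possible failure point explicit, which is exactly what the diagram is for: each `fallback` entry is a place where you have to design a recovery dialogue.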
And that is really, really important, because if you know where the conversation can fail, you can also create a fallback that keeps the conversation enjoyable for the user. This is also what really helps you create different paths to reach the same goal, because it's not something you can do with the script alone; otherwise you would have to write tons and tons of scripts, and that's simply not possible.

I also want to suggest a tool to you. It's called the Utterances Generator. Again, we are humans, so we cannot write tons and tons of sentences, but we can at least write the best ones and then use this tool to generate all the variants and synonyms. It works in many languages, and I find it works pretty well; it's helping us a lot with the skill we are developing.

And last but not least, it's also really important to do usability testing on the first draft of your skill. We usually use the Wizard of Oz technique, meaning that the skill is not developed yet, but we can still check if the language is working, if the fallbacks are working, and, well, we always discover something new that we didn't know, so that's why I think you should really do it.

Well, before giving Rafael some time to talk about the Drupal contribution, I also want to cite some data about Alexa and eCommerce, because the idea for the contribution actually came from a situation we had with a client. He has a Drupal eCommerce website, and we thought it would be really nice to integrate this eCommerce website with Alexa. In Italy, the possibility to pay through an Alexa skill is coming very soon, probably in the next couple of weeks, so we wanted to be ready. Also because, from the data you see here from the UK and the USA, half of the people who own a smart speaker are actually buying something through it.
These are data from OC&C, and from these other data you see that people are mainly buying standalone items, in categories such as electronics or homeware, and so on. That's also very important because, again, you have to think about the use case and the context: at the moment, people are usually buying only single items.

But then what we asked ourselves was: okay, it's really cool to have a skill, but what if our customer, and that's the case most of the time, of course, is not a developer, and he actually wants to update the products, the items, or the sentences inside the skill, and he cannot do it because he is not a coder? That's why we are integrating Drupal into it.

So, first, an overview of the Alexa skill creation process. These are, more or less, the steps to perform to create a new Alexa skill. You need to insert the skill metadata, the metadata that will appear on the Alexa store when you install the skill. You need to define an invocation name for the skill, per language, because the invocation name is what the user will say to call your skill. You need to add sample utterances and intents: basically, the phrases provided by the user and the intents behind them. You need to define slot types; you can think of slot types basically as tokens inside your utterances. For instance, if I want to buy a product, I can say: "Alexa, how much does a product cost?" Here, product is the slot, the token. At the moment, for predictable inputs, you need to manually insert all the possible values into the interaction model, so that Alexa knows which values can trigger the specific intent. You need to add synonyms related to your skill. You need to configure an endpoint, an AWS Lambda function in most cases.
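The pieces listed so far, invocation name, intents, sample utterances, slot types, and their values, all end up in the skill's interaction model, which is a JSON document. A minimal sketch, built here in Python with made-up intent and slot names:

```python
import json

# A fragment of an Alexa interaction model: one intent with a custom
# slot type, plus the slot values Alexa should recognize.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "fresh pizza",
            "intents": [{
                "name": "ProductPriceIntent",  # hypothetical intent name
                "slots": [{"name": "product", "type": "PRODUCT_TYPE"}],
                "samples": ["how much does a {product} cost"],
            }],
            "types": [{
                "name": "PRODUCT_TYPE",
                "values": [{"name": {"value": "margherita"}},
                           {"name": {"value": "marinara"}}],
            }],
        }
    }
}

# Serialize it the way it would be written into the skill package.
model_json = json.dumps(interaction_model, indent=2)
```

In the sample utterance, `{product}` is the token that gets filled by one of the slot values, which is exactly the "how much does a product cost" example above.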
You need to write the code of your Lambda function, you need to write your server-side code, the Drupal controller, for instance, and you need to test your skill. There are some downsides to this process. It means repetitive operations for each customer: we have different customers who need an Alexa skill that does the same thing for any commerce site; every one of them wants the user to be able to ask how much a product costs. Also, this is not versionable: it cannot live in Git, so you cannot track the changes made to your skill over time.

So what we tried is to integrate Drupal into this process. We wanted Drupal to talk to Alexa's interaction model; we didn't want Drupal and Alexa to be separate. For instance, we want to avoid defining the slot values manually: we want the custom slot types to be generated from Drupal. ASK CLI is a command-line interface that allows you to manage your Alexa skill and related resources; you can find all the documentation on Amazon's developer site. For instance, you can run the `ask new` command to create a skill that lives in code, so you can track changes to your skill.

So we developed a first version of Alexa Skill Manager, which basically integrates Alexa's interaction model into Drupal. The interaction model is treated as Drupal configuration entities, so utterances, intents, slots, and synonyms are generated by Drupal and can be defined and changed by a content editor. We developed a predefined Lambda function, so it can be reused across multiple Drupal installations; you can also override or extend it. Answers provided by Alexa can be changed without writing a line of code: you go to the Drupal interface, change the answers, and the response provided by Alexa changes. Custom slots are generated by Drupal, and skills are exportable and reusable across multiple Drupal instances.
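To give an idea, a Lambda endpoint for a price-lookup intent can be as small as the sketch below. This is a simplified stand-in in plain Python, not the module's actual predefined function: the price table is hard-coded where the real setup would query the Drupal backend, and the intent and slot names are illustrative. The envelope follows the Alexa custom-skill request/response format:

```python
def lambda_handler(event, context):
    """Minimal Alexa endpoint: answer a product-price question or fall back."""
    # Hard-coded price table; in the real setup this data would come
    # from the Drupal Commerce backend instead.
    prices = {"margherita": 3, "marinara": 4}

    request = event.get("request", {})
    speech = "Welcome! Ask me how much a pizza costs."
    if request.get("type") == "IntentRequest":
        slots = request.get("intent", {}).get("slots", {})
        product = slots.get("product", {}).get("value", "").lower()
        if product in prices:
            speech = "The {} costs {} dollars.".format(product, prices[product])
        else:
            # This branch is the fallback path from the conversation flow.
            speech = "Sorry, I don't know that product."

    # The response envelope Alexa expects from a custom skill endpoint.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The point of keeping the function this generic is exactly what makes it reusable: all the customer-specific data (products, prices, answer sentences) lives in Drupal, not in the code.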
So we tried to eliminate the repetitive operations behind all that. Are you crazy? Yeah, it's possible. So, this is a quick demo, a very quick demo, of all this. Basically, we start from a fresh installation of Drupal and we create an Alexa skill in a few minutes: a simple skill with no account linking and no Amazon Pay integration, but a working skill in 10 to 15 minutes.

So we start. This is a fresh installation of Drupal. We enable the module and go to Alexa Skill Manager. First of all, we add the intent, because we need to define what the intents of the Alexa skill are. We add an intent, which is treated as a configuration entity. The label is "fresh pizza price intent"; this is an internal label. Then there is the intent name that will be provided to Alexa; the interface is still an initial one. We define the slot name, and we use the Drupal Commerce product variation title, so Drupal knows that the product it receives from Alexa is the title of a product in Drupal Commerce and can associate it with the product entity. In the same way, we define a Drupal variable name, which is the one returned by Drupal, and we connect them to the entity; we are still working on the interface.

We define the sample utterances, the possible inputs that could be provided by the Alexa user: "How much does a product cost?" We need to separate them by line, so we can insert one utterance per line. Then the answer provided by Drupal, again using the tokens: "A product costs $3", for instance. We save, and we have created the first intent.

Now we need to create our skill data. We add an Alexa skill; this is an internal label. Then the language of the skill, at the moment you have a subset of languages, and a summary. These are the data that the user will see on the Amazon store when downloading the skill.
So we have a summary, and we have example phrases: "How much does a pizza cost?", "How much does the other type of pizza cost?" We give a name to the skill, we set the invocation name from Drupal, we give a description, again for the Amazon store, and we associate the skill with the previously created intent. So we have our first skill.

Now we add synonyms, because the input is predictable. We want that, for instance, when the user says Margherita, which is a type of pizza, Alexa does not understand Margherita as a cocktail, but as a pizza. And yes, I had this exact problem while working on it. So when Alexa receives "Margherita", it will associate it with the pizza Margherita, not the cocktail, thanks to the synonym.

Now we can download the skill. We are developing a Drush command to do this, so it can also run in Jenkins with continuous integration to update our skill. It downloads an archive with all the stuff inside. This is the content of the archive: we have a predefined Lambda function, we have the interaction models generated by Drupal, and we have the skill.json with the metadata. We insert the downloaded files into a previously created project, a fresh project that you can create with one command, `ask new`, using the ASK CLI. So you put these files inside the skill project. Now you only need to deploy the project.

Okay, we deployed the project. Now, this is very important: in this instance of Drupal Commerce we have two products, the marinara and the Margherita pizza. The Margherita has a price of $3. Now we want to test the skill, so we ask Alexa: how much does the Margherita cost? It works. This is the Alexa developer console, where you can simulate the behavior of your skill by providing text input. Alexa answers: the Margherita costs $3.
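The Margherita example works because a custom slot value in the interaction model can carry synonyms, which Alexa resolves to one canonical value through entity resolution. A toy sketch of the idea, where the ID, the synonym strings, and the `resolve` helper are all hypothetical:

```python
# A custom slot value with synonyms: whichever synonym the user says,
# Alexa resolves it to the canonical value via entity resolution.
margherita = {
    "id": "PIZZA_MARGHERITA",
    "name": {
        "value": "margherita",
        "synonyms": ["margherita pizza", "pizza margherita"],
    },
}

def resolve(spoken, slot_values):
    """Toy stand-in for Alexa's entity resolution: map a spoken phrase
    to the canonical value's id, or None if nothing matches."""
    for v in slot_values:
        if spoken == v["name"]["value"] or spoken in v["name"]["synonyms"]:
            return v["id"]
    return None
```

In our module, these value-plus-synonym entries are exactly what gets generated from the Drupal Commerce product titles instead of being typed by hand.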
So we developed the skill without writing a line of code. This is the goal we wanted to achieve, and we are still working on it: we are developing the Drush command, account linking with OAuth, and the Amazon Pay integration; these are the three milestones we are trying to reach. But at the moment you can already use this for simple skills that require no account linking. It's a development version, but it's working. So you can join us in contributing to the Alexa integration and help finish it with account linking. In the next days I'll be here trying to work on the integration and the account linking, and also to write functional tests for it. And you can fill in the survey if you want to give feedback to DrupalCon. Thank you.