All right, we'll go ahead and get started. For those of you who don't know, you are in the STEM class and you are about to hear the smooth jazz. I'm just kidding. Speaker Patty Reeves is teaching us how to make your website talk, how to use WordPress to power an Alexa skill. It's pretty cool. Take it away, Patty.

Thank you, Claudia. Yay! All right. I know we've had some internet issues, so I'm really hoping I don't jinx myself, because what I have planned for you today is we're gonna go from zero to hello world making an Alexa skill. So who here has an Alexa device? Cool. Who here has made a skill before? Ooh. Who here has any interest in making a skill? Oh, great.

So, who am I? I'm Patty Reeves. I am very new to Arizona. I moved here a year ago and this is my first WordCamp Phoenix. Yay! I am super nervous because it is way bigger than WordCamp Maine, which is where I'm from. I am a senior user experience developer at Alley Interactive. We are a web development agency and a WordPress VIP partner, and we often work with large publishers and nonprofits to bring their business online. I am a contributor to VoiceWP. I can say that as of yesterday; my pull request got merged. But I really am a novice Alexa skill developer. I pitched this talk in November, and at that point I was a little bit farther along than where I'm gonna take you today, but I still feel like I have a lot to learn. So I hope you came with your questions. I don't know how many of them I can answer, but I'm really excited about the potential of this technology, and I feel like it's gonna be the smartphones of the next decade. Ten years ago we were talking about making our websites mobile ready and mobile first and responsive, and "does it work on phones?" is a ridiculous question in 2019. I think a couple of years from now we're gonna be asking: is your website multimodal? Is it gonna work with text to speech?
Is it gonna work on someone's refrigerator or in someone's car? This is a way to get into that and learn about it. But the technology is still really new and we're figuring it out as we go along, so it's a really exciting place to be right now.

So what is VoiceWP? It's one of a couple of different plugins out there that can create a link between your WordPress site and Amazon's Alexa services. It was made by my former colleague, Tom Harrigan; there's his picture. He was a partner at Alley Interactive, and just at the beginning of this year he left, with Alley's support, to spin up his own company around VoiceWP. So I'm gonna show you how to use it.

How I started making a skill is that in my work at Alley Interactive, we got a grant from the Knight Foundation to work with the Cooper Hewitt, Smithsonian Design Museum to make an Alexa skill that allows a person using their Alexa device to walk through a gallery at the museum. It's gonna come out in April, it's really cool, and we used VoiceWP to build it. So I went from being a user experience developer, most often working with CSS and JavaScript and React, to diving into the WordPress REST API, which was all pretty new to me. But it surprised me how thinking about user experience on a website, visual user experience, still translates to how you think about voice user experience design. And in some ways it's very, very different, because voice is a linear experience: you have to hear me talking now to hear what I'm gonna talk about later, whereas with visual design your eyes can go to whatever they're interested in. But I wanna talk now about what makes a good Alexa skill, if you have an idea. Here are three points that I've been thinking about. One is that it's something that works really well in a hands-free environment.
So if someone is driving a car or making dinner, that's a good time to have a voice experience, where they don't necessarily have to see something going on. Another thing to think about is whether it's part of a habit loop: a person has an event through the course of their day and your skill is kicked off because of that event. So one skill that I have installed on my Alexa that I really like is for my dog. There are three adults who live in my house: me, my husband, and my mother-in-law. And I have two pugs, and one pug is very fat and he's always scratching at the food bowl. We could just feed him every time he scratches, and that's how he's getting fat. So I have a skill where every time he scratches at the food bowl I say, "Computer" (because that's what I call my Alexa), "is the dog hungry?" And there's a database that records whether or not I, or someone else, has asked that question in the last six hours. If no one has asked Alexa in the last six hours, she says, "Yes, the dog is hungry, do you wanna feed him now?" And I say yes, and then I feed the dog. And then hopefully the next person who is going to feed the dog asks Alexa if the dog is hungry. So that's an example of what I mean about being part of a habit loop. And the third point is that it doesn't take a lot of back and forth. Who here has tried to make a flight reservation over the phone? Those are painful, awful experiences. Or you call your credit card company and you have to talk to a computer for a long time. That's really where the history of voice user interfaces is, in these phone systems. And what we know from that research is that people are gonna be really impatient if they have to talk back and forth with the computer for a long time, unless you're really careful about making it a delightful experience.
So those are some things to keep in mind as you're thinking about what kind of skill you'd like to make. That being said, this is not a talk about what makes a great voice user interface. If you are interested in that topic, there's a really great book, Designing Voice User Interfaces by Cathy Pearl. She goes into the history of voice user interfaces (the first voice user interface was in the '50s, which is crazy) and the technological constraints around them.

So this is my simplistic chart that I made to illustrate how a skill works. The person talks to the Echo, and the Echo has some machine learning capabilities to translate the intent the person has into, basically, one of a big array of different requests you can make. The Echo sends that request to Amazon, and you have a developer dashboard where you configure all the things about your skill. Then Amazon sends that request to an API on your server, where you can interpret what the user wants from that request and send the response you want the Echo to say back to Amazon; Amazon sends it back to the Echo, and then it goes to the user. Which is why I am very scared about the wifi, because you can see it has to go through a lot of paths between the Echo and you. But hopefully it will work today. Any questions so far? Cool.

So what you need in order to get started: you need an Amazon developer account (it's free, and it's the same one you buy dog biscuits with), you need a WordPress installation with the required plugins, which I will get into, and a local development environment. I use VVV, which stands for Varying Vagrant Vagrants (thank you, Chad), and it allows you to spin up a local development server. Some of the stuff I'm gonna talk about today is specific to that.
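The request Amazon sends to your server in that diagram is a JSON POST. Abridged, and with placeholder IDs, it looks roughly like this; the full schema with all the session and context fields is defined by the Alexa Skills Kit:

```json
{
  "version": "1.0",
  "session": {
    "application": {
      "applicationId": "amzn1.ask.skill.xxxx"
    }
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.xxxx",
    "intent": {
      "name": "HelloWorld",
      "slots": {}
    }
  }
}
```

Your endpoint checks the `applicationId` against the skill ID, looks at `request.type` and `intent.name`, and sends back a JSON response with the text the Echo should speak.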
You might not need to do it if your local development setup is different, but if you do, then this will be helpful.

So the very first thing you're gonna do is go to the Alexa Developer Console at developer.amazon.com, and don't worry, I have links to the slides and everything after this. There are all sorts of things you can do from developer.amazon.com; you're gonna go to the Alexa one and create a skill. If you haven't created any skills, this is what your dashboard looks like, and you're gonna go ahead and hit one of those blue buttons. On the next page, at the very top it asks for your skill name. You can change this as many times as you want in the development process, but once you've submitted it to the Amazon store, that is the skill name. The language we pick is English. Then you can choose a model to add to your skill; in this example we're just gonna go custom, but if you're curious and wanna play around, they have some models that are created already that get you further along than nothing. And at the bottom you can choose a method to host your skill's backend resources; we're gonna pick self-hosted. Okay, once you've done that it's gonna ask you for a template, and I believe these are new. We're gonna go ahead and start from scratch. Once you get to that point, you're gonna load this page, and as you're developing a skill you're gonna get very, very familiar with this dashboard. This is your console dashboard, and just like we specified the skill invocation name when we created it, you can change it right here. One requirement for your skill's invocation name is that it cannot contain the word "computer" or "Alexa" or whatever the wake word is for a device. Who here knows what a wake word is? Anyone wanna shout it out? What do you use? Alexa, computer, Echo, or Amazon.
I personally use "computer." Any Star Trek fans out there? So you can't use that in your skill name. In this example I called the skill "hello world," so you would say, "Computer, open hello world," and that's how you get it to open. Once you have done all of that, you have a skill ID. This is important; we're gonna copy it now, go set up a bunch of other stuff, and come back to Amazon. The skill ID looks something like amzn1.ask.skill. followed by a bunch of stuff. The reason you need this is that you're gonna plug this skill ID into your WordPress installation later. Now, let's say I'm developing this site on my computer and then I wanna actually publish it to the world: you're gonna need to create two skills like this, one for your local development environment and one for your production environment. And that's where the invocation name is important. You really want them to be different. They absolutely have to be different. If they are the same, you are going to have problems; they are not going to work like you expect them to. I'm gonna keep drilling this over and over again, because it is a problem that I constantly run into when I'm not developing carefully.

So we got the skill ID. Now let's play with WordPress. I have a repo, and I will show you the link again at the end; it's github.com/pattyreeves/hello-alexa. It is a wp-content repo, so all of your plugins and your themes are in this repo. This is what the plugins folder looks like. You're gonna need VoiceWP as a plugin, you're gonna need Fieldmanager as a plugin, and then I put the code for our Alexa skill in a plugin. This is what Hello Alexa looks like. Does anyone remember why I couldn't call it "hello alexa" in the invocation name? Because it contains a wake word. That's why it's "hello world," but I called it Hello Alexa here. So, helloalexa.php is the plugin file, and then I have a package.json file.
There's no JavaScript in this project, but the package.json is helpful because it's how I'm going to run my proxy service, which is going to give me some debugging information between working on my local computer and sending that information to Alexa. So I'm gonna go over the very bare bones of what is in this plugin. I yanked this out of the constructor method: we're creating a field to save that Alexa skill ID (that's what the add action for the Hello Alexa settings submenu is; there's gonna be a page, Hello Alexa settings), and then I'm adding a REST API endpoint, basically the endpoint that Alexa sends requests to. From there... too fast, too far. This is what that package.json file looked like. So there were really only two files in this plugin. What's in here is an alias for Bespoken Tools, which is a really sweet service that gives me debugging information between my computer and Alexa, so that I can see what Amazon is sending to me and what my computer is sending to Amazon as I'm working. So if you've checked this out and you've set up your WordPress site, you have to run npm install, and that's going to install the service. After you run npm install (sorry, getting ahead of myself), you'll have it and it'll be ready to go.

Okay, so everything's configured, everything is checked out from Git, WordPress is running. I'm gonna head to the WordPress admin and paste that Alexa skill ID into this field. You can see here, just because of where I configured it, it's under Tools, Hello Alexa Settings, if you end up trying this at home later. Just a note: I tried configuring a skill where I saved the skill ID to an environment file, so that you could have one for your local and one for your dev, but I had a lot of problems with caching and it not working if it wasn't in the database.
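As a rough sketch of what that constructor wiring might look like: the hook callbacks, the settings page slug, and the endpoint path here are assumptions based on the description, not the actual plugin code.

```php
<?php
// Hypothetical sketch of the plugin bootstrap described above: a settings
// page under Tools to store the skill ID, and a REST endpoint for Alexa.

add_action( 'admin_menu', function () {
	add_submenu_page(
		'tools.php',                  // parent: the Tools menu
		'Hello Alexa Settings',
		'Hello Alexa Settings',
		'manage_options',
		'hello-alexa-settings',
		'hello_alexa_render_settings' // hypothetical callback that renders the skill ID field
	);
} );

add_action( 'rest_api_init', function () {
	// Amazon will POST skill requests to /wp-json/hello-alexa/v1/skill
	// (namespace and route are illustrative).
	register_rest_route( 'hello-alexa/v1', '/skill', array(
		'methods'             => 'POST',
		'callback'            => 'hello_alexa_skill',
		'permission_callback' => '__return_true',
	) );
} );
```

The endpoint has to be publicly reachable (hence the proxy later), and the skill function behind it validates that the request really came from your skill ID before responding.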
All right, so this is terminal, and I know it's very small, but I'm gonna try to walk through what the screen says. From that plugins folder, I ran npm start, which was the alias for the proxy server. And Bespoken Tools gives me a nice little message: "Your public URL for accessing your local service is" (random URL), and the URL for the dashboard is that second orange random URL. So we're gonna go to that dashboard and configure it; you only have to do this once. This is what Bespoken Tools looks like. You hit that "create a new skill or action" button and you fill out the name of the skill and how you open the skill, so the wake word right there, and the invocation. To be honest with you, I've tweaked or changed the invocation name a lot of times and I never had to update my settings for the skill after I did it once, so I don't know that they actually map to anything in particular.

Okay, if you are using Vagrant and VirtualBox, you will need to make sure that the port you're proxying through is open. This is the site's configuration file for the site I'm working on. What it's doing is listening on that port and saying: hey server, anytime something hits localhost:9994, please send it to this endpoint. You're gonna save that, restart the nginx service, and you're not quite done yet. There are a couple more things you have to do to make it work, because you also have to go into VirtualBox (this is a GIF, so it'll go back to where it was): you go to the network option, and then, it's covered up right now, but you go to port forwarding. You might have some port forwarding in there already, and you need to add one for the port that you're working on. Another little gotcha is that if Vagrant goes down and you restart it again, all that port forwarding is going to go away.
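The site configuration described above might look roughly like this; the port 9994 comes from the talk, but the server name and upstream are assumptions about how the VVV site is set up:

```nginx
# Sketch: listen on the proxied port and hand everything off to the
# local WordPress site. Hostnames here are hypothetical.
server {
    listen 9994;
    server_name hello-alexa.test;

    location / {
        # Anything hitting localhost:9994 gets sent to the WordPress endpoint.
        proxy_pass http://hello-alexa.test:80;
        proxy_set_header Host $host;
    }
}
```

After saving a change like this, you reload nginx (for example with `sudo service nginx restart` inside the VM) for it to take effect.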
So you might want to edit the Vagrantfile, and then it'll always be there. Okay, so ports are forwarded, the plugin is set up, the skill ID is pasted in, and we've created the skill on Amazon. Now we have to go back and put the endpoint that is on our local environment into Amazon, so it knows where to send the requests. We're back at that same console we saw when we first created the skill: we're in the Build section, and we go to Endpoint, which is highlighted in blue on the left side of the screen. We're gonna choose HTTPS, which was the option we chose way back when we created it. You don't need to worry about filling out every region; while you're developing, you just need the default region. But if you had a skill and you wanted one version for North America, one for Europe, and one for Asia, you can do that. What you're gonna do is paste in that proxy link we got in terminal earlier. So instead of telling Alexa to visit my IP, which would be like 172.56, blah, blah, blah, and might change every time I go to a different location, I have this proxy URL. It never changes, you never have to touch it again, and it's always gonna work when you're running the proxy from your computer.

So, next step. This is the magic of Alexa: the intent model. What you have here (I'm too short) is a list of all of the things you can tell your Alexa, and it's going to send that request to the server. It gives you a whole bunch for free that you don't have to configure: cancel ("I don't wanna do that anymore"), help ("Alexa, help me"), stop. Those all come built in. But then you can create your own.
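The Vagrantfile edit mentioned above keeps the port forwarding from disappearing on a restart. A minimal sketch, assuming the guest serves on the same port:

```ruby
# Sketch of a persistent forwarded port in the Vagrantfile, so a
# `vagrant reload` doesn't wipe the VirtualBox port-forwarding rule.
# Port 9994 comes from the talk; the guest port is an assumption.
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 9994, host: 9994
end
```

With this in place, the rule is recreated every time the VM comes up, instead of living only in the VirtualBox GUI.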
So you could have, like in my feed-the-dog skill example, "Alexa, is the dog hungry?", and the intent might be something like DogHungry: a computer-readable term that you'll understand. In this example here, I called it HelloWorld. Once you create it, you give Alexa some examples of what someone would say when they mean that; I said "say hello" here. Alexa does fuzzy matching, so it's supposed to be smart enough that you can say some variation of something like that and it should work. You can also have a slot, and a slot is like a variable. So in my example of the gallery skill I'm building, where the user is walking through a gallery virtually, I have go forward, go backward, go right, go left, where the second word is the variable. That gets passed to my API and I can conditionally send different responses based on what they tell me. Any questions so far? We're almost there, I promise.

So after you do that, there's another slide. The last screen I showed you is a GUI to do this, but it always creates a JSON blob that you can access from the JSON Editor option. This is very helpful because I have my local skill that I'm working on and my production skill, and you don't have to update through the GUI every time you add a new intent; you can just copy this blob back and forth. Fair warning: the invocation name is in this blob. So if you copy your local development invocation name, be careful you don't overwrite your production invocation name and have lots of problems. Once you do that, you're gonna hit the Save Model button up at the top and then the Build Model button, and you have to do that every time you make changes. Once you have done that, you can head over to the Test tab.
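That JSON blob follows the interaction model schema from the Alexa developer console. A trimmed-down sketch, with the gallery-style slot included for illustration (the MoveIntent and Direction names are made up for this example):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "hello world",
      "intents": [
        { "name": "AMAZON.CancelIntent", "samples": [] },
        { "name": "AMAZON.HelpIntent", "samples": [] },
        { "name": "AMAZON.StopIntent", "samples": [] },
        {
          "name": "HelloWorld",
          "samples": [ "say hello", "say hi" ]
        },
        {
          "name": "MoveIntent",
          "slots": [ { "name": "direction", "type": "Direction" } ],
          "samples": [ "go {direction}" ]
        }
      ],
      "types": [
        {
          "name": "Direction",
          "values": [
            { "name": { "value": "forward" } },
            { "name": { "value": "backward" } },
            { "name": { "value": "left" } },
            { "name": { "value": "right" } }
          ]
        }
      ]
    }
  }
}
```

Notice the invocationName sits right at the top of the blob, which is exactly why copying it between local and production skills is risky.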
And what Test is, is it gives you a simulation environment for an Alexa device. All you have to do is hit that drop-down menu up at the top, Development, and that will enable it. Then you can hold down the microphone and ask it just like you would if you were using an Alexa device, or you can type in your requests. If you work in a coworking space or an office, where you don't wanna be saying "open a skill, blah blah" out loud every couple of minutes, it is really nice to use this tool. It's also nice because, as you'll recall from using your own Alexa device, if you stop talking to it, Alexa is supposed to stop listening and it will turn off. But you can keep this open and it never will close your skill, so when you're working back and forth, this is ideal. However, I strongly recommend, if you are interested in developing an Alexa skill, that you actually get a device, because then you'll get a real feel for what words it's going to recognize, whether a phrase sounds natural, and how it sounds when it's read back to you. This is not really a great substitute for the real thing. What is happening on the other side of the equation is in our terminal where we had run npm start: the orange blob is what Alexa is sending to your computer, and the blue blob down at the bottom is what your computer is sending to Alexa. Pretty cool. And that is hello world.

So I thought that together we could live-code an example where we add an actual intent to it. Right now, if you say hello, I believe it does not do anything. Can you guys hear me okay? If I go here and go to edit, I have that intent, and if I go to Test... I make sure I'm in my plugin folder, I run npm start, and that runs my proxy. So if I type "open hello world" and send the request, I don't have any audio, but it said, "Hello WordCamp Phoenix 2019."
But now if I say hello, it won't do anything, because I haven't programmed it to do anything, and you can see it's not even saying anything. So what you wanna do... I'm gonna go through more of what's in this plugin. I showed you the constructor function already. This is the route that we called, and what it does is run the skill function once that route is hit. In the skill function, all this stuff is boilerplate, like making sure the request is coming from the skill with the right ID, and then it gets a request and a response. And then what we do here is run this function with the request and response, and I already have "Hello WordCamp Phoenix 2019, goodbye WordCamp Phoenix 2019." I can change it right now and it will automatically update. So if I do "open hello world"... now, the danger is that this is getting proxied through my tethered phone, so it might not always work. What I'm gonna do is reload so I get a fresh session, and we get the response, "See, I automatically update." So I'm gonna go back. I did say hello, which was that intent we created but never finished, and it doesn't do anything. And you can see here (I wanna make this bigger) it sent a request of the type IntentRequest, and the intent name is HelloWorld. My skill didn't have any logic on how to handle that, so it didn't do anything. But what we can do here... live typing is dangerous, do I have this on the side here? I think I'm just gonna paste this in. Okay, I'm gonna walk through what this is. The request type is IntentRequest, which I grabbed from this type right here, and the intent equals the request's intent name: so it's "intent" and then the name is HelloWorld. And then you can just have a big switch statement that says handle this request this way, handle that request that way. So in this case it's HelloWorld, and I also included the default, just in case I've created an intent and didn't handle it.
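The handler described above might be sketched like this; the request and response method names are approximations of what the talk describes, not a verbatim copy of the VoiceWP API, and the spoken strings are just the ones from the demo.

```php
<?php
// Hypothetical sketch of the intent-handling switch described above.
// $request and $response are the objects handed to the skill function
// after the boilerplate has verified the skill ID.
function hello_alexa_handle( $request, $response ) {
	// Amazon tells us what kind of request this is: a launch request
	// ("computer, open hello world") or an intent request.
	if ( 'IntentRequest' === $request->type ) {
		switch ( $request->intent_name ) {
			case 'HelloWorld':
				// The string passed to respond() is what the Echo speaks.
				$response->respond( 'This is coming out of my plugin.' );
				break;
			default:
				// Fallback for intents we created but never handled.
				$response->respond( "Sorry, I don't know how to do that yet." );
		}
	} else {
		// The launch request greeting.
		$response->respond( 'Hello WordCamp Phoenix 2019.' );
	}
}
```

Because WordPress serves this on every request, editing the string and saving is all it takes for the next utterance to get the new response, which is the "see, I automatically update" moment in the demo.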
On the response object, you use the respond method, and you can just write a text string. It should now say, "This is coming out of my plugin." I hit save, I go back here, and I'm gonna reload, not because you have to but because I'm a little worried about this internet connection. I open it, and it says, "See, I automatically update." Then I say hello, and: "This is coming out of my plugin." Any questions? That was what I had for you. I really wanted to let you know that once you configure all of these things, it really is very easy to develop an Alexa skill this way. It might not be the most straightforward way, as opposed to using Node or other JavaScript, but if you have a big WordPress site with all your different configurations, you can make this a plugin that sits with any other plugin, and it has all of the power of the WordPress API. You can query posts, query settings; it's a really cool kind of interface. Making this gallery skill, the editors are editing their scripts for what they say at different gallery stops right in WordPress. It's very cool. Anyone have a cool skill idea? Any questions at all? Yes.

What do you mean, the VoiceWP plugin? What I was just working on. So this is what is available to you as part of the VoiceWP plugin, and I kind of glossed over it up here, but you create a VoiceWP instance right here on line 72, and then you're able to take the request from the REST endpoint right here; VoiceWP wraps it with all of the methods so that you can send the responses the way that Alexa wants them. What's your question? Yes, Alley has developed a couple of Alexa skills. We did one for People Magazine, we've done a couple of small ones, and I'm working on the one I was talking about with the Cooper Hewitt, Smithsonian Design Museum, but it's not published yet.
So when someone uses the skill, they're using their Echo Dot and they're not using a computer at all. Am I getting your question? So, you don't need an Echo Dot or an Echo or an Echo Show to develop it, but you could, because the cool thing is that your account is associated with your devices. So once you start developing it and you register it all in Amazon, it's already available on your devices in development; you don't have to proxy it. But the proxy, as you can see, gives you all the information about what is going back and forth between the servers. And so, VoiceWP does have some default fields out of the box, so you could have a skill that's like a blog reader, reading my last five posts. Can it read me, like, the third or the fifth post? It doesn't, but that would be pretty cool. Now, if you wanted to do something like manage your site via Alexa, in Amazon you can create a skill that requires permissions, where the user would have to link their Amazon account with an account on your WordPress site, so that not just anyone could go and delete comments on your site. So, is there a way to target any particular piece of content within a page, like a block for Gutenberg? You mean like Alexa would only respond with a particular piece? The way I would think about doing that would be: you'd query for the post, and then you'd query for that particular content within the post, just like you would if you were writing a template to display it. But then you'd have the REST endpoint here, and it would read that back as a response. I don't believe that Siri has an app development environment like this, though if it does, and anyone knows about it, please correct me.
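The blog-reader idea above can be sketched with standard WordPress query functions; the `$response->respond()` call is the same assumption about the skill framework as before, not a documented VoiceWP method.

```php
<?php
// Hypothetical "read my latest posts" intent handler. get_posts() and
// wp_list_pluck() are standard WordPress; the response object is assumed.
function hello_alexa_latest_posts( $response ) {
	$posts = get_posts( array(
		'numberposts' => 5,
		'post_status' => 'publish',
	) );

	if ( empty( $posts ) ) {
		$response->respond( 'There are no posts yet.' );
		return;
	}

	// Build one spoken sentence listing the five most recent titles.
	$titles = wp_list_pluck( $posts, 'post_title' );
	$response->respond(
		'Here are the latest posts: ' . implode( '. ', $titles ) . '.'
	);
}
```

Picking out "the third post" would just mean adding a number slot to the intent and indexing into that same `$posts` array.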
I know that Google Home allows you to create actions for the Google Assistant, and VoiceWP only works with Alexa right now, but that might be something that... right, right, there aren't really any standards right now. You had a question all the way in the back. You could; now I am at my last minute, but come talk to me later and I can point you in the right direction. All right, thank you very much.