Good morning, everyone. Welcome to "Drupal, Alexa, and Big Mouth Billy Bass Walk into a Bar." My name is Amber Matz, and I'm a production manager and trainer at Drupalize.Me.

Good morning. My name is Blake Hall. I'm a senior developer and trainer, also at Drupalize.Me.

Here's what we're going to cover in this session. We'll look at Alexa custom skills, specifically the concepts you'll want to understand, as well as the process of creating an Alexa skill. Then we'll walk through three different example approaches to Alexa skills; two of those will include integrating data from a Drupal site into the response that Alexa reads back. Finally, we'll wrap up with a demo of two of our Alexa skills with a bit of a fun hardware twist, featuring a Big Mouth Billy Bass animatronic fish, an Arduino, and an Echo Dot.

There are three types of Alexa skills: custom skills, smart home skills, and flash briefings. We'll be covering custom skills, but we'll also demo our Drupalize.Me flash briefing at the end of the session.

By the end of this presentation, you should feel empowered and ready to create your own custom Alexa skill, with or without Drupal. To get the most out of this presentation, you should be an intermediate coder and comfortable tinkering with code, but you don't have to be a Node expert or a web services expert to create an Alexa skill. It's a pretty accessible development experience. If you've already created a custom Alexa skill, stick around, as we'll go beyond the basics in this session, and maybe you'll pick up a new trick or two.

The first thing to consider when building a skill is how users will interact with it and the types of phrases your code will support. Alexa's style of interaction is very direct and full of commands like "ask," "tell," "open," and "launch." For example, in our Fish Jokes skill, to get Alexa to say a joke, you would say, "Alexa, ask Fish Jokes for a silly joke." We can simplify this into a very basic workflow diagram where you ask Alexa to get a silly joke from Fish Jokes, and that's exactly what you get. In this particular case, we're hearing Alexa say a joke tagged with the term "silly" from our Drupal site, fishjokesfor.life.

There are four terms we're going to be throwing around a lot in this session that are key to designing an Alexa skill: activation, invocation, utterance, and intent. Let's look at our example command, "Alexa, ask Fish Jokes for a silly joke," in more detail.

"Alexa" is the activation or wake word. This is how you start a conversation with your device. Activation words can be "Alexa," "Echo," "Amazon," or "Computer." They are fixed by Amazon and can't be customized at this time, but they are configurable per device, so if you have two Echo Dots in your house, you could configure one to use "Alexa" as an activation word and the other one to use "Echo."

"Fish Jokes" is the invocation. This tells Alexa where to send your request. You can use the words "open," "launch," or "ask," plus the name of your skill, to enable the user to access your skill.

"For a silly joke" is the utterance and intent. This is passed along to your skill and determines the response. Utterances are the phrases your skill will recognize. This is your opportunity to think about the variety of phrases users might say to interact with your skill; utterances can use placeholder words, or slots, to make requests more dynamic. Intents are a behind-the-scenes way for your skill to support multiple kinds of requests. Some Amazon intents are built in, such as help, stop, and cancel, and we'll see how this works later on. Put together, these three concepts are the tools you will use to design the interaction users will have with your skill.

So with that basic vocabulary out of the way, let's take a look at how you go about designing the interaction model that goes into creating a skill in more detail. This departs a little bit from the docs Amazon has on their site, but we've found through doing this that it makes the most sense to start with utterances. Those are the things users will actually be saying to your skill, so figuring out what the utterance model looks like, and the different types of things people will say to trigger different behavior, seems like the best place to start to figure out what your app will actually be doing.

Today we're mostly going to look at custom skills. Of those three types Amber mentioned before, they're the most useful and the most typical, unless you're dealing with an internet or smart appliance, or the flash briefing, which we'll demo again at the end.

From a high level, once you figure out the interaction model, you need to figure out where the data that Alexa responds with is going to come from. Amazon has a service called Lambda that makes this process pretty easy and pretty simple to get started with. Lambda is basically a way to execute code on demand when a request comes in, so you don't have to have a server sitting around idle most of the time; you just pay per interaction. There's a really good free developer tier, so you won't have to pay while you're working on the skill, or have a server sitting around listening and waiting for requests to come in.

To get started: Amazon has a GitHub repository with a whole bunch of example code, called blueprints. We shamelessly copied and pasted some of these, which we'll see in the examples in a little bit. Any of the code that runs on Lambda is available in three different languages: you can use Java, Python, or Node.js. I'm most familiar with Node out of those three.
So we'll be looking at JavaScript examples in more detail. Since Lambda is executing Node.js code in our case, you can make that as complex or as simple as you want. The sample blueprint we started with is just a list of hard-coded values in an array that we pull a random item out of and respond back with. In the second example, we'll look at replacing that hard-coded array with the results of a web service call, which could pull data from anywhere, including a Drupal site.

Like I said, to start with, we shamelessly copied one of the blueprints, called Space Facts. It's a hard-coded list of facts about space; we swapped those out with fish facts and fish jokes, and then respond back with one of those. The actual JavaScript that makes up that blueprint is not necessarily incredibly straightforward, but at this point we're basically just talking about finding and replacing some text, without necessarily having to understand all of the functionality that's involved.

The second example, as I mentioned, will use the same exact blueprint as a starting point, but replace that hard-coded array with a call to our Drupal site, and then we'll take a look at how easy it was in Drupal to expose the JSON that comes back in that Alexa response.

In the last example we'll walk through, we completely get rid of AWS Lambda as a tool, and we have Alexa talking directly to our Drupal site, with Drupal responding back. It simplifies the stack a little bit, but there's some extra code that needs to be written.

There are a few things you need to do before you can get started creating a custom Alexa skill. First, you need to create an Amazon developer account.
You'll also use this account to access AWS if you want to use Lambda. Next, you'll need to sign in to developer.amazon.com and navigate to the Alexa tab. Click on Get Started under the Alexa Skills Kit and locate the docs; there's a link to getting started with the Alexa Skills Kit in the paragraph above the little dashboard there. To find the documentation specifically for custom skills, expand the Custom Skills menu item in the sidebar. Then go ahead and add a new skill by clicking that button. There's a lot of inline documentation in the configuration form, so you can dive right into configuration and access the docs as you need them. You don't need to memorize the manual before you even get started.

Alexa skill development consists of both configuration and code, so we'll start the process with the configuration our code needs to run. Not in this session, but in the general development process, you'll then go write your code, and then come back to the configuration and finalize it after your endpoint is ready to go.

In this first phase of configuration, we'll define our invocation name, intents, and utterances. The invocation name is the unique identifier for your skill and is part of what users will say to access and interact with your skill. There are two names on the Skill Information tab. The plain Name is what will be displayed in the Alexa app, where people find and enable new skills for their device. The Invocation Name is what users will say to interact with your skill, and there are some specific guidelines you can take a look at for how that can be formatted.

The next configuration section is where you define your interaction model, which consists of intents, slots, and utterances. This form may be changing (we noticed there's a beta available to some users), but this is where you'll define your intent schema, your custom slot types, and your sample utterances.

Sample utterances are where we're going to start, regardless of the order of the form, which might change (and did change in the beta); we think it's helpful to start with them. With sample utterances, you specify words or phrases that users say to invoke intents. This can be a variety of phrases; you should think about all the different ways a user might ask for this information, so it can be quite an extensive list of different phrasings. That way, your users aren't going to get frustrated because they're not saying the one exact phrase you programmed in, so you really want to have a variety. There are some good ideas for different types of phrasing in the documentation, which I found really helpful.

You map utterances to intents, and this map forms the interaction model for your skill. Here's a basic example of sample utterances. The first word, GetNewFactIntent, represents a specific intent. This is what creates the map between the user's request and the functionality of our skill; we'll see this again when we fill in our intent schema. Notice there's nothing extra: there's no activation word, and the name of our skill isn't in there. It's the intent name, which will be in our schema, followed by some phrasing.

Here's an example that uses slots. You can think of slots like tokens that represent dynamic values that will be passed along to your skill. This is useful for asking for things like the weather in a particular city, or a category of joke, as we have in this example; you can see the slot in the curly brackets there for category. Your list of utterances can include more than one intent, and you'll need to include a set of utterances for each intent in your schema. Your code can then use these different intents to return different responses based on the intent type.

Utterances allow users to say a variety of phrases as they interact with your skill. Don't include activation words or the name of your skill in sample utterances; I found that a little bit confusing at first as I was trying to breeze through a tutorial. You just include the intent name plus the phrasing, without any extra information. Do include a variety of phrases, as it will make it easier for your users to interact with your skill. And if you're using slots, make sure to include the slot in your utterances, in different places in the phrasing, using the curly bracket syntax.

Next in this configuration, we need to define our intents. Intents are the map between the utterances we created and the code that will execute. You define intents with an intent schema, which is a JSON structure that declares the set of intents your skill can accept and process. It's best practice to include Amazon's built-in intents for common actions like stop, help, and cancel; you'll see these included in the example blueprints available on GitHub, and it's a matter of copying and pasting them in.

Here's a look at the intent schema for our basic example. GetNewFactIntent, at the top there, is our custom intent; it's the placeholder word we used in our sample utterances in the other part of the form. This is followed by Amazon's built-in intents. So it's pretty straightforward to get a custom intent mapped, plus help, stop, and cancel, with just this simple JSON structure.

Slots are the variable words in our utterances, and they're optional; you can think of them as optional arguments, like in Views, for example. You need to configure slots so that Alexa knows how to pass them along to your code, and you include them in your intent schema if you're using them. Here's an example from our Fish Jokes skill that uses a slot to let us respond with categorized jokes. You can see there's an intent for getting categorized jokes, then the slot name and type, and after that is our other intent, which returns a fish joke without any category. If you're using slots, you also want to include an intent that doesn't use them: you don't want to assume that your users know what terms or categories your skill might have.

So that completes the basic interaction model configuration. You'll get to the point in the form where you have to configure an endpoint, and you'll realize you can't go any further until you do some coding and complete that endpoint. So it's time to write some code.
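Before we do, to make that schema concrete, here's a sketch of what such a structure looks like. The categorized-joke intent name and the JOKE_CATEGORY slot type are illustrative names, not necessarily the exact ones from our skill; GetNewFactIntent is the custom intent from the blueprint, and the AMAZON.* entries are the built-ins:

```json
{
  "intents": [
    {
      "intent": "GetCategorizedJokeIntent",
      "slots": [
        { "name": "Category", "type": "JOKE_CATEGORY" }
      ]
    },
    { "intent": "GetNewFactIntent" },
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.StopIntent" },
    { "intent": "AMAZON.CancelIntent" }
  ]
}
```

JOKE_CATEGORY would be defined as a custom slot type on the same form, and a matching sample utterance would pair the intent name with a phrase, e.g. `GetCategorizedJokeIntent tell me a {Category} joke`.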
We need an endpoint to actually provide the functionality for our skill. Like I said before, we're going to take a look at three different examples that grow in complexity as we go. Once you've figured out your sample utterances and the things users are going to say to interact with your app, the next step, if you're going through this process without copying from a blueprint, is to figure out where the actual data Alexa will respond with is coming from.

Our tip: at least when you're getting started, use Lambda. The free tier is really great. I discovered yesterday, or a couple of days ago, that since I have a published skill now, Amazon sent me an email giving me a free hundred-dollar credit for Lambda usage, and every month that I have an active skill, I get another hundred-dollar credit. So, as popular as I think the fish joke skill will be, it will probably always be free, which is pretty nice.

Also, in the open-source tradition, I would recommend shamelessly copying and pasting from whatever code examples you can find, at least to get started. There are blueprints available for all different types of skills (light switches and garage door openers and all kinds of stuff) beyond the basic examples we'll dig into here. There's an Alexa organization you can find on GitHub that has all those sample code snippets you can use, and that stuff is also available right in the Lambda dashboard itself when you're getting started with it.

I said this before, but just to reiterate: Lambda supports three different languages, so if you're familiar with Node, Python, or Java, there are examples of each skill type in each language, and you can dig in where you're most comfortable. I'd imagine that as Drupal developers, Node is probably something you're either already familiar with, or at least have been exposed to more recently than, say, a Java class in college. So we're going to look at Node examples.

Again, the simplest example uses the Space Facts Node blueprint, which you'll find if you dig into Lambda or look on GitHub. The response values come from an array that's just hard-coded in the Lambda function itself, and you can go in and replace the space facts with whatever information you actually want to return to users instead.

Just for completeness, here's the intent schema we set up for this skill. It has one single custom intent, GetNewFactIntent. Even though we're talking about fish jokes, I was really lazy doing the copying and pasting, so I left it as GetNewFactIntent rather than renaming it to something about jokes. Another important note, like Amber mentioned: it's good practice to include those built-in Amazon help, stop, and cancel intents. The first time I tried this, I didn't do that, and I also goofed up the JavaScript a little bit, so my Alexa responded with a fish joke and kept repeating the fish joke over and over and over and over again until I unplugged it.

We talked about why it's important to start with utterances and figure out the whole interaction model your user will have with a skill. Generally speaking, you'll want something more complete and complex than this. I think it's important to consider all of the different things a user might say to try to accomplish their goals with your skill, so they don't have to try to remember the exact phrasing. In this example, we only have these three: "tell me a joke," "give me a joke," "make me laugh." If you're building something more robust, you probably want to consider ways to not leave someone frustrated trying to remember the exact sentence they should use when interacting with your skill.

I'm sure this is really hard to read in the back, but I've committed all of these code samples; they're available in a fish jokes repository on my GitHub page if you want to check them out in more detail. This one in particular is not super important to look at, because it's just find-and-replace from the blueprint, but you can see there's a little data structure being built up with language strings. The blueprint that comes with Space Facts actually supports translation across languages, so if your Alexa user is in Germany, it'll pull from an array of German facts and phrases instead of English (US), as in this case. Basically, all we did here was rip out the space information and replace it with some fish information, which I think was pulled from Wikipedia.
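As a rough sketch of that find-and-replace (the joke strings here are placeholders, and the SDK wiring is only shown in a comment so the snippet runs standalone), the blueprint's data structure boils down to something like this:

```javascript
// Sketch of the blueprint's language-strings structure after the swap.
// The joke text is placeholder content, not what's actually on the site.
const languageStrings = {
  'en-US': {
    translation: {
      SKILL_NAME: 'Fish Jokes',
      JOKES: [
        'What do you call a fish with no eyes? A fsh.',
        'Why are fish so smart? They live in schools.',
      ],
    },
  },
  // A 'de-DE' entry with German strings is what makes the translation
  // support work: Alexa picks the array matching the user's locale.
};

// Pull one random entry out of the locale's jokes array.
function getRandomJoke(locale) {
  const jokes = languageStrings[locale].translation.JOKES;
  return jokes[Math.floor(Math.random() * jokes.length)];
}

// In the blueprint, the intent handler then does roughly:
//   this.emit(':tellWithCard', getRandomJoke(locale), skillName, jokeText);
console.log(getRandomJoke('en-US'));
```

The swap really is that mechanical: only the strings change, and the random-pick and emit logic stay as the blueprint wrote them.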
So we've got this jokes array with just a list of values, and this is the data source we're using in the first example. These are the only things Alexa would be able to respond with at this point. I had used the Space Facts blueprint, ripped that out and replaced it with fish facts from Wikipedia, and then we refactored that for fish jokes. That's probably an in-between stage of copying and pasting, but we're basically just copying and pasting blueprints at this point. (And these aren't actually jokes; they're still facts, but we're calling them jokes.)

Here's the actual JavaScript below that hard-coded array. It's a handlers object that receives the intent coming in from your Alexa skill and matches it to a function or method. In this case, when that GetNewFactIntent comes in, we're emitting a "get joke" event that triggers our getJoke function. That's grabbing the jokes array, pulling out a random value, and emitting a "tell with card" event with the value of the joke it pulled out of the array.

Actually understanding what's going on here is somewhat dependent on digging into the Amazon Node SDK library a little bit, but since this blueprint is available, you don't necessarily have to invest the time in doing that to get started. You can just take the example, try it out, and tweak it. To be honest, I haven't dug into that library much at all. I know that if I emit a "tell with card" event and pass it a string, my Alexa reads back what I want it to, and that's about the extent of how much I've poked around with it. I did dig into it a little more to do some fancier metadata stuff that we'll look at later on, but you really don't need sophisticated knowledge of what's going on in the code to get started, which I think is pretty cool.

With that one out of the way, let's take a look at how we could actually get our Drupal site involved. If we just published a skill that had a hard-coded list of 20 fish jokes, I'm sure people would get sick of that pretty quickly. So in the web service example, like I said, we're going to use the same Lambda function, the same code we looked at before, but we'll swap out that hard-coded jokes array with a web service call to get data. This lets us use a Drupal site, or really any other public API, as our data source, so we can pull from a larger library of material.

Again, for completeness: the intent schema is exactly the same as before. There's the one simple intent, GetNewFactIntent (again, we're giving a joke, but I didn't want to copy and paste everything and goof something up), and we have the same three really simple sample utterances. You'd probably want this to be richer in a real example, but for these purposes it works just fine.

Then in the code example, this is the getJoke function we saw before. Instead of pulling from a jokes array, we're setting up a URL with our API endpoint, and this is live, so you can go poke at it and see the JSON in your browser if you want. fishjokesfor.life is the name of the site, and if you go to /joke-me-please, it will return one particular joke node at random. We grab that URL and use Node's HTTPS library to make a GET request. After the data from that request has come back and we've received all of the JSON, we parse the response, pull out the title and the punch line, concatenate those two things, and run that Alexa emit event with the joke contents as the speech output. Does that make sense? It's relatively straightforward: it's just an API GET call, we get some data, and we send back the same type of event.

On the Drupal side of things, the only thing I had to do to make this available was to create a really simple view. In this case, there are two different displays.
I just created a new content view, restricted to the joke content type and to jokes that had been published. I added a new display of the REST export type, and then I put a sort criterion on it to grab one at random. Drupal 8 really makes it dead simple to export JSON, especially if what you're looking for is a random node and not something more complicated. But there's nothing that would stop me from adding contextual filters to allow passing in a category here, or swapping out this Views example with something like the REST server that's built into core, or JSON API. You can make this as simple or as complicated as you want. One of the things about Drupal 8 especially is that it's really made web services, and exposing the data in your Drupal database, easy and straightforward. If you're interested in that kind of stuff, you can come up and we can talk about it for the rest of the conference, because it's something I'm pretty passionate about.

Since it's DrupalCon, we figured we should have an example that completely gets rid of Lambda and Node and doesn't require anything other than Drupal and Alexa. So we've got an example here that removes the Lambda dependency, and the requirement to know Node.js or Python or Java.

From a high level, what's going on is that the user interacts with their device. They ask Alexa something; your skill gets triggered via the invocation name; the data from that gets sent to your Drupal website; Drupal does some magic and responds, and that response gets sent back to Alexa. So we're completely removing the Lambda piece, like I mentioned.

When Amber and I first talked about this session and I was thinking about having to do this, I went and looked at the spec, saw the actual JSON that Alexa sends, and was not exactly looking forward to figuring out how to parse it, mostly because there's some HTTPS handshake stuff that has to go on. There's an Amazon Alexa skill ID that you have to verify, to make sure that people aren't just randomly hitting your callback endpoint. But, you know, thankfully this is Drupal, so there's already a module for all of this, which is a huge help.

The Alexa module actually makes use of another PHP library. I learned when I registered on Monday that this was put together for Dries's keynote demo last year, which is probably why it's not covered under the security policy. I don't know that it's necessarily up to date, and I haven't really dug into the security holes of what's going on, but it's really easy to use and you can do some pretty cool stuff with it. (If anybody wants to hack the fishjokesfor.life site, go right ahead; it's not super sensitive.)

So you download the module, you turn it on, and then you sort of wonder: what's next?
There are basically only a couple of different things the Alexa module does. First, it provides a configuration form where you give Drupal the application ID for your Alexa skill. This is used in that handshake I talked about before: when a request comes in, the first thing it does is check whether it's actually coming from a skill it should be responding to. The second thing you do is give Amazon, on the configuration form that Amber showed before, the URL of your callback endpoint (I think we have another slide of the actual endpoint later on). And then the third step is to write a little bit of code. When I say a little bit of code, I think the final example for this is about 80 lines, and most of it is boilerplate object-oriented stuff. So let's take a look and see what that looks like.

The module uses events; if you've played around with Drupal 8 development at all, you've probably seen the event pattern already. In order to actually respond to one of these Alexa requests, we create a request subscriber class, register an event subscriber for the Alexa request event type, and tell it what method to call when that event is triggered.

After that, our onRequest method is triggered. It's passed both the request and the response structures we'll need, coming from Alexa and then going back out to Alexa. There's a switch statement here, so you can check for the intent name that we saw in the intent schema. If I actually cared about the people using Fish Jokes, the help intent should probably respond with a list of categories that people could use to get categorized jokes. I didn't bother implementing that in the example, but you could certainly do something a little more helpful there if you wanted to. In our case, we do have two intents, one with categorized jokes and one without, but instead of switching on the intent, I just decided to handle that in code in the default case, so I could ignore what was going on there.

You can see the slot being passed in there, kind of in the middle; it's part of the request we're getting from Amazon. I'm pulling that out and setting it to a variable called term name. Since we're using the built-in Drupal tags taxonomy field, I'm starting an entity query for nodes that are published and of the joke type, and if we have a value for that term name (which came in as a slot with the Amazon request), we're adding a query condition that looks for just the jokes tagged with that term.

We execute the query, and if it comes back empty... this actually broke the skill the first time I tried publishing it, because I didn't have jokes in every category. So I decided that if it's empty, I'm just going to hard-code my favorite one that we put on the site. Most of the jokes that make up the site now actually come from Amber's niece, and I solicited a few more on Twitter. Mark Drummond and Steve Persch, thanks for having a lame sense of humor and humoring me.

So: we execute the query; if we can't find something, we use node ID 7; otherwise, we pull a random node from what came back in our query. We load that node, and then we build up this card data structure.

This card thing is a little bit new; we haven't really seen it before. If you have an Echo Dot or an Alexa device, every time you interact with it, there will be a card in the app on your phone that sort of says, "is this what you actually meant when you were talking to Alexa?" If you're developing a skill, you can respond with richer metadata that has information about what was going on. If you're playing a song, for example, you could send back album art, or if it's a news briefing, you could send back some photos from the news story. This actually isn't supported by the Alexa module that's on Drupal.org, so I had to sort of hack around it, in a probably not correct way, so I could send back the image URLs. Like I said, that code's on GitHub, and pull requests are welcome.

So now we're sort of back to where we started, with the whole "Alexa, ask Fish Jokes for a silly joke." When we do that, the app on our phone will show the actual joke and an image that's been uploaded with the joke itself.
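For reference, the response JSON a skill sends back to attach one of these cards looks roughly like this (the joke text and URLs here are placeholders); the "Standard" card type is the one that carries image URLs:

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "What do you call a fish with no eyes? A fsh."
    },
    "card": {
      "type": "Standard",
      "title": "Fish Jokes",
      "text": "What do you call a fish with no eyes? A fsh.",
      "image": {
        "smallImageUrl": "https://example.com/joke-small.jpg",
        "largeImageUrl": "https://example.com/joke-large.jpg"
      }
    },
    "shouldEndSession": true
  }
}
```

Building up that "card" object alongside the outputSpeech is the metadata part the Drupal module didn't support out of the box.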
I'm sort of contemplating opening up the fish joke site for user-submitted jokes with images, but I decided not to do that before DrupalCon, just in the interest of safety. That said, the skill went live kind of late last week, and there are already 67 people who have enabled it. I don't understand why, but it is what it is.

So now that we've got the code working, with Alexa talking to Drupal and Drupal responding back to Alexa, we can go back to the configuration and finish getting things set up. When you go back to that configuration form on the developer dashboard, you'll want to test that things are actually working. Again, I broke this while working on it, after I decided to add the image stuff. I added all of those jokes from Twitter when I got here this week on Monday, and my code didn't originally handle the case where there was no image file attached to the node. It just horribly broke, and for about a day, all of the Alexa responses that were coming back didn't do anything. If I had used the tester, I would have noticed that a lot sooner.

It's basically a form where you type in the sample utterance that you'd actually say to the app, you hit a little button, and the JSON stuff happens. You can look at the JSON for both the request and the response, and there's a button you can press to actually hear what it says.

One of the things we didn't include in the demo, a sort of last-minute discovery we didn't have time for, is that there's a special markup syntax you can use in your response that tells Alexa how to emphasize or pronounce different words, so it doesn't do the weird-spacing robot thing, and things sound a little more natural. I don't think either one of us has dug into that in great detail yet, but it's something I'm going to look at when I go back and keep working on this.

Once you have things tested and configured properly, and you're comfortable that you're ready to go, there's a certification process, much like with an iPhone app or an Android app. Amazon will want to test things and make sure it's good to go. It seems to be a fairly automated, standard process that happens on non-U.S. hours; it's the kind of thing where, regardless of when you hit the certification button, you'll wake up to an email the next morning saying it either passed or failed. Finishing the app configuration for the certification process requires things like uploading an image that will be used as the logo for your skill in the skill store, giving users some sample phrases they can use when they're browsing the app listing, and that sort of thing. And then there's what happens after the skill is actually live.
After the skill is actually live, there's a really handy dev-and-test workflow. Amazon automatically creates a development version for you that you can keep working on and refining, and when you're ready, you can resubmit it; it goes through the certification process again and then replaces the live skill. We've got a screenshot here of both the live and development versions, as well as the metrics dashboard. The metrics are actually really neat: you can look, on an hourly basis, at how many people are hitting your skill and which intents they're executing. You can't see the actual phrases they're using, but you can measure which intent is being invoked, how often, and that sort of thing. And like I said, I'm shocked that there are 67 people who decided they wanted to ask their home device for fish jokes, especially given the quality of the jokes.

Okay. We had a bit of an overheating problem exactly five minutes before this presentation, so we'll see how this goes. Let's get to the fish; that's why you're all here, right? Here's a bit of a schematic-ish diagram. I'll show the overhead cam, but looking at this you can see what's going on. The fish has two motors inside: one motor for the mouth, and one for the head and the tail, which is spring-loaded so it can flap like that. I'm only using the one for the mouth. I basically stripped everything out of the fish
except for the motors. So I took everything out, and I've got the wires from the motors connected to the terminal blocks on this motor shield. The motor shield is stacked on the Arduino Uno; a shield, in Arduino terms, is a stackable unit that expands the functionality of your board.

That circled part is the VIN jumper; with the sleeve on, it enables me to run 12-volt power in through a barrel jack. So 12 volts comes in through the barrel jack and the VIN jumper, and that actually powers the motors. Then there's a headphone out from the Echo Dot, and my husband helped me make a cable that splits out into two places: one end goes to an external speaker, so we can actually hear the Echo, and the other end is soldered to the analog 0 and ground inputs on the motor shield.

The way this works is that the fish will move its mouth on any audio input above a certain threshold, so if you wanted, you could use any audio source, not just an Alexa. This video is in case the live demo doesn't work.

Okay, one moment. I unplugged things because there was overheating. I know enough electronics to recognize when something is smoking or about to catch fire in my lab; that's about it. I had a nice chat with the fire department when I melted a pack of eight batteries a few weeks ago. They're really nice. If you pray, you can pray.

"Alexa, ask fish jokes for a silly joke." Also, internet might help. It might also help if I turn the volume on. Bear with me. Sorry: "your Echo Dot lost its connection." All right, who's on the internet? We've got a couple minutes here; let's do this. So Blake, do you know any fish jokes? This is what I get for taunting it. The Alexa module is glowing this interesting teal color. "Alexa, ask fish jokes for a silly joke."
I just want you to know I'm feeling the pain for all of you who know you're not supposed to do live demos. Let this be a lesson to you. The Alexa app also tends to get a little crashy sometimes. It's flashing red now: "your Echo Dot lost its connection." Yeah, the fish should be moving its mouth for that. Let me show you the video while I'm trying to get this figured out. I thought I uploaded a video, but... oh, it's moving! Here we go. So that's what it's supposed to do. Yeah, that was working up until literally ten minutes before. That's basically it. I'll try and find a table somewhere, get it plugged in and working again, and you can come and find me. Sorry.

For all of this, I used a tutorial from instructables.com called "Animate a Billy Bass Mouth With Any Audio Source" by Donald Bell, so go check that out; there's code and instructions for the hardware.

We also have a Drupalize.me flash briefing that, because of the internet problems, we also cannot demo, but you can add the Drupalize.me flash briefing skill to your own device. If you go into Settings and then Flash Briefings, you can search for Drupalize.me. It's basically an RSS reader that literally reads the RSS feed, so you can add NPR and other news sources, and we decided, for fun, to add a Drupalize.me one.

You can win an Echo Dot; we're giving one away. Go to our twitter.com/drupalizeme page and retweet our tweet about the Drupalize.me flash briefing skill that was just published this morning. We'll pick a random winner; the contest will close at 1 p.m., and you must be present here at DrupalCon to win.

There are sprints happening tomorrow. They're for everyone; you don't have to be a core contributor. Developers, doc writers, project managers, bug reporters, QA testers, and you. There will be mentors available. You can find out more on the signs around here and on the website.
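As an aside on the flash briefing skill mentioned above: flash briefings are feed-driven rather than code-driven, so the skill mostly amounts to pointing Amazon at a feed. A minimal RSS item in the general shape a flash briefing consumes might look like the following. This is a sketch, not the actual Drupalize.me feed; the GUID, title, and URL are made up:

```xml
<item>
  <guid>drupalize-me-briefing-2017-04-26</guid>
  <pubDate>Wed, 26 Apr 2017 12:00:00 GMT</pubDate>
  <title>Drupalize.Me news</title>
  <description>The text that Alexa reads aloud as your briefing item.</description>
  <link>https://drupalize.me/blog</link>
</item>
```

Alexa reads the item's text aloud, which is why a site that already publishes an RSS feed, like a Drupal blog, can become a flash briefing with essentially no custom code.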
So, to quickly recap: in this session we went through the Alexa skill creation process and how you can integrate it with your Drupal site. We hope you're feeling inspired and energized to go create your own custom Alexa skills, integrate them with Drupal, and not do live demos with animatronic fish. Please let us know how we did, so that we can tell more bad jokes and do more hardware demos at future DrupalCons.

Are we out of time? Do we have time for questions? Oh yeah, let me demo the electronics here. Okay, move this camera back over. There's the motor shield, and you can see the wires from the fish coming out from here into the terminal block. Up top here are the analog 0 and ground connections; that's the headphone jack, and it goes around to the Echo Dot and the speaker. And here's the barrel jack for the 12-volt power. So it's not too bad; there was just a little bit of soldering involved for the headphone cable, and the main gotcha was the VIN jumper.
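The fish firmware driving the hardware above is simple: read the audio level on analog 0, and run the mouth motor whenever it crosses a threshold. Here is a rough Arduino-style sketch of that loop, assuming the Adafruit Motor Shield V2 library. This is not Donald Bell's actual code from the Instructables tutorial; the motor port, threshold, and speed values are assumptions you would tune for your own build:

```cpp
#include <Wire.h>
#include <Adafruit_MotorShield.h>

Adafruit_MotorShield shield = Adafruit_MotorShield();
// Mouth motor wired to terminal block M1 (assumption).
Adafruit_DCMotor *mouth = shield.getMotor(1);

const int AUDIO_PIN = A0;  // headphone signal soldered to analog 0
const int THRESHOLD = 10;  // audio level that counts as "speaking"; tune this

void setup() {
  shield.begin();  // default I2C address and PWM frequency
}

void loop() {
  int level = analogRead(AUDIO_PIN);
  if (level > THRESHOLD) {
    // Audio present: drive the motor to open the mouth.
    mouth->setSpeed(255);
    mouth->run(FORWARD);
  } else {
    // Silence: release the motor and let the spring close the mouth.
    mouth->run(RELEASE);
  }
}
```

Because the sketch only looks at signal level, any audio source plugged into that jack (not just an Echo Dot) will make the fish talk, which is exactly the behavior described earlier.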
You have to put on the VIN jumper sleeve in order to pass through 12 volts of power and use the VIN power for the motors. So it's a pretty basic build, relatively speaking, if you have some soldering skills, and you can always go to your local makerspace if you want to learn how to use a soldering iron; they can help you with that. And like I said, the tutorial has instructions for putting this together with this exact board, the Adafruit Motor Shield version 2. But you can use any motor shield. I've used the version 1; you just have to use a different library and refactor the code a little. The code itself is fairly straightforward: we create a motor object, and then, if the sensor value is above a certain threshold, we move the motor. As long as there's not some kind of problem with your motor because you've been trying it out too many times... So yeah, that's basically the hardware of it.

If there are any questions, you can come and find us after, or you can use the mic, and we'll be happy to try and answer them. I'm not sure it's working. It's working, yeah, if you speak into the mic.

Audience: Yeah, a question. How could we personalize Alexa responses? For example, if I want to get my score, do I need to register with Alexa?

So, there are different types of slots you can use. When you're defining your sample utterances, you can put in a slot, which is essentially a variable. One of the slot types you can use is a completely custom value that comes back to your code. Take a look at the documentation for slot types, and that'll walk you through how you might set that up. Does that make sense?
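To make the slot idea concrete, here is roughly what the JSON request for an utterance containing a slot looks like when it reaches your endpoint. This is a sketch of Amazon's documented request shape; the intent name, slot name, and values are made up for illustration:

```json
{
  "session": {
    "user": {
      "userId": "amzn1.ask.account.EXAMPLE"
    }
  },
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "GetScoreIntent",
      "slots": {
        "Player": {
          "name": "Player",
          "value": "blake"
        }
      }
    }
  }
}
```

Your code reads the spoken slot value out of `request.intent.slots`. Note also `session.user.userId`: that is the per-user identifier discussed below, which you could map to an account on your site for personalized responses.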
Audience: Yeah. Does the user need to enter their Alexa ID on a website if they want to get personalized responses?

Yeah, so your Alexa device will have a unique identifier that you can make use of in the skill interaction process. There are a couple of extra flags you need to set in code if you're doing something like that. It's in the docs; I haven't worked with it yet, but I've seen that it's available. There's a unique identifier that comes back and forth, and it will allow you to map a person's Alexa ID to something on your site. But you'll need some way to know what their Alexa user ID is. So I know it's possible, but I haven't dug into it yet. Okay, thank you.

Audience: Hey, I've got two questions. First of all, I've written a couple of Alexa skills at this point using Drupal 7 APIs, nothing in the wild yet, and part of the reason I've been hesitant to release them is that I didn't know about the whole live-version-plus-dev-version setup. Once you have the live skill and the dev skill, how do you invoke the dev skill on the hardware?

You'll have to use either the test simulator on the web interface, or, on your own device, you can enable the dev skill alongside the live skill.

Audience: Okay, so I can have two different versions. I assume the invocation name changes? That's what I was wondering.

I published this last week and I haven't needed to change it yet, so I don't know the details.

Audience: The other question was about the module: does it support persistent sessions?

I would guess not. I had to add support just to add the multimedia card response, so it's pretty bare-bones for now.

Audience: Well, maybe I just found a module to contribute to. Hi, I loved the presentation, and the question I have is about authentication. The person before asked about different users accessing the site, but is there a way to authenticate and then...?

Yeah, it's the same answer.
There definitely is; I haven't dug into the docs for it, so I don't know the details, but there is an option in the configuration that asks if there's user authentication. So I know it's a configuration option and there's documentation for it, so it's supported, I think. Thank you.

Audience: I was going to ask the same question, really in terms of OAuth, like Facebook-style auth, stuff like that.

Yeah. One of the things we noticed putting this together is that there's a sort of overwhelming amount of documentation on the Amazon developer site for how to go about building all of this. So rather than dig into nitty-gritty details, we wanted to be inspiring about how easy it is to get started. Once you understand the basic vocabulary and the basic pieces that go into this, it's a lot easier to Google for the right thing in the developer docs, too.

Sure, right. So, to repeat for the recording: the Amazon term for this whole user-authentication thing is "account linking."

Audience: I've been to a bunch of sessions, and I just had to say this is truly inspirational.
Thank you. Thank you.

How I personally got started: I found a tutorial, but the README on the Alexa GitHub repo for the space facts sample skill is basically a tutorial in and of itself. I just dove in, copied and pasted the hell out of it to get started, and got that thing developed and published in two days. There were a lot of things I missed, didn't understand, and could have done better, but just do it and get it done. Once you understand the basics, the documentation starts to make more sense, because there's a lot of particular terminology and concepts, which we tried to clarify today, and it gets easier once you've done it. Then, once you're testing it out and playing with it, you realize how limited the basic examples are, but you can learn from there and just keep going. And I found the certification process to be really quick and friendly.

They're also really promoting skill development right now, so if you release a skill in the next, I think, three days, you can get a free t-shirt, although I have not received my hoodie and I'm a little disappointed about that. But yeah, they're giving away t-shirts and hoodies and all sorts of stuff.

Anyway, any other questions? Thank you very much.