talking to your dog with Ember. I'm the CEO of Ship Shape. We're an Ember consulting company that also does various other JavaScript things as well, any kind of apps you might need, and we're always around to talk Ember if you want. You can see my email there, robbie@shipshape.io. Feel free to shoot me a line, I'm always happy to chat. I'm also on the Ember Learning Core team, and I've written several Ember addons you might have used; Ember Math Helpers and Ember Shepherd are some of the more popular ones. Shepherd is a site tour and app tour library that lets you walk users through all your features and show them how to use your app.

And a huge part of my life is that I love all dogs. You can see me here on the couch with Odie and his uncle Sox, just chilling. Odie has an Instagram, Odie LaFrenchie, so if you're so inclined to follow him, check it out, there's some good stuff on there. And I'm rwwagner90 pretty much everywhere across the internet, so Twitter, GitHub, anything like that. Feel free to drop me a line in any of those places as well.

So the motivation behind this talk is that I always thought it would be cool to understand what Odie is barking about, and he barks a lot. He barks at people coming into the house, he barks at his ball, which you'll see here, playfully, and sometimes they're just crazy random barks and you don't know what he's doing. So here's an example. As you can see, he squeaks his toy and barks along with it, like he's singing or playing or something.

So I'm gonna give you a brief demo of our prototype here. It's built on Ember Octane, of course, using Glimmer components and all that good stuff. The site is wolf.plus, and it supports both file uploads and microphone input. So you can go to file upload and upload the same video I just showed you, with the squeaking of the toy, and you can see it comes back with a playful result. So what about something that's less playful? How about Odie barking at the mailman dropping mail through the slot? This one says alert: your dog may be alerting you to a potential problem or intruder nearby. You can also use microphone input, because it's not super practical to have an audio or video file handy all the time. So you can hit start recording here, and you can see that it says this is a greeting; if there's one spiked bark instead of prolonged barking, it's frequently a greeting.

And the code for this is all open source; it's on GitHub. We'd love to have some people contribute to it with us and help us make it better. It's all Ember and the Web Audio API. Check it out, give us some feedback, hope to work with you.

So back to the idea for this talk. I was sitting around with Chuck, our COO, and we were talking about the EmberConf CFP that had just opened up and what ideas we could submit for talks. This immediately came to mind, because Chuck had just met Odie that day, had lots of different barking encounters with him, both aggressive and playful, and saw all the different sides of Odie's barks. And we were just thinking, wow, it would be nice to be able to decode these and kind of talk to him and know what he's thinking. So that was the idea for the app.

So the process was to first investigate the Web Audio API, which is an API for working with audio and video in JavaScript, and then determine different dog bark types. Studies have shown that both dogs and humans can tell the difference between several different bark types, and even between barks from dogs they know and are familiar with versus others.
So there's definitely something in these barks, in the data, that we could get out and decipher to try to determine what the different meanings are. But the problem was I'd never used the Web Audio API before, and I wasn't really sure if it does exactly what we need, or how to use it to get the data we want out of it, et cetera. So the first thing you do is Google it to see if it'll work. I Googled "web audio api analyze sound" and actually got a lot of good stuff. There were several real-time analysis examples, and of course the official docs from Mozilla, and some Stack Overflow posts that kind of helped explain what the docs were talking about, because they were a little terse, and helped connect them with the examples and really gave me a better understanding of the kinds of things the Web Audio API can even do.

So, some of the things it does do. The main thing we were focusing on here, since we're analyzing sound, is appropriately named the AnalyserNode, and the AnalyserNode at its core just lets you get frequency and time domain data. Frequency is essentially the pitch: the higher the frequency, the higher the pitch. The time domain data is the waveform: the higher the amplitude, the peak of the wave, the louder it is. So time domain, waveform, and amplitude all basically mean loudness. There are two methods to get this kind of data from your audio or video, and they're appropriately named getByteFrequencyData and getByteTimeDomainData. They both give you similar data: an array of numbers telling you the decibel or waveform values of your frequency and time domain data. The AnalyserNode does a lot of other fancy stuff too that I didn't fully understand, like letting you configure fast Fourier transforms and things like that, but basically the FFT size just lets you set how many samples of data you wanna get, and your frequencyBinCount is half of that; it says, I want X number of samples from my audio.

So for my first attempt, I was kind of naive, and I thought: I've heard of video and audio elements in HTML, so maybe we should just use those and populate the source with a video or audio file. And that is supported by the Web Audio API. There's a method, createMediaElementSource, which basically lets you take source data from an audio or video element and use it in the Web Audio API. The problem is it has to be playing when you call the analysis methods or you're not gonna get any data, and you have to call them a whole bunch of times, make sure it's playing, start, stop, do a bunch of different things to get all of the data you actually need from the source. It also was only getting frequency data at first. I wasn't really sure whether I needed frequency or waveform data or what they did differently, and the frequency data I was getting, only while the file was playing, was a small sample that would basically just tell me the pitch of a tiny, tiny little section of the file, which isn't super helpful for determining dog barks and their types. So these small data snippets weren't super useful, and even when we got them and they were reliable data, I wasn't really sure how to use that data to its fullest potential. So I did some more Googling, played with the API some more, and tried to figure out a better way to do this.

The thing I learned going from attempt one to attempt two was that playing the file while you're analyzing it is not ideal, because you don't want it to just be playing the whole time; say it's a 20 second file, you have to wait 20 seconds for it to finish playing to fully analyze it.
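That first attempt looked roughly like the sketch below, assuming an audio element is already on the page; the element id, the fftSize value, and the polling loop are illustrative choices rather than the actual wolf.plus code.

```js
// Minimal sketch of attempt one: analyze an <audio id="bark"> element while it plays.
// Names and values here are illustrative, not from the real wolf.plus source.
const audioContext = new AudioContext();
const element = document.getElementById('bark');
const source = audioContext.createMediaElementSource(element);
const analyser = audioContext.createAnalyser();

analyser.fftSize = 2048; // number of samples per analysis frame
source.connect(analyser);
analyser.connect(audioContext.destination);

// frequencyBinCount is always half of fftSize
const frequencyData = new Uint8Array(analyser.frequencyBinCount);

// The element has to be playing, and you have to keep polling,
// or you only ever get one tiny snapshot of frequency data.
function sample() {
  analyser.getByteFrequencyData(frequencyData);
  // ...do something with this small slice of pitch data...
  if (!element.ended) {
    requestAnimationFrame(sample);
  }
}

element.play();
sample();
```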
It's just kind of clunky and not ideal. So the Web Audio API gives you a thing called OfflineAudioContext, which is much better, because instead of just playing the file and taking little samples, you can load the entire file into a buffer, and then you can take that buffer and use getByteTimeDomainData on it. So you're basically getting the loudness of your sample across time, and you do it for the whole thing, for the whole buffer. Then you can see where it spikes: a spike means it's louder, so there might be a bark there.

We also hooked up the ability to upload files using a really helpful addon, ember-file-upload, which makes it super easy to upload files, do drag and drop, all that good stuff. We also built in the ability to use the microphone, so you can use getUserMedia to get the microphone data, and we started exploring visualizations, heavily borrowing from a visualizing audio series that has a lot of great examples showing how to do a bunch of these different things.

I'm gonna show you a little bit of what we've been doing for some of this. Everything is a Glimmer component, and the audio capturer, as I just mentioned, uses this getUserMedia method. So basically it just uses a built-in browser method to grab some audio from your microphone and passes that into the audio analyzer. On the flip side, you can use the audio uploader, which is basically just a wrapper for ember-file-upload, and it's gonna do the same thing: when you add a file, it passes all that data to our service.

And the service is where the magic happens. It's this audio analyzer service, and it is essentially just wrapping vanilla JavaScript; there's nothing crazy here that's Ember specific or anything that would trip you up. You can just tie all these things together from the various components into this one service and get a nice, neat package that Ember provides to help wrap what we're doing here. The magic is this analyzeAudio function, which takes the thing I just mentioned, the OfflineAudioContext, which is just a fancy way of saying we have a buffer of audio, and we wanna take that audio and pass it into a function while it's being processed, so that we can get out the data we need to determine the dog barks, and that's provided down here. So you can see, as it's processed, we do some different things to get data out, and we'll talk more about those later.
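Here's a rough sketch of what that offline analysis can look like, assuming the uploaded file (or recorded audio) has already been read into an ArrayBuffer; the function name, the sampling interval, and the suspend/resume loop are my own placeholders rather than the exact wolf.plus implementation.

```js
// Rough sketch: decode a whole file up front, then sample loudness across the
// entire buffer with an OfflineAudioContext instead of playing it in real time.
async function analyzeAudio(arrayBuffer) {
  const decodeContext = new AudioContext();
  const audioBuffer = await decodeContext.decodeAudioData(arrayBuffer);

  const offlineContext = new OfflineAudioContext(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate
  );

  const source = offlineContext.createBufferSource();
  source.buffer = audioBuffer;

  const analyser = offlineContext.createAnalyser();
  source.connect(analyser);
  analyser.connect(offlineContext.destination);

  const timeDomainSamples = [];
  const sampleInterval = 0.05; // seconds between samples; arbitrary for this sketch

  // Schedule suspensions across the buffer so we can grab time domain (loudness)
  // data at each point while the audio is being processed, not played.
  for (let t = sampleInterval; t < audioBuffer.duration; t += sampleInterval) {
    offlineContext.suspend(t).then(() => {
      const data = new Uint8Array(analyser.fftSize);
      analyser.getByteTimeDomainData(data);
      timeDomainSamples.push(data);
      offlineContext.resume();
    });
  }

  source.start();
  await offlineContext.startRendering();

  return timeDomainSamples; // spikes in here are candidate barks
}
```

Either the uploader or the microphone capturer could hand its data to something like this; the analysis itself doesn't care where the buffer came from.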
So the next part is actually figuring out dog bark science, and I'm not an expert in dog barks. This project is based on several different scientific studies where they are the experts, but it's filtered through my personal interpretation of their results, so if you find anything that's not super accurate, please let me know; we would love to keep improving it. These studies found that dog barks are somewhere in the 250 to 4,000 Hertz range at most; they just measured the output of dogs barking at shelters and got that data. They also determined that all breeds have some part of their bark in the 1,000 to 2,000 Hertz range. So you can be pretty sure that within those frequency ranges, and somewhere in the 80 to 90 decibel range from about five meters away, is kind of the whole spectrum of dog bark data, and anything outside of that can mostly be thrown out.

So there are three to four buckets of bark types we're gonna focus on for this project. The first one is more of an alert style bark, which could be rapid barking at a mid-range pitch; it can signal to the pack that there's a problem and that they want the pack leader to come check it out, or that maybe there's an intruder coming in, things like that. Then there are happier ones, like greeting or playful: it might be one bark for a greeting, or for playful it could be a stutter bark or various other happier barks. And distress barking is kind of a broad category for now, but it would be if they're lonely or in need of companionship, or even if they're hurt or scared, things like that. But even with just these few buckets and the small amount of data we have, there are seemingly infinite combinations of different, nuanced dog bark types, so we're just gonna focus on these broad categories for now.

As I mentioned, dog barks are only in the 250 to 4,000 Hertz range, so anything above that we can go ahead and throw out, because we don't need it; we only care about the stuff in the range a dog bark could be in. Once we've limited the pitches from the frequency data, we wanna find where the loudness spikes, because if there's a spike, a dog probably barked there. If the loudness goes up really sharply and then falls, and then there's another really sharp spike, we can assume there are maybe two barks there. Then we take all of that data and essentially get the averages, or sometimes the modes, to decide whether it's a low, mid, or high pitched bark. And then we take that data and map it further to determine how many barks there were and exactly the nuanced meaning behind the bark.
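In code, that pipeline boils down to something like the sketch below; the spike threshold, the low/mid/high boundaries, and the helper names are illustrative guesses, not the real wolf.plus values.

```js
// Illustrative helpers for the filtering, spike counting, and pitch bucketing
// described above. Thresholds are placeholder guesses.
const BARK_MIN_HZ = 250;
const BARK_MAX_HZ = 4000;
const LOW_MAX_HZ = 1000;  // guess: below this, call the bark "low"
const HIGH_MIN_HZ = 2000; // guess: above this, call the bark "high"

// Throw out anything outside the range a dog bark could be in.
// frequenciesHz: one dominant frequency (in Hz) per analyzed slice.
function filterToBarkRange(frequenciesHz) {
  return frequenciesHz.filter((hz) => hz >= BARK_MIN_HZ && hz <= BARK_MAX_HZ);
}

// Count sharp rises in loudness: each rise-then-fall is a candidate bark.
// loudnessSamples: one loudness value per time slice, however you derive it
// from the time domain data.
function countBarks(loudnessSamples, spikeThreshold) {
  let barkCount = 0;
  let inSpike = false;

  for (const loudness of loudnessSamples) {
    if (!inSpike && loudness >= spikeThreshold) {
      inSpike = true;
      barkCount += 1;
    } else if (inSpike && loudness < spikeThreshold) {
      inSpike = false;
    }
  }

  return barkCount;
}

// Average the in-range frequencies and bucket the result into low / mid / high.
function classifyPitch(barkFrequenciesHz) {
  const average =
    barkFrequenciesHz.reduce((sum, hz) => sum + hz, 0) / barkFrequenciesHz.length;

  if (average < LOW_MAX_HZ) {
    return 'low';
  }
  if (average > HIGH_MIN_HZ) {
    return 'high';
  }
  return 'mid';
}
```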
So we've talked a lot about vanilla JavaScript and the Web Audio API here, and that's by design; that's what we use the most in this project. But I think it really shows that Ember can shine as a great building block and starting point for projects like this. Pretty much anything you wanna do can be built into Ember. Ember has strong conventions that help you structure the way your files are laid out, and it has things like addons you can install that make working with different functionality super easy; you don't have to worry about all the plumbing, it just kinda works. And Glimmer components are great for all the power Glimmer provides, with tracked properties and all that stuff; they're a lot closer to vanilla JavaScript implementations, and if you haven't checked out the new stuff, you definitely should.

The main things we added here are ember-service-worker, which allows our app to work offline, and ember-web-app, which allows our app to be installed anywhere progressive web apps can be installed: Chrome, phones, et cetera. So here's a little look at that. If you see this little plus button, you can click it and install the app, and then you can run it as essentially an app on your computer; it's wrapped around Chrome, obviously, but it's pretty cool. And then you can have it on your home screen on your phone and things like that too. All of that is driven by just those couple of addons, ember-service-worker and ember-web-app, and you pretty much just install them. ember-service-worker has a lot of different packages; we're using the Prember one here so that we can do FastBoot and Prember, and everything is JAMstack, static stuff. And there's a manifest.js you configure with ember-web-app that lets you set the name of your app, a description, and your different icons, so that when you go to install it on various devices it looks, feels, and behaves like you expect it to.

So let's look at some of the bark type utilities, the things we're using to get the more specific bark data. The methods that matter the most here are determineBarkOccurred, which basically just checks the different loudness values and tells you whether a bark occurred in that segment or not, and determineBarkPitch, which basically tells you whether the bark was low, mid, or high pitched. We can use those in the determineBarkType function to check how many barks there were and what their pitches were, and map that to those types of barks we were talking about earlier. So if there's one bark and it's high, it's an alert; if there's one bark and it's low, it's distress; otherwise, if it's kind of mid-range, it's probably just a greeting. If there are two barks, it's usually a greeting. And for more than two: if it has some mid frequencies, it's playful, and other combinations can also be playful, but if it's all low, it's usually an alert; that's more aggressive, low, repeated barking. So that's how we're mapping the data to the bark types.

In the future there are several things we could do. We could add more bark types; we basically only have those three to four rough categories now, but there are at least 10 basic types at a minimum, and we could support those. The one that's most interesting to me would be supporting the lonely bark, which is a string of barks followed by a long period of silence, followed by more barks; that's kind of saying, is anyone there, I'm lonely, and it would help you identify episodes of separation anxiety for dogs that have problems with that. We could also refine the frequency ranges a bit, because right now we just have the three buckets, low, mid, and high, and we could have a mid-high, mid-low, et cetera, which would help us refine even more which meaning maps to which frequency. And the coolest of the new features would be a talk back feature, which I think would be the standout feature of this app. So if your dog says something to you, you record it and generate a response: it could take various dog barks it knows about that would respond in the way you wanna respond to that bark, and build a custom bark of the exact frequency and number of barks for the response. Then you could kind of go back and forth, maybe communicate things with them, kind of tell them what to do and how you feel, and that would help break down the barriers in the normal communication between you and your dog.

So the demo we played with is out on wolf.plus right now. You can try it out; please do, and please let us know if you encounter any issues. These are some of the sources we used: the Web Audio API docs, Stack Overflow posts, articles about dog barks, and visualization examples for the Web Audio API. Thank you for listening and for your attention. Again, I'm Robbie Wagner. You can always chat with me about Ember; please do, in Discord or via email or however you wanna reach out. I'm rwwagner90 on Discord, and I'll be hanging out there pretty much all the time, and you can email me at robbie@shipshape.io. We'd love to talk with you. Thanks for your time.