So if you're in this session, you're here to hear about putting voice, video, and text into Rails. A quick introduction: my name is Ben Klang. I'm actually very proud to be able to welcome you all to Atlanta, and I hope you've enjoyed it so far. You may also know me through some of my open source contributions. Just a quick show of hands: has anyone heard of Adhearsion? Has anyone used it? Come on. All right. Cool. I want to quickly mention Adhearsion because it bears relevance to the talk. Adhearsion is an open source framework for voice applications. You can think of it this way: what Rails is for web applications, Adhearsion is for voice. I'm also the founder of a company called Mojo Lingo, based here in Atlanta, and this is what we do: we work with voice applications. We build them, we scale them, we do usability. This is a topic close to my heart, communications applications in particular. Today, I want to tell you why the web is a lot like outer space: because on the web, no one can hear you scream. So let me paint a scenario. You're working in your app, and all of a sudden something happens and you realize you need to speak with one of your customers. What most of you are going to do is pick up a telephone. The main problem is that when you pick up that phone, any communication you have is now outside of your business process. It's not noted in the business application. It's not recorded. The fact that the call even happened is, in most cases, in no way reflected in the state of your customer record. And the communication itself is fairly limited: you've got this really crappy narrowband audio signal to talk through. You can't easily share pictures. You can't easily share links. You really don't have a very rich communication experience.
Wouldn't it be cool if, instead of having that phone call happen outside of your app, we could put the communication right into the application itself? That would be cool. Yeah. Okay. So that brings us to something called WebRTC. Show of hands again: has anyone heard of WebRTC? Cool. That number goes up every time I ask, which is an absolutely happy thing for me to see. Has anyone actually tried it? A couple. Okay. Well, hopefully by the end of this talk you'll have some resources that will inspire you to try it. For those who aren't familiar, WebRTC is fundamentally about getting the speaker, the microphone, and the camera into the browser, and making use of them in a web application. So what is it? It is the camera and microphone, but without any plugins. This means that if you want to build a real-time communications app that takes advantage of the mic and camera, you don't need Flash, you don't need Java, and you avoid all of the bad things that come with plugins, such as crashes and security holes; it's built right into the browser. WebRTC additionally has functionality built in to establish peer-to-peer connectivity between two or more parties. This is a really interesting point, which I'll touch on more in a minute: connectivity across the internet can be really tricky with NATs, firewalls, and things like that, so WebRTC has functionality built in to help traverse those firewalls. The last thing it provides is a common set of codecs for actually exchanging high-definition media. I'll talk more about that in a second as well. So fundamentally, WebRTC is a JavaScript browser API. You access it using JavaScript primitives built into the browser. It can also be used on mobile, although in the mobile world you get the benefits of the standardization but you don't necessarily have the same API.
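To make that concrete, here is a minimal sketch of how a page might request the camera and microphone with the standard getUserMedia API. The constraint values and the element selector are my own illustration, not from the talk.

```javascript
// Constraints describing the media we want; these field names are
// part of the standard getUserMedia API.
const constraints = { audio: true, video: { width: 1280, height: 720 } };

// Ask the browser for camera and microphone access. This only runs in
// a browser; the guard below lets the sketch load elsewhere without error.
function startLocalMedia(videoElement) {
  return navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
      // Attach the live stream to a <video> tag for local preview.
      videoElement.srcObject = stream;
      return stream;
    })
    .catch(function (err) {
      // The user declined, or no device is available.
      console.error('Media access denied:', err.name);
      throw err;
    });
}

if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  // e.g. startLocalMedia(document.querySelector('video#preview'));
}
```

No plugin, no download: the browser itself prompts the user and hands the page a live media stream.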
It's different. I'm not going to talk too much about mobile today. But the standardized functionality is really interesting. These codecs, Opus, G.711, H.264, and VP8, are what make very high-quality audio and video possible on the internet. Setting G.711 aside, Opus is a really pretty amazing codec. It comes from a lot of research, including significant contributions from Skype. If any of you have made Skype calls, and most of you probably have, you know how good the audio can sound. Opus builds on that research and actually goes further. Opus is good enough not only to transmit voice efficiently, which is to say using a minimal amount of bandwidth while preserving the sound of the human voice; it can actually scale up and transmit music as well. So it's a very, very high-quality codec, built into the browser, with no royalties. If anyone here has ever dealt with licensing codecs, you know what a giant pain it can be. Opus is entirely royalty-free. H.264 and VP8 are two competing standards for transmitting video. H.264 has been around for a while and it is patent-encumbered, although Cisco has paid for licenses so that open-source software like Firefox can support H.264 alongside Chrome. VP8 is a codec from a company that Google acquired; Google then released all of its IP, so it's a fully open, openly licensed codec for video. That's very exciting, because it means we'll eventually be able to do video without paying royalties. It's still contested, but what these codecs do provide you, built into the browser, is very, very high-quality audio and video. There are a few more alphabet-soup things built into the standard. SDP is the mechanism by which the two endpoints exchange information about where they are; we'll talk about that in a second. ICE, STUN, and TURN are the protocols used to traverse firewalls.
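SDP is just structured text, and you can see the codec negotiation right in it. Here is a toy parser over a trimmed-down SDP fragment; the line syntax is real SDP, but the addresses and port values are invented for the sketch.

```javascript
// A trimmed-down SDP fragment of the kind WebRTC endpoints exchange.
// The line syntax is real SDP; the values are made up for illustration.
const sdpFragment = [
  'm=audio 49170 UDP/TLS/RTP/SAVPF 111 0',
  'a=rtpmap:111 opus/48000/2',
  'a=rtpmap:0 PCMU/8000',
  'c=IN IP4 203.0.113.5'
].join('\r\n');

// Pull the advertised codec names out of the a=rtpmap lines.
function listCodecs(sdp) {
  const codecs = [];
  sdp.split(/\r\n/).forEach(function (line) {
    const match = line.match(/^a=rtpmap:\d+ ([^/]+)\//);
    if (match) codecs.push(match[1]);
  });
  return codecs;
}

console.log(listCodecs(sdpFragment)); // → [ 'opus', 'PCMU' ]
```

Each `a=rtpmap` line advertises one codec the endpoint supports; the far end picks from that list when it answers.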
And then DTLS-SRTP is exciting because it means encryption is basically on by default. All of your calls will have the media encrypted. So finally, what isn't WebRTC? A lot of people in the telephony industry get really excited about the idea of putting a telephone in a web browser. And please, if you take one thing away from today: it isn't about putting a telephone in a web browser, because we can do so much more. The web is a rich palette of user interface possibilities. So think of it instead as communications in a web browser. And a quick note on the relevancy of WebRTC. This is the only analyst chart I've got in this whole talk. Dean Bubley puts together this chart projecting the uptake of WebRTC. The gray area at the bottom represents browsers, and we're pretty much at the point today where there are over a billion desktop browser devices that support WebRTC. The interesting part is the growth of tablets and smartphones, because these communications options won't just be in the browser; they'll also be on mobile devices, whether that's the mobile web or native apps. Coming very soon, there will be a lot of WebRTC-capable mobile devices out there. Okay, so before we go further into WebRTC, I want to give a real quick background on communications architectures. This is how communications are facilitated today. If you pick up a phone, you might have your service through AT&T. When Alice wants to call Bob, she'll pick up the phone and dial. That signal goes to AT&T, AT&T shoots it over to Verizon, and Verizon sends it back down to Bob. This is called the trapezoid. It's pretty classic. It relies on every subscriber having a carrier relationship, and on all of those carriers being federated with each other. The advantage of this is that everybody can call everybody.
We have one set of phone numbers, and generally, as long as all of the carriers federate, everyone's reachable. But there are a lot of problems with the overhead that comes with all of that federation. A lot of innovation gets lost. You just don't move very quickly when you have to coordinate companies all around the world, not to mention devices in users' hands all around the world. Also, it's not particularly user-friendly. Think about identity on a cell phone: your identity is your phone number. But that's the least personal identifier there is: ten random digits assigned to you by your phone company. It really means nothing to you, and yet we've come to be associated with this identity. So this architecture has some significant drawbacks. The next kind of architecture is more of a triangle, and Skype is a good example of this. You have one central service and endpoints that connect to it. Now, these guys are able to innovate a lot faster because they control both the network and the endpoints. So we've got things like video. We've got things like high-definition calls. We have friendly usernames, usernames that we actually picked in the process of signing up. But there are still two things that are problematic. One is that it's essentially a walled garden. I can't build an app that integrates with Skype all that well, and I certainly can't embed Skype in one of my business processes. Which means, second of all, it's not great for context. I still have to go to a separate service, a separate application, to actually handle my communication. It's not embedded in my business process. So with WebRTC, we actually get to do something that looks more like this. We get to keep the triangle shape that Skype pioneered, but it's a more ephemeral triangle, because what's happening here is the signaling goes through the web server. So again, I'm not deploying plugins.
I just go to the web application and it serves me all of the tools I need; there's no need to install Skype. Second, the signaling and the media are separate. So what happens is, when that call needs to be set up, Alice sends a request containing her information to the web service, and the web service shares that information with Bob. But let's imagine you have a firewall here: the media can actually be exchanged behind the firewall. This has some really interesting implications for performance, and some really interesting implications for quality. If you are on a low-bandwidth link: I was actually once working on an island where the internet connection off the island went down. There was connectivity on the island, but no connectivity off the island. You could still communicate, because all of the media was exchanged locally; we were not using expensive bandwidth across congested links. All of the video was being passed on the LAN, even though the session setup could be elsewhere. So let's dig a little further into how a WebRTC session setup might work. We'll start with Alice, using Firefox. She is going to send a request to initiate communication with Bob. That request contains something called SDP, the Session Description Protocol. For practical purposes, just think of it as an opaque blob of text. But this blob of text contains a bunch of things, which include her contact information in the form of an IP address and a port. It contains a list of the codecs that her device supports; this being WebRTC, that will be Opus and VP8 or H.264. And it contains, as well, a public key that can be used to encrypt communication being sent to her. Now, the web server doesn't have to do anything with that blob. Again, it's just opaque. All it really has to do is forward it on to the recipient, in this case, Bob.
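The web server's role here is nothing more than forwarding an opaque blob from one party to another. A minimal in-memory sketch of that forwarding step (the function and party names are my own, and a real deployment would use WebSockets or similar rather than polling):

```javascript
// Toy signaling relay: the server never inspects the SDP, it just
// forwards the opaque blob from one named party to another.
function makeRelay() {
  const inboxes = {};
  return {
    // Alice sends her offer addressed to Bob.
    send: function (to, blob) {
      (inboxes[to] = inboxes[to] || []).push(blob);
    },
    // Bob picks up anything forwarded to him.
    receive: function (who) {
      return (inboxes[who] || []).shift();
    }
  };
}

const relay = makeRelay();
relay.send('bob', 'v=0\r\no=alice ... (opaque SDP offer)');
console.log(relay.receive('bob')); // the untouched blob arrives at Bob
```

The point of the sketch is what the relay does not do: it never parses, validates, or rewrites the SDP.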
So Bob, upon receiving that offer, generates his own response, containing largely the same information, and passes it back via the web service to Alice. Now, at this point, a whole bunch of packets start flying between Alice and Bob, starting with ICE, and then STUN, and then TURN. Here's what those three things do. ICE, in particular, enumerates all of the network interfaces that you have. You might have a LAN interface, you might have a VPN. It will also ping out to the internet and figure out what your public IPs are. It uses all of that information to try to tell Bob how Alice can best be reached. Maybe they can make a direct connection on the LAN. If they can't, if they're separated by firewalls, maybe we'll do something to try to pierce through the firewall. That's where STUN comes in. In the worst-case scenario, if they can't make a direct connection either locally or using STUN to traverse the firewall, then there are relay servers, called TURN servers, that will actually proxy the media. A TURN server just receives packets from one party and passes them to the other. Now, because the parties have exchanged keys using the signaling layer, the media will be encrypted. So even though the TURN server is technically in the path, in that worst-case scenario all that audio is still encrypted. The TURN server can't see it, can't do anything with it. It's just data being passed back and forth. This baked-in security is one of the big things about WebRTC that is, I think, relevant given our friends at the NSA, who'd like to listen in on all of our conversations. Properly deployed, WebRTC makes it very hard for a third party to see into that media. Now, one of the things I want to point out about signaling: I've used a web server as my example, but it really doesn't have to be a web server. All you have to do is get that SDP from point A to point B. We've done implementations with XMPP as the carrier of the message. You can do it with Redis.
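In code, ICE, STUN, and TURN mostly show up as configuration handed to the peer connection; the browser does the candidate gathering itself. A sketch, with placeholder server URLs and credentials:

```javascript
// ICE configuration: a STUN server to discover public addresses, and a
// TURN server as the worst-case relay. URLs and credentials are placeholders.
const iceConfig = {
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    { urls: 'turn:turn.example.org:3478',
      username: 'demo', credential: 'secret' }
  ]
};

// In a browser, the config is handed to the peer connection, which then
// gathers candidates (host, reflexive, relay) automatically.
if (typeof RTCPeerConnection !== 'undefined') {
  const pc = new RTCPeerConnection(iceConfig);
  pc.onicecandidate = function (event) {
    // Each candidate would be sent to the far end via the signaling channel.
    if (event.candidate) console.log('candidate:', event.candidate.candidate);
  };
}
```

Each candidate the browser surfaces is one possible path to the endpoint: LAN address, public address discovered via STUN, or TURN relay as the fallback.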
I've even seen an example where someone actually took the SDP, put it in a text file on a USB drive, carried it to the other computer, and loaded it back in, which has to be the least efficient way you could possibly negotiate this kind of call, but it does work. All right, so that's the plumbing. What really gets me excited is the applications: how we use it. Over the last couple of years of building these applications, I've thought about what it takes to build them, and what attributes applications like this should have. So I came up with a set of tenets that I want to share, which you should consider when designing communications applications. A modern voice application should be adaptive, which to me means it should take advantage of the capabilities of the devices on which it's running. It should be fluid, which is to say it should be able to move across devices and across time, even across users, and still preserve the context of the conversation. It should be contextual, because really this is the value of what you're building: the communication happening in context with whatever application it supports. It should be trustworthy, because the worst thing in the world is to communicate something sensitive and then have it revealed, or to realize the conversation wasn't as private as you thought. And the last tenet is that it should be referenceable. So let's go a little deeper on each of these. Adaptive. What does it mean to be adaptive? Alice, again, is on Firefox. She has a pretty broad range of options available. She has a keyboard for input, so she obviously can send text back and forth. She's got a camera, microphone, and speakers. She can really have a very rich communications interface. And maybe she's talking to this guy over here, who's on his iPad, with a very similar set of input options available. So whatever app we build for them might enable video conversation, audio conversation, text, link sharing, all of that.
Now this woman wants to join the conversation as well. She's on a smartphone, and this particular smartphone either doesn't have a camera, or maybe she doesn't have the bandwidth, or maybe not the battery, to support a video stream. But she still wants to participate in the conversation. She still wants to talk about whatever the issues are. Well, she can still send or receive text messages if we have a mobile app in play, and she can also participate by audio. So think of this sort of like a conference call where some of the people have a side channel where they can use video and richer communications, whereas this third party is in by voice only, but crucially, she is still able to participate. The same is true with this poor guy on an old feature phone; I don't even know if people still carry those. But he can still join, right? He can still talk. And then we have this last guy, who also has a browser, but either his microphone is broken or maybe his baby is asleep and he doesn't want to talk. Actually, I've got a colleague who's in Milan, six hours ahead of us, so a lot of the time we'll have calls after his kids go to bed, and he's always muted because he doesn't want to wake them. So we'll say something, and if he has feedback, a lot of times he'll just write it into our chat side channel. So an app that's adaptive will enhance or degrade gracefully based on the capabilities and choices of its users. That's what I mean by adaptive. All right, next: being fluid. Conversations often start, especially today, with chat. When I want to reach out to somebody, picking up the phone isn't the first idea in a lot of cases, at least if it's a co-worker. I'd like to start with chat: I want to see if they're there, maybe ask a quick question, just see if they're available. But at some point chat becomes too slow, so we'll switch to audio.
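One way to think about graceful degradation in code: derive the modes each participant can join from what their device reports. The capability flags and threshold here are invented for the sketch.

```javascript
// Given what a device reports (flags invented for this sketch), decide
// which modes of the conversation this participant can join.
function availableModes(caps) {
  const modes = ['text']; // assume everyone can at least read and type
  if (caps.microphone) modes.push('audio');
  if (caps.camera && caps.bandwidthKbps >= 500 && !caps.muted) {
    modes.push('video');
  }
  return modes;
}

// The smartphone user with no camera still joins by voice and text.
console.log(availableModes({ microphone: true, camera: false, bandwidthKbps: 200 }));
// → [ 'text', 'audio' ]
```

The conversation stays one conversation; each participant simply gets the richest subset of it their situation allows.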
So I want to be able to click a button that escalates that conversation, same conversation, same context, from chat to audio. Maybe I want to pull a couple more people in, because the issue is getting bigger: maybe a customer, maybe someone from another department. Then I'll bring in video, because for some things, if a picture tells a thousand words, video tells even more. But then, when we're done, we should be able to drop back to chat. And the point here is that this is still one conversation, and I think Skype does this very well. You've probably had a Skype conversation where you started chatting and then escalated to video, with the chat continuing in the background, and you can scroll back through the history of your conversation. And of course, there's being able to switch devices; this is a big one, and not everyone gets it right. Again, I'll give Skype credit here: I'm at my desk and I need to leave, and I can actually transition that call to my mobile very easily. Okay. Being contextual: this is my favorite of the five. A friend of mine has this really great line, that in the future, communicating isn't what you're going to be doing; it's what you're doing while you're doing something else. This idea that we have dedicated communication devices, I think, is done. I mean, for all of us, the phone that we carry isn't primarily a phone. It's everything else, right? So being contextual is all about getting context into the conversation, or putting the conversation in the context of what's happening. These are some examples, not entirely random, of information that may be useful to a conversation. How many callers are waiting in the queue, in a contact center? Or how much have we sold this month?
I like this one, "page my manager for this call," because it implies not only that the manager can easily be added to the conversation, but that the business relationship is understood by the application. If I request my manager, the application knows who I am, who my manager is, and how to reach my manager, and it can then actually add them to the conversation. You can see this in text as well. So a good multimodal application with contextual knowledge will facilitate things for the direct participants of the conversation. There are also third-party services. In this case, you can see we were talking about what looks like a problem with Asterisk, and you can see that notifications from New Relic were being pushed directly into the conversation. That just gives everybody more visibility into whatever problem they're solving. Here's a great example of really business-specific information. In this case, I'm in a conversation, and when I sent this message, all I typed was a short reference to a bug. What the application did was recognize that as a special string, a hashtag, so to speak. It actually looked up that information from the database and rendered it inline. So this conversation now has not only what I said, but the context of what I said, with very little effort from me. That makes all of the communication much more fluid. Everybody's on the same page. Now, the important thing: none of this matters if the user doesn't trust the application. If they don't trust it, they won't use it. So how can it be trustworthy? I think the number one rule is: don't surprise the user. Don't do something they don't expect. One example would be: don't share the contents of a conversation when the participants don't expect that conversation to be shared. If you have a conversation between two people, generally speaking, no one else should be able to come back later and access that same conversation.
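The bug-reference expansion can be as simple as pattern-matching messages against a lookup before rendering. The tag syntax and the issue data here are invented for the sketch:

```javascript
// Pretend issue database keyed by ticket number (made up for the sketch).
const issues = {
  '2145': { title: 'Login times out under load', status: 'open' }
};

// Expand any "#1234"-style reference in a chat message with live context.
function expandReferences(message) {
  return message.replace(/#(\d+)/g, function (whole, id) {
    const issue = issues[id];
    if (!issue) return whole; // unknown reference: leave it alone
    return whole + ' [' + issue.title + ', ' + issue.status + ']';
  });
}

console.log(expandReferences('Seeing #2145 again after the deploy'));
// → Seeing #2145 [Login times out under load, open] again after the deploy
```

The sender types almost nothing; the application supplies the context, and everyone in the conversation sees the same enriched message.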
It's really important, I think, to help users make smart choices where that's required. As WebRTC has matured and the browsers have adopted the standards, there's been a lot of discussion about how to request permission, and how to handle it at the right time. Google does a really nice job, I think. Here we're going to start a Google Hangout. When you first load the page, if you've not been there before, the first thing it asks is: can I have access to the camera? Now, it does remember that, and I'll get to the rules about that later, but even if you've granted access before, before it drops you into the conversation, you see a picture of yourself, and it says: here's what you look like, are you ready to go in? That's an important step, because otherwise somebody loads a site they've been to before, they've already granted permission, it takes them straight into the conversation, and then they realize they're on camera and panic. Doing little things like that, to help users always feel fully in control of their communication, is really important. And that's especially true with microphones. You know, at least on Macs, cameras get a little green light that tells you they're on. Microphones don't have such a light. So there are a lot of things an application could be doing that could lead to some unhappiness. Another item about trustworthiness is identity. Identity is an interesting thing. There are lots of applications that make their name on anonymity. There is no identity. That's an important use case. But for, I think, the typical app, having identity is core to facilitating communication. You want to know who's on the other end of the call. On the phone network, you see a number and say, well, that's my wife, I know who that is. In reality, caller ID is actually very, very easy to fake. The only reason that we don't see a lot more of that is basically that the carriers control the network itself.
But anyone who gets a certain kind of voice trunk, a SIP connection or an old PRI, would be able to actually set the caller ID to whatever they want. We have many more options for asserting identity now. We have OAuth. We have social identity from Facebook and GitHub and Twitter, and we can actually use those to enhance the communication. And if your communication is built into the app, use the identity that comes from your app to assert who the user is. And finally, these conversations should be referenceable. Referenceability is, I think, all about sharing. A conversation, in my mind, should have a URL. This is an easy thing for us to do; we deal with resources and objects all the time. Every conversation gets a URL that is permanent and unique, and it represents the latest state of the communication. So if you schedule a call, you should generate a URL that says: this call will happen. If the call is going on and you hit that URL, it should bring you straight into the conversation; it should present the user interface that lets you be a participant. Once the call is complete, that same URL should provide some kind of transcript, and maybe recordings in multiple content types. So if you record the call and transcribe it, it lets you download the audio as well as search the transcript. Any images or links that were shared can be combined into that view. It's this idea that a single conversation can be referenced at one URL in all its forms. And then, whenever possible, it should be searchable, because if you don't know something's there and you can't find it, it may as well not be there. Oh, and conversations should be shareable. That's really one of the main points of having a URL, right? I can copy and paste it to anyone, and assuming they have permission to view it, they'll be able to see it. So, those are my tenets. Now I want to try to apply them. I've got three example applications.
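One way to model "one permanent URL whose meaning follows the conversation's state" is a single resource that serves a different view per state. All the names, states, and the domain below are invented for this sketch:

```javascript
// One permanent URL per conversation; what it serves depends on state.
function conversationResource(convo) {
  const url = 'https://example.com/conversations/' + convo.id;
  switch (convo.state) {
    case 'scheduled':
      return { url: url, view: 'invitation', startsAt: convo.startsAt };
    case 'active':
      return { url: url, view: 'join' }; // drop visitors into the live call
    case 'ended':
      // Afterward the same URL serves the artifacts.
      return { url: url, view: 'archive',
               artifacts: ['recording', 'transcript', 'shared-links'] };
  }
}

console.log(conversationResource({ id: 42, state: 'active' }));
// → { url: 'https://example.com/conversations/42', view: 'join' }
```

The URL never changes across the conversation's lifetime, which is exactly what makes it shareable and searchable later.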
These are not necessarily great ideas by themselves, but I think they illustrate how to enhance web applications today. So the first one's kind of silly: a live, anonymous matchmaking service. Think Tinder, but with video. So I come to this dating site, and you can see we've got two people here with a video session going. They've got some stats on how they were matched; looks like they both like mustaches and puzzles. But these people also really want a sense of privacy. They don't necessarily want to share too much information about themselves, and they certainly don't want to give out phone numbers. So not only have we given them the ability to find each other and communicate, we've also given them these stickers that go over their faces. You've probably seen something similar from Google. We can help them obscure some of their identity by giving them these tools. They can still get a sense of who the other person is. They can still see some of the expressions. But they can hide some of their identity until they're comfortable. So what does this give you? It gives you safe introductions with strict anonymity. Everybody comes to the site, and the site only reveals what it's designed to reveal. No need to exchange phone numbers. There's very low friction to get a conversation started. There's no app to download. There's no plugin to install. Really, just by going to the site, the entire toolkit of communication is available. And then there are the silly tricks that can be used, like the stickers over the eyes, and if you want, you can do an upsell. Skype just demoed their real-time language translation, right? Why not apply that to this site, whether by text or by audio? All right, so the second example is an incident response app. My background, before I became a developer, was in ops; I did server administration stuff. And with this kind of thing, whenever something goes down, you get that phone call at three in the morning. So, what if we could build something like this?
What if we could build something that would enable people not only to discuss whatever is actually broken, but also to bring in contextual information surrounding the problem? So, on the left, you can see the chat. Just like before, you can see people discussing the problem, and third-party services, the tan and green lines, are pushing in data. The content there is important, but it's the idea: any time someone does a deploy, you can see the deploy was made. If there's an alert from the monitoring system, it gets pushed into that text chat. On the right, you can see the voice and video conversation going on. And of course, the people who are joining by video will see each other. But there's nothing to say they couldn't also join by mobile device, by either a plain telephone call or a mobile app, say while they're in the car. Now, what's really interesting, what makes this different from just a chat room, is the bottom: charts, graphs, contextual data from the monitoring application itself. So rather than waiting for an event to come in, I can actually see trend lines happening in real time as part of the communications tool. The way I'd see this happening is that a company that builds monitoring tools goes and builds this into their dashboard. I'd love to see whoever does this. So the key here is timely and contextual information. The view itself can adapt: if you're on a mobile device, you'll get more focus on the communication and fewer of the dashboard-type features, but on the desktop you'll get the full experience. I like, in particular, the emphasis on group-based communication. I can click a button and everyone on the ops team gets an invite to join that particular conversation, with context about that conversation.
But more importantly, if I need to bring in a vendor, maybe I don't want to give them a user account in my system just for that purpose. Instead, we can generate a unique URL with a token that takes them straight to that page, and they can join the conversation and see what's discussed there without exposing any of my other conversations. Of course, we can also connect with external services, like I mentioned; we can push in data from GitHub or New Relic or whatever. We can also record these incidents and learn from them later. So once you've recorded it, if the same system goes down again and breaks in a similar way, someone does a search, finds the original conversation, and can understand how it was resolved. Okay, my third example: medical records and patient services. So, imagine you have this very simplistic-looking website, and you've been to the doctor, and you want to see the advice the doctor gave you. The call was recorded, and that recording is available for you, the patient, to download from this site. The transcript of that recording is there as well. And the doctor can actually go back and make annotations. The last time I talked to a doctor, he used some words that I thought I knew how to spell, and I was wrong. In this case, he could actually write them in, and I'd be able to find more information about those things. So, if I have a question about my bill, or if I need to talk to a doctor because I have an urgent problem, I can click a button right here and immediately be connected with someone who already knows who I am, who has access to whatever information I was looking at when I initiated the call, whether that's a bill or medical information. And I don't need to keep track of a phone number. I don't need to keep track of any security information.
I think the identity part of this is particularly interesting, because if you call your bank, they ask you for the same three pieces of information: your name, your account number, the last four digits of your social. Anybody can fake that, right? But if I have secure authentication on this website and I log in with my password, maybe with two-factor authentication, then that strong authentication is carried through to my voice conversation. So when I click that button, the medical advice sits behind secure authentication, and that, I think, is one of the big deals here: you reuse the primary authentication from the web app. Maybe you add verification on top. You can do voice biometrics, making sure that the person who's calling sounds like the person on record. You can even cross-check against location, which is not something you can really do on the phone network. And then you can automate the paper trail. You've got the recordings, the transcripts, the bill, all in one place. Any of the medical advice given, that long string of things I should do three times a day that the doctor gave me, that I didn't write down because I was too busy listening, is in those files. I can go back afterward and read it at my leisure. It also gives you an easier way to do auditing and service quality assurance. Okay. If I haven't put you to sleep yet, I have a demo. Everyone seems to think this demo is pretty cool, and I thought it would be fun to show. As you can see, I've got Firefox running. I built this really simple little game, and all it's done so far is connect to this page. So this is an example of WebRTC requesting permission for the camera. You may have heard me mention earlier that sites will remember this preference. The standard has settled on the idea that if a site is not using HTTPS, it will re-ask every time. I'm running without a certificate here. If this were a site that had HTTPS and the user chose to remember, then it wouldn't ask again. So, this is Chrome.
So, I've got... I'm sorry, this is Firefox. I've got Chrome over here; I'm going to bring up the game. Now, what we have is: Chrome is right here, and you can see the video coming from Chrome being transmitted to Firefox. Both of these browsers are running on my laptop, so the traffic is actually only going over the local network, even though the server is hosted remotely. The other thing I've set up: Google has this really cool speech API in Chrome, and they expose it to JavaScript. There's a wrapper library that's listening right now. Now, granted, this is a demo, so it can probably go wrong on me. But if I say the magic word, it should activate the game, and then I should be able to talk to it to steer and fire. So let's see what happens. Move left. Move left. Move left. I should have made it go further. How about that? Move down. So the communications here: the video is obviously WebRTC; the speech is actually using a different browser extension... not a browser extension, excuse me, a different JavaScript API. That one is Chrome-only. The particular recognizer I'm using right now is Chrome-only, but speech recognition can be done client-side or server-side. If you're using WebRTC, you can very easily call into a server-side recognizer and do your recognition there. And the last piece of it, which is what moves the logic of the game, is really nothing more than a curl-style request, or in my case just a jQuery API call. So that's really it; there's not much to it, and it's pretty fun. So with that, that was my presentation on WebRTC. I have a few resources for you if you'd like to know more about WebRTC. This first one in particular is great; it's what I used primarily when I was setting up my demo. It's the official set of sample code from WebRTC.org, and it's on GitHub.
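The speech-driven control just described, recognize a phrase in Chrome, turn it into a simple HTTP call, can be sketched as below. This is an assumption-laden sketch: `webkitSpeechRecognition` is the real Chrome-only API, but the command vocabulary, `parseCommand`, and the endpoint the command is posted to are made up for illustration.

```javascript
// Listen continuously with Chrome's speech API and turn recognized
// phrases into game commands sent via a plain HTTP call.
function startVoiceControl(sendCommand) {
  const recognition = new webkitSpeechRecognition(); // Chrome-only API
  recognition.continuous = true;
  recognition.onresult = (event) => {
    const phrase = event.results[event.results.length - 1][0].transcript;
    const cmd = parseCommand(phrase);
    if (cmd) sendCommand(cmd);  // e.g. $.post('/game/command', cmd)
  };
  recognition.start();
  return recognition;
}

// Map a spoken phrase to a command object; returns null if unrecognized.
function parseCommand(phrase) {
  const words = phrase.toLowerCase();
  if (words.includes('move left')) return { action: 'left' };
  if (words.includes('move down')) return { action: 'down' };
  if (words.includes('fire'))      return { action: 'fire' };
  return null;
}
```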
The demos are live, so if you go to this page, you'll be able to click through and actually start the video, and each demo includes a link to the source on GitHub. WebRTC.org itself is sort of the central point for WebRTC resources. I also want to point out an initiative run by a friend of mine called the WebRTC Challenge. His goal is to get 1 million developers using WebRTC by 2020. It's a pretty ambitious goal, but I think he can do it. He's got some pretty interesting content and a mailing list; I highly recommend you check that out. A couple of things I want to mention as well, since we're at RailsConf: if you're interested in doing more voice with Ruby, definitely check out Adhearsion. It's the Rails-like framework for voice. I'll also point out Ruby Speech. If you do get into some of the more interesting speech synthesis or speech recognition scenarios, Ruby Speech is a library for generating the markup needed for driving synthesizers and recognizers. It makes that whole process a lot easier. The last bit is my contact information; you can find me on Twitter and GitHub and, of course, by email. But with that, if you have any questions, I would love to answer them. Yes? Go back to your medical example; you talked about peer-to-peer before. If the connection is peer-to-peer and encrypted, how does the recording happen? That's a great, great question. So the question is: in the example with medical recording, if WebRTC does peer-to-peer encryption, if it's end-to-end encrypted, how would the call be recorded? The answer is that WebRTC can be peer-to-peer, but it doesn't have to be. I kind of hinted at this earlier: if the architecture is built to enforce it, you can ensure the media passes through your servers.
The simple way to explain it: if you have a media server in your network, something like FreeSWITCH, it will participate as a WebRTC endpoint. So instead of going direct from browser to browser, you go browser to FreeSWITCH, FreeSWITCH to browser. And in that case, because you're decoding all the media there, you're able to record it. Yes? So, in the use case where the person on the other end steps away from the computer, do you have options to, say, route over the plain old telephone network, or just take a voice message? What are the options? So the question is: if a user steps away from the computer, how do you deal with that? Maybe they're selling something on some site and they're not at their desk, but a consumer wants to reach them. So it's like a contact-center scenario where the agent has walked away from the desk. Okay, in that case I would probably make sure that the call that comes in gets routed through something that can make more intelligent decisions. It might try WebRTC first, and if that doesn't work after five seconds, either try somebody else or try their cell phone. Now, Asterisk and FreeSWITCH are both open source, and they both support WebRTC. Once you take that call from WebRTC on the client side and get it into either of those, it can be converted to almost anything: more WebRTC, standard SIP, or even regular telephone network calls. So the same kinds of routing rules that contact centers are very good at applying can be applied in that case as well.
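The five-second fallback idea above can be sketched as plain promise logic. The helper names (`routeCall`, `withTimeout`) and the timeout value are illustrative, not from any particular telephony library; in practice a PBX like Asterisk or FreeSWITCH would make this decision in its dialplan.

```javascript
// Race a call attempt against a timeout; reject if nobody answers in time.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timeout')), ms))
  ]);
}

// Try the agent over WebRTC first; on timeout or failure, fall back to
// another target (for example, a cell phone over the telephone network).
async function routeCall(tryWebRTC, tryFallback, timeoutMs) {
  try {
    return await withTimeout(tryWebRTC(), timeoutMs); // agent at their desk?
  } catch (e) {
    return tryFallback();                             // no answer: next target
  }
}
```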
If you really want to get esoteric, there are some motion-detection libraries in JavaScript that could detect presence using the camera, but I'm not sure I'd recommend that. But basically, anything you would do to detect session activity, timers, anything you do to detect a user's presence on one end, can be used as input to the decision-making on the other. Does that answer the question? Any other questions? Yes: what can the other party find out about your location? So are you asking about the IP address, whether the far end can discover my IP address, or my location based on it? You could. To prevent that, what you would have to do in your application is control the negotiation on the server side, such that you can strip that information out before passing it along, and relay the media through the server so the other party can't detect it. It can be done if you control the signaling. All right, I'm out of time. Thank you very much.
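To make that last answer concrete: one illustrative way a signaling server could do the stripping is to filter the SDP so that only relay candidates (your own TURN or media server) survive, hiding each party's real IP address from the other. The function name and filtering policy here are mine; the `a=candidate ... typ host/relay` line format is standard ICE/SDP.

```javascript
// Remove direct (host / server-reflexive) ICE candidates from an SDP blob,
// keeping only relay candidates, so media must flow through the relay and
// neither party learns the other's real IP address.
function stripDirectCandidates(sdp) {
  return sdp
    .split('\r\n')
    .filter(line =>
      // keep every non-candidate line, and candidates that use the relay
      !line.startsWith('a=candidate') || line.includes(' typ relay'))
    .join('\r\n');
}
```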