My name is Contra. I build things, do open source, and create companies in San Francisco. You're probably familiar with a project that I made called Gulp. It's a build system. I think it was in some of the talks earlier. But I'm not here to talk about that. I'm here to talk about WebRTC in the real world. So I have a ton of stuff to burn through. Let's hold off on questions until the end. Sound good? So over the last few years, I've shipped real products utilizing WebRTC as a core technology. It wasn't easy, but through the struggles, pitfalls, and pain, I learned how to harness this powerful technology and create real things for real people. I wanna teach the lessons I've learned so that you can skip the struggle and get straight to building things that matter. So before we dive in, we should do a quick recap of where WebRTC came from and what it's all about. It all started in 2010 with Google acquiring the rights to a bunch of new video codecs. The most notable of these purchases was a company called On2, which owned the VP8 codec. It's the codec behind WebM. Almost immediately after this acquisition, Google released the source code under a BSD license so anybody could use it without worrying about lawyers. This was a huge charity offering for the open web, which had been struggling to secure a stable codec everyone could rally behind ever since the video tag was added. A year later, Google released the initial code and idea behind WebRTC. The reception was kind of like, eh, this looks pretty cool, but nobody really saw the practical implications of the technology. It was kind of just demo tech. Six months after the code release, we saw the first browser WebRTC implementation in Chrome 23. This triggered a huge uptake in people toying with the new APIs, because now any web developer could build something without having to mess around with experimental flags and about:config. 
And for the next two years, the WebRTC specification evolved and the code evolved with it. Things broke constantly, releases were plagued with bugs, browsers crashed, many tears were shed, mostly by me. This was a dark time to be building things on WebRTC. During this phase, I was working on a Skype competitor that used WebRTC and spent sleepless nights figuring out why Chrome on Android couldn't call Firefox on Mac, or why Chrome on Windows and Chrome on Android calling each other caused a segmentation fault and both browsers just crashed. So I think really it was just way too early to build something real on top of WebRTC, and a lot of people got burned by this. But in 2013, Firefox became the first browser outside of Chrome to add WebRTC functionality. And then following that, summer of 2013 saw a huge wave of WebRTC releases. In that summer we got Firefox, Chrome for Android and Firefox for Android all running WebRTC. So as of this talk, there's currently somewhere around 1.7 billion devices with WebRTC running. That's Android phones, iPhones, computers, web browsers, all that stuff. But by 2018, that number is gonna grow to over four billion devices, which is insane. That's almost more devices than there are people on this planet. And that includes watches, TVs, cars, tablets, consoles, phones, your pants are probably gonna run WebRTC. This might have WebRTC on it soon. We're gonna have stuff all around us that has the exact same experience as you would get with Chrome on the desktop. So in 2010, it was an idea. In 2013, it was an experiment. But now in 2015, it's a reality. With over 300 companies building with WebRTC across numerous devices, it's safe to say that this isn't going anywhere. It works, it's interoperable, and it's here to stay. And with those assurances, let's look at what features ended up in WebRTC. And I'm not gonna bore you by deep diving on every little detail of the specification. 
So I'll do like a thousand-foot view of the system as a whole. The first thing is the ability to access any attached video and audio devices as streams of data in the browser. The API for this is called getUserMedia, but it's commonly abbreviated as GUM. Using the options argument, you specify which streams you need. So in this example, we're saying we just want the video by using video true. And in this one, we're saying we just want the microphone, so no video, audio true. Requesting both the audio and video is pretty obvious given the last two examples: audio true, video true. So getUserMedia is governed by the same permissions model as geolocation, which means it prompts the user to allow access when you call the function. Depending on what you ask for, the message in this little modal will change, and the user has the ability to select which camera or microphone they wanna use. And once you get the stream back, you toss it into a video tag by setting it as the source. Your camera is now streaming into a DOM element, which means you can use it in conjunction with all of the other web goodies that are out there. So this is a project I made that turns your live camera stream into a kaleidoscope, randomizes effects using CSS filters, then lays it over a music video. The result is a really kind of trippy experience where you become a generative component of this 16-year-old Swedish rapper's music video. If you're gonna play around with it, make sure you mute your microphone or whatever on your computer. It's gonna get really loud because it auto-plays audio. But you can see that that's me in the camera feed if that loops. And yeah, that was a fun thing that you can do with the camera stuff. So any CSS filters, transforms, whatever. The video DOM element works just as you would expect it to. And most importantly, WebRTC gives us the ability to communicate directly with other people without going through central servers. 
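The getUserMedia calls described above can be sketched like this. This is a minimal sketch, not the exact slide code: it uses the modern promise-based `navigator.mediaDevices.getUserMedia`, whereas at the time of this talk you'd call the prefixed callback form (`navigator.webkitGetUserMedia` and friends, which is what shims normalize for you).

```javascript
// The three constraint variants from the slides.
const videoOnly = { video: true, audio: false };
const audioOnly = { video: false, audio: true };
const audioAndVideo = { video: true, audio: true };

// Browser-only: prompt for the camera and pipe the stream into a <video> tag.
function startCamera(videoElement) {
  return navigator.mediaDevices.getUserMedia(videoOnly).then(function (stream) {
    // Older code did: videoElement.src = URL.createObjectURL(stream);
    videoElement.srcObject = stream;
    videoElement.play();
    return stream;
  });
}
```

Once the stream is on the element, it's a normal DOM node, so CSS filters and transforms apply to it like anything else.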
And the API for doing this is PeerConnection, which is where things start to get a little more complicated. There's a bit of a song and dance involved when it comes to getting two people talking with each other. One person starts off by creating an offer. In this code, we create a PeerConnection instance, create an offer, then we send that to the other person somehow. The offer message contains information that tells the other person how to talk directly with this user. And when the other person receives the offer, they create a new PeerConnection, set their camera stream on it, then create an answer response. Again, the message tells the other person how to communicate directly. Once the offer and answer messages have been exchanged by a signaling server, both parties have sufficient information to connect to each other directly. Now if you want people to have a channel for arbitrary data, not just video and audio, you can create this thing called a data channel in the PeerConnection prior to sending the offer. And with a data channel between the two peers, you're able to send strings and array buffers exactly like a WebSocket, but without going through a central server. And data channels give us a unique ability to configure the transport settings via SCTP for each channel. So for example, in a multiplayer game, you might want user chat messages to be on a reliable channel so they never get lost. But player movement messages can go on a lower latency, unreliable channel, because it doesn't matter if a player movement packet gets lost. There's another one right behind it. And you can tween to compensate for those lost movement packets. So you get your movement packets a lot faster and with lower overhead. And with strong encryption being mandatory and not configurable, WebRTC is by far the most secure voice solution on the market. There's no way to disable the encryption, so all connections in WebRTC are always encrypted end-to-end. 
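The offer/answer dance and the per-channel SCTP settings might look roughly like this. It's a hedged sketch: `sendSignal` is a hypothetical stand-in for "send this over your signaling server somehow," and the functions are browser-only.

```javascript
// Unreliable, unordered delivery for latency-sensitive game traffic;
// chat uses the defaults (ordered, reliable) instead.
const movementChannelOptions = { ordered: false, maxRetransmits: 0 };

// The caller: create data channels before the offer, then ship the
// offer out through the signaling server.
function startCall(sendSignal) {
  const pc = new RTCPeerConnection();
  const chat = pc.createDataChannel('chat'); // reliable by default
  const movement = pc.createDataChannel('movement', movementChannelOptions);
  return pc.createOffer()
    .then(offer => pc.setLocalDescription(offer))
    .then(() => sendSignal(pc.localDescription));
}

// The other side: receive the offer, attach a camera stream, answer back.
function answerCall(offer, stream, sendSignal) {
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  return pc.setRemoteDescription(offer)
    .then(() => pc.createAnswer())
    .then(answer => pc.setLocalDescription(answer))
    .then(() => sendSignal(pc.localDescription));
}
```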
The usage of unencrypted communication is completely forbidden by the specification, so your private audio calls and data channel messages are safe from snooping. All of this is made possible thanks to DTLS-SRTP, which ensures the RTP flow stays encrypted via a fingerprint included in the signaling process. So even when the signaling isn't encrypted, peers can still trust each other. We'll do questions at the end. Sorry. So in the code examples earlier, there was a to-do note in the function for actually passing the answer and offer messages between the peers. And this is where a signaling server comes in. But this is where WebRTC ends, and we're left to implement our own mechanism for communication. So typically you would set up a WebSocket connection to a central server, and that server will exchange the signaling messages between users, and that's all code that you have to write yourself. So on the product end, pretty much any existing VoIP product is fair game to disrupt by creating a better, leaner version using WebRTC. Also unlocked, though, are new problems that traditional VoIP stacks were never really able to solve. So for conference calling, WebRTC tackles problems like lag and heavy bandwidth usage, so it's an ideal solution. A great example of this is Talky, which provides a simple video conferencing solution. And using WebRTC, they get excellent bandwidth negotiation. So as users join the call, let's say you have 10 users and then you add another five, you're gonna start to tax the bandwidth on your computer. So what actually happens is the audio and video quality automatically adjusts to compensate for that. So based on network load, it'll degrade or go back to HD. Maybe some other computer joins your network and they are downloading a movie or something. Your video quality will degrade gracefully without just dropping. So if you go to this link right here, you'll join a peer to peer conference call with everybody in the room. 
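Since the signaling server is code you write yourself, here's a minimal sketch of just the relay logic. In a real server each client would be a WebSocket connection (for example from the `ws` npm package); here a "socket" is just anything with a `send()` method, so the routing can be shown on its own.

```javascript
// Minimal signaling relay: clients register under an id, and any
// message with a `to` field gets forwarded to that client verbatim.
// The relay never inspects the payload; offers, answers, and ICE
// candidates are opaque blobs to it.
function createRelay() {
  const clients = new Map();
  return {
    register(id, socket) {
      clients.set(id, socket);
    },
    route(from, message) {
      const target = clients.get(message.to);
      if (!target) return false; // recipient unknown or gone
      target.send(JSON.stringify({ from: from, payload: message.payload }));
      return true;
    },
  };
}
```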
So if you wanna talk to anybody else during the conference, you can use this. And for a simple chat, current solutions send messages through a server that's possibly thousands of miles away, even though you might be talking to your friend who's located in the same neighborhood as you. But with WebRTC, we can send messages directly between people without having to bounce all over the world. So we get low latency, secure communications. This website right here, RTC Copy, is one of many of these simple demo chat sites that popped up with WebRTC. So if you go to this link, you'll have an encrypted chat with everybody at the conference. And I think it has file sharing and stuff built in, which is pretty cool. And that leads us to file sharing, which, as ridiculous as it sounds, we still can't do easily in this day and age. So if you had to send a 700 megabyte file to a non-technical friend right now, how would you do it? Anybody? FedEx a thumb drive? Dropbox? And this is the standard conversation you'll have when discussing that problem: a lot of the time the simpler solution is just to put it on a USB drive and mail it to somebody or give it to them in person. That's easier than guiding somebody through setting up FTP, or figuring out how to add them to your Dropbox or whatever file sharing stuff. It gets really technical. So WebRTC completely solves that problem once and for all, I hope. So it's as simple as dragging a file into the browser, and then you just share a link with your friend. When they open that link up, the file gets streamed directly from your computer to theirs without any central servers being in the middle. So that means you never have to wait for anything to upload. And the speeds are like super fast, like you just won't believe how fast it is. 
You don't need to worry about your sensitive stuff laying around in like a Dropbox folder that has a public permission set on it or something, for like your secret social security documents or whatever. It just goes straight from you to them, completely encrypted and safe from snooping. And now some cool stuff. With WebRTC, we ditched the whole idea of a central server for delivering messages. So we now have the ability to create these mesh topologies in the browser using WebRTC's PeerConnection API. A cool example of a partial mesh is this project called WebTorrent by Feross. It's an implementation of BitTorrent in pure JavaScript using data channels. So instead of downloading a file, like a Creative Commons Zero movie, from a central server, you toss the hash of the file into this mesh and discover people who have that file available to send to you. And then once you're in contact with those people, you get little chunks off of them. And then at the end, you just take all those chunks and put them together and you have a finished file, and you never had to download anything from a central server. So we can apply that to content delivery, CDNs. Why download static assets from a web server far away when somebody in your neighborhood has the same stuff already downloaded and cached? So there is a project that does this, and it's called PeerCDN. And this means less load on your servers, less money spent on bandwidth, and faster speeds for your users who come to your web application. So earlier this year, when there were fewer WebRTC-capable devices, you could expect to save around 60% on bandwidth. So I'm assuming that that number has grown quite a bit since then. Any browser supporting WebRTC will just download the assets off of somebody close to them. Otherwise it'll fall back to a normal CDN, and then that person will become available to download off of. 
So imagine if something like YouTube or Netflix used this for delivering their content, like the days of buffering YouTube videos would be gone. Removing a central server provides massive improvements for twitch-style synchronous multiplayer games as well, where milliseconds really matter when it comes to message processing. So using purely WebGL and WebRTC, Mozilla made this demo game that has no central server controlling the game state. Players can connect and fight each other, all powered by peer-to-peer communication. I'm about to show the URL, but the conference Wi-Fi is most likely going to fall over because there is 60 megabytes of game assets to download. So if anybody has an email to send, you might want to do it now. That's the URL. There goes the whole Wi-Fi. Feel free to open it up and play around with it right now if you want to fight other people who are in the room. If you're able to get it to load, otherwise you might want to wait until later. So that's a few industries WebRTC is going to change up, but what are some new problems that WebRTC allows us to solve? A product I'm working on right now is a video dating platform based around WebRTC. So this is something I've been working on for the last couple of months. It's called Charmed, and we're currently in a closed beta. If you would like to try it out, you can talk to me afterwards and I'll hook you up with an invite if you want to be a tester. But it's got the same mechanics as Tinder for filtering out people based on physical attraction. You either swipe right or you swipe left. In this screen we use getUserMedia to let people record a three-second video for their profile photo. So instead of making a snap judgment based on a low res, like crappy still photo that they probably edited, you kind of get more of a window into their personality before deciding how you swipe. 
So in testing, it's been really interesting to watch our users. At first they're like, oh, it's kind of weird that I have to record a video, but after a while they start getting really creative with it. Like they'll set up the camera, it counts down from three, and they'll set up this whole thing where they'll jump in from out of frame, or they'll pop up, or they'll do some weird thing, and it's just a fun, interesting way to kind of show your personality in your photo. And instead of messaging back and forth for a week, we let you cut right to the chase and figure out if you have chemistry with the person by doing a 90-second virtual date. This is a video of me dating myself. If you like each other after your date, you can exchange offline contact info, and then you go on a date in real life. If you don't like each other, you never hear from them again, they're just gone. Simple as that, no more Tinder stalkers. So for this product, WebRTC was a great fit because the virtual date is supposed to be like you're sitting across from the other person at a cafe. We needed extremely low latency HD calls to make it feel like you're really there. So for dating, humans need subtle body language to figure out chemistry, that's just how we work. Like what happens when I say this, like what do their eyes do, or what's the tone of their voice when they say this? So by going with WebRTC, we get fantastic video even on the most unreliable, crappy networks. I think one time during testing, I was on a glacier in Patagonia on a satellite network, and I called somebody who was in a McDonald's in Montana, and the call went through and it worked, and I could see them and they could hear me. It just worked. So I think that definitely put Skype to shame, because you can't even load Skype on a network like that. It's like loading contacts, and then it just doesn't even work. 
So the fact that I was able to get this working in that extreme edge case really goes to show how great the bandwidth estimation technology is. So while WebRTC does have a lot of promise, there are some problems you're gonna face when you try to ship real things for real people. The first one is browser support. So most of the bugs for Chrome were cleared up in the past two years. It's probably the most stable and best implementation of WebRTC, but that doesn't mean it's perfect. As of this talk, there are 449 acknowledged bugs open on the Chrome issue tracker for WebRTC, and this number can be a little bit deceiving, because I don't know if you can read from back there, but most of these are like enhancements to something. They're not actual, like, this is broken, we need to fix this. It's like, hey, on Mac, if we use this codec, we get better performance for this certain case on this network. So they're not showstopper bugs by any means. For the most part, they're enhancements, but there are some real bugs like this one. On 53% of Windows laptops that were tested, the microphone level was half of what it was supposed to be due to a bug in the analog gain system. So this means if a Windows person gets on a conference call, nobody will be able to hear them and people will start talking over them. So imagine the type of weird chemistry that could create in a company. Like let's say the Windows guy gets on a call and nobody listens to him, so he feels like nobody values his opinion and then quits his job. That's something that could happen. So there's real ramifications for bugs like this. Like people would just start talking over him in the call and he would get pissed off and quit the call, and then his boss is like, why'd you quit the call? 
Chrome is still missing some of the features that are defined in the spec, but their approach when it comes to implementing things has generally been do the whole thing at once and do it right the first time, versus incrementally increasing their coverage. So they're typically not in progress for too long. So the ones that are partially implemented are probably gonna be done very soon. So Firefox lagged behind Chrome for a while. I mean, Chrome had the first WebRTC implementation. They came up with WebRTC. Obviously they're gonna have like the best WebRTC stack, but recently Firefox put in a ton of work. They actually are beating Chrome in terms of WebRTC spec coverage. So they have more of WebRTC implemented than Chrome does now, and that's pretty big. There's still a lot of stuff in progress. So Firefox has a different kind of strategy in their development. They're more incremental with their stuff, but for the most part all of the major chunks are working, and even all of the stuff that's partially implemented works well enough, except for a couple of edge cases that are mandated by the specification. Now coming to Opera, they had a couple of problems implementing WebRTC, but recently, and I'm sure you know why, all of the Opera bugs have mysteriously disappeared, because Opera is Chrome now, basically. They're using Chrome's Blink rendering engine. So Opera and Chrome have the exact same WebRTC implementation. Opera lags behind in updating Blink sometimes, but for the most part, you can expect Opera's WebRTC to work exactly like Chrome's. And continuing down the line, we get to Safari. Safari is where things get complicated. This is the current state of WebRTC on Safari. Apple has not announced an intent to implement any of WebRTC, and as per usual has given no indication of what they plan to do. They haven't said whether they like it, what they don't like about it, nothing, just completely silent. 
But last year, they joined the WebRTC working group, and that maybe means something, but who knows, they never said anything about it. They're in the working group. I don't know, nothing new. This is the only emoji I could find that explains how I feel about this, and I think how most people feel about this. Safari on desktop might only be 10 to 15% of the market share, but on mobile, Safari is 45% of web traffic. And Safari controls the rendering on iOS with an iron fist. So even if you have Chrome on an iPhone, it's not real Chrome, it's Safari. It's just a wrapper around Safari's web view due to the way that iOS works. So we don't really have any way to get WebRTC on iOS yet. And I just have to say, it's disappointing that the most used mobile browser has refused to implement this widely accepted specification. A lot of people are really pissed right now. I don't know if you can tell, but I'm pretty pissed about this. I'm hopeful that they'll feel bad about being the only one left out of the party and they'll rectify this soon. No comment, maybe. So lastly, Internet Explorer. This one, they actually kind of have a reason for not implementing WebRTC. So when the spec was announced, implementations were already underway in Chrome. So Microsoft kind of just waited it out, waited for the spec to get more fleshed out before they did a full audit. And after they did their review, they released this competing spec called ORTC, which radically simplifies the API and messaging flow. A lot of people were like, what, they're not gonna do WebRTC, they're gonna make their own stupid thing, and people were just really mad about it, I was mad about it. But after a while, people actually read the spec and were like, you know what, they actually make a lot of sense. Like WebRTC is kind of messed up in a few ways. So over time people realized this and warmed up to ORTC. 
And recently Chrome and Internet Explorer announced WebRTC 1.1, which has a new set of changes to the spec, and it incorporates all of the good stuff from ORTC. So you can probably expect that to be implemented soon in Internet Explorer. And it's good that we were able to have a spec and a competing spec that then converged, versus both teams just running with their own thing. So it's good that we're able to rally behind WebRTC 1.1. Across the whole spec, this is compatibility and features for WebRTC. So it's mostly red and yellow, but if you look at the top seven rows, oh, well, it's completely yellow on this. I promise some of those are green on my screen. The top seven rows are green. If you look here, this projector is all yellow though. So you'll just have to trust me. But yeah, the top seven rows are like the core meat of WebRTC, all the features, like the big ones that you need to get done. All the other ones are nice to have, in my opinion. So you can build stuff using the top seven today. Compared to networking, browser support is a piece of cake. So in all of our WebRTC products, networking has consistently been the primary struggle. Getting peer-to-peer calls to connect across weird networks is a really difficult problem to solve. The root of all these problems is NAT. And NAT is short for network address translation, and it's an essential tool for slowing down global IP address exhaustion. It's basically the reason why we haven't run out of IPs already. So instead of every device in your house having its own IP address, your router has an IP, and then all of your stuff sits behind that, and then you have an internal address that's assigned by the router. But this means PC1 can use a web browser to access a server on the internet, but a device on the internet can't use a browser to access a server on PC1. 
So this behavior basically breaks the whole idea behind WebRTC, which is setting up a direct line of communication between two devices over the internet. So every device in somebody's house is almost always going to be behind a home router. So this NAT situation covers like 90% of the people who will come to your website. So what do we do in this situation? PC1 and PC2 need direct communication for WebRTC to work correctly, and we have an extremely common network situation where that isn't possible. Now the primary workaround for this situation is called STUN, which is a tongue twister that stands for Session Traversal Utilities for NAT, a nested acronym, because NAT is itself an acronym. So STUN is a mechanism that basically allows a device behind a NAT to get the real origin IP and port that they want to be reached at. When PC1 wants to allow connections from PC2, it'll first ask a STUN server, hey, who am I? And like, how did I talk to you? It's the equivalent of calling somebody's phone number and then saying, hey, what's my phone number? So the STUN server then responds with PC1's real IP and origin port. This information gets passed to PC2 as a part of the signaling process that we talked about earlier. So now PC2 knows how to get around PC1's NAT and talk to it directly. So then the next step, PC2 does the exact same thing: contact the STUN server, get my info, then send it to PC1. So now PC1 and PC2 can talk to each other directly. STUN is going to solve the NAT problem for like 90% of the people that are having trouble with this. Plus Google and Mozilla operate public STUN servers for you to use, so you don't have to spin up your own STUN system. They have public ones available. You can just drop their STUN server info into the PeerConnection config object and it'll just work for 90% of the people, which is most of the time. 
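Dropping a public STUN server into the config looks something like this. This is a sketch: `stun.l.google.com:19302` is Google's commonly used public STUN address, and the PeerConnection construction itself only runs in a browser.

```javascript
// A public STUN server in the PeerConnection config. With this,
// ICE asks the STUN server "who am I?" and includes the resulting
// public IP and port as candidates during signaling.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
  ],
};

// Browser-only: every connection built from this config can traverse
// the common home-router NAT case on its own.
function createPeerConnection() {
  return new RTCPeerConnection(rtcConfig);
}
```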
For the other 10% of users, typically people on corporate networks, things get a little complicated, and we have to use this other strategy called TURN. So in those 10% of cases where STUN failed and we weren't able to traverse the NAT, there's just no way to set up a direct line of communication between these two people. So we use this thing called a TURN server, which acts as a relay between the two people. So all of the calling data goes through this TURN server. This costs a lot of money to run. It's a lot of bandwidth, it's a huge pain to scale. So you're not gonna find any public TURN servers available. So you basically have to set up your own, manage it, and scale it out based on the number of calls that you're routing and how much bandwidth you're using. And once you've tackled networking and browser support, you're gonna have a couple of small problems with hardware, and these are always the really weird ones, because there's like millions of really crappy webcams on the market and they've been flooding the market for like 20 years, like really cheap $5 webcams you can buy at a gas station. So you're gonna have trouble with those. And hardware acceleration is a big one. So in 2011, Google released the intellectual property for hardware-accelerating VP8, that codec we talked about earlier, to the hardware chip vendors. So the chip vendors were pretty quick to announce support. NVIDIA, Intel, AMD, ARM, like all the big ones came out and were like, yeah, we're gonna do this. I'm not sure what the current number is right now, like how many have actually implemented hardware acceleration, but I think that pretty much any non-Apple device made recently is gonna have hardware acceleration of VP8. And that takes us back to Apple again. So they don't support hardware acceleration of VP8 at all and probably won't for a while. 
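Adding your own TURN relay is just another entry in the same iceServers list, except TURN requires credentials since the relay carries your bandwidth bill. A hedged config sketch, where the hostname and credentials are placeholders for your own deployment:

```javascript
// STUN first; TURN as the relay fallback for the ~10% STUN can't help.
// turn.example.com and the credentials are placeholders; unlike STUN,
// nobody runs TURN servers publicly for free.
const rtcConfigWithTurn = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'demo-user',
      credential: 'demo-secret',
    },
  ],
};
```

ICE tries the direct and STUN-derived routes first and only falls back to relaying through TURN when nothing else connects, so listing both costs you nothing for the 90% case.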
So on Apple devices, you're gonna get better CPU usage and battery life if you're using H.264, which is hardware-accelerated on those platforms. There's one problem with that though. Chrome doesn't support H.264. So you can't have Chrome on OS X call an iOS device and use H.264, even though technically both devices have hardware acceleration and the codec available. Chrome announced last year that it was coming soon, but nothing happened. So that was January of last year. I wouldn't hold out for that coming anytime soon. So I think for now you just have to be okay with hardware acceleration not working great. Another problem on OS X is where the camera randomly disappears. I don't know if anybody's ever had this happen. Your camera just vanishes and you have to restart your computer. And I actually have a solution for this. So yeah, when you call getUserMedia as a web application to grab the camera, you'll just get an error back immediately that says, hey, the user declined to give you a camera. It's confusing for the end user because they don't even get a prompt. They'll just immediately get a tiny little icon they won't even see in the corner that says webcam blocked. So they have no idea their operating system has basically betrayed them by losing the webcam that's in their laptop. It didn't become unplugged. It's in the laptop. And it's super confusing as a web application, because you don't know if they actually declined to give you the camera or if they just didn't have a camera. So your error message that you show them is like, please give us your camera. And they're like, you never even asked. So it becomes really confusing. And the only way to fix this is to either restart your computer, or you can open the terminal and kill this process on OS X called VDCAssistant. Now you really can't expect your users to do this. They have no idea what's going on. If you're like, go to the terminal and run this, they're like, what's the terminal? 
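The "did they decline, or is the camera just gone?" ambiguity comes down to inspecting the error's name. A hedged helper sketch: the names vary by browser and spec era (modern browsers throw `NotAllowedError` / `NotFoundError`, while Chrome of this period used `PermissionDeniedError` / `DevicesNotFoundError`), and the wording of the messages is just illustrative.

```javascript
// Turn a getUserMedia failure into a message a human can act on,
// instead of blaming the user for "declining" a prompt they never saw.
function explainGumError(err) {
  switch (err.name) {
    case 'NotAllowedError':
    case 'PermissionDeniedError':
      return 'Camera access was blocked. Check the camera icon in the address bar.';
    case 'NotFoundError':
    case 'DevicesNotFoundError':
      return 'No camera was found. If your laptop has one built in, the OS ' +
        'may have lost it; on OS X, restarting (or killing the ' +
        'VDCAssistant process) usually brings it back.';
    default:
      return 'Something went wrong accessing your camera: ' + err.name;
  }
}
```

The catch, as described above, is that the vanished-camera bug surfaces as a decline-style error in some browsers, so even this mapping can't fully distinguish the two cases.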
So this is just a massive problem for us right now. And I'm hoping that they fix it in an upcoming release, but until then it's kind of a massive source of headaches. So we checked out some of the cool stuff we can build. I ruined it by showing all of the problems that you're gonna have when you're building stuff. But now is when I do the fun part and reveal all of the solutions to those problems, because they are, for the most part, all solvable. So if you're planning on building anything with WebRTC, you should probably get a pen and paper handy to take down some of these links, or I'll tweet the slides afterwards or something. The first one is simple-peer. Taking a photo? That's a good idea. Pen and paper is a waste of paper. So anyways, this module is simple-peer. Instead of using PeerConnection directly, you use this. It's an abstraction over the WebRTC APIs that makes them simpler to use and handles a lot of the problems and gotchas you'll have when you're doing stuff across browsers. So you never really have to worry about spec changes or weird incompatibilities or race conditions when it comes to message processing. It just works no matter what browser it runs on, or whether you're using some crazy weird WebRTC implementation. It should just work. For solving browser support, and this is my favorite one, we get this library called adapter.js, which provides a native extension for Safari and Internet Explorer that implements a spec-compliant subset of the WebRTC APIs. So it's a really crazy undertaking. I don't know who wrote these, but kudos to them. People on Internet Explorer are already used to installing crap anytime they go to a website, so they won't care. Safari users will just click it anyways. They probably don't care anyways. 
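To give a feel for the abstraction, here's a hedged sketch of simple-peer's API, assuming `SimplePeer` has been loaded from a script tag (it's also on npm as `simple-peer`); `sendSignal` is a hypothetical stand-in for your signaling server.

```javascript
// trickle: false batches all ICE candidates into a single signal blob,
// which keeps the signaling exchange down to one message each way.
const peerOptions = { initiator: true, trickle: false };

// Browser-only: simple-peer collapses the whole offer/answer/candidate
// dance into one 'signal' event, so you just shuttle opaque blobs
// through your signaling server and feed incoming ones to peer.signal().
function startSimplePeerCall(sendSignal) {
  const peer = new SimplePeer(peerOptions);
  peer.on('signal', data => sendSignal(data));
  peer.on('connect', () => peer.send('hello, direct from my browser'));
  peer.on('data', data => console.log('got: ' + data));
  return peer;
}
```

The receiving side does the same thing without `initiator: true`, and calls `peer.signal(blob)` with whatever arrives from the other end.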
But yeah, this is huge, and we use it on the dating site, and it was seriously like: require adapter.js, and now it just works on Safari and Internet Explorer with one line of code. We haven't had any problems with it, and interestingly enough, the video on Safari is actually clearer than the video on Chrome, so whoever wrote this did a really good job. And for mobile support, the Chrome WebView on Android already supports WebRTC. It's the same WebRTC you can expect from the desktop version. But for iOS, where the WebView won't have WebRTC for the foreseeable future, somebody made a Cordova plugin that implements all of WebRTC, more of WebRTC than Chrome actually, which is another huge thing. It's pretty new, but it's gained a lot of traction and a lot of people are using it now, so it's proven to be pretty stable and pretty usable. So using Cordova, this plugin, and adapter.js, you can write a WebRTC application that runs across all devices, browsers, and operating systems with the same code base. You don't have to do any crazy crap to get stuff to run in different situations; it just works. And for those network problems, we use this module called freeice, which is a public list of open STUN and TURN servers to help with NAT traversal. There are no TURN servers on the list yet, because nobody has offered to host a public TURN server since it costs real money, but there are like 80 STUN servers on there. So if you're in India, you'll probably get a STUN server in India, and if you're in California, it'll probably give you a STUN server in California. You always get whatever the best STUN server is for your location, so you pretty much never have to run your own STUN server. For those 10% of cases where we do have to use TURN, there's a great open source TURN server implementation that is super easy to set up.
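For reference, freeice hands back its STUN picks already in the shape RTCPeerConnection expects, so using it is one line. The helper below just shows that shape (the Google STUN URL is an illustrative well-known public server, not one freeice is guaranteed to pick):

```javascript
// The ICE server list format RTCPeerConnection takes in its config.
// freeice() returns an array like the output of this helper, so in practice
// you can write: new RTCPeerConnection({ iceServers: freeice() })
function toIceConfig(stunUrls) {
  return { iceServers: stunUrls.map((url) => ({ urls: url })) };
}
// e.g. toIceConfig(['stun:stun.l.google.com:19302'])
```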
So this is the repo for the TURN server, but if you Google a bit, you'll find Docker containers and droplets and whatever Heroku doodads to spin up that'll give you a one-click-to-get-it-running type of install. Oh, I should also mention some guy at Google wrote a project that auto-scales it too. It's like one-click install and auto-scale and do everything on Google App Engine, I think. So definitely Google for that, or I'll tweet a link later. But yeah, that's super cool, because it's pretty much free infrastructure setup. No DevOps. As an exclusive for jQuery India, I want to announce a project I've been working on. This is the project that powers all of the WebRTC applications we've made in the last few years, including the dating website. It's called Wildfire, and it's a WebRTC platform that provides the best experience across devices of any solution so far. Now, Wildfire consists of a front end library that gives you a simple API for doing cross-platform WebRTC, and a highly scalable API and signaling server for managing call state and user authorization. This URL is private right now, but I'm gonna be opening it up soon. I just have to finish polishing the docs; I was working on it right before I got on stage. But just wait, I'll tweet when it comes out. This'll be up in the next couple of days, I promise. And accompanying that is React Wildfire, which is a set of components that use Wildfire under the hood. So imagine a component just called Call: you can drop peer-to-peer calling and data into your React application by adding like two lines of code, and it works perfectly across platforms. So this whole suite of tools is just aiming to make it dead simple to add in all of these cool new features. And with that release, thank you for having me. If you wanna discuss any of the work I do, here's all of that info. And now I will open it up for questions. Anyone? Rebroadcasting lets you do a partial mesh for video.
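A rough sketch of that rebroadcasting idea, assuming simple-peer style peer objects (a `'stream'` event and an `addStream()` method are part of its documented API; the function name and wiring here are my own illustration):

```javascript
// Illustrative partial-mesh rebroadcast: when a stream arrives from the
// peer upstream of us, forward it to the few peers this node serves, so
// nobody has to uplink their video to every participant directly.
function rebroadcast(upstreamPeer, downstreamPeers) {
  upstreamPeer.on('stream', (stream) => {
    for (const peer of downstreamPeers) {
      peer.addStream(stream); // re-send somebody else's stream as our own
    }
  });
}
```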
So it solves the conference calling problem: how do you have a conference call with 100 people in it when you just don't have enough bandwidth for that many connections? To solve that, rebroadcasting basically lets you take a peer connection coming from somebody else and use it as a peer connection coming from you, to rebroadcast somebody else's stream. So you can do a partial mesh where, okay, I'm connected to these 10 people, those people are each only connected to 10 people, and they all kind of share each other's streams to broadcast them onwards. The encryption stuff in WebRTC is different from HTTPS, so it's not comparable. Can you repeat that? It uses DTLS and SRTP. I would look up the documentation on that. Cool, well, thanks for having me. If anybody wants to hang out, grab a drink, go skateboarding, or ask me questions about stuff, come find me. Thanks.