Hey, welcome to the first of our WebRTC updates, coming to you today from beautiful Stockholm. We're going to try and do these at least once a quarter to make sure developers, wherever you are, get a chance to hear the latest from the WebRTC team. I'm here with Justin Uberti, who is tech lead for the WebRTC project, and Serge Lachapelle, who is product manager for WebRTC. My name is Sam Dutton. I'm a developer advocate for Google Chrome.

Before we start, a shameless promotion, really, for our new home for sample code, which is now on GitHub; the URL is on the video now. We've moved the WebRTC HTML, CSS, and JavaScript samples to GitHub to make it easier for you to write patches, file issues, make feature requests, and contribute new content. It also makes it easier to view the demos; there's a URL for that too. We'll be adding some new demos covering things like testing and STUN and TURN servers. There's also some work on AppRTC, our video chat demo. We've added new features such as audio muting, plus a range of UI improvements to make sure it works really well across different screen sizes, and we'll be moving the code for AppRTC to GitHub as well. I also wanted to give a shout-out to Silvia Pfeiffer's great blog post, which lists the various options you can use with AppRTC. We'll actually be adding to those in the demo as well, with additional options like controlling bit rates and so on.

So first up, getting back to the main topic: what in general is in the pipeline? We'll just start with you, Serge.

So Chrome 34 is going to the stable channel this week, and it includes a lot of improvements, as usual. I'll focus first on echo cancellation. The fixes that we rolled out in previous Chrome releases on Mac and Chrome OS are now reaching Windows and Linux with 34. What we've done here is add an extended filter to the echo cancellation, so we should catch even more edge cases that the echo canceller didn't catch previously. Also, on mobile we can now use the hardware echo cancellation of the device if it's present, and we've turned this on for most of the Samsung and Nexus devices that have hardware AEC. So if you use Chrome for mobile 34, you should see a pretty impactful improvement in echo cancellation. That made a big difference. It did.

The other big news for 34 is that developers can now add screen sharing to their WebRTC experience. If you extend your WebRTC app with an extension, that extension can enable screen sharing in WebRTC. We're doing it this way because of certain security issues to do with sharing a desktop or particular windows on the general web, and we're still discussing as a WebRTC community how to do this in the most secure way. So this is our first release of it. Companies like UberConference and Jitsi have already added this to their WebRTC apps, so I suggest you take a look at those if you want an idea of how it can work. That's the chooseDesktopMedia API in chrome.desktopCapture. Thank you.
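To give a rough idea of how that extension piece fits together, here's a minimal sketch assuming an extension that declares the desktopCapture permission; the callbacks and constraint values are just illustrative, not code from the samples.

```javascript
// Runs inside an extension with the "desktopCapture" permission.
// chooseDesktopMedia() shows the screen/window picker and hands back a
// one-time streamId that getUserMedia() can turn into a MediaStream.
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], function(streamId) {
  if (!streamId) {
    console.log('User cancelled the picker.');
    return;
  }
  navigator.webkitGetUserMedia({
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: streamId,
        maxWidth: screen.width,
        maxHeight: screen.height
      }
    }
  }, function(stream) {
    // The screen stream can now be added to a peer connection like any
    // other MediaStream, e.g. pc.addStream(stream).
  }, function(error) {
    console.log('getUserMedia failed:', error);
  });
});
```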
Other quick things: opt-in IPv6 support is now available in 34, and opt-in DSCP support is available in 34 as well. And Chrome OS devices that have VP8 hardware acceleration will now use that hardware when doing WebRTC calls. The Chrome OS devices with VP8 hardware are usually the ARM-based ones, and for those machines you'll see a significant improvement in decode speed. So that's 34. Justin, you want to give us a preview of 35?

Yeah, so running right behind 34 is going to be 35, and we've got a whole slate of fixes going into 35. We've been doing a lot of work around bandwidth estimation. Bandwidth estimation is the kind of magic process where we try to figure out how much capacity you have in your network and send just the right amount of video to fill it. You don't want to send too much, or you'll lose packets; you don't want to send too little, or you won't get enough video quality. This is a really tricky thing, and we've been spending a lot of time trying to optimize it, looking at cases where we didn't do a great job and trying to figure out why. In M35, we're doing a much better job of making sure that the timing analysis we do is really accurate, and that additional work to get the timing right has translated into much better bandwidth estimation over various types of networks. There's a whole bunch of other stuff we're doing, a whole bunch of grungy details I won't get into right now, but the bottom line is that in M35 you're going to see even better video quality over a more varied class of networks than ever before. So we're very excited about that.

Another thing we're really excited about is that we've got a new version of Opus. Opus is our preferred audio codec for WebRTC: it supports full-band quality, all sorts of different complexity modes, and stereo. In the past we had really good quality, but the CPU cost was higher than we really wanted, and especially on mobile, especially on ARM, that ended up being a bit of a hindrance. So we've integrated the latest work from the Opus team, Opus 1.1, and with Opus 1.1 we can have this great quality at much lower complexity. Especially on ARM devices, you're going to see the complexity of using Opus go down by half. So this is a major optimization. Overall, these improvements apply across the board: when you're using Opus you'll see lower complexity on both desktop and mobile, but the biggest effect is that you can now use Opus without concerns on mobile devices.

We've also got in 35 a new version of what we call NetEQ. NetEQ is the magic stuff we have inside WebRTC for adaptive jitter buffer handling. I won't get into all the details, but the bottom line is that doing stereo used to add a lot of complexity, because the old version of NetEQ really wasn't tuned for stereo. NetEQ has now been rewritten, made for the future, made for stereo, and it can handle stereo much more efficiently. So if you're doing Opus stereo, you're going to see way better results. We're very excited about that for M35.

Let me just go through a bunch of other minor changes. TURN TLS: the ability to tunnel TURN over, basically, a TLS socket. We had this in previous Chrome releases, but it didn't work properly with TLS servers identified by hostname, which is pretty much the way you always want to use it, so it wasn't really usable until M35. We've fixed that and tested it. Basically, TURN over UDP, TCP, and TLS should work fantastically in Chrome 35.
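As a rough sketch, this is what pointing a peer connection at a TURN-over-TLS server looks like; the server name and credentials are placeholders, and depending on the Chrome version the field is the older url or the newer urls.

```javascript
// A TURN server reached over TLS uses the "turns:" scheme, and the
// hostname in the URI is what the server's TLS certificate is checked
// against, which is why hostname-based TURN TLS matters.
var config = {
  iceServers: [
    { url: 'stun:stun.l.google.com:19302' },
    {
      url: 'turns:turn.example.com:443?transport=tcp',  // placeholder server
      username: 'webrtc-user',                          // placeholder credentials
      credential: 'secret'
    }
  ]
};
var pc = new webkitRTCPeerConnection(config);
```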
And we've been doing a lot of other polish based on reports we've gotten from people saying, you know, my mic didn't work, my webcam didn't work. We've gone through and investigated these issues and made a number of fixes in M35 that will just raise the overall quality of the platform, and that'll benefit your application. So yeah, that's Chrome 35.

That's fantastic. It feels like a lot of this is moving towards a more adaptive, responsive way of doing business. Yeah, I mean, we get these reports that things don't work, we go investigate, and we figure out, hey, here's something we didn't know about. We work out exactly what we need to do in the software to deal with that particular case, we make the adjustments, we roll it out, we see that people are happy with it, and we move on to the next one. That's the way we raise the quality of the platform: chipping away at it, issue by issue, piece by piece. There's no silver bullet, but we've made a number of fixes over the past year or two that have really made a big difference. So please continue filing bugs, and continue using the discuss-webrtc list, because that's really where we get the best feedback, and it helps us track our progress.

Absolutely. So one of the things we've been asked about a lot by developers recently is iOS, and video on iOS. Is there any kind of progress update you can give on that? Yeah, I'm really happy to be able to talk about where we are with iOS. We're still in a situation where, due to the requirements of the iOS landscape, we can't ship a version of Chrome that has full WebRTC support built in. That makes us kind of sad, but we do now ship an iOS version, basically an Objective-C library with Objective-C bindings, that provides the same WebRTC API, which you can compile to make your own iOS app. In the past we just had support for audio; we now have full support for video as well. So we're really very close to parity between our iOS SDK, our own Android SDK, and the web. We're not at 100% parity yet; we still have some work to do to add support for data channels and SCTP, but that's something we're going to be working on this quarter to fill in that last remaining gap on iOS.

On the topic of DTLS and SCTP, it's good to mention that these are now available in Chrome for Android as well, and also in the standalone Android SDK. So you can have data channel applications working across the web and Android, and very soon iOS. Awesome.
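If you haven't used data channels before, here's a minimal sketch of what they look like from JavaScript; the channel label and messages are made up, and signaling is omitted for brevity.

```javascript
// Data channels run over SCTP on a DTLS transport. By default,
// createDataChannel() gives you a reliable, ordered channel.
var pc = new webkitRTCPeerConnection({ iceServers: [] });
var channel = pc.createDataChannel('chat');

channel.onopen = function() {
  channel.send('hello from the web');
};
channel.onmessage = function(event) {
  console.log('received:', event.data);
};

// The usual offer/answer and ICE candidate exchange with the remote peer
// still has to happen before the channel actually opens.
```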
And I know we get a lot of questions about what's happening with the specs, what's happening now, and what's maybe further in the future. A couple of things in particular, starting with output device selection. Right. So in the near future, developers will be able to add selection of microphone, speaker, and webcam directly into their JavaScript, instead of having that managed through Chrome. We think that's going to make it a lot easier for users to change input and output devices, and it'll also allow web developers to customize that experience for their app. So it's a small thing, I think, but it'll have a lot of impact on usability. We got the feedback that developers really want to be able to manage this in their own application, present the UI themselves, and get that control.

The API that exists right now is MediaStreamTrack.getSources(), and this is actually going to change in the new version of the spec to a method on the navigator object called getMediaDevices(). Whereas getSources() only lets you enumerate input devices, getMediaDevices() will let you enumerate audio output devices as well. So you can decide, for a given audio or video tag, which device you want it to render to; if you want to toggle quickly between a headset and a speakerphone, you can do that with just some JavaScript. Yeah, that's great news. It makes a lot of sense on the navigator object too, I think.
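For reference, here's a minimal sketch of what device enumeration and selection look like with the API that exists in Chrome today; the element and variable names are just placeholders, and the future getMediaDevices() shape isn't shown since it hasn't shipped yet.

```javascript
// List the available capture devices. Each source has an id, a kind
// ('audio' or 'video'), and a label (labels are only populated once the
// page has been granted camera/microphone access).
MediaStreamTrack.getSources(function(sources) {
  sources.forEach(function(source) {
    console.log(source.kind, source.id, source.label);
  });
});

// Ask for a specific microphone and camera by passing the chosen ids
// back to getUserMedia() via the sourceId constraint.
function openDevices(audioId, videoId) {
  navigator.webkitGetUserMedia({
    audio: { optional: [{ sourceId: audioId }] },
    video: { optional: [{ sourceId: videoId }] }
  }, function(stream) {
    document.querySelector('video').src = URL.createObjectURL(stream);
  }, function(error) {
    console.log('getUserMedia failed:', error);
  });
}
```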
The other question, of course, that people are always asking about is media stream recording. Any news on that? Yeah, it's actually turned out to be a bit more work than we expected. We're doing a lot of the plumbing right now: getting the ability to marshal a MediaStream track, move it around in the system, and that sort of thing. We're getting a lot of that in place so that when we can actually turn our attention to the MediaStreamRecorder API itself, we've got all the guts ready. So right now we're working on that, and we hope to have more to announce in the next update we do. That's great.

And I guess a good place to finish is a little consideration of WebRTC 2.0. What is this? What's happening? Yeah, there's a lot of talk about where we are with the spec: WebRTC 1.0, WebRTC 2.0, and this thing called ORTC. Basically, we're trying to finish up WebRTC 1.0 and get to a very stable baseline of functionality, so all developers can say, here's what you can expect across browsers; this is the stuff that's fully documented in the spec, it fully works, this is the baseline, and you can go off and build these great applications. We think we're very close to that. At the same time, we want to look to the future: where do we want to go, and what's the ultimate destination for WebRTC? If we try to do some more advanced things, like simulcast and layered codecs like SVC, how will we express that sort of stuff? We don't really have the right APIs in 1.0, so in 2.0 we would have a lot more flexibility to handle different types of scenarios, now that people are trying to do way more advanced things. It's still very early days, but there's a community group called ORTC, and ORTC has come up with a preliminary draft specification that gives an idea of how to do some of these things, like simulcast, that you might want to do in the future. We don't know if this will make it into an eventual working group product. We're interested in it because it matches a lot of things we want to do, and there are a number of other people who are interested as well; we've heard that Microsoft has been looking at it too.

So overall, I think there's a lot of community interest, but it's too early to say exactly where this is going to end up. If you're looking at the future of where we're going, it's something to take a look at. There'll be a lot more we can report, I think, in our next update. Right. Okay.

So, thank you very much, Justin and Serge, and join us next time. A final plug for the new home of the WebRTC samples on GitHub: we really appreciate all your issues, additional content, and feature requests. We look forward to seeing you for our next WebRTC update sometime in the next couple of months. Thank you. Thank you. See you next time. Goodbye.