So, without further ado, I'd like to introduce our gracious hosts and the initiators of this WebRTC movement at Google. Serge Lachapelle runs Google WebRTC product management, and Justin Uberti is the engineering lead, and I'll let you guys take it away. Thanks.

All right. Thank you. So, this is Justin, and this is Serge, and together we represent the WebRTC team here at Google. We even have a new logo. We've got a ton of stuff that we want to share with you, and only 40 minutes, so sit down, buckle up, and hang on. Serge, take it away.

All right. We're going to talk about vital signs of WebRTC in general, we're going to go through some news, and we have some very important public service announcements: if you miss these, it's your fault. Then great improvements and community traction.

Some vital signs. With the help of my buddy Tsahi here, we're now tracking over 720 companies that are using WebRTC in their products in one form or another. This past February we were around 550, so we're still seeing some nice traction here. I've also spent a lot of time with Philipp recently, as you all heard, and got to learn a lot about how others use this technology; that was very practical, and I really recommend you read those reports. Also, there have been a lot of acquisitions since we started in this field, some of the recent ones mentioned here, and we'll probably see more later. It just shows that the ecosystem is very healthy: it's growing, and there's a lot happening in it.

In Chrome, when you go to the settings page and choose to send anonymized metrics back to the Chrome team to help make Chrome better, we can see how many people use the PeerConnection API, and how many times the API is called. We're seeing some very nice growth here. Obviously, there's a ton of data either way, but around late last year we saw PeerConnection API usage grow substantially, so that's another metric we keep an eye on: the health of the API, and how many developers are using it.

And now for some news. About two weeks ago, we announced the new Alliance for Open Media, where we're bringing together some of the biggest names in tech around new, open video formats. Between the folks actually working on next-generation codecs at Mozilla, Cisco, and Google, and companies that are very interested in streaming large amounts of video, like Amazon and Microsoft, we've got a great number of stakeholders interested in building an open and interoperable future for video. You can see more details at AOMedia.org, but the TL;DR is that we want to create an open video format that sees great adoption across the entire industry. There's been a lot of debate about codecs in the WebRTC space between VP8, H.264, and the next generation, VP9 and HEVC, and a lot of companies are saying: enough. Let's build a standard that is interoperable, open, works for the Web, and lets people escape the nightmare of royalty payments. There's a lot of work still to come, but we at Google are very invested in this. We've been promoting VP9 for a long time, playing the super long game, and it's great to see others joining us in this effort; we think we'll end up with a really great result. And we think this is already starting to have effects across the industry.
Microsoft is going to be supporting VP9 in the Edge browser, and we just noticed that in the new iPhone 6S, it appears Apple is dropping HEVC. So momentum continues to build around open formats and open codecs, and the push we've been making with WebRTC for a long time is really paying off.

Speaking of Microsoft Edge, we are super excited. Our getUserMedia samples from the GitHub webrtc/samples repository now work inside Edge, and we know ORTC support is coming. We expect to support ORTC, and I'll talk more about this later, through our adapter library, adapter.js, and we hope that when it's announced, which hopefully will be soon, you'll be able to run those samples directly on Edge.
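(To make the adapter.js point concrete, here's a minimal sketch of the pattern the samples use: load the shim, then write only to the standard API. This is our own illustration, assuming current adapter.js behavior, not code lifted from the samples.)

```javascript
// Load adapter.js first, e.g.
//   <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
// It papers over vendor prefixes and API differences, so the same code can
// run on Chrome, Firefox, and, once ORTC support lands, Edge.
var video = document.querySelector('video');

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    video.srcObject = stream;  // adapter shims srcObject on browsers that lack it
  })
  .catch(function(err) {
    console.error('getUserMedia error:', err.name, err.message);
  });
```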
Now for the public service announcements: things coming up in Chrome soon that you definitely want to pay attention to. A lot of the time we give a heads-up a release or so in advance; some of these are big enough that you'll want to pay extra attention, because they're coming later this year or possibly early next year.

First up, there's been a lot of discussion in various forums about WebRTC and IP addresses, and a lot of it reflects a lack of understanding of exactly how WebRTC works. As Emil mentioned earlier when talking about peer-to-peer, WebRTC typically tries to make connections peer-to-peer, and in order to find the best peer-to-peer connection, it checks all your possible addresses: whether it should go out a 3G interface, or out the Wi-Fi interface. In certain situations, though, people expect things to go out one particular interface. If they're using a VPN, they think everything will go through that VPN interface. The problem is that WebRTC has a really hard time distinguishing that case from the case where you're just using the VPN to connect to your company email server, where you don't necessarily want all your WebRTC traffic going through that interface.

This creates problems in a couple of cases. Some people say: wouldn't it be easy for WebRTC to just show a prompt when someone wants to use it? The problem is that not every WebRTC application uses the camera or microphone in a way where the user can understand what's being asked. If something asks for permission to, say, gather your network interfaces, how could you present that in a meaningful, intuitive info bar? That's one of the real challenges we're facing. The TL;DR is that gathering all interfaces, and possibly taking a different route than HTTP traffic, was by design; but we now understand that what users expect from their browser may be a little different, and we're pretty sure we can solve this in a way that maintains good quality of service for WebRTC and matches users' expectations, in a smart, safe way.

Our first step in this direction is a new Chrome extension called WebRTC Network Limiter. If you install this extension, all your WebRTC traffic will go through the default route, the same route chosen by HTTP traffic. So if you're using a VPN and your browser traffic goes through the VPN, your WebRTC traffic will also go through the VPN.

Now, as I mentioned, that's not always what you might prefer. If you're using your VPN to connect to your company email servers, they might not appreciate you pumping HD video through the company VPN. But this gives users a way to control where their traffic goes with a single click, by installing the WebRTC Network Limiter extension.

Of course, not everybody is going to install this extension. It's opt-in, and we really want this to work without people having to do anything special. But it lets us understand what might break. Tens of thousands of people have used this extension, and we found that almost every site works out of the box. If you don't have TURN, it might not work; but if you didn't have TURN, your stuff wouldn't have worked reliably anyway. The one failure case we've seen is when the client isn't generating any ICE candidates by itself; some of our own demo pages under webrtc/samples also don't work, because they don't use a standard TURN server. We've come up with solutions to these things and are rolling them out.

Our next step is to turn this behavior on by default once those issues are resolved. The default behavior is going to be bimodal. For cases where the user has already consented to use WebRTC, by granting permission to use the camera or microphone, we'll maintain the existing behavior: access to the full set of candidates. But if your application has not been granted camera or microphone permission, we'll try to do the smart thing and send the data out through the default route. We think this is an effective compromise between the needs of WebRTC applications that do advanced things and users who don't want sites using WebRTC to detect VPNs or the like. WebRTC will still work even if you haven't asked for camera or microphone permission; you might end up going through TURN in a few more cases, but overall everything should still be fine, and you get the full set of functionality if you ask for the camera.

One piece of feedback that would be really useful for us is whether applications that don't ask for the microphone or camera are negatively affected by this. The one thing we do not want is apps that don't need the microphone or camera asking for them just to get these permissions. That's a case where we might add a different permission, something like asking for network access. But we'd like to understand: is this something the community needs? Are applications negatively affected? And the way to find out whether your application is affected is to try the Network Limiter extension. It's a super simple install; see how your app works in this mode. So please try it out. We don't know exactly when we'll make this behavior the default, but we're tentatively targeting Chrome 47, which rolls out at the end of the year.

OK. You see this slide is red. That means: pay attention now. If you're using getUserMedia on HTTP, it will not work by the end of the year.
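(As a concrete illustration of the change, an app might guard its capture path like this. The helper name is ours, and the localhost exemption reflects Chrome's treatment of localhost as a secure origin; verify the details against your target Chrome version.)

```javascript
// Warn (or bail out) before calling getUserMedia on an insecure origin.
function isSecureOrigin() {
  return location.protocol === 'https:' ||
         location.hostname === 'localhost' ||
         location.hostname === '127.0.0.1';
}

if (!isSecureOrigin()) {
  console.warn('getUserMedia is being restricted to HTTPS; ' +
               'serve this page over TLS before December.');
}

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) { /* wire the stream up as usual */ })
  .catch(function(err) { console.error('getUserMedia failed:', err.name); });
```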
We're asking everyone to make sure they update their apps. Only HTTPS pages will be allowed access to the microphone and camera. All right, let's do this once more: no getUserMedia from HTTP, starting in December. Thank you. And just to underscore that, we probably should have had a third slide of it.

The bottom line: a lot of people ask, why are you forcing me to do this? If you're using HTTP, there is nothing preventing an attacker on the network from jamming a getUserMedia call into your web page. Without this change, you go to some random HTTP page, and the next thing you know it's asking for your microphone and exfiltrating the data to who knows where. So this is really important for the overall security of the web. And there are plenty of SSL registrars, like Namecheap, where you can get certificates for $10 or less, so I hope this is not a barrier to anyone. Everyone should be doing this already. But this is going to happen, and I hope everyone gets it fixed before December, because it would be very unfortunate if you said nobody told you about this. Right. Now we have. Pardon me? No, Chrome is not going to accept self-signed certificates right now. You can file a bug on that if it's important to you.

Another thing is coming around DTLS. This is crypto geek stuff. We currently use RSA certificates for setting up DTLS encryption, which is nice because everyone understands RSA. The issue is that RSA certificate generation is expensive: even generating a 1024-bit certificate on a modern PC can take 500 to 1,000 milliseconds. We hear lots of presentations about making call setup fast; well, if you have to generate a certificate first, that makes it slow. And it turns out that on mobile devices it's even worse: generating a 2048-bit certificate on an Android device might take you 10 seconds. So it's really bad.

Fortunately, there's a great alternative called ECDSA. It uses elliptic curves, it does everything RSA does for us here, and it's way, way faster. So why not just roll it out completely? There may be things out there that expect RSA certificates. If you're using a modern version of OpenSSL, BoringSSL, or NSS, things should just work, but you definitely want to test. So the first step is that in Chrome 47, there will be a way to say: give me an EC certificate. It'll be super fast, and you'll probably want to start using it in general, because the first time a user uses your app, it'll be much, much faster. Then make sure any back-end servers you have, which may be running some older version of OpenSSL, have been updated, and verify that everything works. Mixing is fine: you can have ECDSA on the Chrome side and RSA on your back-end. You just need to make sure your back-end can handle Chrome's ECDSA certificate. Like I said, with a recent version of BoringSSL or OpenSSL, which you should be on anyway, it should just work. We're then going to make this the default in Chrome 48 or 49. We also have this in our standalone mobile toolkits, and that's where it's even more valuable.
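(Here's what that looks like with the certificate API in the WebRTC 1.0 spec; a sketch, assuming the Chrome 47 surface matches the spec.)

```javascript
// Pre-generate an ECDSA (P-256) certificate so DTLS setup doesn't pay
// the expensive keygen cost on the call-setup path.
RTCPeerConnection.generateCertificate({ name: 'ECDSA', namedCurve: 'P-256' })
  .then(function(cert) {
    var pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
      certificates: [cert]  // reuse the pre-generated cert for this connection
    });
    // ... continue with the usual offer/answer exchange.
  });
```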
On Android, and especially low-end Android devices, reducing certificate generation from multiple seconds to tens of milliseconds is something the user can easily notice.

Also coming: we're adding support for not just DTLS 1.0 but DTLS 1.2. Firefox already supports DTLS 1.2, so I imagine for many of you this is something you're already testing against. But just to make it known: we're going to start rolling out DTLS 1.2 negotiation, and you can try it out in Chrome today by adding a command-line flag and testing with your app.

All right, test.webrtc.org. This is a public service announcement again: take a look at it, try it out, use it. Think about how you could integrate this code, or this experience, into your app to give your users better support. It's an application written in JavaScript, open-sourced on GitHub, that lets your users test their network and their camera, checks whether audio is coming from their microphone, checks network connectivity, and tests a bunch of other things. So take a look, and come away thinking about how you can help your users help themselves, because that's also key to running a good service.

And we meet a lot of you here, we meet a lot of you on the street, we get tweets, we get chats. We have a new rule now: if you did not report it on the WebRTC issue tracker on chromium.org, it does not exist. If you did not report it, it did not happen. So please, when you engage with us, file a bug first; then it's much, much easier for us to follow up. Yeah, I can second that. We triage all the bugs on bugs.webrtc.org and crbug.com every week, so if you have a bug and it hasn't been looked at for some reason, let us know, but that's definitely where you should be going. And if you're not sure how to file a great bug report, this link has all the directions for filing an actionable report, so we can do something with it and help you help your customers.
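(In the spirit of test.webrtc.org, a stripped-down pre-call check might look like the sketch below. This is our own illustration of the idea, not code from that project.)

```javascript
// Can we open the devices at all?
function testDevices() {
  return navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function(stream) {
      stream.getTracks().forEach(function(t) { t.stop(); });  // release devices
      return true;
    });
}

// Can ICE find at least one route (optionally via your TURN server)?
function testConnectivity(iceServers) {
  return new Promise(function(resolve) {
    var pc = new RTCPeerConnection({ iceServers: iceServers });
    pc.createDataChannel('probe');            // media not needed for a probe
    pc.onicecandidate = function(e) {
      // To verify TURN specifically, check e.candidate.candidate for ' relay '.
      if (e.candidate) { resolve(true); pc.close(); }
    };
    pc.createOffer().then(function(o) { return pc.setLocalDescription(o); });
    setTimeout(function() { resolve(false); }, 5000);  // give gathering 5 seconds
  });
}
```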
All right, we're going to talk about some enhancements now: what's actually shipping, stuff we've rolled out in the standalone builds or in Chrome over the past couple of releases. Then we'll talk about what's right around the corner. Take it away with AEC.

OK, delay-agnostic echo cancellation. I've talked about this for a while; we've been working on it for about two years. It's now mostly rolled out on all platforms except Mac, where we've seen issues, and we hope that next week we can start ramping it up towards 100%, slowly but surely. A delay-agnostic echo canceler tries to estimate the delay between the playout of the audio and the input from the microphone itself, instead of depending on the operating system. Apparently operating systems lie about this delay, and that's how you get echo. We're really happy with its performance: it covers many more corner cases than the old echo canceler, and when it fails, it has the ability to recover. Sometimes the DA-AEC will fail; we'll never stop working on echo cancellation, this is going to be a project for the rest of our lives. But this time, when it fails, it recovers: you might hear echo for half a second or a second, and then it goes away.

But as with everything, you take two steps forward and a step back. You've heard about a ton of improvements in Chrome recently around battery usage and performance. But this summer, around the Chrome 42-43 timeline, in certain cases we think audio packets were being duplicated as data was shuffled in and out of the sandbox Chrome uses for security. That threw the old AEC off, and when you hit that condition, you get full echo, and it just doesn't go away. The new echo canceler adapts and kills it after a second or two. The poor Mac folks don't get that just yet: we're still investigating why the duplicates happen, and we have a couple of leads we're working on. If you have any hints, let us know. But next week, when we start ramping up, the new canceler on Mac will have painted over the mold on the wall while we figure out how to remove the mold from under the paint. That's about the best analogy I can come up with, and it'll probably come back to bite me. Serge and his moldy AEC. There you go.

One other thing I want to mention: if within your app you want to control whether you're using the old or new AEC, there's a parameter you can use to toggle the delay-agnostic echo cancellation off. We expect to remove it once the new canceler is the default, but if you're hitting echo in your app and want to see whether it's related to the DA-AEC, you can try it. The next step for this will be mobile, and then we'll have one echo canceler across mobile and desktop, which is going to be great for everyone. Yes, ten years in the making.

Screen sharing. Hopefully those of you who use screen sharing in your apps noticed: in Chrome 45, which went stable last week, it got a lot better, especially for scene switches. Going from a white slide like this one to a very complex image could previously take several seconds. We've also added much better support for scrolling through documents; we had a problem with Excel, where the border lines separating rows and columns would sometimes blink or flicker, and we've improved that. In Chrome 46, it's going to get better still. This is one we can demo. Down here is the source of the screen share; here is Chrome 44; and here is Chrome 45. I'll press play. You see this is a simple slide, and now we switch to an image that's also simple: a little faster, not much. But now we go to a very complex image, and you see how much faster the new screen share is. You all get that for free in your apps without doing anything. So you're welcome.
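(For reference, screen capture in Chrome at this point still goes through the extension API: an extension calls chooseDesktopMedia to show the picker, then feeds the returned ID to getUserMedia. A sketch:)

```javascript
// Inside a Chrome extension with the "desktopCapture" permission:
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], function(streamId) {
  if (!streamId) { return; }  // user cancelled the picker
  navigator.webkitGetUserMedia({
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: streamId
      }
    }
  }, function(stream) {
    // Attach the screen-share stream to a peer connection as usual.
  }, function(err) {
    console.error('Screen capture failed:', err);
  });
});
```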
All right, more performance things. We've improved the renderer on mobile by a ton: rendering video on mobile is now accelerated, by a measured 5x, by encoding and rendering video from textures. We've also worked on the capture pipeline, reducing how many times we copy the video in memory and how many times we convert colors, and we've seen a big increase in GPU performance and a big reduction in power consumption. Here are some graphs. The scales aren't the same, but these are the improvements we got just this year: a pretty big reduction, from 356 down to 313 milliamps, and you can see the whole baseline has come down quite a lot. The same goes for the native AppRTC app running on Android: a pretty big drop in CPU load there as well, and huge improvements on the GPU, which we now use a lot less.

And now more on mobile. There's a new Android audio stack for Android phones that use the OpenSL ES API: a 40% reduction in latency. On iOS, we've reduced round-trip audio latency by 30 milliseconds. Complexity is also being reduced, though we haven't had time to measure it for today's presentation. And we're fixing a bunch of crashes in the iOS audio pipeline, which I hope you'll all appreciate.

So you can see we're doing a ton of work on mobile, and a lot of it is about making better use of the hardware. Serge just talked about using textures for rendering; that's fully rolled out on Android, and we're still working on it for iOS. We're also working on a texture capture pipeline, so we can deal with textures all the way through the stack. But one of the biggest things is that we now have full hardware codec support on both Android and iOS. On Android, this is for Lollipop 5.0 and later; on iOS, it's iOS 8 and later. We can now make use of the actual hardware codecs, fully wired up inside the standalone webrtc.org stack. So if you're using our webrtc.org code and you set the flag to use the hardware encoder, the hardware does the heavy lifting. It's not totally optimized yet: there are still places where we pull data into memory and push it back to the hardware encoder, so power is still a bit higher than it needs to be. Even with the massive improvements Serge demonstrated, we know we can go lower, and further optimizations are coming. Bug 4081 on bugs.webrtc.org is the master bug tracking the iOS work. But generally, the full capabilities of modern devices, a Nexus 6, an iPhone 6 or 6S, are now accessible to your WebRTC application. One other interesting note: now that you're handing the work to the hardware encoder, your performance depends on how good that hardware encoder actually is, and that's a mixed bag, so to speak. We find that the Qualcomm chipsets, which now power most phones, seem to work best. If you work with another SoC maker and you're wondering what we're seeing with your SoCs, we'd be happy to talk about it more; we'd like to get better performance across a wider variety of SoCs.

One other fantastic thing we've gotten working recently in Chrome is an improvement in video smoothness. One of our engineers, Nicholas, and his team have been working on this. He's here, by the way; talk to him if you have any questions about it. The goal is that butter-smooth performance: we run what we call the wave test, moving a hand back and forth, and look for glitches as things move. In Chrome 45, our story was not super good. Each of these red dots represents how long we rendered a frame; each blue dot is how long we thought the frame was when we captured it. Now, it turns out all these frames have the same duration.
They should all be 33 milliseconds, or 33,000 microseconds, and we're all over the place here. There's a large cluster at 33,000, but some are as high as 66,000 or somewhere in between, and there are some serious outliers. This is because the way we timestamped frames on the capture path, and the way we blended frames on the render path, was, let's just say, not super precise. So, TL;DR, here's Chrome 46: obviously much better. There are still a couple of outliers we can fix, but you're going to see way better smoothness, for free, starting in Chrome 46. If you see Nicholas, give him and his team a big high-five. I think that was the most delayed clap I've ever seen. Is this good? Is this not good? Should I clap? It's worth it, it's worth it.

Some other good stuff is coming on Chrome for desktop. We've been doing a ton of work on mobile, as I said, but we're also actively investing in desktop. One of the most requested things was: how can I just set the audio device that I want my media to come out of? We've got it in the spec, and we've got it implemented. It's behind a flag right now, but we're working on removing that flag; you can track the exact progress at this bug report. That way you can make sure your audio goes to a headset, to a speaker, wherever: you now have full control. There's also a new API for enumerating devices. Previously it was hanging off getUserMedia; now it's mediaDevices.enumerateDevices(). And one more thing: when you were sending data over a data channel, there used to be no good way of knowing when the buffer was empty, so you had to run a timer and poll: is the buffer empty, should I send more data? Now you can just install a callback that tells you when to send more data; the bufferedamountlow event is the trigger that tells you when the data channel needs to be fed more data from the application. Between this and low-water marks down in the SCTP stack, we've done a bunch of work optimizing the data channel, and your throughput should be much higher as of Chrome 46 than in previous versions. Again, your data channel app should just work better, for free.

And lastly, IPv6. We talked about this in previous updates as something coming; we've now rolled it out, after a ton of A/B testing to make sure it didn't add any delay to the call setup process. It didn't. So it should all be working: if you're on an IPv6-only network, WebRTC should still work, and this is on Chrome desktop, on iOS, and on Android.
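(Before moving on, sketches of two of the desktop additions above, the output-device routing and the data channel backpressure callback, assuming the flagged setSinkId surface matches the spec:)

```javascript
// 1. Route an element's audio to a chosen output device (behind a flag today).
navigator.mediaDevices.enumerateDevices().then(function(devices) {
  var outputs = devices.filter(function(d) { return d.kind === 'audiooutput'; });
  var audioEl = document.querySelector('audio');
  if (outputs.length > 1) {
    audioEl.setSinkId(outputs[1].deviceId)   // e.g. headset instead of speakers
      .then(function() { console.log('Routed audio to', outputs[1].label); });
  }
});

// 2. Data channel backpressure without polling: ask for a callback when the
//    send buffer drains below a threshold.
function sendAll(channel, chunks) {
  channel.bufferedAmountLowThreshold = 64 * 1024;  // bytes
  var i = 0;
  (function pump() {
    while (i < chunks.length &&
           channel.bufferedAmount <= channel.bufferedAmountLowThreshold) {
      channel.send(chunks[i++]);
    }
    if (i < chunks.length) {
      channel.onbufferedamountlow = pump;  // resume once the buffer drains
    }
  })();
}
```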
So that was what you can already use. Now for the upcoming stuff, which I think you're going to be super excited about. What do we have first? iOS APIs. We've gotten a lot of feedback from people saying these are kind of hard to use, and for a long time, even just building was hard. In fact, there was a blog post, which we all read internally, titled something like "How to use WebRTC on iOS without wasting a day of your life", and it might even be multiple days of your life. It hit very close to home.

I think we're doing a lot better with this now; if you still find yourself having a lot of trouble, let us know. But we want to get to a future where you basically get a CocoaPod, bring it in like any other CocoaPod, and you're off and running with WebRTC on iOS. That's what we're working on building. It'll take a little while to get there, and we're going to have a few API changes along the way: better correspondence with modern Objective-C practices, modeling the JS API a little more closely, and some changes to how eventing works. But we think this will make bringing up WebRTC on iOS much easier. Like I said, you just drag this into Xcode and you're off and running. So if you're using WebRTC on iOS and you have any concerns or complaints about this, come see me afterwards and let me know; we want to make sure your life is super easy, and I think this will make a huge difference. Okay, it sounds like it's painful enough right now that we should clap for that.

If you want, I can take this one. In Chrome 47, we're going to release the VP9 codec without a flag, and right now our initial measurements show 40% less bandwidth at a cost of about 15% more CPU on desktop. So there are clearly some great use cases that can extract a lot of value from VP9. We've also partnered with a company called Vidyo, who we're working with on not only the packetization and all that, but also SVC support: temporal and spatial layering with low overhead. But remember, the initial release in 47 will be single layer; we're working with them to start adding the layering, and we still need to figure out how to control this via our APIs. And note: 40% fewer bits is one way to spend the gain, but you can also get 40% more quality at the same bits. Depending on whether you're focused on low bandwidth or higher quality, you win both ways.

With the SVC support, a lot of people might think this is just for conferencing. I know Emil talked about this earlier today, but one other great thing about SVC is robustness. Because you have layering, you can still display a video stream even if an enhancement-layer picture can't be decoded. So if the overhead is low, we think this is something you'll want to turn on even for a simple point-to-point call: up to roughly 90 or 95% of your packets will not be in the base layer, so you'll be able to tolerate way, way more packet loss and still have a video stream that keeps on chugging. We're pretty excited about this from the robustness perspective. For the initial rollout, we don't have the right APIs yet (I'll talk more about that in a minute), but just by setting a flag you'll be able to turn this on for all your calls, and it'll be great to get feedback on whether it actually has the robustness benefits we think it does.

H.264. Well, if you're one of those companies that still believes in old royalty-bearing codecs, then I guess maybe you'll be interested in this; for some reason, people continue to ask for it. We said we'd do it this year, and we intend to fulfill that promise.
We have it kind of working right now in development. If you're using early tip-of-tree builds, it's not quite ready yet, but you'll see it working in the not-too-distant future. We're focusing right now on software codecs. We know that some platforms have hardware acceleration APIs: Mac has VideoToolbox, Windows has Media Foundation, and we'll probably make it work on Chrome OS too, since we've had hardware H.264 support there in the past. But we are going to make sure it also works interoperably with the hardware encoders and decoders on Android and iOS. And as anybody who's spent a lot of time down in the dirty details of H.264 knows, not every H.264 is equal; we've spent a bunch of time dealing with the friction and impedance mismatches there. The reason it's taking a while is making sure this works out of the box with what we're doing on the mobile platforms. Anyway, M48 is the current milestone we're shooting for, for a full release; we'll probably have initial versions in Canary earlier, and M48 goes to beta in mid-December. That's our current target, and if you want to track the progress, here's the bug to subscribe to. Testing here is going to be really important, so start testing early.

Right. We could not have- Come on, Justin, there are just a few more. Yeah. We could not have a presentation like this without talking about MediaStream recording. There are a lot of people interested in H.264, but this was actually the number one most starred bug on the entire chromium.org bug tracker. Not just in WebRTC: across everything. There could be some bug where Chrome doesn't load pages at all, and still more people want a MediaStream recorder. So we'd like to know who's to blame for that. We now have engineering working on this: Nicholas and his team, so talk to him if you're interested. We'd like to hear which scenarios are most important to you. Right now we're focusing on recording local MediaStreams that come from getUserMedia, as opposed to streams coming off a remote peer connection. If there's something specific you need or want, add it to this bug or come talk to us or the team.

A few other miscellaneous things. Several presentations earlier talked about getting ICE or TURN working faster. We've made a lot of improvements in this area, and we think there's a lot more we can still do. For AppRTC, we did a huge optimization project: it went from taking around one to two seconds to set up, to typically around 300 milliseconds, unless you have to go through TURN, in which case it takes longer. We want to get that number down to 300 milliseconds in all cases. Another area is making efficient use of radios, especially on mobile, and being able to cut over from Wi-Fi to cell. There's a general problem we call the "walk out the door" problem: you start a call on Wi-Fi, you walk out the door, you lose your Wi-Fi, and you want to switch to cell. Right now that's a pretty jerky process, where the call drops and you have to wait for the ICE restart to bring it back up. We think we can do way better.
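(Today, handling the walk-out-the-door case is up to the app: watch for ICE failure and trigger an ICE restart. A sketch, where sendToPeer stands in for your own signaling channel:)

```javascript
pc.oniceconnectionstatechange = function() {
  if (pc.iceConnectionState === 'failed') {
    // An ICE restart gathers fresh candidates on whatever network we have now.
    pc.createOffer({ iceRestart: true })
      .then(function(offer) { return pc.setLocalDescription(offer); })
      .then(function() {
        sendToPeer({ sdp: pc.localDescription });  // re-signal to the remote side
      });
  }
};
```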
We think better handling of that cutover is going to be really useful to a lot of applications.

And here are some things I'm just throwing out there: people are asking for them, they're high up on the tracker, but with everything else we're working on, we are not committing to them right now. Remote processing of incoming media streams in Web Audio. Support for Unified Plan, though I think Unified Plan can largely be polyfilled in JavaScript for now; we'll return to this later. And WebRTC in Chrome for iOS. That last one is interesting, because someone recently posted a $10,000 bug bounty on it. Not that that really influences our judgment, but if somebody out there was enterprising and wanted to make $10,000: I recently saw a GitHub project where somebody took WKWebView, the thing that's going to underlie future versions of Chrome for iOS, and added WebRTC support to it. So if somebody had time on their hands and was motivated, I think all the pieces are out there to glue together at least audio and data channel support for WebRTC in Chrome for iOS. But this is not something we're actively spending engineering time on right now.

Okay, besides H.264, the other favorite topic. I know there'll be more discussion of ORTC in a few minutes, but I just want to summarize where WebRTC and ORTC stand, and what Google's position is. The short version: we just had a great meeting in Seattle about WebRTC 1.0. We closed on a lot of issues, and we're now targeting our last-call-ish milestone by the end of 2015. I say "last-call-ish" because the W3C has gotten rid of the formal last-call process, but the bottom line is that we're planning on a done, 1.0 version of the spec, and we're very much on track to close that out in the immediate future.

One great thing about WebRTC 1.0 is that, thanks to the work in both the ORTC CG and the WebRTC working group, we've brought these things together: WebRTC 1.0 incorporates a number of the objects that were present in the ORTC specification. If you're familiar with that spec, you'll recognize the names: the RTP sender and receiver, the ICE transport, the DTLS transport, the SCTP transport. These give a better control surface to applications that want to do advanced things with WebRTC. So even in WebRTC 1.0 you can switch the camera on the fly, switch the codec on the fly, and configure how many bits to give a particular video stream, all without munging SDP. If there's ever anything people should clap for, it's that, because I've seen some terrible things.

Simulcast is one of the biggest open questions. A lot of people need simulcast, and we saw some great presentations on what can be done with it. We're trying to figure this out; it may not make it into 1.0, but given that it's a top demand, we're looking at whether we can do it in an extension spec or something else very soon. And you should expect to see these new objects and the object model showing up in Chrome in 2016, and if everything goes well, maybe a little of it in 2015. Peter Thatcher, the lead of our API team, is here; you can talk to him and twist his arm a little if this really matters to you.
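(As an example of what that object surface buys you: switching cameras mid-call with the 1.0 sender objects could look like the following. A sketch against the spec, since these objects haven't shipped in Chrome yet.)

```javascript
function switchCamera(pc, newDeviceId) {
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: newDeviceId } }
  }).then(function(stream) {
    var sender = pc.getSenders().find(function(s) {
      return s.track && s.track.kind === 'video';
    });
    // Swap the outgoing track in place: no renegotiation, no SDP munging.
    return sender.replaceTrack(stream.getVideoTracks()[0]);
  });
}
```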
So what about ORTC? What about being able to do things without any peer connection or any SDP? Well, we are talking about the thing after WebRTC 1.0. We previously called it WebRTC 1.1, but since numbers are hard to pin down exactly, it's now referred to as WebRTC NV, for "next version". Naming is hard, I admit it. But there you have it. WebRTC NV fully converges ORTC with WebRTC 1.0: you have the ability to use peer connection along with some of these other objects, typically in a read-only form, or you can say "no peer connection for me", go straight down to the RTP sender and receiver objects, and do everything directly with that do-what-I-say API. Apps have that choice: they can program to the 1.0 high-level peer connection API, or they can go down a level and say "I know what I'm doing" and program to the object APIs. And if you do the latter, you can completely bypass the SDP model if that's what you want. So if you take one thing away from this presentation about WebRTC and ORTC: these things are convergent, not divergent. We welcome Microsoft's contributions and presence in the WebRTC ecosystem. I think this can be a great result and give application developers way more control and flexibility in building their apps. So, yay.

And that's it. There was a lot of stuff; we actually took ten minutes less than we thought because we really went through it fast. We had 50 minutes and used 30. Do we get 10 minutes of questions? Yes. So please, questions.

Before we hunt down the questions, I'd just like to thank a bunch of you: Twilio, TokBox, and several others I'm forgetting, who are now testing on Canary and on Beta, filing bugs early, helping us catch things. Huge, huge, huge thank you. We're in a much better position now with our ecosystem and our developers reporting bugs early; please keep doing that. And if you're not testing on Canary, Dev channel, and Beta channel, please do, because, like Rob said, we're all human; we all have bugs in our code. The important thing is catching them early. Right, and just to underscore that: if you test on Canary or Dev, we have on average around seven weeks to fix any bug you find before it goes out to stable. If you don't test until Beta, we only have two weeks. And if you don't test until stable, well, we'll get you in the next version. The same applies to Firefox, by the way: please, please, please test on Developer Edition, not on Beta. On Beta, we're only going to fix security issues. That's it.

Not a question, more of a comment, regarding the crypto stuff: A-plus, but guys, watch out. If anything goes wrong in the crypto layer, you won't get any notification in JavaScript. When we did the DTLS 1.2 and ECDSA work, we got quite a few bug reports from people trying to figure out why the hell their peer connections don't do anything anymore. It always turned out you could find the cause somewhere in some low-level log file, but at the JavaScript layer you won't notice; it just doesn't work. Does it hang, or does it go to the closed state or failed state? It goes to failed state.
Yeah, the only visible signal is that ICE fails, basically, and people see that their STUN checks are going through and working, but we don't have any API that exposes DTLS problems yet, except for the DTLS transport object. Yeah, so the good news there is that at least we have, on paper, an API that will solve this. It'll take some time to be implemented, but yes, this is definitely a pain point people have complained about, and I want to fix it.

I had a question about mobile. I know you've been focusing quite a bit on it. Is there continuing work on improving bit-rate estimation for mobile specifically? Just because I know there have been a lot of issues, and I'm guessing you don't feel like you're there yet. Yeah, I'd say on Wi-Fi we've come a long way. There are actually three phases of the bandwidth estimation work we're doing right now. First is moving bandwidth estimation from the receive side to the send side. This gives you the flexibility to bring your own bandwidth estimator, which is important for things like interop with Edge, because then you only have to agree on the feedback format, not on all the details. Second is dealing with 3G, 4G, cellular-type scenarios; I think we're pretty good on Wi-Fi, but we still have more tuning to do on those networks. And the last one is the combined audio-video bandwidth estimator. Right now, bandwidth estimation only runs for video, so if you want the behavior Philipp was talking about, where on a 2G link you drop down to a lower audio bit rate, we don't have that yet. So those are the three areas people are working on. Bandwidth estimation is a long project: we've been working on it for a long time, and we continue to work hard on it. The combined one should be coming out this year, though. I hope so, yeah.

My other question was about Hancke's talk on adaptive audio, looking at very low-bandwidth networks: is that something that'll make it in? The combined bandwidth estimator will also adapt audio bit rates. Right, that's exactly the problem we're trying to solve there.

I see you've got generateCertificate. Hi, Tim. Hi. So you've got the generateCertificate API in there. Did you also do the part about being able to persist certificates into IndexedDB? So that's going to be a key part of it for RSA: once we get to the point where we throw the switch so that you actually have to generate an RSA cert, we'll have IndexedDB persistence so you can avoid generating that super-expensive RSA cert each time. But can you persist the new EC certs? In the initial rollout, the EC certs will be throwaway: you'll have to generate a new one each time, without the IndexedDB backing. But we're going to implement persistence before we switch the default from RSA to ECDSA, because you really want that IndexedDB persistence for when you have to generate the RSA cert. I have a use case for generated certificates; I'll file a bug about it. Yeah, I mean, it's not that we don't want to do it; it's just that we want to get to our ECDSA future as soon as possible.
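(A speculative sketch of what that persistence could look like, assuming RTCCertificate objects can be structured-cloned into IndexedDB as the spec intends; the store layout and helper name are ours:)

```javascript
// Reuse a stored certificate if present and unexpired; otherwise generate
// and store a fresh one. Assumes `db` has an object store named 'certs'.
function getOrCreateCertificate(db) {
  return new Promise(function(resolve, reject) {
    var req = db.transaction('certs').objectStore('certs').get('default');
    req.onerror = function() { reject(req.error); };
    req.onsuccess = function() {
      var cert = req.result;
      if (cert && cert.expires > Date.now()) {
        return resolve(cert);  // skip keygen entirely on this page load
      }
      RTCPeerConnection.generateCertificate({ name: 'ECDSA', namedCurve: 'P-256' })
        .then(function(fresh) {
          db.transaction('certs', 'readwrite')
            .objectStore('certs').put(fresh, 'default');
          resolve(fresh);
        });
    };
  });
}
```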
And some of these things, like the IndexedDB persistence, will come later.

Hi, I'm Oded from BlueJeans. We've been rolling out WebRTC in production, and we've been talking to you guys, so great job, thank you. Question for you: are you really committed to parity between what Hangouts gets and what third-party WebRTC partners get? Absolutely, yes. I mean, everything that Hangouts uses, you have access to. Well, except screen share. Yes. Where's Philipp? There. I'll duck again. So a lot of the work we're doing addresses exactly this. I talked about the output device selection API, and that was one particular case where Hangouts had an API that wasn't exposed everywhere; it was something we needed to get Hangouts out the door. We then had to take it through the standards body and figure out the standard, but we tried to fix it basically as soon as we could after shipping Hangouts, and now you see the result: it's actually available in Chrome. So if there are other cases, I'd be happy to discuss them.

Other questions? Is this on? Okay, cool. On the topic of screen share, you talked about some pretty awesome enhancements. Is there any thought around an API for tab share or application share, and is any work being done on that? Tab capture. Yeah. We've talked about it. We know there's interest in it, from a privacy perspective, in not sharing your whole browser and that sort of thing. A lot of this comes back to the picker: we have this picker that comes up, and how do we display tabs there in an effective way? I think there's probably a crbug open on this. It hasn't bubbled to the top of the list, but it's something we're paying attention to and want to deal with at some point. Cool. In terms of security, I'd think tab sharing actually reduces the security implications, because your entire desktop has way more personal information on it than a single tab does. Well, I try to meet many developers frequently, and I've worked with some who want to keep the bookmarks and the browser chrome visible, because users then understand where they are. When you're broadcasting just the tab, some users won't recognize that it's part of their browser. So that's also a security consideration: users not understanding what they're sharing or where the data is coming from. I've been asked for both; more for your side than the other, admittedly, but there are two sides to the coin. Sure, and if there were a way in the API to differentiate between the two, that might give everybody the option. Absolutely.

I think we have time for maybe one more question. We have time for a bit more than one. Okay. Oh, it says one minute here. Hey, it's the first time I've heard Tsahi say that. Sorry, please.

So, I work with some of the mobile libraries you put out. Is there a good way to get a stable version of those? Should I go to the release branches? Is that what you recommend? Yeah, that's what I would recommend. Though in all honesty, we've been kind of fast and loose with some of this: for every Chrome branch, we go and back-port fixes there, and that includes generic fixes that affect both mobile and Chrome. But when we have specific issues that affect iOS standalone only, or Android standalone only, those typically don't get back-ported to that branch.
So that may be something we want to be more diligent about in the future. I think we just haven't had many cases where a bug is introduced, detected, fixed in the branch, and then verified by the customer. So if you have specific cases, let us know, and maybe we can get to a better process where we're not just merging general fixes, but also merging iOS and Android fixes back to these branches. Cool. And when you move over to CocoaPods, will you produce CocoaPods for the stable branches as well? Yeah; when we go to CocoaPods, we definitely have to do that, because we really don't want people building their own CocoaPod. You'd basically take the latest CocoaPod, equivalent to a Chrome release. Cool, thanks.

It came up briefly earlier: hooking up the Web Audio API to remote media streams. That bug's been open for three years; I'm just wondering if it's something that's ever going to happen. This one here? Yeah. The short answer is that, given our architecture, there's an architectural limitation, and it's a lot of work to unwind it. The way media streams work today, they're post-mixer, and we need to move them to be pre-mixer; while that might sound simple, it's actually a lot of machinery. So we've chosen to focus first on things like MediaStream recording, which had even more people screaming for it, and once some of those things are off the list, we'll return to this. So it's not a never. I know a lot of people are asking for it, and I'm sad to say we're not doing engineering work on it right now, but that's mainly because there are so many other things people are really screaming about.

The simulcast support we heard about in the earlier presentation: is that video-codec agnostic? Will it work with VP9 and H.264 right off the bat? Do you mean the sort of homebrew one in Chrome right now, or the one we were talking about for the future? Either one. Well, the future one definitely will. It really should be codec-agnostic, because all you're saying is "I want to encode N streams at these resolutions and bit rates". Chrome does support a sort of homebrew simulcast right now, where you can trigger it to generate three streams at a predefined set of bandwidths. I don't know whether that works out of the box with H.264; that's something we should go check. Last question? Okay. Great questions, everyone. Thank you.