Yeah, I wanted to talk to you about what Mozilla has been working on lately in the area of WebRTC and what's coming up next.

So let's start with the pretty basic example of how you connect your video to a remote site. You basically do your getUserMedia call to ask for permission to access the camera, which gets the nice prompt. If the user allows it, you get the video stream, and then in your JavaScript code you connect that to the peer connection. For this presentation, we don't worry about the far end.

So here's a little bit of sample code, which obviously doesn't work exactly like that, but just to give you an idea. As we heard earlier, you want a video tag to locally render the video you just got from the camera, so people can check that the camera is actually pointing at them correctly. You instantiate your peer connection, then you do your getUserMedia call. When you get your stream, you first attach it to your local video element, which renders your preview, and then you attach the tracks to your peer connection. That's the normal scenario.

So what we have been working on is using a canvas as a video input for a peer connection. The web developers here will hopefully be familiar with the canvas; for the VoIP guys, it's probably a newer thing. Basically, it allows you to take a canvas instead of a camera and get a video stream out of it, and you can connect that to your peer connection. It's supported in Firefox 41, which I think is in beta right now. If you want to try it out, right now it's what we call pref'd off, so you need to go into about:config and switch a preference to turn it on. But we're expecting it to be live by default in Firefox 43, which I believe is hitting the market in November. Just to go back to the previous slide: basically, in the picture, we just replaced the camera with a canvas as the video input.
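The "normal scenario" described here can be sketched roughly like this. This is a minimal illustration, not the speaker's actual slide code; the element id `localVideo` is made up for the example.

```javascript
// Sketch of the basic flow: getUserMedia -> local preview -> peer connection.
// Uses browser-only APIs (navigator.mediaDevices, RTCPeerConnection, DOM).
async function startLocalVideo() {
  // Instantiate the peer connection first.
  const pc = new RTCPeerConnection();

  // Ask for camera access; this triggers the permission prompt.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  // Attach the stream to a local <video> element to render the preview,
  // so people can check the camera is pointing at them.
  document.getElementById('localVideo').srcObject = stream;

  // Then attach the tracks to the peer connection for the far end.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  return pc;
}
```

Signaling and the far end are left out, just as in the talk.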
So this means you could draw on the canvas, and the far end would see what you're drawing. In the sample code, we no longer need a video element to attach it to, because the canvas is going to be rendered locally in your browser anyway. The new thing, which I highlighted here in red, is the captureStream function call. You can tell it how many frames per second you want; that's the 15. Out of it you get a video stream, and then you attach that video stream to your peer connection, just as we did before with the regular getUserMedia call.

To make it a little bit more interesting, we can do a getUserMedia call first to get a video stream from the camera, connect that video stream to a canvas, do some little magic on the canvas, take the video stream from the canvas, and attach that to the peer connection. That's what I'm going to demo now. I'm not brave enough to actually do this with two browsers and a remote site and so on, so this is all just locally in my Firefox.

So here's just a regular canvas on the left side, and a peer connection to myself with the remote video being rendered here on the right side. I can draw on this here, and it basically transfers as a video to the far end, which is nice. But the more interesting demo is the one where we have the local preview window, the canvas, and the remote video. If I share my camera here, my video first gets inverted in the canvas and then sent over to the far end. And I can play around: change the colors, pretend that I'm on an old TV, or, for the real geeks, turn myself into an ASCII thing. You can find the links to these at the very end of my slide deck. There's also a link to an actual demo which does exactly the same thing but with two different browsers; it basically uses peer.js to do it across two browsers.
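The camera-through-canvas pipeline demoed here can be sketched as follows. This is a hedged illustration of the idea, not the demo's source: the element ids, the 15 fps value, and the invert filter are assumptions made up for the example.

```javascript
// Sketch: camera -> <video> -> canvas (with an effect) -> captureStream ->
// peer connection. Uses browser-only APIs.
async function streamCanvasToPeer() {
  const pc = new RTCPeerConnection();
  const video = document.getElementById('cameraPreview');
  const canvas = document.getElementById('mixCanvas');
  const ctx = canvas.getContext('2d');

  // Feed the camera into a <video> element first.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Do "some little magic" on the canvas; here, invert the colors.
  ctx.filter = 'invert(100%)';
  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  }, 1000 / 15);

  // captureStream(15) yields a MediaStream at roughly 15 frames per second.
  const stream = canvas.captureStream(15);
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  return pc;
}
```

Note there is no local preview attachment here: as the talk says, the canvas is already rendered in the page, so it doubles as the preview.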
Just watch out: the official peer.js 0.3 currently doesn't work with Firefox. Use the one from Skyway; they have fixed it.

If you take this a little bit further, I think this is where it gets really interesting. You can take your canvas and, for example, attach multiple cameras to it, and then stream from that one canvas to the peer connection. Or you take your local video and your screen share, mix them together in the canvas, and stream that to the far end. One of my colleagues recently used this feature at a meetup in Oslo to turn Firefox into an MCU. He ran a headless Firefox on Amazon EC2 and had an MCU which was able to do video mixing for up to eight participants. Probably not what you want to use for scaling your service, but for quick demoing it's pretty nice.

I think this is actually one of the areas where WebRTC has a big advantage over what was referred to earlier here as the VoIP technologies, because the canvas APIs are well known and well supported in the browsers. If you had wanted to do the same thing in a VoIP installation, it would have meant writing tons of C++ code, dealing with desktop phones, and whatnot. Here, we did the work for you of getting a video stream out of the canvas, and you guys can go have fun with whatever you can do on a canvas; it will get streamed to the far end. So that's, I think, a pretty cool feature which is hopefully going live soon, so you can start thinking and get creative.

The second part of my presentation is about other upcoming features, a little bit more on the technical side. We added IPv6 support; that comes with Firefox 42. Fx, by the way, is our internal abbreviation for Firefox. Unlike Google, it's not behind a pref, so it's on by default.
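The mixing idea sketched above (several inputs composited on one canvas, one outgoing stream) could look roughly like this. A hedged sketch only; the side-by-side layout, the function name, and the 15 fps default are assumptions for illustration, not how the Oslo MCU demo was actually written.

```javascript
// Sketch: draw two video sources (e.g. camera plus screen share) side by
// side on one canvas and stream the composite. Browser-only APIs.
function mixAndStream(canvas, videoA, videoB, fps = 15) {
  const ctx = canvas.getContext('2d');
  const half = canvas.width / 2;

  // Redraw both inputs on every tick so the composite stays live.
  setInterval(() => {
    ctx.drawImage(videoA, 0, 0, half, canvas.height);
    ctx.drawImage(videoB, half, 0, half, canvas.height);
  }, 1000 / fps);

  // The composite canvas becomes a single outgoing video stream,
  // which can then be attached to a peer connection as before.
  return canvas.captureStream(fps);
}
```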
We'll see how that flies. We added support for ICE TCP in Firefox 41, but again this is pref'd off, so you need to go to about:config and switch a pref if you want to play with it. We're probably going to turn it on in either Firefox 43 or 44. What that means is that when I make my test calls in the office now, for a call with audio, video, and a data channel, you easily get over 50 ICE candidates. Which is, yeah, interesting: interesting to debug, interesting to watch.

Upcoming audio features: we're spending quite some time on improving the performance of the audio stack in Firefox. That means, for example, that garbage collection was optimized, and we're probably going to rewrite large parts there to prepare for more features. For example, stereo support is hopefully coming in Firefox 43. Actually, the stereo support itself is already in; the only piece left is the Opus codec. It's not the codec itself, it's just that the way we currently use the Opus codec does everything in mono. That's the only remaining piece before you can really test it. We're also increasing the audio sampling rate from 16 kilohertz to 32 kilohertz to further improve the audio quality. Another feature we're working on is audio capture for screen sharing, probably again in Firefox 43. This is probably going to be screen sharing first; application sharing and tab sharing probably come later, I don't know in which version exactly.

More upcoming features: we started working on simulcast, even though it's not final in the spec yet. I just heard about the controversies in Seattle; I actually don't want to know about it. And we're working on applyConstraints for getUserMedia.
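To see candidate counts like the "over 50" mentioned here, you can listen for gathering events on a connection. An illustrative sketch with a made-up function name, using browser-only APIs.

```javascript
// Sketch: count the ICE candidates a peer connection gathers.
// With IPv6 and ICE TCP enabled, these numbers grow quickly.
function countIceCandidates(pc) {
  let count = 0;
  pc.addEventListener('icecandidate', (event) => {
    if (event.candidate) {
      count++;
    } else {
      // A null candidate signals the end of gathering.
      console.log(`gathered ${count} ICE candidates`);
    }
  });
  // Return an accessor so callers can read the running total.
  return () => count;
}
```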
That basically allows you, for example, if you made your getUserMedia call and asked for the highest resolution the camera could give you, to go and say applyConstraints: I want a lower-resolution video stream right now. So if your application, for whatever reason, shrinks the video, or you figure out that there's not enough bandwidth, you can shrink the stream down locally and maybe save some CPU time or bandwidth.

We also added a bunch of new prefs and abilities to control how your ICE candidates are exposed to JavaScript. That comes in Firefox 42. We published two blog posts about that just yesterday; you can find the links in my slide deck.

Another fun one is that in Firefox 43 we're going to remove the moz prefix from peer connection. getUserMedia has already been unprefixed in Firefox for quite some time; you just have to use mediaDevices.getUserMedia. Peer connection is basically the piece left which still has a prefix, so we're going to remove it soon, basically stating how stable we think this whole solution is.

That's about it. Any questions? Questions? Give me one second to get you the microphone.

When you add a new feature like the canvas capture, is the path generally to push that back into the WebRTC standard so that all browsers will support it, or are there...?

That's based on a draft which, similar to simulcast, is being actively discussed and isn't final yet. We got lots of demand in the community, so we just went ahead and implemented the draft, which also means that the API could potentially still change. But it's a fairly small API, and I'm not as worried as with simulcast.
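The applyConstraints flow described here can be sketched like this. The resolution values are made up for illustration; this uses browser-only APIs and assumes the browser supports constraint renegotiation on a live track.

```javascript
// Sketch: capture at a high resolution, then ask the live track for a
// lower one, e.g. to save bandwidth or CPU when the video is shrunk.
async function reduceResolution() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1920, height: 1080 },
  });
  const [track] = stream.getVideoTracks();

  // Renegotiate the capture settings without restarting getUserMedia.
  await track.applyConstraints({ width: 640, height: 360 });
  return track;
}
```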
And in general, as Mozilla comes up with new ideas and cool new stuff, is the idea that all of it will become common, or will there be some Mozilla value-add in some areas of cool new functionality?

Well, we're an open source company, so we cannot keep anything private; whatever we add, you will be able to find in our source code anyway. But in general we cooperate pretty closely with the Google guys on all of the WebRTC stuff. So the chances of Firefox having a really unique WebRTC-API-based feature are, I think, pretty slim.

We have time for one more. Anyone in the back? No? All right, you lose.

So the video mixing is interesting. Is there a delay in combining it into the canvas, and what are the requirements for the guts of the machine?

I have to be honest, I haven't done any performance measurements. I'm pretty sure there is a performance penalty. As I said, my colleague restricted his MCU demo to eight participants; that's what he figured out was the most the Amazon EC2 instance could handle for him. You probably don't want to try this with tons of streams. As I said, you don't want to use it for professional video mixing, but for little demo purposes it might save you some bandwidth, might save you some time. I would recommend double-checking it on your machine anyway; if I told you some performance numbers, there's no guarantee they would be the same on your machine.

Great, thank you, Niels.