Can we bring up the QR code for the session? Or is that something we have to do next, after the talk? Well, after the talk. So then I will save my little talk with it for later. We have a very, very good panel here of people who are extremely experienced in, basically, real-time data. I strongly recommend that you just read the panel introduction for all of them. We've got Henrik Joreteg right here, John Fallows, Wesley, who actually did this lovely on-slide demonstration as well, which by the way is right there. And then to my left we have, is it Martin? Martin and Rob. Sorry, welcome.

I do recommend, now that we have the on-slide up, that you take a look at that and get it started. As a perfect introduction to this talk: if you are on an iPhone and you bring up that web page and it goes to sleep, you must refresh the page, because the WebSocket gets closed down automatically. A perfect introduction as to why we're here. I'd also like to try using it during this talk, so as many of you as possible, please bring it up. We'll ask some questions along the way and see how it goes. But I think right now we need a general introduction. That's what Henrik's here for. So please.

All right. Let's see if I can get my slides back on this screen here, maybe. Here we go. OK. Hi, guys. My name is Henrik Joreteg. I work at a company called &yet, and I've been building real-time web apps now for about four years. We started out using XMPP and Strophe.js and BOSH and all that stuff back in the day. Long polling. Then WebSockets came out, and we started messing with that, Socket.io and the like. Recently, we've gotten really into WebRTC.

I think it's useful, before we get too far into the technical stuff, to step back a minute and realize what this really does in terms of the web. The web is really about human communication. So these are my kids. Aw. And they love their grandparents. Problem is, their grandparents live in Sweden.
For those of you with a typical American geographic understanding: it's far away. Sorry, that was low. I'm sorry. This is not this crowd, I know. So every Sunday, my kids get to talk to my parents over an app we built called Talky.io that uses WebRTC. Talky.io is the first app I've ever built that actually passed the mom test, meaning my parents can use it. What it is: anytime you're on the same URL, you're in the same conversation. That's all there is to it. No login, no auth, no friending, nothing. And because it's using WebRTC, there's nothing to install. It just works. So to me, it serves as a very practical example of how, ultimately, these real-time technologies are actually bringing people together and helping make the web better.

So let's talk a little bit about WebRTC. We're going to cover other stuff too, but I want to focus on this a bit, because it's been a recent focus of mine. WebRTC really is a lot more than video in a browser. It's actually low-latency, peer-to-peer networking, and that's really exciting.

One of the other cool examples of what you can do with this technology is called PeerCDN. PeerCDN was actually built by the same guy who did Google Instant. It's this really cool concept where it uses data channels to send files to other current visitors. So say, for instance, I'm hosting a simple little video site, actually hosting my own videos. All of a sudden I'm on Reddit, and I'm getting slammed. Normally that would be expensive in bandwidth, but in this case, the swarm of people gathering on your site to watch this footage can actually download it from each other. So it's creating this almost ad-hoc, BitTorrent-type network, which is really fascinating. It's just an example of what you can do with this stuff.
Also, given recent stuff with the NSA and with encryption, et cetera, I actually think WebRTC is really important for the web. It's decentralized. It's encrypted. And yes, maybe there are backdoors, who knows. But ultimately this is stuff that we should be doing, and it's a big win for the web.

So how are we doing so far? Well, let's start with a story. A year ago, we at &yet built att.js. att.js was a demonstration used at CES by AT&T to demonstrate making and receiving actual phone calls in a browser using WebRTC, which is cool. We got it to work, with way too many caveats. It actually required running a certain modified version of Chromium that the Ericsson team was maintaining. Not ideal.

It's gotten better. If you look at this, what we actually have here is a Nexus 4 running Firefox Nightly, a Nexus 7 running Chrome for Android, and then a desktop running Firefox stable and Chrome stable, all in the same conversation at the same time. So it's gotten better. It's available now on some mobile devices, and interoperability is improving for voice and video.

Sweet, so we're good. We can all go home. We should just use this, right? Nope. WebRTC is still quite finicky, and if you've tried to do anything with it, you've probably discovered this. So just to give an example, here's what you have to do to set up a video call right now between two users.

First of all, getting user media. It sounds really simple: you request access to their camera and their microphone, right? You'd think. The methods are still prefixed, which is fine; that's to be expected. But they throw very different error types. In Firefox, the error handler that you give it will get a string back. In Chrome, it will get an error object, as it's supposed to, but neither quite follows the spec as far as telling you what went wrong.
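The prefix and error-type differences Henrik describes are exactly the kind of thing a small shim papers over. A minimal sketch, not the actual &yet module: the prefixed method names are the real browser APIs of the era, while passing `nav` in explicitly (instead of touching the global `navigator`) is just an assumption made here so the selection logic can be exercised outside a browser.

```javascript
// Normalize the prefixed getUserMedia variants and their divergent errors.
// `nav` is injected for testability; in a page you'd pass `navigator`.
function getUserMedia(nav, constraints, onSuccess, onError) {
  var fn = nav.getUserMedia ||
           nav.webkitGetUserMedia ||   // Chrome at the time
           nav.mozGetUserMedia;        // Firefox at the time
  if (!fn) {
    // Normalize "not supported" into the same shape as real failures.
    onError(new Error('getUserMedia is not supported in this browser'));
    return;
  }
  fn.call(nav, constraints, onSuccess, function (err) {
    // Firefox passed a string, Chrome an error object; normalize to Error.
    onError(err instanceof Error ? err : new Error(String(err)));
  });
}
```

In a real page this would be called as `getUserMedia(navigator, { video: true, audio: true }, gotStream, gotError)`.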
In addition, specifying constraints, like "hey, I want a smaller video", is available with limited support in Chrome and not at all in Firefox at the moment. Screen sharing, which is actually really important for replacing something like Skype or Google Hangouts, is available in Chrome, but it's behind a flag, the error types are very hard to detect, and it requires HTTPS. So even if you're running on localhost, if you don't have your own self-signed cert or something, it will just fail silently, and you won't know why. So as a result, what do we do? We create abstractions. We wrote a getUserMedia module to handle that part.

Attaching media streams. Also something that should be simple: once you request media, you have the stream object, and it's your job to attach that to, say, a video element or an audio element. This has gotten better, the APIs are more similar now, but in Chrome you convert it to a blob URL, attach it as a source, and set autoplay to true; in Firefox, you attach it and call play. Point is, there are differences. And often you want to mute the user's own video so they don't echo back to themselves, et cetera. So again, another thing that should be simple has become another module that we're maintaining.

Beyond that, the thing that a lot of people don't understand about peer-to-peer is that, ultimately, you have to have some mechanism for the two peers to discover each other. It's not like we just magically know each other's external IPs and can send stuff directly. And this is not in the spec at all. It's purposely left out, so it's totally green field. Actually, I think that's good. But it means that, as a developer, you have to do a lot more work. You have to help the users discover each other. You have to help them figure out how to pass data messages directly to another user, which is not something that's necessarily in the tool belt of the average JavaScript developer.
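The stream-attaching differences described above can be sketched the same way. This is illustrative, not any particular library's code; `mozSrcObject` and the blob-URL-plus-autoplay path are the historical Firefox and Chrome behaviors being described.

```javascript
// Attach a MediaStream to a <video> element across the era's browsers.
// `muted` covers the "don't echo your own audio back" case mentioned above.
function attachMediaStream(video, stream, muted) {
  if (muted) video.muted = true;
  if ('mozSrcObject' in video) {
    video.mozSrcObject = stream;  // Firefox: attach directly, then play
    video.play();
  } else {
    video.autoplay = true;        // Chrome: blob URL as source, autoplay
    video.src = URL.createObjectURL(stream);
  }
  return video;
}
```

A mock `video` object is enough to exercise the Firefox-style branch.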
So now you need some kind of server technology as well to be able to handle this. With Socket.io it's not hard, but it's still something that's new for a lot of people. In addition, you have to do some level of capability detection for certain things. For example, if you do screen sharing from Chrome, it won't appear for a Firefox user. But there's no way you would know that programmatically without learning through the signaling channel that, hey, you've got users in this chat who are actually on Firefox. There are just a few oddities like that. So then we write a signaling server.

Then, peer connections. This is kind of the mother; this is the thing that does it all, right? It has some quirks as well. First of all, the prefixing, which is, again, to be expected. I don't think that's a bad thing.

Creating data channels. This is the data stuff that we're going to talk about, and it's extremely finicky at the moment. You have to pass a very specific set of options to create a reliable channel versus an unreliable channel. It's supposed to be reliable: true or false, but that doesn't actually work in either browser. And as I was just talking to Kyle about, there is a throughput limitation in the default settings for Chrome. So in order to actually get this to work, if you want to pass a file around, you have to modify the SDP that you send. Tricky stuff. So anyway, we write a wrapper for that.

Other challenges: data channels are currently not at all interoperable between Chrome and Firefox. You can only do one video stream per connection. And all of this is just between the browsers that support it today; other browsers will do this too, so it's not going to get any simpler. WebRTC is also unique in that this is actually the first time I know of where browsers have to speak directly to each other. There's no intermediary. So once you set up a signaling channel, they'd better be interoperable.
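The SDP modification mentioned above was typically a matter of rewriting the bandwidth (`b=AS:`) line in the offer's application section before sending it. A sketch of that munging, as a pure string transform; the 102400 kbps figure is just an illustrative cap, not a value from the talk.

```javascript
// Raise (or insert) the b=AS: bandwidth cap on the data channel m-line,
// the workaround for Chrome's low default data channel throughput.
function raiseDataChannelBandwidth(sdp, kbps) {
  var lines = sdp.split('\r\n');
  var out = [];
  var inData = false;
  for (var i = 0; i < lines.length; i++) {
    var line = lines[i];
    // Track whether we're inside the application (data) media section.
    if (line.indexOf('m=') === 0) inData = line.indexOf('m=application') === 0;
    // Drop any existing bandwidth cap in that section.
    if (inData && line.indexOf('b=AS:') === 0) continue;
    out.push(line);
    // Insert the new cap right after the section's connection line.
    if (inData && line.indexOf('c=') === 0) out.push('b=AS:' + kbps);
  }
  return out.join('\r\n');
}
```

The munged string would then be handed to `setLocalDescription` in place of the original offer.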
And that's a whole new level of spec compliance that's required to make that work. I can only imagine how interesting it will be if Microsoft and Internet Explorer decide to do this as well.

So I wrote a library called SimpleWebRTC. Basically, you provide a container for local video, you provide a container for remote videos, and when it's ready, you join the room and it works. It makes a bunch of assumptions, but this is the kind of stuff you need in order to make this approachable. There are alternatives, obviously: there's PeerJS, focused on data channels, there's OpenTok, and there's another guy who's done a bunch of interesting experiments. I'm kind of running out of time here.

Anyway, the big thing I think is really important is that tinkerability is actually what drives adoption of new technologies. We like to play with new stuff, but not everybody does. In the same way that jQuery made the DOM accessible to lots of people, and Socket.io made WebSockets accessible to a lot of people, an abstraction library such as SimpleWebRTC (I don't care if it's that or something else) hopefully makes this accessible to more people as well. And we really just need more open web hackers to get into this stuff and build things with it. If not, it's going to be relegated to one of those "hey, this would have been nice" things that never actually works. I really think that if you haven't been playing with WebRTC, get in there, build stuff with it, make it happen. It's phenomenal technology. It just needs people to make it work. So file bugs, give feedback, improve the APIs, push for interoperability. We made a little site with a compatibility chart, and we're also piping in feedback data from actual humans about the quality of the connections. I encourage you to get involved. Let's make the open web even more awesome with WebRTC. Thanks, guys.

Excellent. Okay, first is a test. Can we bring up the moderator screen again? Okay.
I want to see... we have four of the people connected right now. Let's just do a quick test. How many of you would be lying if you said you liked the talk? This is a test. It was just a logic question. Guys, work with me here. Excellent. Good. I want to see that we got lots of numbers. That's all I wanted to say.

Okay, now that that's working, I would like to follow what Steve said, which is to have each person take about 60 seconds to say any comments, things you'd like to add on to Henrik's talk. So if we could start with Martin.

Is this working? Yeah. So, I mean, Henrik's done some great work on WebRTC and it's very interesting, especially the data channel stuff, in my opinion. But I think we shouldn't forget about other technologies. WebSocket is only, what, two years old? It's only just now becoming really available in a lot of browsers. And there's a huge amount of stuff we can do with a TCP connection to the browser.

Okay, Rob? Yeah, I mean, there's not a huge amount to add to that, but from my point of view, I think what's most interesting, particularly with WebRTC, is the use beyond audio and video. So I'm keen to dig a little bit more into those kinds of use cases, perhaps games, the CDN stuff, and just hearing a little bit more about what's next. I mean, we have the implementation today and there are issues with that, but what could we do next to make things better?

Excellent. John? Yeah, so I agree with the point made earlier about other technologies being highly relevant here. I think they each set out to solve different problems. And on the point about signaling channels going through a server intermediary: technologies like WebSocket are ideally suited to that, and WebRTC, of course, is ideally targeted at peer-to-peer direct connectivity. So I see it as a very powerful blend of technologies as the web evolves going forward.

And, Wesley? Yes, I see it.
The centralized part is a little... we've got to get past that. But first, as for this tool, all the remotes you're holding in your hand: it would be cool if we could use WebRTC to use the mic, and that way you wouldn't have people running around handing mics out. So, I mean, there are so many use cases, so many possibilities, I think it's... Excellent.

Okay, I think we'd like to move to our first question. I think that's Andrew Betts. Do we have a microphone for Andrew? I've actually got the lapel mic, so I should be fine. Just... I've got a hundred notes here, so just give me a second. Right. So WebSockets and other real-time protocols are commonly blocked by corporate proxies and content-inspection firewalls, and that's a particular problem for the sort of customers we have at the FT. How much is this stifling adoption, and what can we do about it?

Would anyone like to take that? Sure. Okay, sure. So I think, just taking a little step back into history: when WebSocket was first added to the HTML5 specification, it wasn't even called WebSocket, it was called TCPConnection. And when we saw that show up, we decided to hop on it and try to improve the protocol to make it web-centric and bring HTTP to bear, so it actually has a web-compatible handshake. And the reason we did that was to avoid tripping over some of the problems we'd seen before with plug-in technologies getting defeated by corporate firewalls. So we felt that was a huge step in the right direction.

Now, even given that, we still find situations where, even though all the traffic might be over ports 80 and 443, intermediaries that decrypt and re-encrypt can still intercept. But it's definitely a much better situation than it used to be. And in our particular case at Kaazing, we've implemented some heavy lifting on the emulation side to be resilient even in those situations. Initially, we wrote the emulation stuff to precede the adoption of the standard.
So we got started with WebSocket architectures over five years ago. But moving forward, the emulation tends to become something that's still there to support older clients, but also to address any intermediaries that might be getting in the way.

Are you saying it's not a problem, or that it's easy to work around? With vanilla RFC support for the protocol, encryption is a big help. But even then, there can be intermediaries that decrypt and re-encrypt on the critical path and can still intercept. And there are also many WebSockets that you don't want to encrypt, for other reasons, in terms of performance and things like that. From our perspective, we've seen it be an issue in the wild for non-compliant browsers. In doing emulation, we found ways in our emulation technique to address those shortcomings.

Any other comments? I'll just make one little comment. I would say, for this kind of audience, there's very little barrier; you can jump in with WebSockets. But if you really want to address all use cases, as John says, SSL does help, but you need to think about other fallback strategies. One of the things that would really help, and that we could maybe think about: at Pusher we do a lot of work to try to reuse successful transports again, but we don't really have enough information about the browser's network connection to always be able to make good decisions. So maybe that's something we can talk about later, how we can discover that. Like in the responsive images discussion this morning, we were saying the browser has more information than the web application. This is a case where the web application could do with some more information, really.

The audience clearly likes that comment, by the way. Um, one of the, uh... That was awesome. Okay. Keep that up. That was awesome. One of the comments on the Google Moderator said that TLS actually is an effective way around this. Any comments on TLS being a useful thing?
Yeah, John just said that. I mean, yeah. Yeah, I'm sorry. We find that. But unfortunately, there are exceptions. I mean, for us, you know, we've had schools, for example, and they often block SSL. So they've got a kind of pretty bad situation. Got it. Any other comments? Can we move on to the next question?

Well, this problem is very relevant to WebRTC as well. I mean, the whole concept of punching through a firewall to get something that you can push directly to an end user is actually really difficult. And this is something that the likes of Apple and Skype have spent lots and lots of money trying to solve. I would really love to see some of these technologies be more broadly available. There are a few open source projects; there's a server called stund. But a lot of these are really difficult problems, and I wish there were more openly well-documented solutions for dealing with this, rather than having to, as a blog post that came out yesterday did, actually try to reverse-engineer what's going on with FaceTime and what they're doing to multiplex ports and all kinds of stuff. This is not my area of expertise, but I know there are people here who are really good at this stuff. Please, please share your work. This stuff is needed in order to make WebRTC good.

OK. The next question is from someone... is it Gus Gussens? If we can get him a mic, please. Hi. There seems to be some functional overlap between WebSockets and WebRTC. When should you use one or the other?

I can take that. So the fundamental difference is that WebRTC is designed to be peer-to-peer. So if you're going server-to-client, WebSockets is the better fit. I think where the comparison really comes from is the fact that once you've established a data channel, it's largely the same API. Beyond that, they're different technologies. So if you're going peer-to-peer, WebRTC; if you're going through a server, then WebSockets. Yeah, go ahead, please.
So one of the other major differences, again, is looking beyond just video and audio. Multiplayer games, for example. WebSockets and WebRTC are incredibly different because one's UDP and one's TCP. So you have unreliable data connections and reliable data connections, which allow for very different ways of doing multiplayer communication. You just cannot do a twitch-based multiplayer game using WebSockets, because you have to wait for things to come through. But WebRTC is allowing us to use techniques that we've been using in native environments.

You want to jump in on that? I was just going to say, the support is a little flaky for reliable and unreliable data channels, but it's part of the spec to include both. So hopefully that becomes easier to use too. And one of the things I think we're likely to see, as support for unreliable data channels becomes solid, is that people will actually want to implement this on the server side. To be able to communicate with a game server via UDP from a browser is a pretty big thing.

Yeah, I think that's a great point. And also, I think we touched on it earlier as well: using server-centric WebSocket strategies for the signaling and setup around WebRTC is another interesting variant. One other thing I would mention: in deployments in certain industries, the security boundaries of these things often crop up. So it's a very interesting challenge to address where the trust boundaries reside, amongst the users and with the server, in a consistent manner across both technologies.

Isn't there a perception that this is like a classic web thing, where we've got one standard that doesn't quite work, so we come up with another standard that overlaps a bit? And it's just kind of confusing, because they are somewhat related. Or can we really say that there's a pure vision for each of these things and they both exist?
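The reliable-versus-unreliable distinction discussed above maps onto the data channel init dictionary. A hypothetical helper, assuming the spec's `ordered` and `maxRetransmits` options (support for which, as the panel notes, was flaky at the time):

```javascript
// Pick data channel options for TCP-like vs UDP-like semantics.
function channelOptions(reliable) {
  if (reliable) {
    // Reliable, ordered delivery: file transfer, chat.
    return { ordered: true };
  }
  // Unreliable, unordered delivery: twitch games, where a stale
  // position update is better dropped than waited for.
  return { ordered: false, maxRetransmits: 0 };
}

// In a browser, this would feed straight into the real API:
//   pc.createDataChannel('game', channelOptions(false));
```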
They can both be parallel, and it's OK. As I said earlier, I think these are complementary technologies that create a powerful combination as we move forward, and I think they each have well-suited purposes. I also think that for certain issues, like the ability to successfully navigate through these firewalls and proxies on the WebRTC side, in some of the fallback cases for WebRTC reachability, WebSocket can potentially lend a hand there too. Cool. Excellent.

We have another question then, from Matthias Kautzmann. Oh, my question is: will the WebSocket protocol replace server-sent events in the future? Why must we have both specs if WebSockets can accomplish the same tasks that SSE does, and more? I think that's another example of possible overlap.

I can take this if you like. Please. I would say that server-sent events is a very simple protocol, and I think that drives a lot of adoption early on. But I think what we'll see, especially as the WebSocket spec gets more widely adopted, is that some of the missing features become available. For example, compression is currently possible with server-sent events, but not WebSockets. That's the answer: bidirectional. Bidirectional, yes. Of course, there's extra functionality. But I think the question is, for just the use cases where one would currently use server-sent events, what's likely to happen? Multiplexing, for example, is another thing that might allow people to use a single connection to address many use cases on the page.

Just to add to that, thinking back to when we were working on the spec for this stuff: Comet was the flavor of the day when server-sent events was being standardized, as a way to effectively standardize Comet behavior several years ago. And around that time, WebSocket was starting up too. So this exact question came up during the standardization process.
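Server-sent events being "a very simple protocol" is easy to show: each message on the wire is just a few `field: value` lines ending in a blank line. A sketch of a frame formatter, using the field names from the SSE spec; the helper itself is illustrative:

```javascript
// Format one server-sent event frame for a text/event-stream response.
function sseFrame(msg) {
  var out = '';
  if (msg.id) out += 'id: ' + msg.id + '\n';
  if (msg.event) out += 'event: ' + msg.event + '\n';
  // Multi-line payloads become repeated data: lines, per the spec.
  String(msg.data).split('\n').forEach(function (part) {
    out += 'data: ' + part + '\n';
  });
  return out + '\n';  // blank line terminates the event
}
```

On the client, `new EventSource(url)` plus an `onmessage` handler is the whole consuming side, which is much of why SSE saw early adoption.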
And apart from the points we made already, one of the overarching arguments that was left was that the simplified interaction of server-sent events created a surface area where the browser had more control over the actual behavior. The idea was that on mobile platforms, this might allow the same abstraction to be retargeted at mobile-specific solutions that didn't necessarily involve making a traditional HTTP request over a traditional TCP connection and getting a stream of information back down it. So there are different implementation strategies for the abstraction. If the abstraction is left high-level, the versatility of WebSocket obviously means that you can cover that use case and many more.

And Wesley, you actually had a comment earlier, when we were talking about possibly SPDY push. This just adds to this layering. Yeah, so for the ways we send data to the client: there's bidirectional, there's WebSockets, there's SSE, and now we've got HTTP 2.0 coming out, and what we can implement right now is SPDY, which is available in Node and Jetty and other servers. So we've got these three options for how to push data to the client. And so with HTTP 2.0, is it going to be WebSockets layered over SPDY? Is that going to be the approach? Or is it going to be pure WebSockets? Or is SPDY going to have a mechanism to do bidirectional, like push, and then also receive messages on one channel? That was kind of my question, which I don't know the answer to.

Are there any HTTP 2.0 experts in the crowd? Yeah. That would be really interesting. Don't be shy. That's going to be good. Okay. Based on what I've seen so far with SPDY, I'd expect it to play out where HTTP and WebSockets go in parallel over the same enveloping SPDY connection, at least as an option, because to keep WebSockets out of that would create an additional resource hog on the client-server connection for the TCP endpoint.
So it seems like a very natural consequence of having selected an HTTP handshake to get started with WebSockets to let them all play nicely together. Cool. Any other comments people want to make about this? Okay. Christopher Frolic, you've got question number four coming up. Christopher?

So the WebRTC spec has driven centralized solutions, PubNub, et cetera, to a decentralized problem. What can we do to bring a secure, fully decentralized solution to bear?

That's tricky. I mean, ultimately you have to have some discovery mechanism, right? There are some attempts. I forget the name of the developer now, but there's a project where basically you end up copying and pasting SDP blobs back and forth over whatever mechanism you choose. It could be email, it could be whatever, and it still uses STUN, ICE, and TURN to actually connect. And with firewalls, I don't see that going away. I mean, I don't know how to solve that problem. I would love to see a solution.

Okay, I want to make sure we also encourage people to ask questions. We're running through our questions just fine, but feel free to just jump in and get on the delegate list if you'd like. Does anybody want to add anything to that question about centralized services? Or is it okay?

Well, I suppose one of the questions is: when do you actually need a decentralized solution, and when do you not? I mean, I was not quite clear on what exactly you mean. What are the benefits that you'd want to see from a decentralized solution?

So the first example that comes to mind is video collaboration. My family and I, we've all got smartphones, high-resolution cameras in our pockets, and we'd like to be able to shoot and collaborate on video together in real time. And the overhead of trying to figure out where we all are, routing out and back down to each other, seems like a lot when you just want a simple way to connect people who want to connect.
Okay, well, that's a perfect example for WebRTC. I mean, when you need that super-low latency and the UDP style of communication, it makes perfect sense. Well, in theory, STUN and ICE should help you locate each other, figure out that you are in fact on the same network, and then be able to connect. That's not a complete solution, but it's something.

Sure, so just to antagonize that point a bit: if we're all independently connected, say via mobile data, so we're not on the same network, and we're using just a plain vanilla web app, there's no clear path to a central server necessarily, in a quick and easy-to-use way. Right, now you're talking about connecting devices that really are not on the same network at all, and that's a whole different type of problem, I think. I mean, I think that would be awesome, but I don't quite know how to fix that one. Okay.

Kyle actually, I believe, has got a question. Let's make sure that he gets something. Is he sitting over there? This is a delegate, so we can make sure we get him in.

So it seems like there are two major reasons why centralization still happens. The first one's discoverability, obviously. If we don't have any way for two different people to get hooked up on a blind date, then they don't know what restaurant to show up at. So somebody has to introduce the two. But that seems like it should be solvable out of band; there have been other peer-to-peer networks, music sharing and BitTorrent and things like that. It seems like there should be ways to solve that problem. But there's another problem: one of the reasons people centralize stuff is because they want to bill for it. So when companies are creating products around this stuff, if we truly completely decentralized and everything was peer-to-peer, nobody would know that it was happening, and nobody could make a buck off of it.
So how do you see that tension playing out, where we do want to get rid of centralization, but we don't want to not be able to charge for it, for instance?

I can comment on that. So in my opinion, the telecoms of the future are Google, Facebook and the like. The old traditional telecom system is likely not to stick around. Somebody didn't like that. But really, I mean, if you think about it, the reason phones are so successful and so prevalent is that I can call anybody. I have an AT&T phone; I can call somebody on Verizon. Why can't I call somebody on Google Plus from my Facebook account? The whole concept of federation does not currently really exist in a broadly accepted way on the web. And I think the reason is no one's pissed. I mean, that's something that, if it was retroactively imposed on... No, some of us have... Now people are pissed. No, no, no, no. There we go. So that's why... which three people are getting together, I guess. So I don't know, I mean, that's the bit of... I mean, I think that's why there's this whole silo effect going on.

But is WebRTC architected to solve that problem? Is that part of the issue? Well, yeah, certainly it's providing just a very base level of technology as far as the communication piece. But in order to actually connect the two, you need other technologies on top. You need the discoverability piece. You need addressability. You need a strong identity, so you can say: I am a Google user, you can reach me at my Google account, that sort of thing. So WebRTC is one of the pieces; it's not all of it. And the great thing about WebRTC is that it's enabling all the web developers in the world. It's removing a huge number of barriers to actually innovating in that space. For too long, we've had desktop applications that have not been able to.
I think the discoverability side of things is a big problem as well, because right now it's just, like you said, up to the developers. And there are so many ways to do it. I mean, you could manually pass these blobs around and stuff, which is stupid, but you can do it. But we need to look at how to better solve those kinds of problems as well. Like pairing devices, things like that. It's not always two people trying to connect; it might be one person trying to connect two devices. For example, I might be trying to connect a mobile device to a TV to remote-control it. The discovery mechanisms for that are going to be incredibly different from connecting two people who have full control and can just join a chat room or something. So those are the kinds of problems I want to see solved a lot more. And people are kind of approaching that; there are a few attempts at replicating things like Apple's Bonjour in the browser, and requests for that from the browser vendors. But there's not been significant traction on that yet.

Well, it sounds to me like there's going to be a whole topology of applications. An awful lot of websites could just put in "video chat to our tech support line". That'll be a trivial thing for them to do, because it's entirely within their own stack. Then it's about getting bigger. And is that the issue? Are you saying that we need more standards for the bigger things to happen?

Well, I mean, these things have existed for quite some time. XMPP is extremely stable and extremely well used. And it's not something that web developers like. But they've gotten really good at solving these problems. And there are some efforts underway. Two specifically: one is stanza.io, which is basically the attempt to give a clean JavaScript API to XMPP. And another one is xmpp-ftw, XMPP for the web.
And thinking of things like BrowserID as a potential alternative, to be able to provide that strong identity piece. Because once you have that, addressability actually becomes fairly simple, in terms of knowing how to reach you. But these solutions are all very heavyweight solutions, right, like XMPP, even if it has a nice JavaScript API one day. I think what Rob's talking about is just saying, well, you've got two devices next to each other. Can they just make a little kind of chirp or some audio or video communication or something like that to discover each other? I think we're kind of seeing people approach those problems very, very slightly at the moment, like the whole idea of using audio to connect two devices. Oh, Boris Smus did that ultrasonic demo. Right, right, right, exactly. Yeah, so there's people already exploring, and potentially not exactly in this area, but there are solutions around right now. We've got solutions for pairing Bluetooth devices and stuff like that. Like, there's methods and means. I think we've just not seen people use them just yet. And we will, as people start using WebRTC beyond audio and video. And that will just come in time. Excellent. We've got a question waiting from David actually, from the audience. David Stickman. And then we'll go to Natasha's question. We see a rise of security issues where people are actually trying to open too many TCP connections to a web server and they're actually killing the web server by opening too many TCP connections. So people tend to actually remove keep-alive to avoid this kind of issue. What do you think about WebSocket and security for this kind of problem? And how can we solve this? Well, one of the things that was put into the spec back towards the end of the finalization of the standard was a clarification on what's the maximum upper bound on the number of WebSockets that can hit a server at a time. And it was unspecified. So we wanted to get that cleared up.
And at the time, the decision was made to not limit it, but also to add a caveat that only one handshake could be outstanding to the same target server at the same time. So from a browser invocation standpoint, spinning up lots and lots of WebSockets in rapid succession would give the server an opportunity to intercept and potentially detect repeated attempts and therefore mitigate that. In general, I think that things like SPDY as an envelope, as we touched on earlier, underneath these things tends to mitigate some of that at the physical TCP layer. And also, having higher-level abstractions on top of the WebSocket going forward that are more of a, you know, publish/subscribe and event-driven, more architectural approach to things allows you to partition up the universe of addressability, so you don't just think of it as connecting directly to the thing you wanna speak to just because that's the end of the WebSocket. Once you're attached to the architecture, then you can reach a whole myriad of services through that same channel. Cool. I think the next question is actually quite high level, about WebRTC in particular. Natasha, if we can give you that one. Thanks, so this question's from an anonymous sender. Bandwidth throttling, especially for multiple HD video streams, is extremely hard to do properly. Is there any risk that WebRTC will vary greatly by implementation? I think most likely, I'm assuming this is about multiparty. One to one is one thing. Moving to one to four, one to five, I think it gets to be much harder, and are we going to see it possibly just fall over because Firefox doesn't do it as well as, say, Chrome? This is actually, in my opinion, two problems. One is, so for example, Skype, what they will do from what I understand is they will elect the kind of strongest connection. It's not just connection, it's also about the ability to encode and decode video really quickly.
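The server-side mitigation described here — detecting rapid repeated handshake attempts from one client and rejecting them — can be sketched as a simple sliding-window counter per remote address. The function names, window, and limit below are illustrative, not from any spec or library; a real server would call this from its connection-upgrade handler:

```javascript
// A minimal sliding-window rate gate for WebSocket handshakes.
// makeHandshakeGate(2, 1000) allows at most 2 handshakes per address
// in any rolling 1-second window.
function makeHandshakeGate(maxPerWindow, windowMs) {
  const attempts = new Map(); // remote address -> timestamps of recent attempts
  return function allow(addr, now) {
    // drop attempts that have aged out of the window, then record this one
    const recent = (attempts.get(addr) || []).filter(t => now - t < windowMs);
    recent.push(now);
    attempts.set(addr, recent);
    return recent.length <= maxPerWindow; // reject once the window overflows
  };
}
```

The same shape works for any per-client throttle; production servers would also want an eviction policy so the map doesn't grow without bound.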
So you need a strong device and a strong connection to kind of be able to serve as a rebroadcaster. And currently that's kind of part of the spec, but it's not really there yet. Like, if you take a media stream from one user and attach it to another, it just doesn't work. That's absolutely crucial for this stuff to be usable for handling these other interesting network topologies. Mesh is only good for so many, because you're uploading your video to each person that you're connected to, and so is everybody else. So your upload bandwidth becomes a huge bottleneck. Well, didn't you say that WebRTC doesn't throttle down? So yeah, there need to be control mechanisms for adjusting bandwidth, and they just are not in place as far as I can tell. Nothing I've tried has worked. Well, that's actually, maybe, I actually was working at a video conferencing company, and that was probably the biggest aspect of our secret sauce: throttling bandwidth properly. So if WebRTC isn't thinking about it, it seems like it's kind of heading for a bit of a train wreck. I don't think it's that it's not thinking about it. I think it's just that the implementations aren't quite there yet. That's my impression. I've kind of raised it with some of the Google folks who are working on this stuff, and that's kind of the answer I get back. It's like, yeah, we know, this is a thing we're gonna deal with. And I mean, Google Hangouts, they're well aware. There's a low bandwidth mode for that, and we need to be able to do the same kinds of things without re-requesting a smaller video size. We need to be able to just kind of adjust streams based on quality and lost packets. Over here. This kind of problem as well, particularly the one-to-many bandwidth problem, is applicable in games as well. So that's where my background is, from my point of view. But if you're building multiplayer games, you're quickly gonna come up against that problem.
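The mesh bottleneck being described is simple arithmetic: in an n-peer full mesh, every peer uploads one copy of its stream to each of the other peers, so upload cost grows linearly with group size. A back-of-the-envelope check (the bitrates are illustrative numbers, not anything from the spec):

```javascript
// Upload bandwidth a single peer needs in an n-peer full-mesh video call:
// one outgoing copy of its stream per other participant.
function meshUploadKbps(peers, streamKbps) {
  return (peers - 1) * streamKbps;
}
```

So with a 800 kbps stream, a 5-person mesh call needs 3.2 Mbps of upload per participant — already beyond many residential uplinks — which is why the panel keeps coming back to relaying through strong peers or a server instead.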
And it's not something you can necessarily avoid in that circumstance, because you have to be communicating out to these multiple players in a game. But there are ways, like, people have solved these problems before, and there are ways to sort of reduce the amount of information you're sending. So for example, in a game, you don't send updates for people that aren't necessarily in your vicinity or something like that. So it's not always a technology problem. It's just a creative way of thinking about how the data's being sent to all the different people, and are you sending it to the right people? There may be a hundred people in the game, but there might only be two that are actually applicable and need the updates that you're sending. Another sort of separate point to that is, I think it's only the very early days in terms of peer-to-peer connections on the web. So if you look at, you know, historically you'd have clients like Skype that are open for many hours on stable, you know, broadband internet connections. What happens, for example, to battery life on a mobile device when suddenly you've got four different connections and it's sending the data four times, sending the video to four different clients? And you said earlier about strong peers. Is that actually realistic on the web? I mean, are web browser tabs open long enough for that to be realistic? I don't know the answer to these questions, but yeah, it's a big problem, too. I mean, you have around 50 people connected here all day long using a WebSocket connection. And I mean, let's see, my battery life, I've got about 25% left. So I mean, this is the same thing. I mean, people are connected, they're using this protocol all day long at this conference and, you know, it's draining the battery. So. And that's one to one to a server, right? What happens if it's one to many with a WebRTC data connection?
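The "don't send updates for people that aren't in your vicinity" technique is usually called interest management. A minimal sketch of the filtering step, with an assumed player shape ({ id, x, y }) chosen purely for illustration:

```javascript
// Interest management: given a game event at a position, return only the
// players close enough to care, instead of broadcasting to everyone.
function relevantPlayers(players, event, radius) {
  return players.filter(p => {
    const dx = p.x - event.x;
    const dy = p.y - event.y;
    // compare squared distances to avoid a sqrt per player
    return dx * dx + dy * dy <= radius * radius;
  });
}
```

With a hundred players, an update that only matters within a small radius might go to two peers instead of ninety-nine — the creative-data-routing point made above, independent of any transport technology.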
It also takes a lot of power to encode and decode video like that, and to do it with multiple people at the same time. So that's a huge drain on power right there. I know my laptop fans are going crazy when I'm testing with like five people. So it's a big problem. This seems to be back to the kind of discussion this morning about images. I mean, for real-time data, do we need to change the way data is delivered for someone whose battery can be constrained versus, you know, a PC? Yeah, it's the same thing. I mean, it's responsive video, essentially. Again, this comes back to games. Like, these kinds of problems are being solved in games, in the way you sort of throttle the communication depending on the bandwidth capabilities of each person and stuff like that. So there's a lot of lessons we can take from things like games. And this is why I'm so adamant on not focusing so much on audio and video. Although yes, that is the use case right now, some of the problems that we're approaching are being solved or being looked at in other areas. I think it's just interlinking the two and trying to sort of combine them and come up with a solution. But I am surprised that maybe as a web community, we always kind of wring our hands a little bit about saying everybody and inclusiveness and so forth. And Candy Crush doesn't give a damn about battery life. Right? They just say, you want to play it? Then just plug the damn thing in, right? And now I'm not saying that's correct, but to a certain extent the native community kind of washes their hands of this a little bit. Are we trying too hard? Just saying. I think enabling mobile is really, really, really important. Because, yeah, certain things don't matter, obviously. Like, if you're playing some hardcore game, you're not going to be sitting there forever, maybe. But to not take it into consideration, when these are the actual physical hardware limitations that we're dealing with, I think is selling ourselves short.
Yeah, and I also think it's important that, you know, you've got things in place to avoid accidentally doing it. Right? So we want to make sure that developers aren't accidentally draining the battery. I mean, if it's a deliberate decision, that's one thing, but if they just happen to be on the end of a bandwidth connection that's a little less capable, we want to be able to react to that. I think we're getting the APIs to play with this kind of stuff as well now. And actually, I'm not seeing people implementing this yet, but for example, using the battery API, if the battery level is at a certain point, changing the way you're communicating and stuff like that. Yeah, yeah. My point, by the way, wasn't to seriously say we don't care, but to say it is interesting that some apps don't care. And we just do seem to be a little holier-than-thou sometimes, which is actually good, to have that aspiration, that's all. I think that may very well just be the difference when you're thinking of it from a platform perspective versus thinking of it from an application perspective. Yeah, okay. Okay, actually, question number six we've already answered. I believe Steve Ther has a question. Right here, he needs a microphone. Actually, I think it was Rob who just touched on this. WebRTC seems to have focused more on the audio and video streams, and data channels are a special case. Shouldn't all peer-to-peer communication just be data streams, where the different data types can be interpreted as appropriate? I mean, not all peer-to-peer is, how do you mean, like, should they all be the same? Like they're just data channels, or? Yeah, I mean, the question was sort of saying, let me rephrase it: why are we focusing, yeah. Most of the panel has just purely been talking about audio and video, but audio and video are just a type of data. Right. Okay, it has specific characteristics.
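The battery-API idea mentioned here — degrade the stream when the battery runs low — boils down to a small decision function fed from something like `navigator.getBattery()`. The profile names and thresholds below are made up for illustration:

```javascript
// Battery-aware adaptation sketch: pick a media profile from battery state.
// Tiers and cutoffs are illustrative, not from any spec.
function pickVideoProfile(level, charging) {
  if (charging) return "hd";   // plugged in: no battery constraint
  if (level > 0.5) return "hd";
  if (level > 0.2) return "sd";
  return "audio-only";         // preserve the last of the battery
}
```

In a browser you might wire this up roughly as `navigator.getBattery().then(b => apply(pickVideoProfile(b.level, b.charging)))`, re-running it on the battery's `levelchange` event.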
You know, why is the focus purely on those rather than just coming up with a more generic data channel? I was just gonna quickly do a one-liner: I think it makes a better demo, which is why it landed first. That's my opinion. It's also kind of a special case in certain ways, in that you have to, you know, you're trying to negotiate encoding types, et cetera. But yeah, I mean, fundamentally I agree with you. I think with peer connections, you're still doing the same thing. You establish a single peer connection. You add and remove data channels. You add and remove video and audio channels. It's still one peer connection. At least that's the way it's written in the spec. So hopefully it's actually, yeah, I don't know. Go ahead. Yeah, I think, I mean, generally, yes, they are types of data, but the media streaming is incredibly different and much more complicated than just sending sort of basic data across. So there is a reason why they are separated. But I truly do think that the data channel side of things, like sending generic non-audio, non-video kind of data, is where WebRTC has an incredible amount of strength, and we're not really focusing on that yet. So right now we're not exploring that too much, but I'm very interested to see how that is explored. Because audio and video, yeah, is the interesting thing, and that's kind of why WebRTC was created. And that's the exciting thing. Imagine being able to call Skype from a browser. That's an incredible use case, but now, where do we take that technology? What's next? And I think the data channels is where that's sort of hiding. And of course, as far as I understand, there's some thought of bringing in other kinds of media streams, not just audio and video. What about temperature information or sensor information? I don't know if anything's happening there, I don't really know, but I've heard it mooted as a possibility.
Yeah, well, actually, there's one comment I wanted to make, which is that so often we expect these specs to be perfect, and if they're not perfect, we get really upset at them, and look how bad the web is, and so forth. But shouldn't we maybe just cut WebRTC, and maybe others, a little bit of slack and say, to your point, what can we do as a community to kind of exercise things and then give some feedback? What can we do to do that? Yeah, I mean, definitely, I would suggest everybody in here, if you haven't, start playing with this stuff. It's phenomenal. I mean, I hate the word disruptive, but it's ridiculously disruptive technology. Like, I mean, I seriously built my own telecom to call my mom in Sweden with. I mean, come on, like, I'm not supposed to be able to do that. And so this is the kind of power that it provides you, and I would love to see everybody here really get in and start hacking on this stuff. Stephen? Well, I guess just on that last point, just what you said there, if we could quickly poll the panel. You say this is awesome; what other use cases would the panel have? If you could each nominate a use case for WebRTC, something cool that you'd like to see that you think this technology enables? Excellent question. Can we just go through? Video conference, no. Yeah, I mean, so like, it would be cool if with the remotes that you guys are using, like I said earlier, you could just pull up getUserMedia and we wouldn't have to hand you a mic. You know, you could just speak into your phone's microphone and be broadcast. So that would require, like, hooking up to the AV here. It would require, you know, writing the code to handle the binary data. But yeah, and then there's, like, a lot of geolocation, like in-room geolocation. Like, there's been a lot of investment made in companies doing this lately, where you actually can find devices in the room with much greater granularity using, like, you know, a geolocation API.
So with WebRTC, you could say, okay, I know this, I guess you would have to have some centralized mechanism, but you could find out all the people who like the color blue, and then you would be able to pull up a video chat with them in this room, you know, using that newer technology. So there's complementary technologies, I guess, that would be required for that use case, but that's an idea. Okay. Yeah, I think just enabling the internet of things, machine-to-machine kind of stuff, you know. Because a lot of times with a centralized model, we assume that even though these devices may be talking to one another, they are actually able to potentially mediate or coordinate against some connected server system. So, you know, just not having to rely on that always being around, but still blending these two worlds together seamlessly. I think that's a tough challenge to do well, but just raising the abstraction a little bit so it makes these things easy and gets it out of the developer's hands, trying to solve all these complex problems. We call these things the web of things instead of the internet of things, because a lot of the challenges that we see in trying to connect together those IP or lower-level protocols with WebRTC, which we don't actually see so much with WebSocket because it's web-centric, are addressed by raising the bar a little bit and connecting everything together at the web level instead of thinking at the internet level. So we say web of things. Cool. I would love to see peer-distributed rebroadcasting. Why couldn't I pull up my phone and be able to just stream something off my phone to the entire internet? If you have a proper rebroadcasting ability, where a peer could relay that feed, you could basically turn every person on the planet with a smartphone into a news reporter capable of live broadcast, which is phenomenal.
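The reason peer-distributed rebroadcasting is so attractive is that audience size grows geometrically with relay depth while the original broadcaster's upload stays fixed at its fan-out. A quick illustrative calculation (the fan-out numbers are arbitrary, just to show the shape):

```javascript
// Viewers reachable by a relay tree: the broadcaster uploads to `fanout`
// peers, each of whom relays to `fanout` more, down to the given depth.
function viewersReachable(fanout, depth) {
  let total = 0;
  let layer = fanout; // peers at the current tree level
  for (let d = 0; d < depth; d++) {
    total += layer;
    layer *= fanout;
  }
  return total;
}
```

With a fan-out of just 4, five levels of relaying reach 1,364 viewers, and the phone doing the original broadcast is still only uploading 4 copies of the stream. Contrast that with the full-mesh arithmetic earlier, where upload cost grows with every added viewer.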
And that's, again, video and audio, but I think that particular one's really interesting. Of course, you can do the exact same thing with any other type of data. Examples, sheesh. Can't think of anything off the top of my head. I've been too focused on the video stuff. Cool. I've got to say, I mean, the simplest one is games and how that's gonna completely change now. We've got UDP and unreliable data. But the second one is the web of things, like these interconnected devices, and actually having the non-human interaction, and devices talking to each other. That's where it gets really interesting. We have phones powered by JavaScript. The entire operating system is written in JavaScript. That's incredible. We now have Arduino devices powered by JavaScript and stuff like that. And how are things gonna change, and how could we use WebRTC to interlink those devices, and what does that now allow us to do that we couldn't do before? Using web technologies that already are interoperable with all of the other things that we have available to us, like the hardware APIs and just all of the other web APIs. That's where things are very, very interesting. Yeah, I think I would just reiterate that point. I mean, it's enabling. It's disrupting the control. And when you need an ultra-low-latency connection between two objects, it's now possible for web developers to build those things. So you have a control surface for a quadrocopter or something. You can now build that in a web browser. It's amazing. But isn't it gonna be an issue that not too many Arduinos will be running Chrome right now? Right, so how do we get WebRTC into these lower-level devices? Someone needs to build it. Okay, I mean, I just want to make sure I was following what you said, because what you said is great, but there's no immediate way for us to do that, correct? Right, this is the room of future developers, right? Yeah. Somebody can do it.
I mean, this is partly why we were asking about WebRTC on the server and things like that. Once you can get it into things like Node.js, then there's a lot to do. Yeah, that'd be huge. I think we have a few minutes left, and Christopher has been patient. We'll do him and then I think you, okay? Christopher, no, that Christopher. Sorry, I apologize if I sound like a broken record, but as soon as you said enabling technology, I'm brought back again to the idea of decentralizing the solutions. I mean, you think of dissidents, dissident suppression, say in the Arab Spring, and the raw potential that exists in the phone to directly connect people when a central network connection isn't possible, or emergency response, as earthquakes and hurricanes take out centralized communications. It seems like the technology exists to help solve these communication problems. Yeah, go ahead. I mean, that's a really interesting problem, and that's something, I mean, I've not developed for, but I've been thinking about it a little bit, and the whole idea of creating sort of mesh networks out of nowhere using these technologies is incredibly powerful. Like, the ability to then spread communication amongst an ad hoc local network is crazy. You have to get over the actual problem of connecting the two devices, but if we can solve that, then what you can actually do with that is incredibly powerful. It's kind of like, the only way you can do that now is to carry around some extra little antenna with you, with an extra power supply, and create your own network. I mean, that would have to be, like, standard issue for emergencies. Yeah, I mean, that's today, but your phone cannot power that kind of transmission and that kind of capability. And you've been patient, one more question? Yeah. It doesn't seem that you need a web browser in order to have WebRTC. You could always do it through Node or through a JavaScript-on-chip device. Right, it's a simple matter of programming.
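The "spread communication amongst an ad hoc local network" idea has a classic shape: flood the message to every neighbor, with a seen-set so it never loops. A sketch over a plain adjacency map (all names illustrative; real mesh protocols add hop limits, message IDs, and retransmission):

```javascript
// Flood-with-dedup over an ad hoc mesh: forward a message to every
// neighbor, skipping nodes that have already seen it.
function flood(graph, start) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length) {
    const node = queue.shift();
    for (const peer of graph[node] || []) {
      if (!seen.has(peer)) { // relay only to peers that haven't seen it
        seen.add(peer);
        queue.push(peer);
      }
    }
  }
  return seen; // every node the message reached
}
```

In the disaster-response scenario, each `graph[node]` entry would correspond to whatever peers a device can actually reach locally, and a message hops outward until the whole connected partition has it.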
It just doesn't exist at the moment. There have been some efforts. There's actually a Node WebRTC library that attempts to bind to libjingle, and I don't quite know the status of that. It hasn't had much activity. I think we've got the wrap-up. So thank you very much. If you like the session, please give us a vote. And if you don't, just give us a vote. And thanks very much. Whoever's writing the script to do, like, 50 votes, go ahead and up that to 100. Awesome, thanks very much.