Hi there. Right. Before we get started, this is a bit of an interactive presentation. So what I want you to do is visit this link, which is a bit.ly link. A lot of this is demo rather than talk, and we're relying on conference Wi-Fi, which is a notoriously bad thing to rely on. So if you are going to be doing any downloads or updating programs, try not to, if that's all right. Cool. We've got 30 people. And that's a lot. Cool. Right. So I'm Ben. I'm based in Oxford, England at the moment, and I work for a company called White October, which is a web development agency. We also organize conferences. We've got JS Oxford, which is a small meetup, much smaller than this, and yeah, I help run that. So we're going to talk about multi-screen interfaces. That's where you have several devices that are part of the same interaction with the web. This talk is split into four parts. We've got capabilities, the types of capabilities that devices have that we can share with one another. Transports is about getting that data between different devices. Then some examples of interfaces that use this kind of tech. And then the future, what kind of things we can create by doing this. I've never had over 100 people on this, so this could be interesting. So first, capabilities, right? When we interact with the web, we do it through a number of different devices. And you might think of the baseline as being a laptop. So I've got my laptop here that's loading that web browser. And here we've got a keyboard and a touchpad, so we can type in information and we can move a cursor to click on stuff and drag things. But we interact with the web with different devices, right? So I've got my phone here. And this phone is a very different type of device. So this works, yeah, cool. So instead of moving the cursor and clicking stuff, I can just press what I want to.
So I can just touch the objects, which is a little bit closer to that interaction. Because the touch interface is combined with the screen, I can use that to put up a keyboard and I can type stuff. And we've also got different interactions, so we can touch and drag, drag back down, and swipe to the side. We can do multi-touch interactions, so rotating stuff around, resizing things. We can really physically interact with this device in a really unique way that's very different from the way you interact with a laptop. And this device has got more things as well. So we've got geolocation, so we can find out where we are in the world, like longitude and latitude. And I need to allow that. So you can kind of link this to where you are. So we've got longitude and latitude, and you can see that third parameter is the accuracy of that. We've got orientation, so you can see which way around this device is pointing, so you can see where it's pointing in the world. And we've got motion as well, so you can see that as you move it around, those axes change. So what happens if we are able to connect these two devices together? They're both very different, but they both run web browsers. And the web is a great platform for this. So I can load up this presentation on this phone and it's been resized to fit with this viewport. And the way you change slides on this is by dragging upwards. And because these two are connected, what I've been able to do is change that HTML page on that device, which doesn't have touch capability; by sharing the capabilities of this device, I'm able to slide with my finger, which is something that's not possible with one of these devices on its own. Yeah. And here we've got a file input. And if I was to choose that on my laptop, it would open a file dialogue, so I could choose something from my desktop.
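The longitude, latitude and accuracy values just mentioned come from the Geolocation API. As a minimal sketch: the `describePosition` helper is hypothetical (not part of the API), and it takes the position object as an argument so it can be exercised against a mock outside a browser.

```javascript
// Formats a Geolocation position into the longitude, latitude and
// accuracy values shown on the slide. `describePosition` is a
// hypothetical helper, not part of the Geolocation API itself.
function describePosition(pos) {
  const { latitude, longitude, accuracy } = pos.coords;
  return `${latitude.toFixed(4)}, ${longitude.toFixed(4)} (accuracy: ${accuracy}m)`;
}

// In a browser you would wire it up like this:
// navigator.geolocation.getCurrentPosition(
//   pos => console.log(describePosition(pos)),
//   err => console.error(err.message)
// );
```

Device orientation and motion follow the same shape: you listen for `deviceorientation` and `devicemotion` events on `window` and read the angle or acceleration values off the event object.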
But there are hints on this which change it slightly on this device. So when I choose a file, it opens up the camera. So yeah, I'm going to take a photo of you guys. So lights on, brilliant. So what happens here? Sorry, one second. Excellent. So when I save that, it can then stream up to that other device. And what's nice about this is that these two devices are very physically different, right? This phone has a really nice camera on it, and it's mobile, so I can move it around and get into a position that's better for taking a photo, whereas this device is connected to this massive projector. So we're combining the physical properties of those devices using the web. Cool. So I'm not going to have many code examples, but I am going to mention the libraries that I use, some of them. For those two demos, we used PeerJS to create a WebRTC data channel connection between the two browsers, which we'll get into later. And a related project called BinaryJS, which basically allows you to take large files like images and send them over a WebSocket connection. So that's how I was able to share that image. So, detecting browser features: you want to be able to utilize the capabilities of a device when someone visits your page, and you can find out these features by looking at the window and document objects. And one tool for doing this is Modernizr. Who here uses Modernizr? That's it. Yeah. People. Great. So these are the features that you can detect using Modernizr, just by checking whether they're available or not. And what I was able to do is, when you first connected to that URL, we looked at the features from this list, which every device sends when it connects, and now we can actually visualize the features of every device in the audience. So you can see that we've got quite sporadic support for the Battery API.
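Feature detection like this boils down to probing the global objects for the relevant properties. Here's a rough sketch of the idea (this is not Modernizr's actual implementation, which does considerably more careful checks; the function takes a window-like object so it can run against a mock):

```javascript
// Probes a window-like object for a few of the features from the
// slide, by checking whether the relevant properties exist.
function detectFeatures(win) {
  const nav = win.navigator || {};
  return {
    geolocation: 'geolocation' in nav,
    touch: 'ontouchstart' in win,
    deviceorientation: 'DeviceOrientationEvent' in win,
    webrtc: 'RTCPeerConnection' in win,
  };
}

// In a real browser you would call: detectFeatures(window)
```

The detected map is small and serializable, which is what makes it cheap to send up from every device in the audience.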
Firefox has really good support for this. And we've got bits and pieces like the Gamepad API. Actually, what's interesting about the Gamepad API is you can see how it's more solid on the left-hand side, and that's because people with laptops were able to type in the URL faster and connect. So with laptops, it makes more sense to connect a gamepad, whereas a phone you might use as a gamepad itself, but that's a different thing. Yeah. Cool. I think the thing to notice about this is that there's a lot of blue here. There's a lot of things that we can use, right? And a lot of them are quite varied. So stuff like Modernizr helps you work out what you can and can't use, and that's the important thing: devices vary, is what I'm trying to say. Cool. So we're going to concentrate on four of these features, and if you get your phones out, there's a bit of an interactive-y thing here. What we've got is geolocation, touch events, device orientation, which we've already touched on, and also WebRTC, which allows that kind of peer-to-peer networking and audio-video streaming between devices. So what we're going to do is, if you look at your phones, you should see four sliders for these features, and I want you to say how experienced you are with each of them. On the left, you've not touched it at all. On the right, you know it inside out. And cool, we're getting quite a good spread. So yeah, myself, I'm kind of middle of the road for geolocation and touch events; device orientation I have hacked around with a little bit, but not to any depth; and WebRTC I think I've done more on. So is everyone cool? Cool. Right, so now your sliders should change to awesomeness: how we think these features are going to impact the Web now or in the future. So cool, we're up on the awesomeness, which is nice.
So yeah, I think geolocation is really important, because it gives the Web a way of interacting with our world. And yeah, touch events have removed a huge bottleneck in the way that we interact with computers. Cool. Right, is everyone all right? Cool. So now that we've got those two data points, we can plot them together. And we've got this kind of scattergram of awesomeness and experience. And I like to think of this as four different quadrants, right? So we've got this top right side, and that is where we think the technology is awesome and we're really into it; we know how to practically do stuff with it. And in that space we can really do some awesome things; that's where we can move the Web forward. This bottom right one: there's quite a lot of people who think things are awesome but haven't actually touched them at all. And that's a really great place to be, because it's very easy to move from that quadrant upwards; you already know the technology is awesome, so it's really easy to get to know it and learn how to use it practically. And the top left one, that's a really interesting space. That's where you know something really well, but you think it's rubbish, which is quite an interesting place to be. And it's harder to move from that quadrant across to the right than it is to move upwards from below. And then the last quadrant, where we think things aren't awesome and we aren't experienced in them: that's actually a really good place to be as well, because as you get experienced, you can see how awesome a technology can be. So, yeah, you've got all this space to explore. All right. So that, as I mentioned, uses Modernizr to detect those features, and then PubNub for gathering them, and also for keeping your slides in sync with mine.
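The gathering and syncing works on a publish/subscribe model: every device subscribes to a channel with some state attached, and messages published to that channel fan out to all subscribers. Here's a minimal in-memory sketch of the pattern (this is an illustration of the model only, not the PubNub SDK's actual API):

```javascript
// Minimal in-memory publish/subscribe channel. Each subscriber joins
// with a state object (here, its detected features), and every
// published message fans out to all subscribers.
class Channel {
  constructor() { this.subs = []; }
  subscribe(state, onMessage) { this.subs.push({ state, onMessage }); }
  publish(message) { this.subs.forEach(s => s.onMessage(message)); }
  states() { return this.subs.map(s => s.state); }
}

// Usage: devices join with their feature state, and a slide change
// published once keeps every connected device in sync.
const presentation = new Channel();
presentation.subscribe({ touch: true }, msg => console.log('phone got', msg));
presentation.subscribe({ touch: false }, msg => console.log('laptop got', msg));
presentation.publish({ slide: 2 });
```

A hosted service like PubNub provides the same model over the network, plus presence, so the presenter can read back the states of everyone subscribed.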
So PubNub is basically publish-subscribe infrastructure, a pretty cool thing. And what you can do is have every device subscribe to this presentation channel with a certain state, and that state has your features in it. And I'm able to look them up and plot them on the graph with D3. Cool. So we've been talking about the capabilities of the web. But I think an important thing to note is that the web doesn't stop at browsers. The web's a lot bigger; browsers just present the web to us. And there's a whole lot of other devices which don't actually have a browser, or even a screen. So, this part hardly ever works, so we'll see. Yeah, cool. This is a SensorTag, right? It's a little Bluetooth Low Energy device, with no browser on there, that I can connect to and expose through web APIs. It doesn't have a screen and it doesn't have a browser, but we can still make it part of the web. So for instance, some of its capabilities: it's got buttons, so I can press these on and off, and you can see that updating those two buttons on the top. It's also got other data, so you've got temperature. These two numbers are the ambient temperature and also the point temperature, so I can work out how hot something is just by pointing the device at it. And you can see that my computer's slightly hotter than the room. It's a very different way of interacting with the web. You've got a basic gyroscope for motion, which is a bit more basic than the one on my phone. But what's interesting about this is that it's powered in terms of months or years rather than days, which is quite a different scale of thing. And we've also got a magnetometer, which detects the magnetic field around it. So we're sensing very different things about our world through this device, but we can make it part of the web by exposing it over web APIs, utilizing endpoints and HTTP.
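Exposing a screenless device over HTTP can be sketched roughly like this. The sensor readings below are mocked (a real version would read them from the SensorTag over Bluetooth Low Energy), and the endpoint shape is an assumption of mine, not the talk's actual implementation:

```javascript
// Stand-in for the real BLE read; values are mocked here. A real
// version would pull these from the SensorTag over Bluetooth LE.
function readSensors() {
  return { ambient: 21.5, point: 24.0, buttons: [false, true] };
}

// Build the JSON payload a web endpoint would serve for this device.
function payload(readings) {
  return JSON.stringify(readings);
}

// Serving it over HTTP in Node would look something like:
// const http = require('http');
// http.createServer((req, res) => {
//   res.setHeader('Content-Type', 'application/json');
//   res.end(payload(readSensors()));
// }).listen(8080);
```

Once the readings are behind an HTTP endpoint, any browser (or any other web client) can treat the device as just another part of the web.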
And I think Jenny puts this really well: we make a mistake if we think of the web as only HTML. The web is so much bigger than that. And as web developers, that's our domain, and we can do some great stuff in there. Transports, cool. So if you've got several devices next to each other, one of the big issues you've got to deal with is getting data from one to the other. And using the web, this can be a bit of a problem. So here we've got this big circle, which is a web server, and we've got these two browsers connected. The first browser changes some information and puts it up to the web server. But we get this problem that when the second browser comes along, it doesn't know that that information has changed on the server, so it doesn't know to make that extra request to say, give me the new information. And there are a few ways around this. The first one's polling, where you just repeatedly make requests to the web server to get new information, at a great overhead sometimes. We've got long polling, which is slightly better, where you make a request and the server holds onto it until the data's changed, then returns it. Then we've got WebSockets, where you can create a duplex connection to the server, so basically as stuff comes up, you can just pipe it straight back down the other side. So that makes it... oh, cool. Don't know what happened there, cool. But then all our traffic still has to go through a server, which is where WebRTC comes in. WebRTC lets you negotiate a peer-to-peer connection between those two devices, bypassing the server. So we're going to look at that. I have got a lot of demos, this is kind of weird. What I'm going to do here is send events from this device to that one. And I'm going to do it over four channels. So we're going to use WebRTC, WebSockets, long polling, and a delayed long polling to emulate a server that's further away or slower.
So watch carefully, because it's quite fast. So I can start drawing. And you can see these top two are streaming those points, whereas the bottom two are coming in in chunks. And you might be able to make out that the WebRTC one is slightly faster than the WebSockets one. And that makes all the difference when you've got devices that are next to each other, because as I start drawing, when I'm looking at that other screen, if it reacts slightly quicker, it feels a lot more natural, so that's important. So what we can do with this is: we've got these stars, but it's kind of hard to see, so we can transform them and use the Z axis for time. So if you imagine, as those drawings occurred, it's kind of coming out at you. And now that we're thinking of it in 3D, we can actually rotate this round and look at it from the time axis. So you've got, thanks. Cool. So you can see the two streaming things, the WebRTC and the WebSockets, are good and smooth lines. They get a bit chunky here, but that's more to do with my implementation. So if I've got any advice: if you're making something real-time, try not to use four channels at the same time. That doesn't work very well. You can see the long polling chunks get bigger as you get a higher latency to the server, and that's because in between the response and the next request, more data has arrived on that server. A point I find really interesting is that little red dot here, the first one. That's the first long polling response, and it comes in at pretty much the same time as the WebSockets one. What's interesting about that is that if you've got sporadic, one-off updates, then long polling can be as fast as WebSockets. You don't need the full streaming fanciness; you can use something more basic and easier. So for that, PeerJS, which I mentioned before, does the WebRTC connection. Socket.io does everything else.
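The long-polling approach described above can be sketched as a client loop. `request(cursor)` stands in for an HTTP call that the server holds open until there is data newer than `cursor`; the signature is a hypothetical one of mine, injected so the loop can run without a network:

```javascript
// Client-side long-polling loop. The server holds each "request"
// open until the data changes, then responds with the new data and
// an updated cursor for the next round.
async function longPoll(request, onData, rounds) {
  let cursor = 0;
  for (let i = 0; i < rounds; i++) {
    const { data, next } = await request(cursor);
    cursor = next;
    onData(data);
  }
}
```

With WebSockets or a WebRTC data channel this loop disappears entirely: the server (or the peer) simply pushes each update down the open connection as it happens, which is why those two channels stream smoothly in the demo.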
And three.js turns things around; that's all I use it for, really. Cool. So we're going to talk quickly about the types of interfaces you can build by considering devices in close proximity to each other, and why these are different from what we're doing just now. So this is our Christmas party last December, and what we're doing is playing a game of Pictionary. The person up at the front is waving a marker in front of a webcam and drawing pictures. And everyone else in the room is connected on their phone to that service. They're seeing those pictures, and they're able to guess what the image is. And then those guesses come up on the right-hand side of the main projector. And what's nice about this is everyone in the room had their own devices out, but it only really makes sense because we're in the same room. So we're using web technology, which is designed to connect together the world, but we're all in this gallery. And another thing is everyone was on their phones at a party, but that wasn't a bad thing; it really fitted with our social interaction. What we were doing was not playing with the web, we were playing a game. And the web augmented that game and made it more fun, possibly. But yeah, that was our Christmas party. Another interesting use case of having separate devices. Now, this isn't a website; this is some sound-editing software. It's got this kind of skeuomorphic sound desk at the bottom of it, and you use your mouse to drag sliders up and down or turn knobs around. And you can get a companion app for this which runs on the iPad. And what's nice about this is that turning dials on a touchscreen is a lot more intuitive than dragging them with your mouse. And sliding sliders is also a lot more natural.
And because it's a multi-touch interface, you're able to drag multiple sliders at the same time to edit things, which is something that's not actually physically possible in your interaction with the laptop. So by handing that off to the second device, you're able to utilize its capability set a lot more. This isn't a web project, but it could be. This is another interesting one, which is YouTube TV. If you go to youtube.com/tv, it switches the modality into being less something you navigate; it's full screen and all that kind of stuff. And you can connect your phone to that, and you're able to choose what videos you want to play and skip through them or whatever. And the reason I picked this is that it could potentially be quite a weird interaction, where you're doing something on this device and that one's changing, but it replaces a really natural concept in our minds, which is a remote control. Everyone is familiar with changing channels on the TV, and using the web to replace that functionality allows us to instantly know what we're about to do and understand it. And this is what I think we should take as inspiration for how we develop for devices. So this is a girl trying to log in, using a mouse, I guess, for the first time. And it's like, technology can come around again to be natural to us. And with touch screens, for instance, there's nothing more natural than being able to reach out and just touch and transform something with your hands. And yeah, I always like thinking about this. Cool. And these are some patterns for multi-screen interfaces (this is from a few years ago, actually) and how you share data around. And the first one we see is coherence.
And that's the one we'll probably have most experience with; that's responsive web design, making content suit a particular device and be more appropriate for it. And then the other ones are concerned with how you share data between those devices, and how a user feels when they're using them. And screen sharing's an interesting one, and we're going to look at that just now. OK, cool. So we had 196 devices connected, and as well as all the features that we saw at the start (197, great) we also captured the screen resolutions. So we can view those now. This is every resolution, every screen, in all our devices. And if you think of standard screen-sharing apps or whatever, what you would do is overlay an image over all of these and have it duplicated across them. But because we're dealing with all these devices in close proximity to each other, we can actually do screen sharing in a slightly different way. So if we think of this space as being virtual, we can arrange all these screens within that space. So we're basically transforming every device into its own position. And what we can do now, when we choose to share an image, is that every device is able to show its own part of that image. That might not have worked. Did it work? Oh, great. Sorry, you got the sky. That's a bad one. I tried to pick a complex picture, but yeah, the sky sucks. So yeah, this only makes sense because all our devices are together. So it's a different type of interface. And what I can do is actually attach this other device here. So this is down the bottom; I've got a tree. And this device is connected slightly differently: it's got a WebSockets connection. So what I can do is just touch on here. Possibly. I might reload that. Yeah, so I can touch, and I can actually move this image around. And then when I let go, that publishes over to all your devices.
And also, we're utilizing, yeah, we're doing a lot of stuff there. But I can add another device to this. So I've got this tablet here, so I can add this second device, and everything should rejiggle. And I can touch with this new device, but I can also still touch the old device. And what we've got is an inter-device multi-touch gesture. What's quite nice about this is it does feel quite weirdly natural. Maybe not with so many people. For me, touching those two devices, I know what's going to happen, and it feels like the right thing does happen. And yeah, so we can do that. So, the future. Taking this stuff forward. I really like this example. This is an Atari video game console game called Adventure, from around 1979. And in games of that era, games like Pong or Pac-Man, the player's model of the world was constrained by the screen. Your entire world within the game was that screen. So in Pong, your world was this tennis court, and in Pac-Man, your world was this crazy maze. Whereas this game had something slightly different: if you walk off the bottom of the screen, you appear at the top of a completely new part of the world. The player's concept of their presence within the game has just been drastically broadened. Suddenly you've got this huge cave system, possibly infinite. And what I find really interesting about this is that the technology didn't change at all. It's the exact same hardware, the exact same developers developing it. But their concept of how a player can play has broadened the horizons of how a person can interact with that technology. And I think it's the same with the web: although in the next few years there's going to be some really insanely great stuff coming out, the actual way that we innovate is through concepts, and the way that we imagine people can interact with the web. And I'm going to leave you with one last slide.
This is my favorite quote at the moment, and I'll read it: "My freedom will be so much the greater and more meaningful the more narrowly I limit my field of action and the more I surround myself with obstacles." And what I take from this is that for us to be truly creative, we need to have constraints, and we need to be pushing up against those constraints. And it's up to us to choose where those constraints lie. So a constraint might be to have a button look exactly the same across all browsers, or to try and fit some kind of data into some library which doesn't really suit it. But we can actually broaden our constraints, almost outside the implementation, and think about how people interact with technology and how we can push that forward. Cool. So yeah, I'm Benjamin Benben on Twitter, and yeah, White October sent me today. So thank you. That was awesome, thanks. What do you actually do at your company? Yeah. Not this, sometimes. So you saw that picture game; that was a kind of internal project where we're trying to push the boundaries and stuff. We are a web agency and we put on conferences, but it's very important to us to be trying to push the web forward. So this isn't necessarily my day-to-day work all the time, but it's something that's important to us. Cool, thanks. More questions? Come on, someone's got to have a question. I want to know more about the spinny thing. The spinny thing, the sensor tag? The sensor tag, and then the spinning into the time graph, that bit. So yeah, what I liked about that, the transports diagram that turns around, is that it looked very different when I was originally thinking, ah, I could totally do that. And basically it was just through playing with the data, drawing stuff on my phone and streaming it across, that I could be like, ah, I'll just turn it around like that. And I was like, ah, shit, it's brilliant. What did you actually use to do it? D3 for the graphs. No, sorry.
That was three.js; I guess it was mostly D3. Cool, which we'll learn about. Any more questions? Yeah, there we go, got a couple. One on this side, either one. Yeah, I made, well, this is weird. I made a little program to synchronize a song across many laptops. And in a hackathon at my company we tried it, and we were maybe 25 people, and I was controlling when images appeared on every laptop. And we had a big delay. So what's your experience with these kinds of games? Was it something I was doing wrong? I was using WebSockets. Yeah, yeah, because WebSockets always have to go through a server, right? So if that gets a bit backed up or something like that, then it can slow down. So we did some stuff using Web Audio to actually use the audio to sync from the devices themselves, because you've got that main track that you can listen to. We tried that; we didn't get that far with it. But one of the things that you can do is try and work out the offsets between the timestamps of the different devices. So you've got, say, 10 devices, and they've all got their own internal clocks, and if you can work out the deltas between them, then you can say: this first device starts playing at its timestamp of X, and then you can translate that through to the other devices. So that's quite a handy way of doing it. Okay, so what I read from that is that the technology is not there yet. Like, the transport protocol is not there yet, so you have to make these little hacks. Yeah, well, it's not so much a hack. I mean, the thing with this kind of tech is that we sometimes re-solve problems that have already been solved in, say, distributed computing. So you've got clock synchronization protocols, which you could probably use on the web, but yeah, there'll be stuff. I don't know. The data's still going to travel down a pipe as well.
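The timestamp-delta idea in that answer is essentially NTP-style clock offset estimation. A minimal sketch (the function name is mine, and it assumes network delay is roughly symmetric in both directions):

```javascript
// NTP-style clock offset estimate between two devices.
// t0 = client send time, t1 = reference receive time,
// t2 = reference send time, t3 = client receive time (all in ms).
// Assumes the one-way delay is roughly the same in each direction,
// so the offset is the average of the apparent offsets on the
// outbound and return legs.
function clockOffset(t0, t1, t2, t3) {
  return ((t1 - t0) + (t2 - t3)) / 2;
}

// A device can then schedule playback at (referenceTimestamp - offset)
// on its own clock, so all the devices start together.
```

Averaging several round trips, or keeping only the exchanges with the smallest round-trip time, makes the estimate much more robust against network jitter.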
And if it's traveling to different computers down different pipes, it's going to take different amounts of time. I guess however good the technology linking those two things together gets, you've still got data going down a pipe or through the air or something, and that's just a variable that you don't have control over. There's a question over there as well. Hi, can you explain a little bit how you did the multi-touch, multi-device thing? Yeah, cool. Actually, that's really interesting. So every device has its own transform matrix for taking that image and transforming it into its own part. And if you work out that transform matrix, you can take those touch events, invert the transform matrix, and pass them back through the other way. So then you get the touch in the virtual space. And then on a laptop, I refire those as emulated touch events. So on the laptop, it's as if I'm touching it, which has got this really interesting bug where if I go like that on my phone, it sometimes switches to the next slide. But yeah, it's quite amazing. So if you do a two-finger touch in one place and a two-finger touch on the other one, does it become like a four-finger touch? It could do, actually. I've not tried that; I'm really not brave enough. So, yeah. Thank you. That's great. Have I still got time for... Thanks, man. Thanks.
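The inverse-transform trick described in that answer can be sketched with a 2D affine matrix. The six-element {a..f} layout follows the CSS/DOM `matrix()` convention; the helper names are mine, not the demo's actual code:

```javascript
// Each device renders the shared image through an affine transform
// {a, b, c, d, e, f}, mapping a virtual point (x, y) to
// (a*x + c*y + e, b*x + d*y + f), as in CSS matrix().
function apply(m, x, y) {
  return { x: m.a * x + m.c * y + m.e, y: m.b * x + m.d * y + m.f };
}

// Inverting the matrix maps a local touch point back into the shared
// virtual space, which is the "pass it back the other way" step.
function invert(m) {
  const det = m.a * m.d - m.b * m.c; // assumed non-zero (invertible)
  return {
    a: m.d / det, b: -m.b / det,
    c: -m.c / det, d: m.a / det,
    e: (m.c * m.f - m.d * m.e) / det,
    f: (m.b * m.e - m.a * m.f) / det,
  };
}

// A touch at device coordinates (tx, ty) lands in virtual space at:
function touchToVirtual(m, tx, ty) {
  return apply(invert(m), tx, ty);
}
```

Once every device's touches are in the same virtual coordinate space, touches from different devices can be combined into a single gesture, which is what makes the inter-device multi-touch work.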