Good morning everyone. My name is Tim Drysdale and I am a senior lecturer at the Open University, and today I want to talk to you about something I'm calling LabRTC. What is it all about? The problem I want to talk to you about is this: if you've got equipment all the way over here and people all the way over here, how do you get them together? How do you connect people to your hardware? This is an open hardware event, so it probably means, for the most part, you've got some super cool projects, but what you want to do is get people connected to your stuff. And if we roll back to the 1950s, the way you do that is you say, gather round. Come stand at my bench and I will show you my, in this case, Trollus experiment. However, now we have super cool hardware, things like a MeArm; I've got one in my house. And we're kind of still doing the same thing with hardware. We're still kind of saying, come round and see my thing. But the problem is we've got the internet; we've got an audience that is far bigger than the number of people that can drive to your house to see your latest toy. So you can do stuff like WebSockets, which actually we use, which is fantastic, but look at this: there's a big green expanse here where there should be a picture of a robot or something. There's just an on-off button for an LED in the example that I took this from. And the lack of video is kind of a pain, because everything we do now is about having video. Did it happen? Well, I can't see the video, therefore it did not happen. So if you want to get publicity for what you're doing, and if you want that publicity to get you money, then you're going to need a compelling experience. And with all of the recent burns that have happened to people who backed Kickstarter projects whose prototypes haven't worked, you need to do everything you can to have a compelling interaction with your hardware, so that people will trust and believe in what you're doing. And video is part of that. 
I've got a personal reason for being interested in having a connection to hardware from a distance, and that is that my parents live on the other side of the world. My dad's quite techy and sometimes I like to show him what I'm doing. So we've got two good reasons already. One is getting publicity for your project; one is me helping my dad see what I'm up to. And the other one is my day job, which is spending quite a lot of HEFCE's money on making a room full of equipment, with a locked door that I can't let any students through, actually do something useful for us. So all of that equipment goes in a room, no students are going into it, and I've got to get them connected up to it and having a realistic experience. And so we share the same problem when we think about getting open hardware connected to people, and getting my students connected to equipment. Why did we go remote? The Open University is a distance learning university. We do degrees, but the students don't come on campus. We used to use the television to get lectures out there, but obviously things have moved on a little bit since then. But something that's only very recently started to change is the use of home experiment kits. We, just skipping over some of those details, have a new electronics course. And electronics you can't really do unless you're doing something with hardware. So for us to offer it, we have to move on a little bit from some of the approaches that have been taken with home experiment kits being sent out to students, because we want students to interact with stuff that, quite frankly, is far too expensive to stick in the post and send out to every student. So we have a new strategic approach overall at the university anyway, which is not to send equipment out to students. And instead that forces us to look at how students can access it remotely. 
And it's actually a really good thing, because it forces a great deal of creativity and encourages us to think really hard about what should happen in a lab environment and what the learning experiences are. And the people at the Open University that have gone before me in doing this include Nick Braithwaite in particular; one of his team is here, Edward Hans, who is waving in the front. I'm disappearing shortly after this, so if you have any questions about what we're doing, fall upon Edward. So I promised I would publicly embarrass him by pointing him out at this stage. Before I arrived at the Open University, it had already won an award for the way in which it engages with students over the internet using hardware. And we were recently funded again, from a different fund this time, and as part of that I'm looking after doing massively parallel electronics experiments at a distance for students. We've got some robots in one room. We have some National Instruments equipment in another. Obviously none of this is open hardware per se; for practicality reasons it's what we use. But the things that I'm doing to connect to this equipment relate directly to what I think you should be doing to connect people to your open hardware. We have a giant switch with lots of purple wires coming into it that's directly connected into the Janet fibre-optic backbone, and that is solely for experiments. So just to give you some idea of the scale of what we're proposing to do, it's not just one experiment and a big queue of students. It's lots of experiments and an even bigger queue of students, but they're using it in parallel. So how do we actually go about showing something off? Well, you could stick a webcam on it. So that's fine. You can stick a webcam on something, but that's when your problems start. The speed of light is actually not the problem here. Once you've got your video onto the network it can go all the way around the world in less than a reaction time, more or less. 
So the problem you get with sharing video is actually latency. And you can try and get ahead of the game by starting slightly before the gun, by predicting certain things, but not in this particular case. But if you're doing video, he says, clicking, then there are a number of things that you can try. And one that we've used successfully at the Open University is JPEG refresh, which works if you've got your equipment behind lots of security, but you still have a latency issue there. You can stream stuff. We were doing some experiments with some security cameras, and because this is being videoed I must say that I've just pulled these images off the web. I don't actually know which ones my team were testing, so I don't know if these are the ones that were included or not. But basically we found that if you wanted a streaming camera, you had to buy a security camera, and then the cost for the quality of video you got wasn't really a particularly compelling, how should we say, cost-benefit compared to what you can get out of those Logitech C920s. So we were looking at spending hundreds of pounds on a camera to get decent video quality with the built-in streaming, so it didn't seem economic to us. And then you might start asking, well, if you're looking at all of these different latency issues, and you've got a few seconds over here, or three minutes over there, or you're starting to get sub-second, what really matters? How much latency do you tolerate before somebody says this is no longer a compelling experience? And in computer games the answer is whatever latency means that you don't end up with people thinking they've shot people when they haven't. That's the biggest previous body of work on network latency. So in computer games you tend to cheat: you tend to predict where everybody is, and you play the game and you just keep checking that your history has been updated okay. 
But that doesn't work for us, because real hardware is only going to make it into our lab if we're doing an experiment that is too chaotic or random or difficult to simulate. If we could simulate it, we'd simulate it anyway. Whereas if we're actually putting hardware there, we're not simulating, so we can't predict. The other reason that we're interested in getting the latency right down is this chap here. If you haven't read this book, get it, but give yourself a year to read it. I started a year ago and I'm still only three-quarters of the way through, because it's literally so dense with insight that you have to take your time and think about it. But the critical thing that his experiments, and others in that community, have shown is that any time you're holding something in your mind, you are burning psychological energy. Even if you are pushing a button and waiting two seconds to see what that button did, that's two seconds of extra psychological energy that you have burnt. And for our students that are holding down a full-time job, if they're coming into a remote lab virtually on a Friday night after an exhausting week at work, and every time I ask them to push a button they have to remember what the button was for for two seconds, I'm costing them psychological energy they probably don't have. And in the modern attention economy (we're calling it an information age, but it's really an attention economy that we're in) there's so much information out there; it's whether you can get attention onto yours. If I don't like the start of a YouTube video, I've clicked off within four seconds. So I'm certainly not waiting for somebody's hardware to take four seconds to respond to something. And I don't think the students will accept that either. So if you're trying to get your hardware in front of somebody, you really want to be able to push and see the result straight away. You don't want three minutes, or even a couple of seconds. 
So in one of the rooms in our lab, I have been beavering away trying to combine some existing technologies to make them do video quickly. And the way in which I am doing this is something called peer-to-peer video, which you all know, because you've probably been in a Google Hangouts or a Skype call, and that is a peer-to-peer video connection. You speak, the other person speaks, and you're having a pretty natural conversation even all the way around the world. So the technology exists to get video responses from hardware really quickly. But it's a little bit trickier to set up than just a standard one-port streaming media server. The way it works is that the two peers need to find a way to share some information with each other about where they are in the world and how they can be communicated with. So they need to talk to a third party, and that third party has to figure out where they are and join them up. And once they've done that, they start trying to communicate with each other, and if the connection is successful, they're sending information to each other directly without the third party being involved. So let's get rid of one of the people. And that's not open hardware, but it is hardware: let's put some hardware in there and have the hardware be one end of that video connection, and let's get that low latency from that peer-to-peer connection. It turns out that works, and this is the very, very first rubbish interface that I built. I finished this about two or three in the morning, phoned my dad and went, Dad, would you like to play with some real hardware in the UK? Of course, it was Sunday afternoon for him, so perfect timing, he was wide awake. I gave him a web link for this, and within about four or five seconds the pendulum on the other side of the room started going and swinging about. There's this little rotary pendulum thing here. 
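The signalling dance described here (two peers exchanging their details through a third party, which then drops out) can be sketched in a few lines of JavaScript. This is only an illustration of the flow; the names (`relay`, `browser`, `hardware`) and message fields are made up, not the actual LabRTC protocol:

```javascript
// Toy signalling relay: the "third party" that holds messages for each
// peer until the other side picks them up. Names are illustrative.
const relay = { queues: new Map() };

function send(to, msg) {
  if (!relay.queues.has(to)) relay.queues.set(to, []);
  relay.queues.get(to).push(msg);
}
function receive(peer) {
  return (relay.queues.get(peer) || []).shift();
}

// Peer A (the user's browser) describes how it can be reached.
send('hardware', { type: 'offer', from: 'browser', sdp: '<A session description>' });

// Peer B (the hardware) reads that and answers with its own description.
const offer = receive('hardware');
send('browser', { type: 'answer', from: 'hardware', sdp: '<B session description>' });

// Once both sides hold each other's descriptions, the relay drops out
// and the media flows directly peer-to-peer.
const answer = receive('browser');
console.log(offer.type, answer.type); // → offer answer
```

In real WebRTC the offer and answer are produced by `createOffer()`/`createAnswer()` in the browser, but the message flow through the third party is the same shape.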
So basically, within about four seconds of me saying to someone on the other side of the world, would you like to play with some hardware, they were interacting with it in real time. There was no software to download, and the responses were good enough that my dad kept logging in every day for weeks, figuring out how to make it do cool stuff like swing really high, and he was like, son, if you do this with the controls... It was really cool. And it turned out that this thing that I thought would be really quite straightforward and not have much depth to it became almost a telepresence device in my house for a month. So that was pretty cool. So that connection, let me see if I can turn that on. Could you click on the video for me? It's the bottom left of the click pad to make it click. Yeah. Oh, right. I know what I'm aiming for. No, no, not at all. That's me trying to do this from the mirror. Yeah, here we go. Okay, so you'll see here that there's a green dial on the bottom and there's a little mouse on that. That's my dad moving it there in New Zealand, and that's what he sees on the screen as he does it. So you'll see him move it back around again. So that's the latency recorded on the screen from New Zealand, with equipment in the UK. So I thought that was pretty cool, and we've gone on to build a bit more. Can you pull the same trick again? This is a slightly improved interface. This was when I was in Austin presenting this at NIWeek a couple of weeks ago, and I swear that's better than I've had from one side of Milton Keynes to the other. You know, genuinely, I'm not faking it: that is video taken from my laptop in a coffee shop somewhere on the east side of Austin, back to New Zealand. I mean back to the UK, I should say. So I've had people try this out all around the world, and basically it works. 
So what I want to do is show you a little bit more about how this is done, so that we can start moving toward making this sort of approach available to you for use with your projects. The architectural philosophy for this actually comes from what Ed and his team have been doing already, which is: make things simple. Put everything in a JSON object, and send JSON objects back and forwards between your experiment and your user. That way you've got a nice cut-out: you can separate out the concerns of how the equipment works and how the web app works. So my web apps know nothing about the internal details of how the hardware is run, and the hardware server has no idea how the user is processing the data they get. It just says, here's your latest sample, you do what you like with it. If you want to store it, if you want to zoom in, if you want to zoom out on the graph: your problem. Don't ask for the data again; you've had it. And that keeps things quite straightforward. This stuff works on a mobile phone as well, actually, but mobile support is changing very fast, so there may be some exceptions there. Now, the critical thing where this starts to get slightly more involved is that, as of December 2015, you cannot have a mixture of secure and insecure stuff on a web page. So if you want to do WebRTC, you have to have a secure web page. That means you can no longer do insecure WebSockets on the same page as WebRTC video. And that means that in a lot of the stuff that's around for doing WebSockets on open hardware, you quite often see a line that says, oh, it doesn't do this securely. So there's a small gotcha there, but I can tell you how to get round that. The critical line was this one here, where it says there's an extra S: you've got to get that in there. WebSockets are pretty damn good. I mean, they're better than the video. 
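The JSON-object style and the "extra S" can be sketched together. The URL, field names, and values here are hypothetical (the real LabRTC message formats aren't shown in the talk); the point is the shape: each side only has to agree on JSON, and the socket URL must be `wss://` if the page is served over HTTPS:

```javascript
// The extra "s": a page served over https may only open secure sockets,
// so wss:// rather than ws://. This URL is a placeholder.
const WS_URL = 'wss://lab.example.org/expt/pendulum';

// User -> hardware: a command, with no knowledge of the rig's internals.
// "cmd" and "drive" are made-up field names for illustration.
const command = JSON.stringify({ cmd: 'set', drive: 0.5 });

// Hardware -> user: the latest sample. What the client does with it
// (store it, zoom in, zoom out on the graph) is the client's problem;
// the server never sends it again.
const sample = JSON.stringify({ t: 1712.4, theta: 0.31 });

// Either side only needs JSON.parse to interoperate.
console.log(JSON.parse(sample).theta); // → 0.31
```

In the browser this would be sent with `new WebSocket(WS_URL).send(command)`; the cut-out is that neither end ever sees more than these JSON objects.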
We can put 20 users onto the same piece of equipment, and they will get a message within a single-digit number of milliseconds after we decide to send it. So, you know, as far as connecting to hardware goes for control and data, you could do that in WebRTC with its data channel, but WebSockets are, how should we say, a direct connection, so you don't have to go through something I'll explain next, which is setting up the video call. So it's kind of nice, when you go to a page, to have something happening straight away. If you do do the data and control with WebSockets, that's a much more bombproof thing to set up than a WebRTC connection. So you can get data showing on the screen while the video is loading, and I think that provides a nicer user experience. So WebSockets are still in. Very good. Peer-to-peer: there's not really much point in investing a lot of effort in something if it's not going to be around for much longer, or if there's not a very big community behind it. But WebRTC is in the right place at the moment. It's getting a lot of attention from the major browser companies, so there is a huge, huge, huge amount of investment going into making sure that the video is coming out of your webcam and getting onto the internet as fast as possible. So this is where the latency thing gets solved. Basically, with WebRTC you're not touching the video very much. It comes from the camera, it ends up on the internet going to the other person. That's why this is a preferable approach to others: it's going onto the wire and going straight there. Believe it or not, Microsoft's coming to play. ORTC, their version of RTC, is going to converge with WebRTC at standard version 2.0, or so we believe. So it means that it's the right kind of place to be investing in if you're doing development, because it should be around for the foreseeable short-term, medium-term future. How do you connect? 
I know this sounds trivial, but one of the things that started doing my head in when I was trying to get to grips with this at the start was: how do these two peers actually choose one another amongst all of the other peers that want to talk to each other in the world? Why are you not accidentally ending up talking to somebody else? If I go to my server and I put in the room name that I want to talk to someone at, why is that room name not then able to be accidentally shared by somebody using a different server? It's actually really quite straightforward, and it's this line here; you can't really see it against the red background, but this URL here is where your signalmaster is. That's that computer at the bottom of the screen that says, hey, I've got people that want to talk to each other. So if you go to a particular signalmaster, it's like going to a conference centre, and then if you give it a room name, that's like going to a particular room in that building. So you have to get to the right building, and then the right room in the building, in order to speak to each other. And the thing that I'm doing to save a bit of bandwidth and keep the student's privacy is making sure that the student's not sending video back to the equipment, because, quite frankly, the equipment does not care what you look like. A few of my friends got a fright when they realised that their video was actually being sent back to the equipment when I had the early prototypes. It seemed wrong, so I very quickly decided I needed to turn off the video coming back. So that's that bit there. You've got full control over video and audio. At the moment, I have audio turned off for the system that's running in my house. I have it turned off for the systems running at work as well, but we're turning that on soon enough, because hearing something clank and bang if it moves is kind of cool. It's part of it. 
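Keeping the link one-way, with full control over video and audio, comes down to what you ask the browser for. This is a minimal sketch using the standard WebRTC option objects; exactly where they get passed depends on the library you're using, so treat the wiring as an assumption:

```javascript
// The student's side sends nothing: don't request camera or microphone
// access at all, so no video or audio ever leaves the user's machine.
const localMedia = { video: false, audio: false };

// But the offer still asks to *receive* the rig's camera. Audio is off
// for now; flip it on when you want to hear things clank and bang.
const offerOptions = {
  offerToReceiveVideo: true,
  offerToReceiveAudio: false,
};

// In the browser these would be used roughly like:
//   navigator.mediaDevices.getUserMedia(localMedia)  (skipped entirely here)
//   pc.createOffer(offerOptions).then(...)
console.log(localMedia.video, offerOptions.offerToReceiveVideo); // → false true
```

The same two objects are where you would turn audio back on later without touching anything else.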
But if you're running it in your house, you maybe don't want to be bugging yourself to the wider world. Yeah, we want to check that. And you can check, because if you go to your own web page and you get a lot of audio feedback, then you know that you've got your room bugged. So in our case, each experiment has its own room. We have a signalmaster that we're running ourselves, and each experiment has its own room name within that. And we'll keep changing them, just to keep students on their toes. Then we get to the bit that, thankfully, there are a lot of smart engineers at the various browser companies working on, but it's that whole process of how you actually connect these peers up. Because, well, let's face it, if you've got your own broadband connection at home, it's very, very likely it is behind network address translation, because there are not enough static IPs for the UK broadband providers to give you your own static IP address. So that means that the IP address that your computer has is probably going to be a 192.168 one, because it's behind network address translation behind your home hub router, which itself has an IP address which is behind network address translation. So you're more than likely to find that your computer doesn't actually know what IP address to give to the signalmaster, to give to the other person, because if you each give each other 192.168 addresses, then you're not going anywhere. So what you have to do is ask each peer to talk to a server that can identify it. A STUN server does that job. Basically, all it is is a computer that has two network cards, so that some of the clever tricks that people play, pretending to have slightly different IP addresses depending on what IP address you give them, can all be worked out. So that server at the bottom of the screen that I said does a connection actually has multiple parts. 
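Pointing a peer at a STUN server is one line of configuration. A minimal sketch, assuming the standard browser API; the Google STUN address is a commonly used public one, and the TURN line is a placeholder showing where a relay fallback would go:

```javascript
// A peer behind NAT can't know its public address, so the connection
// config names a STUN server that will reflect it back.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' }, // public STUN server
    // If hole-punching fails entirely, a TURN relay can be added:
    // { urls: 'turn:turn.example.org:3478', username: '...', credential: '...' },
  ],
};

// In the browser: const pc = new RTCPeerConnection(rtcConfig);
// The browser then gathers candidate addresses (local, reflexive, relayed)
// and offers them to the other peer via the signalmaster.
console.log(rtcConfig.iceServers[0].urls);
```

Running your own STUN/TURN server (as the talk describes) just means replacing that hostname with your own.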
One of them is: yep, these are the two people that want to connect; and the other is: this is where they're actually sitting. And that STUN server will generate a number of different candidates that could be connectable between each person. And if it all works, great. You might notice, when you're using different WebRTC products, that they all have a slightly different experience as you connect. People have different degrees of paranoia about how quickly they want to connect and how good they want that connection to be. So some start off with the first thing that comes through and then move on to something that's really boring and slow; some go with something that's really boring and slow and then upgrade if they can. And it all depends on the architectural philosophy. So the reason why peer-to-peer looks different in so many different products is because of these decisions that are made about how you hook up. Is it really that hard? It depends on your IP policies, your firewall policies. So it seems like an obvious thing to say, but if you're sitting in a castle, you want the things that are being thrown by catapults to go out. You don't want them to come in. And if you imagine that your router is a little bit like a castle wall, if you've thrown something out through it, there's temporarily a hole in the wall that it has gone out through. So something could come back in through that. And that's effectively what happens when you do a peer-to-peer connection. You have your firewall and the other person's firewall. And once you have each other's details, you just start spamming each other with binding requests. And what you're hoping is that your binding request went out, creating a pinhole in your firewall, where one of the messages that they're spamming out comes back in, making your firewall think the other computer genuinely responded to you. 
But what actually happens is you're just both spamming each other at the same time, hoping that eventually holes open up in your own firewalls and the messages will get through. So this is why it's not as simple as a WebSocket connection; there's an element of jeopardy here. So if you use TShark or Wireshark, you can see an absolute shed load of binding requests go out. And if you don't see a success response like this popping up (they're easy to spot, because they're actually more characters, so you can see a long block of text here, an extra line kicking out now and then) then no joy. If you don't want to use TShark or Wireshark, you can check those connections in browsers; they'll go through a list of all the candidates that were tried out. But if it does work, then you can show your prototype off. And one of the things that I'm working on at the moment is creating a light version of that demonstration that I showed you before with the pendulum, built around using open hardware rather than relying on the National Instruments kit that we've got. So a poor little BeagleBone Black I have roundly abused by sticking all sorts of heavy servers on it, and it's basically looked at me and laughed and gone, that's easy, I can do that all day long. So I can run a full WebRTC stack on a BeagleBone Black, and it doesn't seem to be bothered particularly heavily by that. The thing that it doesn't do very well is the video. A Raspberry Pi and a BeagleBone Black, neither of them have particularly good frame rates with USB video cameras. So funnily enough, it's doing all the complicated stuff, but the thing that needs a GPU it's struggling with a bit. So actually, if you do try and do a single-board solution to controlling something, running all of the WebRTC servers, and having the camera, the thing it falls over on is just the USB bandwidth and the processing of that camera stuff that comes in. 
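The hole-punching described above (both sides spamming binding requests until each firewall has a pinhole the other side's packets can use) can be shown with a toy simulation. This is purely illustrative; a real firewall tracks ports and addresses, not just peer names:

```javascript
// Toy firewall: an inbound packet is only admitted if the protected
// peer has already sent something out to that same remote peer.
function makeFirewall() {
  const pinholes = new Set();
  return {
    sendOut(remote) { pinholes.add(remote); },       // outbound opens a pinhole
    admits(remote) { return pinholes.has(remote); }, // inbound check
  };
}

const fwA = makeFirewall(); // your castle wall
const fwB = makeFirewall(); // the other person's

// B's first binding request arrives before A has sent anything:
// there's no pinhole yet, so A's firewall drops it.
const early = fwA.admits('B');

// Both peers keep spamming; each outbound request opens a pinhole...
fwA.sendOut('B');
fwB.sendOut('A');

// ...so the next request from either side gets through, and each
// firewall believes the other computer genuinely responded.
const success = fwA.admits('B') && fwB.admits('A');
console.log(early, success); // → false true
```

The "element of jeopardy" is that with some NAT types the pinholes never line up, which is when a TURN relay becomes the fallback.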
So I think the X15 probably will solve that problem, because it looks like it's got laptop-style capabilities. So you'll have to spend a little bit more than 35 pounds to get one of those, but I'm hoping that that might provide a solution. There were some other gotchas on the way through, which is that the ARM version of Linux doesn't have a network-online target, so it tells your system that the network's up and running when it's not. So you start your TURN server or STUN server and it says, yeah, I can't run right now. But that seems to have been solved by taking the correct files from another distribution and dropping them in. And the other gotcha at the moment is that I'm using Namecheap for my domain hosting and my SSL certs, because self-signed certificates aren't good enough for that, what's the word, mixed-security-levels situation. And they only offer a Windows client, so I have to have a Windows machine running somewhere in my house to keep Namecheap updated with the IP address I'm currently using. So it's just stuff like that that's preventing it from being a completely open hardware solution. All right. So what's next? Well, I've got an absolute shed load of work to do with the team back at Milton Keynes to make all of our hundreds of experiments come online. So over the course of the next year, as and when time permits, I'm planning to just keep putting a little bit of work into making this available in open-source fashion. I've talked to everyone from the dean all the way up to the vice-chancellor, and they all think it's a wonderful idea to release this as open source. Now I have to convince our commercial department, so we'll see how that goes. It should be okay. And I guess I'm going to have to try and convince someone to find me an X15 so I can try that one out. 
But the aim is that, if that all goes swimmingly well, next year on a Sunday we can do a workshop on connecting your hardware up to anyone in the world using this approach, so that you can do what I'm doing with my dad with your cool hardware, and hopefully make some money on Kickstarter. That was basically this bit: just do whatever you can to connect people to your stuff. Make it compelling. And the thing we're showing off, okay, has lots of bits and pieces in it, but to do this with one piece of hardware, I think, is an entirely doable goal for you all. So, parting thoughts: use video to connect your projects, and care about latency. And in our case, the thing we want to show off is our massively parallel remote laboratory facility for our electronics course, and funnily enough we actually have heaps of room for expansion to other courses. So if there are things that you know about that you think we might be interested in partnering on, to do cool stuff in our lab, do also let me know. Right, there's a huge team of people; this is just a few of them that are helping out with this. And that brings me to the end of my prepared comments. Thank you very much.