Four o'clock on the dot, I'm going to start right now because I've got a lot to cram into this, so hopefully I've got some really interesting shit to go into with you guys. I'll go through the boring stuff first. First and foremost, this is me, before I got rid of the emo haircut. My name is James Mallison, I usually post things like that on Twitter if you like any sort of humour, and I currently work for trivago in Palma, Mallorca. Today I'm going to talk about... well, first and foremost, I love PHP, that's why I'm here. I love PHP so much that this is real, it's not Photoshopped: I bought this licence plate for my bike, J77 PHP. It's cut off a bit at the bottom, but I was told by members of the PHP community that now I've got that, don't crash, for the sake of the PHP community, because JavaScript developers will never let us hear the end of it. And then obviously here's me having crashed the bike three weeks later. Today I'm going to talk about two things that I find really interesting: WebSockets, because they're a non-traditional way of doing client-server communication and I like doing things the way you're not supposed to, and torrents, because obviously you can do a load of illegal shit with torrents and download everything. I am not going to do any of that today, and obviously I haven't done that in the past, for reasons. So I'm going to talk about downloading the metadata of torrents and getting it from the server to the client, and we'll talk about PHP obviously, event-driven programming, non-blocking code, things like that. This is the agenda I'm going to go through: I'm going to tell the story of my interest in WebSockets, and I'm going to show how I implemented the technology to send arbitrary, or in my case torrent, JSON data from the server to the client.
And the problem is that if you Google WebSocket tutorials, it will always be in the form of, say, a chat tutorial, which is sending information from the client to the server to the client, and that's boring. Nobody wants to do that; people want to do push notifications, and push notifications tend to be arbitrary data. So hopefully at the end of this I'll have shown you how I did it with torrent data, and then you can maybe use this as a learning experience to go and actually implement your own push notifications with your own arbitrary data. I'm also going to talk about some of the problems I encountered, authentication problems, blocking problems, and how I solved them. And then I'm going to show you a demo. This demo actually completely failed at the PHP North West conference, because you can't torrent very well over 3G, apparently. And I'm also going to show what I learned and how you can take the knowledge that you've got here and go and do some really cool WebSocket stuff in the future. So, on to the story. In 2012 I used to torrent stuff at university. I mean, we had really fast download speeds at university; it was crap at home, great at university. So I downloaded a lot of completely legal torrents. I used to take, say, 10-terabyte external hard drives, because in those days that's all I could afford as a student, and I'd just take these hard drives into university and get everything downloaded and put onto them because of the good download speeds. So the next step for me was to get something kind of automated happening at home. So I installed something called DD-WRT on my router. Now, this was actually before I knew the difference between a router and a modem. I'd just got my student loan, I wanted to buy something really shiny, and I saw this ASUS router that was all black and had blue flashing lights, so I went and got that.
And I realised you could install DD-WRT on it, which allowed me to SSH into my router. I could then install the whole LAMP stack, which I did for no reason whatsoever. I served web pages from the router IP itself, and I also managed to install a torrent client. There were two at the time that were kind of the main ones: there was Deluge, or Delugé, or however the hell you pronounce it, and Transmission. And Transmission was the easiest for me to install. So I was a pretty hardcore Ubuntu torrent downloader. And there was also the option for me to plug an array of hard drives into the USB ports at the back of the router and have every movie or TV show that I've ever watched automatically downloaded and transferred onto these hard drives without having to do anything apart from add the torrents. And obviously I didn't do that, it was just Ubuntu torrents. So this brings me to the GUI of the Transmission client. I was just finishing university and I needed a use case to be able to show potential employers, you know, that I can do some good shit, and I also liked to torrent a lot, so I figured I'd put those two things together. And when you look at the Transmission GUI over here, it looks like a proper web application, but if you right-click and inspect element you'll see it's not an application, it's just a load of CSS, and it's a horrible CSS framework, and these buttons are little icons, and anyone could make something so much better than this. So I figured that the data must come from somewhere and be populating this kind of horrible CSS over here, and the HTML. So I had the idea that I would use the Bootstrap framework, Bootstrap CSS, because it has nice progress bars, and you can change the style attribute and have the progress bar go up depending on whether or not a torrent's downloading.
And I would need to get the information from the server to the client in a standardised format like JSON or, you know, if I was really, really miserable, XML, which I didn't do, it was definitely JSON. So the first solution that I came up with was, I mean, you can't really see exactly what's going on here, but at the very bottom I'm just doing this horrible command line call which gave me all of this crap. And then I basically had to use some of the JSON from near the bottom of this output, as well as the columns from the very bottom of the output, and put them together to get exactly what I needed. So what I ended up doing was a horrible regex, manually interpolating brackets and quotes to make my own JSON, and I mean, I was a graduate, so this was clearly the best way of doing it as soon as I got it working. And basically, it worked. It took 5 to 10 seconds, it was really slow and crap, but it worked. So this was kind of the standard way you would send an AJAX request to the server and get the response; this is what my application did to get this information back. The browser makes an AJAX request to the server, the server runs the command line call, and then that returns the JSON to the browser. So this proof of concept that I turned into a prototype, which then became my final idea, because obviously it always happens like that in business, it worked. I was really proud of what I'd done. So clearly the best thing that I could do at this time was go to some of the nicest people on the planet, Stack Overflow, and show them what I'd done. And I was really proud, I was like, guys, Stack Overflow guys, look what I've done, what do you think? And this was the response that I got. And as usual it was absolutely nothing to do with my code. They didn't tell me anything about my code. They said that I was exposing myself, and my unnaturally large number of Ubuntu 12.04 downloads, to the world.
So I needed another server somewhere else, somewhere else in the world, as far east as possible preferably, that I could connect to and talk to anonymously, so nobody could see how many of these Ubuntu torrents I was downloading. And where better than Stockholm, Sweden, which is actually where the Pirate Bay data centres were. This was the logic that was in my head at the time. What wasn't in my head at the time was that this was actually where they were raided by the police, like, oh, I didn't realise that. But anyway, I found a VPS provider over there, and I needed to pay for that. So, I ride a bike, you've seen the bike crash pictures at the beginning, so I have the helmet and the full leather gear and everything. And in Preston, this is related, in Preston where I worked before, there was actually a petrol station around the corner where you could buy an anonymous debit card. Or credit card, I can't remember which one it was, but the point is that you go in, you pay £10, and then you get this anonymous prepaid card that you can hook up to PayPal or anything you want, and you can use this card to pay for whatever you want online. So I wore all this gear, went into the petrol station, but obviously parked my bike around the corner, and all this was for educational purposes only, obviously. And I got this prepaid credit card, came out, went home, and then I connected to the VPS provider's website using Tor. So when I rented the VPS they couldn't see where I was renting it from, so I effectively got this VPS really cheap and, theoretically, anonymously. Which brings me to the next version of what I did with my little educational project. The browser makes an AJAX request to my server like normal, but then my server would SSH to the virtual private server using an application called torify, so it does it over Tor.
I think I used phpseclib at one point as well, if anyone has heard of that, because it had an SSH implementation that I found really useful. But then it was the virtual private server that, shit, I'm seeing people taking pictures of this, I don't know if I should, but anyway, educational. So the VPS over in the east would run the torrents and all that stuff, then return that information over SSH to my server, and then that server would return the information to the browser. So it worked; this was my next step, and I'd done exactly what the Stack Overflow guys asked for. So I went back to Stack Overflow really proud, like, guys, I've done exactly what you said, what do you think? And this was the response that I got. And finally they gave me something really useful about my code, my technology, my architecture, and they said, dude, why don't you just use WebSockets. They didn't tell me why I should use WebSockets, obviously, they just said use WebSockets. It's like going in there asking a question about Symfony and they just say, oh, don't use a framework. Thanks, guys. But anyway, I went away from there, I did some research, and this is what I found out about WebSockets. Now, this isn't going to be just a WebSocket talk, because you could have someone stand and talk for an hour on WebSockets specifically, how they work and what they are, but I'm just going to skim over how things happen and then go more in depth. So the browser initially sends a ws:// or wss:// request, wss being the secure version, from the client to the server; if you type WebSockets into Google Images you'll find this diagram, it's really easy to find. The server responds with an HTTP/1.1 101 Switching Protocols response; as long as your server allows this, it happens automatically, so you don't need to worry about this part: apart from requiring the JavaScript library to send it, you don't need to worry about how it does it, it just does.
And then from then on you've got this big fat pipe, this open TCP connection, where data can be sent bidirectionally, either way. What you have to do then is use that to choose when to send information to the server and when the server will send information to the client, and that's what I'm going to show you in a little bit, how I did that. The important thing to note is that the connection can be closed at any time on either end, and either end has to choose how to respond to that. So if the server closes down, typically the JavaScript library will try to reconnect every five seconds and then fall back to AJAX or something like that. If the client disconnects, by going to another page that doesn't have the WebSocket call on it, or just closing the browser, the server will typically have to respond to that automatically, and it will remove that client from its internal array of clients that it's sending information to. So it's basically saying, we're not going to send any information to you anymore, and this is done in PHP, which you'll also see in a little bit. So, standard HTTP versus WebSockets. HTTP, everyone knows this: you send a request to the server, you get a response from the server, and that's it, it's half duplex. You have to wait for the response, then rinse and repeat. So if you're doing this every five seconds to get this torrent data, it's polling. Now, with WebSockets you've got to have something open on the server, which is called an event loop, that's going around and around and around, waiting to have information given to it and waiting to send information out, but it's really fast. And this is full duplex. So you've got this big pipe going between the client and the server, and you've got this loop happening on the server. So, very quickly, what do I mean by half duplex and full duplex?
Half duplex means that, as in HTTP, only one person in a conversation can talk at once, and the other person has to listen and wait to give a response. So if you're using Apache or Nginx or something like that, this is where you send the request and then get the response. Full duplex is this event loop, which is where, in a conversation, everyone's able to talk to each other at once, like PHP internals, and everyone's able to understand what each other is saying at the same time, not like PHP internals. I think someone already knew what I was going to say there. So I've mentioned this phrase 'event loop'; very briefly, what is an event loop and how does it work? The point of an event loop is that it's this loop happening on the server, and it's not just a while loop, and I'll explain why in a second. The point is that when a request comes in to this event loop, you're registering an interest in receiving some data. The actual processing, all the I/O operations which are typically blocking, database calls, all that sort of stuff, is handled outside of the event loop. And then once it's done, it's injected back into the event loop, the callback is triggered, and the response is returned to the person who sent the request. So it's not just a standard request and response. Going a bit more in depth on event loops, I really like this diagram. It's from embedded systems, and the point is that an event loop in a single-threaded context is not just a while loop. If any of us tried to implement this as a while loop, you'd have, at the top of your while loop, in a procedural sense: is there anyone trying to connect? If there are people trying to connect, you'd save them into an internal array somewhere. Then you'd do all the processing in the middle that needs to be done, and at the end of your while loop you'd say, okay, is there any information that needs to be sent out?
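Just to make that naive procedural version concrete, a hypothetical sketch of it might look like this (the socket handling is elided, and acceptClients(), doWork() and flushOutput() are made-up helper names, not real functions):

```php
<?php
// Naive, blocking "event loop": plain procedural polling.
$clients = [];

while (true) {
    // 1. Is anyone trying to connect? Save them into an internal array.
    foreach (acceptClients() as $client) {
        $clients[] = $client;
    }

    // 2. Do all the processing in the middle. While this runs, nobody
    //    can connect and nothing can be sent out - that's the problem.
    doWork($clients);

    // 3. Is there any information that needs to be sent out?
    flushOutput($clients);
}
```

While step 2 is busy, steps 1 and 3 simply never run, which is exactly the blocking behaviour described next.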
So it's not just a while loop; that's polling in a procedural sense. And it's also blocking, because while all of that computation in the middle is happening, nobody else can register an interest in receiving information, and you can't send any information out. So instead we can use something called interrupt-driven I/O. This is a bit more in depth, and you don't need to worry about it, because it just happens when you use the technology that does all this WebSocket stuff. But the point is that a flag is set on the CPU, so this big event loop can keep going around, while the process that's getting all this information is suspended and the buffer is filled, as in: all right, we're going to make a database call that takes ages. And then finally, once we get that information, it changes the flag back and the result is put back into the event loop. So the point is that the event loop isn't saying, have you finished yet? Have you finished yet? Have you finished yet? It's the source of the data that says to the event loop: yes, we've finished. The reason I'm showing you this is that today we need to get out of the procedural-code mindset. We're going to talk about jumping around, which is what an event loop allows you to do. And this is useful for inter-process communication, because another process can change this flag and put stuff into the event loop, and you'll actually see me do that in the live demo that hopefully won't completely screw up later on. This is also a bit like how PHP's process control extension works, because it's signal based. So I'm going to show you a very brief example of PHP's process control extension and how to use it, and hopefully that will show how we're moving away from procedural code to this kind of jumping-around stuff. I'm going to go through this line by line first.
Ignore the declare(ticks=1) for now, I'll explain why that's there at the end. So we've declared a random function, a signal handler. It doesn't have anything in it; you could echo something out, do whatever the hell you want. Once you've got the process control extension installed with PHP, you say, with pcntl_signal, that if we receive a SIGTERM then we're going to run the signal handler. That's the next line; the signal handler is the name of the function. A SIGTERM in Linux is a minus 15. Then if you look at the bottom of the script, we've got a while (true) that sleeps for 10 seconds each time and just keeps on sleeping. Run the script and it'll sleep forever; it'll echo a process ID out so we know which one we want to kill, but then it'll just sleep forever. So run that script in a terminal, wrap it up in the top left here, it's fine, and open another terminal. Then you do a kill -15 on that process ID; the minus 15 is the SIGTERM. If you do a kill -9, that's a SIGKILL and it'll just immediately kill the process. But 15 means it's a SIGTERM, a termination, and then, wherever we are in that while loop, and this is the important bit now, even if we're in the middle of this sleep call which takes 10 seconds, it will seemingly immediately jump from that sleep straight into the signal handler. That's the magic, basically. The reason this happens is that we've got this declare ticks at the top. It works because the underlying operating system sleep call is interruptible; if you go into PHP's sleep and look at the C library call it's making, it's interruptible. So declare ticks effectively tells PHP, under the hood, to poll for interrupts. PHP is still doing polling, but you don't see it, because it's not in userland. Under the hood, after each tickable low-level statement that PHP executes, PHP polls: is there an interrupt? If there is, that's why it jumps to the signal handler.
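A minimal, self-contained version of the script being described might look like this (a sketch, assuming the pcntl extension is installed; the handler body is arbitrary):

```php
<?php
declare(ticks=1); // tell PHP to poll for pending signals as it executes

// The signal handler we jump into - even from the middle of a sleep()
function signalHandler(int $signo): void
{
    echo "Caught SIGTERM ($signo), shutting down cleanly\n";
    exit(0);
}

// If we receive a SIGTERM (kill -15), run signalHandler
pcntl_signal(SIGTERM, 'signalHandler');

echo 'PID: ' . getmypid() . "\n"; // so we know which process to kill

while (true) {
    sleep(10); // interruptible: `kill -15 <pid>` jumps straight to the handler
}
```

From another terminal, `kill -15 <pid>` triggers the handler; `kill -9` (SIGKILL) can't be caught at all and just terminates the process.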
If you don't have this declare ticks, then it won't execute the tick check and it won't jump to the signal handler when you send that minus 15. Why am I showing this? I want you to see how we can jump, from something that's seemingly busy doing something, to somewhere else, at any time. I've seen DevOps people use this to catch a minus 15 and then shut down a load of AWS instances via an API call before terminating the program. So, the library that I used for this event loop, and you'll see this jumping around that I've shown with the process control extension in a second, was React. This was a while ago, there are newer ones now, but I used React, which is primarily the event loop that I showed you. It has HTTP and DNS components, and there's Ratchet, which is a WebSocket implementation on top of React. Before we get worried about how many tech things I've just talked about, that is literally the end of the tech stack, pretty much. You've got your standard server and your standard JavaScript. You've got a library in JavaScript that allows you to make this WebSocket call, and then you've got React, which is your event loop, and Ratchet, which is your WebSocket implementation on the server. So you've got those two things on the server and those two things on the client. That's it, so don't think there's loads of stuff here; that's it for the tech stack. So, React is a PHP implementation of an event loop that uses select calls by default, which are really slow system calls. And that's why React and Amp and other PHP implementations of event loops say: please install, if you can, libevent or libev. libevent and libev are cross-platform C implementations of the event loop.
So effectively, when you call interface methods on this library, if you have these libraries installed, it will choose to use libevent or libev, and if you don't, it will fall back to the crappy, slower PHP version of the event loop. And that's why these libraries suggest you use those interrupt-driven-I/O-capable C libraries, because they're a hell of a lot more performant. So, very shortly we're going to get into some code. The point with Ratchet is that you need to create an event handler. This event handler class is the thing that handles all your events, so you've got methods like onSubscribe; these are the interface methods. onSubscribe is where a user subscribes to receive torrents whenever we have anything to send out to them. onOpen is called when we open a connection from the client to the server. There are about five interface methods or so that you have to implement. Once we've created this event handler class, we tell Ratchet this is the event handler class, and then you type php index.php, hit enter, move this terminal to the top left like we did with the process control extension thing, and it's ready to handle events. It just sits open, and then you can start doing your clever shit in JavaScript. This code is copied directly from the Ratchet documentation; after today you can just copy and paste some code and theoretically you should be able to start doing some cool stuff. The only thing we've changed here is this, our event handler class, up here. This is the thing we have to create that contains onSubscribe and onOpen. Once we've done this, we've got the loop run at the bottom. We hit enter, like I said, move this terminal to the top left, and then, when JavaScript calls .subscribe, our event handler class's onSubscribe is called and we can do loads of cool stuff in that. So, implementation time.
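For reference, the kind of bootstrap the Ratchet documentation gives looks roughly like this (a sketch, assuming Ratchet is installed via Composer; TorrentEventHandler is my own name for the event handler class):

```php
<?php
require __DIR__ . '/vendor/autoload.php';

use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\Wamp\WampServer;
use Ratchet\WebSocket\WsServer;

// Wrap our event handler in the WAMP -> WebSocket -> HTTP -> IO layers
// and bind it to a port. IoServer::factory creates the React event loop
// for us under the hood.
$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new WampServer(
                new TorrentEventHandler() // our onOpen/onSubscribe/... class
            )
        )
    ),
    8080
);

// `php index.php`, hit enter, and the loop runs forever handling events
$server->run();
```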
I'm going to show you the WAMP spec, which contains those methods I've just mentioned, onOpen, onSubscribe, that you have to implement, and I'm going to show you how you make the JavaScript library calls that trigger those events. That's basically how everything stays connected: because of the spec. The WAMP spec is a sub-protocol that provides RPC and pub/sub patterns. You don't need to worry too much about that, but it is an interface that you have to implement, so you have to have onOpen, onClose, onSubscribe, all that sort of stuff, and you have to handle what happens in your event handler. In my case, we have the concept of a topic here; you can see one of these things, onSubscribe, takes a topic, and in my case the topic was just the string 'torrents'. The JavaScript implementation, the library that allows us to adhere to this WAMP spec, was Autobahn.js; that's the JavaScript library, and we have these methods on the server. That's how the connection works. So, first and foremost, at the very top we've got onOpen. This is the WAMP spec thing; because we're implementing WampServerInterface, this is what you have to have. So I've collapsed this onSubscribe here and this onClose here, and we've got this onOpen, which is the one you care about now. What we're doing here is basically saying that when someone opens a connection to our event loop, this onOpen handler gets executed for us and we get a connection interface object. We then store this connection object in our internal array of clients, which is this thing up here. What I want to point out is that this is just a class, a standard class. I've got this logger here; I've removed the constructor, but you can just put a new Monolog logger in the constructor and dependency-inject stuff in. This is just a standard class.
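A sketch of that connection handling, under the same assumptions (the class and property names are mine; the method names and signatures come from Ratchet's WampServerInterface):

```php
<?php
use Ratchet\ConnectionInterface;
use Ratchet\Wamp\WampServerInterface;

class TorrentEventHandler implements WampServerInterface
{
    /** @var ConnectionInterface[] Internal array of connected clients - this
     *  class holds state for as long as the event loop keeps running. */
    private $clients = [];

    // Called for us when a client opens a WebSocket connection
    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients[$conn->resourceId] = $conn;
    }

    // Called when the client navigates away or closes the browser:
    // stop sending them anything by removing them from the array
    public function onClose(ConnectionInterface $conn)
    {
        unset($this->clients[$conn->resourceId]);
    }

    // Remaining interface methods, stubbed for brevity
    public function onSubscribe(ConnectionInterface $conn, $topic) {}
    public function onUnSubscribe(ConnectionInterface $conn, $topic) {}
    public function onCall(ConnectionInterface $conn, $id, $topic, array $params) {}
    public function onPublish(ConnectionInterface $conn, $topic, $event, array $exclude, array $eligible) {}
    public function onError(ConnectionInterface $conn, \Exception $e) {}
}
```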
The only difference you'll see here from standard programming is that this thing contains state. There isn't a new class instance each time, because you've already hit enter in the terminal and you're running this thing in the top left. You'll have that array of clients, connections opening and closing, and there's always going to be state in this class. So this is really simple: this is onOpen, and I'm going to show you the JavaScript that causes it to be jumped to and executed. I've moved that to the top of the screen; you don't need to care about that. Now you can see that this is the only JavaScript you need to make that onOpen execute in your event loop. We're making a call to this URL of a WebSocket, it's not the secure one, just the standard one, we call open, and when the open succeeds we get a session object. That's literally the code to make someone connect and to be able to store that IP address, or whatever the client IDs are, in the internal array of clients. That's the JavaScript library that conforms to that spec, that allows you to call open and have onOpen called on the server; that's the thing that keeps these the same, that one spec I showed before. That's the JavaScript library that allows the two sides to talk. So that's it, those are the two things needed to make onOpen happen. And it's much the same with subscribe. So I've collapsed onOpen here and opened up onSubscribe. We have a connection and we have a topic; in my case, the topic was 'torrents'. The only thing I've done differently here, and it's a little bit more complicated, is this loop->addPeriodicTimer. The only place I've got this loop from is by passing in the event loop, which is an object you create at the very beginning of this application; I've dependency-injected that into this event handler. So I've passed it in.
It has methods on it like addPeriodicTimer; in this case, every two seconds we're going to make a horrible command line call to go and get that torrent data. And then, once I've got that torrent data, I can broadcast it back to everyone subscribed to that topic. I'll put these slides online, it's really easy to follow this on them. So I'm storing the timer, the topic, and the client connection ID together for every single person that's connected. That way we can remove someone from the internal array of clients, which means we're not going to send any information out to them anymore; when onUnSubscribe is called, we can call methods on that timer object, like stop or cancel, things like that. And the topic, in my case, was just the string 'torrents', so we know who subscribed to torrents. And in the JavaScript, the only thing that's changed to make this happen is that, with this session object we got back from open, we call subscribe on it. So we're subscribing to the topic 'torrents', and whenever we get a result back we console.log 'totally legal torrent data' and then the result.
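Those two pieces, the periodic timer and its cleanup, might be sketched like this (assumptions: the loop is dependency-injected, the torrent data comes from some blocking command line call such as transmission-remote, and Ratchet's Topic::broadcast() sends to every subscriber of that topic; the other interface methods are omitted here):

```php
<?php
use Ratchet\ConnectionInterface;
use React\EventLoop\LoopInterface;

// Sketch of just the subscription half of the event handler.
class TorrentSubscriptions
{
    private $loop;        // injected React event loop
    private $timers = []; // "sessionId:topicId" => periodic timer

    public function __construct(LoopInterface $loop)
    {
        $this->loop = $loop;
    }

    public function onSubscribe(ConnectionInterface $conn, $topic)
    {
        // Every two seconds, fetch torrent data and push it out.
        // NOTE: shell_exec blocks the whole event loop - this is the
        // talk's big problem, solved later with a child process/job queue.
        $timer = $this->loop->addPeriodicTimer(2, function () use ($topic) {
            $json = shell_exec('transmission-remote --list'); // assumed command
            $topic->broadcast($json); // send to everyone on this topic
        });

        $this->timers[$conn->WAMP->sessionId . ':' . $topic->getId()] = $timer;
    }

    public function onUnSubscribe(ConnectionInterface $conn, $topic)
    {
        $key = $conn->WAMP->sessionId . ':' . $topic->getId();

        if (isset($this->timers[$key])) {
            $this->loop->cancelTimer($this->timers[$key]); // stop the timer
            unset($this->timers[$key]);                    // forget the client
        }
    }
}
```

Keying the timers by session and topic together is what later makes per-user cleanup possible when someone unsubscribes.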
So hopefully that's really easy to understand: this thing is constantly looping around, over and over, and whenever we call subscribe from JavaScript, this onSubscribe method is called and we start that every-two-seconds timer. Before I continue, I want to show how easy it is to run this event loop and Apache on the same server, because this is a problem you need to be concerned with: the event loop is running on an IP address and a port, because it is, effectively, a server, so you've got to make sure that WebSocket calls go to the event handler and your HTTP calls go to Apache. I think this was the only line, maybe one more, but this was the only line that I needed. It's a virtual host ProxyPass, which states that WebSocket Secure calls, or WebSocket calls, go to the event handler running on that port instead; you set up the IP address and port they run on, in the code, when you hit enter in that terminal. Or just use Nginx, because it's a lot easier to do, apparently. So now, the high-level architecture, or data flow, whatever you want to call it, of this working: you subscribe to a topic in JavaScript, .subscribe on this topic; then onSubscribe is called automatically for us in the event handler, which is where we store the connection of the person that's connected in an internal array; we start the timer, every two seconds, for that person, using addPeriodicTimer, and we store that timer against that connection in the internal clients array as well. Every two seconds the timer is hit, we get that torrent data with the horrible command line call, and this takes ages, more on that in a little bit, then we broadcast that data back to the client. And then, when the person navigates away from the web page, onUnSubscribe is called automatically for us, or in your JavaScript you have a button that says unsubscribe, which executes .unsubscribe, which then executes onUnSubscribe in the event handler. Then we have
to do cleanup, which is: stop the timer, do an array search in the internal array of clients, stop the timer for that connection, and then remove that connection from the internal array of clients. So this is the life cycle of having someone connect, registering an interest in receiving torrent data, starting the timer to send it to the person who's interested in it, and then removing them from the internal array of clients when they unsubscribe. And in our case we just console.logged the information that came from the server to the client. So, there were a few problems that we had to solve with torrents. I had this grand idea that me and all my mates could use this application that I was building and each have our own individual torrents. But if everyone was subscribing to 'torrents', the next person that opened a window on another computer would be getting everyone else's torrent data; it's not on a per-user, per-connection basis. And also, if I was to open a browser window to ask for this information to be sent to me, and then I opened another browser window, the exact same web page, then that periodic timer would be started again for the same person, so both web pages would constantly be getting, basically, double the rate of torrent data. So yeah, that's a big fail. The way I fixed that, kind of a hack, as most of this was basically a hack, was this: the topic has to be a string, and, you know what, fuck it, JSON's a string. So I basically put structured data into JSON and sent that as the topic. The topic was 'torrents' and the user ID was the user ID I got from PHP, so now every subscription would be unique per user ID. So user ID 245 opens a browser window, user 247 opens a browser window, and they will each have their own torrent data sent to them every 2 seconds, because that topic string is different for each of
those people. But the problem with this is that the same user could still open another web page; it works for different users, but the same user could still open another web page and they'd still get double the amount of information back, because the string containing the topic and the user ID would be exactly the same, so they're basically subscribing twice, once on each of their pages. So I got around this with a sort of authentication-slash-duplicate-data solution. The problem was that I was passing the user ID to a JavaScript variable and then subscribing to the torrents topic, but if you were to pause JavaScript execution... I don't even know why I was worrying about this, because my mates weren't going to fuck around with my code. Well, actually, I know I would have. Basically, one of my mates could pause the execution of the JavaScript, change that user ID, click play, and then they'd get someone else's torrent data if they could guess the user ID, and I was using auto-increments, so obviously they'd eventually find someone else's data. So the way I got around this was to do something really stupid and really clever, which was to basically reinvent how PHP does sessions. Great. I would create a token on the server and pass that unique token to the JavaScript before making the WebSocket call. I'd use random_bytes, well, I would use random_bytes, but PHP 7 didn't exist back then; I'd use random_bytes instead of openssl_random_pseudo_bytes, because that's not fork-safe, and obviously security was a big concern in this application. I'd pass the topic, so now I've gone from the topic and the user ID being sent up when asking for torrents, to it containing a unique token; if you refresh the page, every time you get a different token in the JavaScript variable. So if one user, user 245 in this example, was to open a second page and subscribe to torrents, that string would then have a different token in it, so then they would be
getting individual torrent data instead, so we're not having one page affect another, and they're not having double the data sent back. Now, I know Ratchet at this point has a Session component that works with Symfony; I never even used it, but I know if you have HttpFoundation in your application then you can basically plug and play that. I didn't do that; I just used this, what I thought was a really simple solution. The biggest problem out of all of this, though, was that it was blocking. This stupid five-second call that it took to get the torrent data: during that time, while it was waiting for a response, no one else could register an interest in receiving torrent data, no one else could have that torrent data sent to them, no one else. Nobody could call .subscribe in the JavaScript and have onSubscribe called in the event loop automatically for them; they'd have to wait until this five-second bloody command-line call finished. So it's blocking, and it's the same for database calls: I was checking on the server side, in the event loop, that the user who sent up the ID and token was legitimate, and if not I'd just say, I'm not going to add you to the internal array of clients, which means you're not going to get any information sent out to you. The way around this blocking that I found was you could either use another process to run that torrent call, so it's in a child process somewhere and the event loop can keep continuing, or use a job queue. I really like the job queue solution, and I'll show you both of these within the next couple of minutes, but it requires a good architecture to make sure that your job queue is completely separate from your event loop while still knowing where to send the data back to. I'll show you all that soon. So, option one: in onSubscribe you can do this. Create a new child process which contains our whole command-line call, then add a timer to it, start the timer, and when data is given back, that's when you broadcast it to the client. So you can say: on STDOUT, when
we get data, send it out. And this is great, because the event loop could then continue, so it wasn't blocking anymore. The downside is that this is a process, and it only works on your machine, so it's not a particularly scalable solution. I have no idea why I was even considering this, because there were only about eight of us who were going to use it, so it didn't really matter. But basically you're spawning a new process every time someone asks for torrent data, every five seconds, for every client, and there's a lot of overhead in creating a new process: there's file handles and all of this Linux bollocks that nobody understands and I don't understand. A process is effectively the biggest unit of execution within an operating system, so it requires the most resources. So it's not the best way unless you're doing shitty little projects, and I envisioned this huge, awesome, completely legal system of downloading torrents. So the next step was to use a job queue. The point is that I would then fire off the request for this torrent data into a job queue, so the event loop can keep continuing, and then the result gets put back into the event loop, rather like that diagram I showed at the beginning, which has the event loop in the middle, all the blocking IO stuff outside of the event loop, and the requests going into the event loop asking for the information. So when you set up your Ratchet event loop and all that, and you hit enter and move the terminal to the top left, this is where you put this code, at the very top here. And all we say here, which you can also copy and paste from the documentation, is: when we get information sent to us from anywhere on this machine on port 5555, it's going to call onZMQResponse, a function that doesn't exist yet; we're going to create it in a second in our event handler. So any information that comes over port 5555 hits our method automatically for us. That's what this code is saying. So in our onSubscribe method, when someone asks for torrents, I'm using this thing
called Pheanstalk. Now, this talk isn't about job queues; there's a talk that I link to at the end which is really cool, about creating workers that are properly object-oriented and all adhere to nice SOLID principles and all that, but I'm not going to talk about that. This is basically the only line that I would need to add into my onSubscribe method: put that job shit into a torrents queue. That's it, and then forget about it. So it's fire and forget; that's the important thing. Looking at our new method that we've created here, this onZMQResponse thing down at the bottom: the point of this is that any information that's sent over port 5555 hits this method. We just get some random arbitrary data, and anything could send messages over this IP address and port 5555, as long as it uses ZeroMQ or the right library (you'll see that in a second as well), saying "oh, you're talking shit" or whatever you want to say, and it would pop up on the screen if I ever wanted to show it. So all we do is find the client that we want to send the information to and then broadcast the data to that client. That's how easy it is to inject stuff back in from the job queue. So the high-level data flow and architecture of the new way is: you subscribe in JavaScript to the torrents topic and it automatically calls onSubscribe for us, like before; we store that connection in an internal array, like before; but instead we then use Pheanstalk to just add it to a job queue, or whatever job queue you want to use, fire and forget, forget about it. Then you have all these workers, as many as you want, that pull that job out of the job queue and do all the processing, so they do this horrible command-line call that takes ages, and the result gets injected back into the event loop, into our onZMQResponse method that we've just created here. That's how we get stuff back into the event loop from somewhere else, and here it's a worker that's doing it, but you can do a command-line call or whatever the hell you want.
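As an aside, that first option, offloading the slow command-line call into a child process so the event loop keeps spinning, can be sketched with plain `proc_open`. This is a simplified, hypothetical stand-in: the `echo` command takes the place of the real five-second torrent call, and there's no React or Ratchet wiring here, just the offloading idea itself.

```php
<?php
// Sketch of "option one": run the slow torrent-status command in a child
// process so the event loop isn't stuck waiting for it. Plain proc_open is
// used here instead of React's child-process component; the echo command
// is a stand-in for the real five-second torrent call.
$command = 'echo \'{"torrent": "ubuntu.iso", "status": "downloading"}\'';

$descriptors = [
    0 => ['pipe', 'r'], // child's stdin
    1 => ['pipe', 'w'], // child's stdout: we read the torrent JSON from here
    2 => ['pipe', 'w'], // child's stderr
];

$process = proc_open($command, $descriptors, $pipes);

if (is_resource($process)) {
    // In the real event loop you'd poll this pipe without blocking; here we
    // just read everything the child printed once it exits.
    $json = stream_get_contents($pipes[1]);
    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);

    // This decoded array is what you would broadcast to subscribed clients.
    $data = json_decode($json, true);
    echo $data['status'], PHP_EOL; // prints "downloading"
}
```

In the real setup you would watch the child's stdout from inside the event loop (or use React's child-process component, as in the talk) rather than blocking on `stream_get_contents`.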
you'll see me do a command-line call in a second, and then we broadcast that data back to the client. So this is separating the processing out, like that diagram I showed you at the beginning, and injecting it back in from the job queue when it's done, which is then broadcast to the client. The good thing about this is that it adheres to the original idea that I brought up at the beginning: the event loop is just an intermediary for registering an interest in receiving data and sending data out, that's it. Nobody's blocked from registering interest, because this work is all done outside of the event loop by workers, and as long as they have Beanstalkd and ZeroMQ bindings, of which there are so many languages, anything can do it. So, this is the part that completely fucked up last time, so please turn off all your torrents so I can actually show you some completely legal stuff. Great, okay: torrents on the right, completely legal stuff. As you can see, we've got this Ubuntu stuff happening over here that's currently paused. Now, this is the application that I created for me and my friends, from a theme that I bought online, of course; also ignore the bit that says "stream mobile videos". So I'm going to click Downloads over here, and the other important thing I need to do is show you my code. This is the code that you saw near the beginning, which is actually starting this event loop. I hit enter on the web socket server and moved this terminal to the top left; I'm using PhpStorm here, but that's all you need to do. And in the log I've got "web socket server starting", which is just me logging something out when I start this. So if I refresh this Downloads page now... yes, nobody's torrenting, just me, perfect. So the important thing to see (oh god, it's cutting off at the bottom, okay) is that this torrent information here is the thing that's getting pushed to us every two seconds. The other thing you need to notice here is that it says "paused" for all these torrents, because they are paused, so
going back to these, if I resume them, and ruin everyone else's talks, and then go back to this, now you can see that that information says "downloading". That information has changed because every two seconds it's getting this torrent data and pushing it. And the cool thing is, if I show you the network tab for this: even though that information is being pushed to us every two seconds, there's no information in this network tab, because it's over web sockets. And then if I pause these again, go back to this, and go back to the console, when it comes up it changes from "downloading" to "paused" in the JSON, which I thought was really cool. We've basically eliminated the overhead of the HTTP request; we're just having data pushed to us now. So going back to the code, I want to show you how I then inject stuff back into the event loop and get it working. These are the four lines of ZMQ stuff that I needed to add, and if you remember that onZMQResponse thing, well, yesterday I did this: I'm saying anything over port 5555 hits onTerminalTest, a method that I'm just about to show you, which I've added in my event handler. I've called it onTerminalTest because I'm going to do it from the terminal, to send this stuff into the event loop. We loop around all of the clients that are connected, get the topic object (I've called it originalTopic), and then we broadcast the data that gets sent in over port 5555 to everyone that's subscribed to that topic. So here is the only code that I need to get that stuff injected into the event loop, and this is the code that you would use in your workers to put stuff back into that event loop. So, hopefully it's working: we're saying that we're going to push this information over port 5555, and this is the data here, we're just JSON encoding it, because everything is JSON encoded. And I'm going to call "php terminal-test.php", enter, bam; your worker would do that. So then if we look at
this stuff that's coming through from the server, you can see here we're sending some data, it's working, and then everything else continues. So we can inject stuff whenever we want while this big pipe is open and this loop is going around over and over and over, which is really cool. So that's how we inject stuff back into the event loop, and the demo didn't fail, which is pretty cool. Okay, so what about scaling? I was in the previous talk and they talked a bit about HAProxy, which I've honestly never used, so this is one way you could scale, theoretically, in your own code: you could write this and have complete control of scaling these event loops. Your client (your JavaScript client, or however you want to do it) can talk to a database which contains a list of all your event loops. Now, event loops are really performant; you'll only need to do this if you've got a lot of shit happening in these event loops, but you can scale these things. The database can either round-robin through these IP addresses that you want to connect to, just do a random choice between them, or you could do that onZMQResponse thing and ask each event loop "how many clients do you have connected?" and then choose the one with the lowest number of clients connected. Call that the chosen web socket server (I think this is going to be cut off at the top now). The event loop then puts that torrent job into the queue, the worker picks it up and does whatever it needs to do to make that torrent call happen, and when it's finally done it sends the information back over ZMQ, and then it's broadcast back to the client. And this is scalable, and the reason it's scalable is because you can add a new event loop: set it up, get your DevOps guys to bring it up, add that event loop's IP address and port or whatever to your database, and for the next client connecting, that event loop automatically becomes a part of the decision-making process for which event loop to choose, and
then you can add and scale workers, and it's pretty cool. So basically you can do this and you'd be in control of the logic for scaling your shit. But we're still polling on the server, say every five seconds, to get this torrent data. So we've eliminated the overhead of the request-response from the client to the server, but we're still doing all this polling on the server; "that's not my problem", basically. For real-time data you have to do daemon programming, and if you Google daemon programming, it's not basic PHP stuff, okay, you have to have a completely different mindset (I see someone nodding in the audience). For example, if I wanted to do a "monitor CPU usage" example, and I wanted to have real-time data sent to the client so I could basically display top in the browser, I would do this the crap way: I'd have a Python script (honestly, I'd do a Python script) that runs top, parses the output, and then allows me to display it in the client. Now, if you wanted to do it the proper way, you'd have to do what I've copied and pasted off Google and Stack Overflow: you'd have a C++ binary, create a Host Resources MIB, and then you'd have to sleep on a pthread mutex in between to make it work. So what I learned from this is that using an event loop is not automatically non-blocking or asynchronous; the easiest way to not block is to use threads, processes, or an external library. All connections between the client and the server you can do over SSH and Torify if you want to keep anonymous. I could have done this with Node.js, but obviously none of us want to sell our souls here. And this horrible command-line shit: I only found out about that right at the end, which is unfortunate and often the case. I also found out that you don't have to broadcast on a topic; a connection itself has methods, so you can send information over that specific connection, and if you want to do some real server shit then that's your problem. So your choices with WebSockets
are React and Ratchet in PHP, and Aerys is the latest iteration of this, and it's pretty damn fast. autobahn.js was the client-side library that I used, and libevent and libev are the apt-get things you have to install to make the event loop happen in C instead of in PHP (libev is the Windows one, but obviously no one uses Windows here, because we're all decent developers... if anyone does use Windows). Where you can go next: get a simple example of this working; use processes first to offload the work, like I did with that React child-process thing; watch this talk, which I highly recommend for getting a job queue working, so you can write object-oriented code and have it completely separate from the rest of everything; and then you can move on to actual async stuff, which is where you use promises and callback hell and all that stuff in the event loop itself, so you can actually do stuff in the event loop without worrying about it. But I didn't do any of that; I just offloaded everything. So, I'm James, I'm on Twitter, I do PHP 6 jokes and object-oriented stuff. You have to watch this talk, it's seriously awesome; if you use the knowledge from that talk and the event loop stuff from this one (I'll put the slides up, which I already have, at the bottom), then you can do some really cool stuff. And this is the application that I created in the end, with clearly just an Ubuntu torrent here. How resilient is the web socket connection to changing pages in the web application?
So that's a good question. In fact, when I did my thing, every time I changed page I'd have to re-instantiate a web socket connection, which would call onOpen again, and onSubscribe, which obviously isn't the best way of doing it. The way around that, I guess, would be to keep that client in the internal array of clients and not remove them until a certain timeout has passed without them reconnecting, or something like that. So you would still have to do the client-to-server thing, saying "we're re-establishing a connection", but that's the only way I can think of... unless... yeah, that's basically the only way I can think of doing it. If you've got a better way, I've still got this project on here, so please tell me afterwards. Did you do video streaming on the other app, so that you can start watching the... Okay, alright, I'll just be honest with you. So what I did, and I might have this coming for me: I had an Xbox 360, right, and my plan was to download, legally, that thing with the bunny, which is completely legal to watch, and they have torrents, I think you know what it means. And basically, after they had finished downloading, I'd have a button that said Convert, and you click Convert, it does the job with ffmpeg in the cloud, and then it would move it to a certain open directory. So I could go on my Xbox 360, go on a browser thing, go into an explorer (I think it was), go to a certain URL for the video, because it had been converted to .webm, and that's how I would watch it in a browser. So yeah, I had the whole thing going, so yeah, you did see that, and it was all legal. Anyone got any questions that don't relate to video stuff? What are the use cases you've seen for websockets, and maybe, for people that have your sort of skill set, how to get them going? So I've seen people use websockets for news notifications, notifications that the server is going to go down soon; I guess that's just a general websocket question, but if you are making lots of calls that are polling over and over for
data, typically I would look at whether websockets are the right way for that sort of thing. So if you're doing a setInterval in JavaScript and making an Ajax request in that setInterval to update a table using DataTables or something like that, then I would consider websockets as a potential solution, but you've got to have the infrastructure, the budget, and the right architecture mindset to be able to sort that out, so it depends. The reason for my previous question was to give an answer to that question: I'm already using RabbitMQ to do a lot of processing, some of which can take multiple minutes because of what it's doing, and so I was asking about how resilient the web socket connection is, because oftentimes someone will have gone to the page which triggers that process in the queue, and I actually want to tell them that it's finished, but they could have navigated to anywhere else in the application by that time, so it would be great to be able to have that web socket connection reopened so that when the process that was already in the queue did finish, they got the notification. Oh yeah, definitely. I mean, for example, if you're using Twig and you've got that templating engine, you would just check to see if that connection still exists, or wait until it does, before sending that information out. So yeah, you can definitely do that, and that would be an answer to that question at the back about what real use you could put it to. Nice, thank you. No other questions? Okay, thanks very much.
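That last answer, only broadcasting to connections that still exist in the internal array of clients, can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern from the talk: the class name and the callback shape are made up for the example, not Ratchet's actual API.

```php
<?php
// Minimal sketch of the "internal array of clients" pattern: connections
// register under a token, and a broadcast only reaches clients still in the
// array, so a user who has navigated away simply gets nothing.
class ClientRegistry
{
    /** @var array<string, callable> token => "send" callback for that connection */
    private array $clients = [];

    public function subscribe(string $token, callable $send): void
    {
        $this->clients[$token] = $send;
    }

    public function unsubscribe(string $token): void
    {
        unset($this->clients[$token]);
    }

    /** Send JSON to every still-connected client; returns how many received it. */
    public function broadcast(array $data): int
    {
        $json = json_encode($data);
        foreach ($this->clients as $send) {
            $send($json); // in Ratchet this would be the connection's send method
        }
        return count($this->clients);
    }
}

$received = [];
$registry = new ClientRegistry();
$registry->subscribe('token-a', function ($msg) use (&$received) { $received['a'] = $msg; });
$registry->subscribe('token-b', function ($msg) use (&$received) { $received['b'] = $msg; });
$registry->unsubscribe('token-b'); // this user navigated away

$sent = $registry->broadcast(['status' => 'finished']);
echo $sent, PHP_EOL; // prints 1: only the still-connected client got the update
```

A reconnect-with-timeout scheme, as suggested in the answer above, would just delay the `unsubscribe` call until the grace period expires without the client re-registering its token.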