My name is Mathias Meyer. This is me moonlighting in my second job as a conference photographer, and that's Doc Rockford running out of the picture, just as a side note. I work for a company called Basho, the makers of a distributed data store called Riak. Let's dive right in. Has anyone heard of Node.js? Yeah, show of hands, excellent. But we're not going to talk about that, because it's JavaScript; this is a Ruby conference, so we're going to talk about Ruby. Node.js did make one thing very popular over the last 18-ish months, though: evented, asynchronous programming. And that's what we're going to talk about, because Ruby has had something like that for more than five years now. It's called EventMachine, and it's basically evented, non-blocking IO for Ruby. We're going to look in good detail at what that means in a minute. In the traditional IO workflow, when you're talking to a network, you open a socket in the very literal sense, you read and write through the socket, you wait for something to happen, for data to go out or come in, and you rinse and repeat. That's the basic construct you usually use to build a server in any language that is procedural in that regard. You could build that in C, you could build it in Ruby, whatever you fancy. The basic example is a single-threaded server that does echo, which is the hello world of network servers, and it can only serve one client at a time. It's pretty basic. A Rails workflow is pretty similar: a request comes in, you ask the database for some data, you wait for it once again, and you render a response. As a picture, it could look like this. This was the closest thing I could find to what a Rails request looks like, but you get the idea.
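The single-threaded echo server the talk refers to can be sketched with nothing but Ruby's standard library. This is a minimal reconstruction, not the speaker's slide code, assuming we serve exactly one client and then stop:

```ruby
require 'socket'

# The hello world of network servers: a blocking, single-threaded echo loop.
# Every step blocks, so it can serve exactly one client at a time.
def echo_one_client(server)
  client = server.accept          # open: block until a client connects
  while (line = client.gets)      # read: block until data comes in
    client.write(line)            # write: send it straight back out
  end                             # rinse and repeat until the client hangs up
  client.close
end

server = TCPServer.new('127.0.0.1', 0)  # port 0 picks a free port
# echo_one_client(server)               # uncomment to serve; blocks the process
```

A second client connecting while the first is being served just queues up in the kernel's accept backlog: nothing happens for it until the first client disconnects.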
You do a lot of waiting. You wait for the database, you wait for data to come back from somewhere, you wait for data to be sent somewhere. That's a lot of waiting, and you could do a lot of other things in that time. The basic gist is that IO usually blocks the flow in a traditional procedural style, and you could try to use threads to solve that problem. You could wrap every connection in a different thread, but the problems start to accumulate when you have 10,000 clients trying to connect to your server. Then you have one thread for every client, and thanks to MRI's threading model you can still basically only serve one client at a time. At least all the other threads can wait without bothering the main workflow or blocking any other thread that's currently doing something. It's an improvement, but it doesn't exactly feel right. It can't really be the whole solution to the problem, especially not when you look at the C10K problem, an article Dan Kegel wrote about 12 years ago about servers being able to handle 10,000 client connections at one time. When you're doing threads, that gets kind of terrible. I'm not saying threads are terrible, it just gets a lot harder to handle, and there should be a simpler solution. I highly recommend reading that article, because from a very low-level standpoint it's the introduction to everything that EventMachine, and even Node.js and every other evented IO framework, is based on. Even 12 years later it's still a bit sad that we have to reference that article, but it's hard to serve 10,000 clients with just simple Ruby. So why would you need evented IO? Because lots of things are IO bound.
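The thread-per-connection approach described above can be sketched with the standard library too. The names here are illustrative, not any library's API:

```ruby
require 'socket'

# One thread per client: a slow client no longer stalls the others,
# but 10,000 clients would mean 10,000 threads -- the C10K problem.
def threaded_echo_server(server)
  loop do
    client = server.accept            # the accept loop hands each
    Thread.new(client) do |conn|      # connection its own thread
      while (line = conn.gets)        # this thread blocks on IO;
        conn.write(line)              # the others keep running
      end
      conn.close
    end
  end
end

server = TCPServer.new('127.0.0.1', 0)  # port 0 picks a free port
# threaded_echo_server(server)          # blocks: the accept loop runs forever
```

On MRI the GIL means only one of those threads executes Ruby at any instant, but threads blocked on IO do release it, which is why this is an improvement at all.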
When you build applications, your database is IO bound, calling external services is IO bound, a lot of things are, and there's usually little processing involved. The only processing you do in a web application, for example, is mostly rendering a view. That's the most important part that does any noteworthy processing, that hogs the CPU. And in the time you're waiting for the database to come back, you could be rendering the response for a different request. So there's room for improvement. Some examples of things that are very suitable for evented IO: soft real-time apps, for one. Whenever you open a WebSocket, whatever is behind it should in some way be evented, because it's going to be terrible to scale up if you use, for example, a separate thread for every WebSocket client connection. There's going to be a talk on that tomorrow, and you should go to it to see how that works out in production, how to build WebSocket publish/subscribe systems using EventMachine. Then there are streaming APIs, like the Twitter search API or user streams. It would be much more convenient to have something where, whenever a new search result comes in from the Twitter API, I just do something with it, and if nothing comes in, I don't want to be bothered; I may as well use the time to do something else. Messaging is another traditional example. If you've used something like RabbitMQ before, the standard Ruby driver for RabbitMQ is based on EventMachine, and that makes a lot of sense. Publish/subscribe systems too.
One client pushes a message out, and something on the server back end pushes that message to a lot of other clients, which brings me back to real-time web apps; it's a sort of sub-use-case of that. And if you have simple APIs that basically just go to the database, fetch something, and return it, and could do something else in that time, you may as well look into evented IO. If you think about it, a lot of Rails apps do just that, but I'm going to get into Rails a little later. The fun thing is that you can build any kind of network server with it. Obviously, when your network server does a lot of processing, it gets hard again, but EventMachine is pretty nice in that regard, and I'm going to show some examples. The basic gist is that evented IO makes a lot of sense when throughput is more important than processing. When you just want to push data from one end to another, that's a nice fit for evented IO, for EventMachine, for Node.js, what have you. It's just terrible with blocking IO: imagine a pub/sub system where one client pushes messages out and a thousand clients are subscribed to that one topic. That's a lot of threads, or a lot of processes, which sadly has always been the only well-known way in Rubyland to scale out an application: just fire up another process. It shouldn't be the only solution. What we really want is a Hollywood-style model: don't call us, we'll call you. You don't want to poll a socket for data. You just say to the system: whenever something comes in, call me. Whenever a new job comes in, call me. Whenever new data comes in, let me know and I'll do whatever, or I'll just wait a little bit longer.
So, the basics of EventMachine: how does it work? Not like magnets. It's based on C++, and it's not scary at all. I spent a lot of time over the last few days browsing through the source code of EventMachine, and it's a decent code base. If you want to understand evented IO, I highly recommend browsing through the source yourself. Just get over the scary moment of C++; it's basically just a lot of procedural code. I wanted to figure out how evented code works at the operating-system level, so it was a nice opportunity to dig into the code and read some C++ again, which I hadn't done in a long time. EventMachine is based on two patterns, if you will. One is the event loop. Imagine a while-true loop; I'm pretty sure everyone has at some point written a while-true loop and realized it runs forever. That's basically what EventMachine does as well, but on every iteration EventMachine does a couple of things. It goes through a list of sockets and checks each one for new data that has come in. It fires a bunch of timers, if you set any up. And then it fires all the logic you've assigned to, for example, a single network socket. That is the event loop, and when you look at the code, the event loop is really just, I think, about five lines of actual code in EventMachine. The other pattern it's based on is the reactor pattern. Not in the Fukushima style; you have to think in a different direction for that. It doesn't come from nuclear power plants but from the fact that you react to something. Whenever you open a socket and wait for data, and the data finally comes in, you react to it.
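Both patterns can be sketched in plain Ruby. This is not EventMachine's code, just a hand-rolled toy reactor built on the same idea: one while-true loop that fires due timers, then waits on watched sockets and reacts to whatever came in. All class and method names here are made up for illustration:

```ruby
require 'socket'

# A toy reactor: the while-true event loop in miniature.
class ToyReactor
  Timer = Struct.new(:fire_at, :block)

  def initialize
    @handlers = {}   # IO => callback to react to incoming data
    @timers   = []
    @running  = false
  end

  def watch(io, &block)
    @handlers[io] = block
  end

  def add_timer(seconds, &block)
    @timers << Timer.new(Time.now + seconds, block)
  end

  def stop
    @running = false
  end

  def run
    @running = true
    while @running                      # the event loop
      due, @timers = @timers.partition { |t| t.fire_at <= Time.now }
      due.each { |t| t.block.call }     # fire the timers that are up
      if @handlers.empty?
        sleep 0.01                      # nothing to watch yet
      else
        ready, = IO.select(@handlers.keys, nil, nil, 0.01)
        (ready || []).each do |io|      # react: run the logic assigned
          @handlers[io].call(io.read_nonblock(4096))
        end
      end
    end
  end
end
```

EventMachine's real loop lives in C++ and uses epoll/kqueue rather than a plain select, but the shape (check timers, check sockets, dispatch callbacks, repeat) is the same.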
Because EventMachine tells you: hey, here's some new data for you, do something with it. Do whatever. That's why it's called a reactor. So the basis of evented programming is that you move away from procedural programming, where you get some data from a socket and return it, and you move to callbacks. The closest thing to callbacks in Ruby is obviously blocks. Instead of just calling a method, you call a method, give it a block, and say: here's this block, call it whenever something comes in. That comes out differently from what you're used to with blocks in Ruby, because usually you give a block to a method and the method at some point executes the block inside its usual workflow. Here it actually means: fire this block whenever. It can be 10 minutes, it can be 60 minutes, or it can be 10 milliseconds. Callbacks. Delicious. A lot of callbacks in your code. EventMachine runs callbacks, containing whatever logic you have, in reaction to a couple of events: when a connection is established (this is specific to network sockets), when you receive data from a socket, when a connection is closed, or when a timer fires. Timers are nice because you can say: in a second, fire this block. And you can do evented IO for sockets, for external processes, for the keyboard (kind of old school), and for stuff like file watchers: when this file changes, let me know. Which is nice too. At its core, EventMachine is a single thread in one process, and the event loop runs in that single thread.
So, okay: how is this different from your usual programming model in Ruby? The main difference is that your code will not run procedurally; your code will run at some point in the future. It doesn't really matter much that it's single-threaded, and it has one neat property: you don't have to deal with threads. You can, with EventMachine, but you don't have to. In the event loop, EventMachine will go ahead and run a couple of IO operations, like sending out data, fetching new data from sockets, or checking sockets for new data, and basically one processing operation at a time. So that one processing operation you've assigned to one single server connection: make it a fast one. Make it really fast, because that operation blocks everything else. Your big while-true loop will be blocked, and you don't want that loop to be blocked. Whatever processing you do inside EventMachine needs to be very, very fast. I'm not saying it can't ever take more than, say, 10 milliseconds, but the faster you can get it, the more efficient your event loop will be. This is the event loop: it's basically not more than calling EM.run and giving it a block. Whatever runs in that block will run forever, until you explicitly tell it to stop or until you kill your program. You can use threads in EventMachine, but if you're doing IO, if you're sending or receiving data, it always has to happen on the reactor thread. We've seen some odd things happen in production if you don't do that; you get some weird errors from EventMachine. So if you're sending something over the network, it always has to happen on the reactor thread.
There's a simple way to achieve that, and I'm going to show some examples. Let's start with the basic example of the echo server, which, as I said, is the hello world of network servers; you probably built one in your first semester at college. This is how you would do it in EventMachine. When you compare it to the example I showed earlier, the traditional socket example, there's a lot less boilerplate code. You don't have to explicitly open the socket. You just say: EventMachine, I want a server on this port, listening on this IP address, and whenever some data comes in, receive_data is called, and I'll just send the data back out. That's your basic echo server with EventMachine. You don't write any boilerplate networking code, which is nice, because you can focus on your callbacks, or rather on the application logic that does something with the data that comes in. How do we get rid of the callbacks? You can specify a module instead. You create a module for your echo server with a predefined method called receive_data; that's an EventMachine convention. This method is called whenever data comes in, and we just send it back. Instead of a callback, we give the start_server method this module, and EventMachine will mix it into a connection and call this method whenever something comes in. The client looks pretty similar. Once again, we put it in a module. The difference is a method called post_init, which is called when the connection has been set up, and we use that moment to send our hello world out. At some later point, receive_data will be called again, and we get data back.
And you wrap it all in EM.run. An HTTP server in EventMachine would look like this; this is a bit of a longer example. It uses a little library called EventMachine HTTP server, a C-based HTTP server, and you get some niceties on top: you don't have to parse HTTP yourself. This is just a basic hello-world HTTP server. Out of the box, EventMachine supports a lot of protocols for client programming: HTTP, SMTP, memcached, Redis, MySQL, Postgres, and a lot of other things. There's a whole bunch of protocols that come with a default EventMachine install, which you can use to talk to memcached, to your database, or whatnot. Let's look at an example: how do I fetch my IP using EventMachine? There's a nice library called em-http-request by Ilya Grigorik. He's done a lot of stuff with EventMachine; I'm going to be showing a couple of his libraries, and it's pretty impressive what they do. This makes a request to JSON IP, which returns my current IP as a JSON string. Once again, you give it a callback; you can never have enough callbacks in EventMachine code. So we have that IP. Now we parse it with a JSON parser, which at its most basic core is processing. And then we store it in Redis, and we get another callback. We can use that callback to send an email. So the workflow is: fetch this data, then store it in Redis, then send an email. And we end up with this. It's the problem with evented programming: you end up with a lot of callbacks in your code. You could call it spaghetti code; it's not necessarily, but it certainly isn't pretty. When the SET command in Redis finishes, you get a response.
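The fetch-parse-store-mail chain described above ends up shaped roughly like this. The helpers here (http_get, redis_set, send_email) are hypothetical stand-ins faked on a toy work queue, not em-http-request's or any Redis client's real API; the point is the shape of the nesting:

```ruby
require 'json'

PENDING = []   # stand-in for the reactor's "call this later" queue

# hypothetical async helpers: each finishes "later" by queueing its callback
def http_get(url, &cb)
  PENDING << proc { cb.call('{"ip":"1.2.3.4"}') }
end

def redis_set(key, value, &cb)
  PENDING << proc { cb.call("OK") }
end

def send_email(to, body, &cb)
  PENDING << proc { cb.call(:sent) }
end

# the callback pyramid: fetch, then parse, then store, then mail
http_get("http://jsonip.com/") do |response|
  ip = JSON.parse(response)["ip"]            # the only real processing
  redis_set("my_ip", ip) do |status|
    send_email("me@example.com", "your IP is #{ip}") do |result|
      $mail_status = result                  # three callbacks deep already
      puts "mail: #{result}"
    end
  end
end

PENDING.shift.call until PENDING.empty?      # drain the toy event loop
```

Each additional step in the workflow adds another level of indentation, which is exactly the spaghetti the talk is complaining about.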
When that response comes in, you send an email, and give that another callback so you can tell the user: okay, your email was sent. It's a stupid example, but you get the basic idea of how terrible callbacks can get; the code is full of them. Here's how that would look in Node.js. Look at all the curly braces. I'm not hating on Node.js here; I just hate curly braces, that is all. It's not that different; if you really compare it, it's not that much more code, just a lot more curly braces. So wouldn't it be nice if we could just stick to our usual procedural model? Like in the old days: we make a request, we wait, and when the data comes in we store it in a local variable, then we parse the JSON that came back, then we store it in Redis, and so on, all in much more readable code. Anyone heard of Fibers before? Anyone used Fibers? Excellent. I can totally relate. Fibers are terrible. The basic gist: you can't use them on Ruby 1.8, but you shouldn't be using Ruby 1.8 anyway. Is anyone still using Ruby 1.8? Yeah, you need to stop doing that. You need Ruby 1.9, because fibers were introduced in Ruby 1.9 as a means of lightweight concurrency. If you look at stuff Python has had for years, at its core this is called a continuation or a coroutine; if you read Wikipedia you get in excruciating detail how the two are different and how one basically builds on the other. Basically, fibers are a means to say, somewhere in your program's flow: okay, I'm going to stop here and return control of the flow to something else, until you explicitly tell me I can start working again.
I had this eureka moment just a week ago about how that works with EventMachine. Xavier Shay had just posted a nice article where he dug into a new library built by Ilya Grigorik and his company, and he wanted to find out how they pull off this synchronous programming style while still being asynchronous behind the curtains, using EventMachine. I looked at one code example in there and, yeah, that day I finally understood fibers. The problem with fibers is that in most books you only find examples showing generators and the like, where you have a bunch of words and you use fibers to iterate over them. That's why no one has used them in production yet, because when do you do that in production? When you have to iterate over a set of words, you just use an iterator. The basic fiber support in Ruby 1.9 is very, very simple, and you actually need to require 'fiber' explicitly to get the full coroutine support. I'm going to go through this in some detail. This is the em-http-request code using fibers. You get the current fiber, which assumes some fiber has been set up before; I'll get to that in a minute. You still use your callbacks, but instead of doing your application logic in the callback, you call fiber.resume and give it the request object, which is this bit here. And down here, you use Fiber.yield to say: okay, I'm ready to give up control. From here on, I'm going to be sleeping; just let me know whenever something happens. And something will happen in one of the next iterations of the event loop.
In one of the next loop iterations, some IO will come in, or an error will be raised, which is what this errback method is for: the callback that runs on a connection error. Something will happen and wake the fiber up, and the flow continues here. Fiber.yield returns the object that was given to fiber.resume in the callback. So in a nice way, we just got our request back; we got the object that was given to the callback, and down here we still have access to it and can fetch the response. We just rewrote all our asynchronous, evented callback code into procedural code, just using fibers. If you have a question on fibers, please ask, because they really are terrible; it took me a long time to understand them, unfortunately. But that example made it click for me, because it's a real-world application of fibers, not just some weird text analysis. To really have the full flow, you have to wrap it in a new fiber, because in every process there's always one main fiber, but you can't use that one; you always have to create a new fiber if you want to run fiber code in that block. A new fiber does nothing when you create it; you have to call resume on it, and then it starts executing the block until, down here, Fiber.yield is called. Then, when an event comes in from the socket, boom, the method returns. It is really weird, but together with EventMachine it suddenly made sense to me. And once again, Ilya Grigorik built a nice library on top of all that, called em-synchrony, which does all the fiber magic for you.
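That resume/yield dance, stripped of EventMachine, fits in a minimal sketch. Here async_fetch and the TASKS queue are stand-ins for a real reactor and a real async client, not anyone's actual API:

```ruby
require 'fiber'   # for Fiber.current

TASKS = []        # stand-in for the reactor's work queue

# a hypothetical async call: its callback fires on a later "tick"
def async_fetch(&callback)
  TASKS << proc { callback.call("response body") }
end

# callback-based underneath, procedural on the surface
def fetch_sync
  fiber = Fiber.current
  async_fetch { |result| fiber.resume(result) }  # the callback wakes us up
  Fiber.yield      # give up control; returns whatever resume is given
end

worker = Fiber.new do
  body = fetch_sync          # reads like blocking code, but suspends here
  $result = body
end

worker.resume                          # run until Fiber.yield
TASKS.shift.call until TASKS.empty?    # the "event loop" fires the callback
puts $result
```

Note the two halves of the trick: Fiber.yield parks the caller without blocking the loop, and the value handed to fiber.resume inside the callback becomes the return value of Fiber.yield, so the calling code never sees a callback at all.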
It patches libraries like em-http-request, database libraries, memcached libraries, whatnot, to use fibers instead of plain callbacks. What you get is a nice bundle of libraries you can use in a procedural way. You still get all the benefits of evented IO, but your code looks a lot nicer. But now you have fibers, and the simplest comparison people make is to call them lightweight threads, because the stack associated with them is a lot smaller than with threads. Still, you're basically starting to do your own fiber scheduling and pooling, if it comes down to that. Aaron Perez posted a nice trolling cartoon about that; I wish I had found it, because I loved that one. In the end it's like: oh, I'm managing my own fiber pool. Oh no! Back at threads. So fibers are pretty insane. I finally understood them through EventMachine, and I hope I could help you understand them too; if not, please see me afterwards, or raise your questions now. There are a couple of nice libraries built on top of EventMachine by companies. This one, for example, is used at GitHub. It's called ProxyMachine, and it's basically a content-aware TCP router, which means you can do something as simple as this: those three lines of code give you a proxy that transfers all the traffic coming into a single socket to Google. Because Google wants all your data anyway, you may as well set up a proxy and send it all over. That's kind of cool, and they're actually using it: whenever you clone a repository, they use ProxyMachine underneath to figure out which of the file servers the repository is located on. It's pretty neat.
They have some examples in the README of how they do that, and that's where the content awareness comes in: you can inspect the traffic at the TCP level, or at a hybrid layer-seven level. If it's an HTTP proxy, you can start parsing the HTTP yourself and act on it. But at the most basic level, you can use this to proxy all the traffic coming into that proxy to Google, or whatever; it doesn't have to be Google, it could be Microsoft. A different take on that is em-proxy, once again written by Ilya Grigorik. Did I mention he's done a lot of stuff with EventMachine? He's been a very early adopter; he was playing with the fibers idea, I think, two years ago already. And now that Ruby 1.9.2 is considered stable, there's no reason you shouldn't at least be playing with this kind of stuff, because it's really neat. This proxy is basically an HTTP proxy; we're going a layer up in the TCP/IP stack. This example starts my proxy on localhost port 80 and sets up two connections, one production, one staging. My staging connection obviously goes to Google again. What that little snippet of code does is: whenever a request comes in on that port, it sends the request to both servers, and down here, it only returns the response if it came from production. I'll let you think about that for a minute. What you can use that for: you have a staging system and a production system, and you can duplicate your production traffic onto your staging system to measure whether changes you've made have any performance impact. That's pretty nice; people actually do something like that in production, and it's basically where I stole that example.
That's neat because you can make this any kind of list of servers to send the data to, and you get an application-aware HTTP proxy. Of course you can use Squid, Varnish, stuff like that, but at some point it makes sense to sprinkle some logic on top that is specific to your application, and if you program Ruby all day, you might as well use Ruby for that. The latest example I already mentioned is called Goliath. It was released about a month ago and is once again written by Ilya Grigorik. What it actually is, is an evented web framework; it has somewhat of a Rack API and it uses fibers for everything, so you can only run it on Ruby 1.9.2. The nice thing is that if you're used to building Rack middleware and the like, it's very easy to get started with Goliath. Once again, a hello-world example; it should look very familiar if you've done any Rack. But that's boring; you want to add fiber power. Every request coming into Goliath is wrapped in its own fiber, and it already uses em-synchrony underneath, so you get all the cool stuff: you don't have to use callbacks in your web API, and you don't have to do anything for that, because Goliath already does the whole plumbing overhead for you. You just talk to your database like you're used to. I don't know about you, but I find that very appealing, because I hate callbacks and curly braces. And there's Tramp, which has a nice name and is an evented ORM. It's made by, what's his name? Pratik. You know who I'm talking about. He started building a framework called Cramp, which is an evented web framework; evented web frameworks are very much in style right now.
Tramp turned out to be the evented ORM companion to that. It looks very similar to ActiveRecord code, but instead of a synchronous flow, you're using callbacks again: whenever you call save, you get a status object, you can check on that status object and then do other things, assign more callbacks, and you end up with a lot of callbacks. This is just an example from Cramp. Unfortunately, it's hard to set up; it doesn't work very well out of the box. It's more of an example; there's been a lot of experimentation with EventMachine and building libraries on top of it, but next to no awareness of it. Has anyone actually used EventMachine in production? Excellent. There should be more hands, because this is what the Node.js community is enjoying right now, and we've had it for a long time. It's been in stable production use for a couple of years as well. Goliath has been in use for more than a year now at PostRank, and it's pushing a lot of traffic for them. So I can only recommend starting to look into EventMachine, even reading the C++ code. It's a total brain twist on the one hand, but you can do some nice things with it. Obviously it takes you away from your usual Rails, or whatever awesome things you do in Ruby, but it's a cool library. It can hurt your brain a lot, but you should definitely play with it. Which leaves the begging question: how do I run Rails on EventMachine? This is my usual Rails workflow, and if we sprinkle EventMachine on top, this is how it looks. Once again, Ilya Grigorik; this talk should actually have been about Ilya Grigorik.
Once again, he made a really tiny patch you can apply to a Rails project so it uses evented libraries instead, with em-synchrony once again; he did that for Rails 3.0, and Mike Perham did the same for Rails 2.3. Mike Perham also wrote a nice library called Rack::FiberPool, which, when a Rack request comes in, creates a pool of fibers and wraps each request in one of the fibers from that pool. So you can put that Rack middleware before anything else in Rails, and every Rails request will be wrapped in a fiber. Boom: you can keep doing your synchronous programming, but use EventMachine underneath. I already talked about a couple of things in EventMachine, especially timers. You can say to EventMachine: here's a little block, fire it off in about one second. It will never be exactly one second, because if something blocks the event loop, how could it be? But it's an approximation of when that block will run. You can do the same thing with a periodic timer: fire this block every second. We've been using that extensively for a while, and it's really a nice feature, because you just don't have to care about looping and checking the time yourself. Whenever that second is up, even when it's a bit more than a second, the block fires. Now for some more brain twisting. I talk about blocking the event loop a lot, because you don't want to block the event loop. You want to keep your stuff short, and if you have something longer-running, you want to split it up across several iterations of your giant while-true loop. That's what next_tick is for.
next_tick basically says: take this block and run it on the next iteration of the event loop. When the current iteration is done and the loop starts off again, run this block. So when you have a really long iteration over a list of items and you want to do some processing on them, but you don't want to fully block the event loop in one go, this is how you could split up that work. And more code, yay. What we're doing here, and this is where it gets into callback recursion, which is beautiful: we're doing a pretty useless loop from zero to 100, and for every step, we're just scheduling the next step on the next event loop run. So when that's done, the event loop will have fired 100 times, and each time this block will have run. Whenever we haven't reached 100, we just reschedule the block for the next tick. It's a very simple and stupid example, but that's the basic gist of how you split up work in EventMachine, or in evented programming generally, so you don't block your event loop. I said you can use threads in EventMachine. EventMachine runs on a single thread, but there are certainly situations where you still want to run threads. We've been doing that for a while; in some cases it was the wrong way of doing it, but still, EventMachine keeps a pool of threads available that you can throw any code at. A basic example is sleep, because everyone loves putting sleep in their code. Say we want to sleep for five seconds. If we were just to run sleep 5 in the EM run loop, it would block the reactor. Nothing would run during those five seconds. Everything would be blocked, because you can only run one piece of code at a time.
EventMachine cannot check sockets or file descriptors for new data, because you were clever enough to run sleep 5 and block it. What we're doing here instead is using EM.defer to put this piece of code on a different thread. sleep always puts the current thread to sleep, not the whole Ruby VM, so this sleep will not block the main event loop, which is represented by this periodic timer up here. When you run this, the ping will keep popping up every second or so. If you don't do that, you just have one long sleep, then done, and nothing in between. So with defer you can still use Ruby threads, which still makes sense in a way, because when you have libraries that still use blocking IO, you can just put them on a different thread. Depending on the library underneath, the thread will not necessarily block the whole Ruby VM; while it waits for IO, it will just go to sleep and let something else run. You can combine next_tick and defer to do funky stuff. From a next_tick, you schedule something to run on a different thread, and that piece of code will run on one of the threads in EventMachine's thread pool. And I said earlier that you have to make sure IO always comes back on the reactor thread. To do that, you just use next_tick again, because next_tick makes sure that block runs on the event loop thread, the reactor thread. So you can do a kind of ping-pong between threads and the event loop using next_tick and defer. But if you can, avoid using threads altogether and focus on the event loop itself. It's certainly possible to do, though, and there are certainly situations where it's feasible.
But obviously, the caveat in Ruby is always that only one thread can run. Even in Ruby 1.9, you still have the infamous global interpreter lock, but at least you get thread scheduling at the operating system level. EventMachine has some basic queues, not to be compared with RabbitMQ or a full distributed queue. You can just put stuff in and say: run this code whenever someone pops something off again, and that can be many, many event loop iterations in the future. So basically, when I push something in down here, on the next event loop iteration this block will run again. The important thing to know is that these queues only pop once, so you want to reschedule the pop, again with nested callbacks, which is kind of nice because the nested callbacks always hurt the brain. When would you use something like that? You only have one thread; why would you communicate through a queue? Well, for example, you have logging that you want to be deferred: you use a queue to push all your log lines into, and at some point something comes along, pops all the logs from that queue, and writes them to a log file. That's one example. Another one is tracking statistics, something like that, but logging was really the best example I could come up with where a queue would be really useful. There's also an extension of queues in EventMachine called a channel, where you can basically do publish/subscribe models. Which is kind of weird if you think about it, because you only have one thread: why would I want to use that? But that's why EventMachine is a very nice playground, because it's like, okay, what am I doing with all this stuff? Kind of fun.
I've been talking about low-level IO a lot, and this is where I get into that in a little more detail. At the very core, on every iteration, EventMachine uses a select system call to check a list of file descriptors for changes. That only scales up to about 1024 sockets, and it's known to break down a lot earlier than that. So at some point, as is also mentioned in the C10K paper, epoll was developed for Linux, where you can say: here's my list of file descriptors, and I'm willing to wait 100 milliseconds for something to come in on any of them. If nothing comes in, forget it, and I'll just run my next event loop iteration. That's the Linux version, and it scales a lot better than the default select. To use it, you just call EM.epoll. Or, if you're running Mac OS X or any BSD system, you use kqueue, which is the BSD world's equivalent of epoll. There are always caveats involved, beyond callbacks. Don't block the event loop. Whatever you do, don't block the event loop. Make your processing fast and let the event loop flow. If you have something that needs to run longer, push it out to some other backend that isn't doing the main work in the event loop. Push it out to Resque, RabbitMQ, whatever, but don't block the event loop, because callbacks run until finished and you want to make them fast. And again, avoid blocking code in the event loop. Avoid blocking code in the event loop because it blocks the event loop. What I mean by that is: stop using the Ruby standard libraries for any kind of network IO. Instead of Net::HTTP, use em-http-request, for example. That's the gist of it.
So, the last question: shouldn't I always run Rails on EventMachine, since EventMachine is such awesome scaling sauce? The answer, as always, is yes and no. It depends, because processing kills the event loop. It depends on what your Rails application does: if it only has fast and short requests, you can certainly try it, and it's very likely you'll get a benefit out of it. But if your view rendering takes too long, all the requests in EventMachine will just pile up. Yes. Thank you. I'm almost done. Processing kills the event loop. I would certainly advise you to play with the stuff Mike Perham and Ilya Grigorik did to put Rails into a fibered EventMachine style, but it depends on your application whether that's really feasible, because a surprising amount of time is spent in your Rails views and not in the database, and unfortunately you can't do that asynchronously. Evented code is also a lot harder to debug. Everything happens at some point in the future, and you just don't have a stack anymore. There will be no stack, only some weird stack traces that lead you somewhere into EventMachine. And it's a lot harder to find problems and to handle errors, because things happen at some point later, and just raising an exception is not an option: where is it raised to? Who handles it? I don't have a proper recipe for that, because error handling is really hard; at some point you may well run into some low-level EventMachine stuff, and that's not pretty. But the EventMachine community is very helpful in that regard, so if you run into something, just ask them. Because, yeah, EventMachine is not a scaling silver bullet.
Evented IO certainly makes a lot of sense in a lot of areas, but don't expect to scale a lot better just by using it. If your Rails application works fine, there's probably no reason to look into EventMachine, but if the geek in you is inspired to look into something else, look at EventMachine. Just don't expect it to magically solve your scaling problems for you. It certainly won't do that. It is not scaling bacon. Don't block the event loop. As a final note, maybe as an inspiration: Erlang does all of that. Erlang does all of that out of the box for you. You don't need fibers, you don't need anything else, and it will scale that stuff much better over multiple cores. Because in EventMachine, you can only use one process, unless you fire up multiple processes, and then you start solving the problem of inter-process communication again. I would highly recommend looking into how Erlang does the event loop thing across multiple threads, because you get the procedural programming style back in a certain way. When you do an HTTP request in Erlang, for example, the process that does it will just be put to sleep and woken up again when the data returns, and that kind of sounds familiar from the EventMachine style. But Erlang does all of this out of the box, and it's kind of nice. It looks a bit weird, but it's certainly the last inspiration I'll give you for how evented IO can be solved in the real world, because Erlang is known to solve the C10K problem easily out of the box. Erlang is awesome scaling sauce.