I think I'm actually ready to go here. So I am Tony Arcieri, and this is the Celluloid ecosystem. So really quick, how many of you have heard of Celluloid before you came to RubyConf? Everybody in the room, practically, that's pretty awesome. So I work for LivingSocial on our site reliability team, and we're hiring, just in case you didn't know that. If you don't know him, that's Mr. Chad Fowler. So I like to think of Celluloid as threads on Rails. For a long time in Ruby, there were no real abstractions for building multi-threaded programs, kind of like before Rails there weren't really good frameworks for building web applications. So I like to think of Celluloid as the quintessential concurrency framework for Ruby. Now, what you might be wondering is: why should I care? Why should I use threads? Why are threads good? The main reason is multi-core, right? In the past, we basically got speed for free. We could wait for the hardware people to just keep cranking up clock rates. You would just sit back, CPUs would get faster, and therefore your code would get faster. Also in the past, threads used to be really expensive. Linux, for example, took a long time to get a constant-time scheduler that could handle as many threads as you wanted with low overhead as you added more threads. That whole situation has changed. The hardware designers have hit the power wall; they just can't keep cranking up the clock speed like they used to. But what they can do is keep adding CPU cores. So what we're seeing now is exponential growth in CPU cores, operating systems are getting good schedulers, and threads are getting cheap, in the same way that performance used to be cheap, right? You can just sit back and wait for more and more CPU cores. So I strongly believe that multi-core is the future. This graph illustrates that pretty well: CPU speed is not going up, but the number of CPU cores is going up exponentially.
So maybe you're thinking, I can just throw more and more VMs at the problem, and I'll get multi-core performance that way, right? Well, that wastes RAM. It's a complicated topic, but Mike addressed it in his Sidekiq talk, if you happened to see that; he did a very good job describing why that's a problem. There are also serialization penalties. If you have different concurrent parts of your system talking to each other over IPC, there is a definite penalty to that. I would definitely recommend checking out this article by Kyle there. So my question to you is: when we have 100-core CPUs, which will probably be before the end of the decade here, are you gonna run 100 Ruby VMs, or are you just gonna run one? I think it makes a lot more sense to just run one. And Celluloid isn't a science experiment; there are people actually using it in the real world. Here are a couple of projects using Celluloid. Sidekiq: if you saw Mike's talk, it's a badass job execution engine. And also Adhearsion. Celluloid is kind of inspired by Erlang, and Erlang was originally created to do telephony. Well, there's a pretty cool telephony framework in Ruby called Adhearsion, and it's using Celluloid to do the same stuff Erlang was originally doing, which is command and control of telephone calls. So I definitely recommend checking out both of these projects; they were both built using Celluloid. Celluloid is a combination of object-oriented programming and the actor model. Let me show you this quote. It's a little bit cut off there, so maybe I'll read it to you. This is by Alan Kay, and it's one of my favorite quotes ever. He says: "I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages." So when I see this quote, I think of objects as being a really natural way to do concurrency, right?
So we have multiple servers out there exchanging network packets. We have cells exchanging chemicals. All of those things are running in parallel, right? Multiple servers, multiple cells, it's all happening in parallel, and they're communicating using a messaging system. So I think objects are a good way to do concurrency. What Celluloid does is combine the object-oriented tools we're all familiar with: classes, inheritance, messages. Inheritance is a specifically important issue in concurrent programming. There's actually this whole concept of the inheritance anomaly, where basically if you build concurrent programs and use inheritance, there are all these places where inheritance leaks. So if you can build a system that does inheritance and concurrency well, I think you're doing something right. And yeah, messages are a very fundamental concept in object-oriented programming. Then we have this whole other slew of concurrency tools, and they're completely different from the tools we ordinarily use to structure programs. I think these really need to be combined into a single universal abstraction, and that is the idea of an active object. So you have normal passive objects, but you can also have objects that are actually running inside their own thread. In particular, the idea is that these objects are built on the actor model, which I'll go into a bit later. I call this abstraction a cell. In the Celluloid documentation I talk about actors, but really Celluloid is an abstraction on top of actors, so I like using "cell" to differentiate it from the typical actor you might find in a language like Erlang. Erlang is definitely one of my major inspirations for Celluloid. These guys were into concurrency before it was popular. That's Joe Armstrong, the creator of Erlang, there. Basically all the major ideas I got for Celluloid came out of Erlang, and the central idea is the actor model.
The actor model may sound complicated, but I think it's actually pretty simple. You've just got these things, and they can be any type of computational primitive; it's easy to think of them as a thread or something like that. Each of them has an address, and if you have an actor's address, you can send it a message. And actors can create other actors. That's really all there is to the actor model; I think it's pretty simple. So I'm certainly not the first person who has tried to put the actor model together with object-oriented programming. In fact, some Python folks did it. There was a very similar framework developed as a sort of doctoral research project in Python called ATOM. When I found this, I was like, holy crap, these guys created Celluloid over a decade before I did. They created it in 1997, which was kind of a problem for them, I think, because computers really sucked back then, you know? Like, here's a 233 MHz Pentium with 32 megs of RAM, awesome. So I think they were really ahead of their time. And the big problem they were trying to solve back then was how to do client-server stuff, right? You've got state on the server, you've got a user interface on the client, how do you put those two together? Then the web came along and solved that whole problem, so people just kind of stopped looking into this. I think it's this sort of forgotten approach to concurrency. There was tons and tons of research into combining the actor model and object-oriented programming in the late 80s and early 90s, and then the web came along, solved the client-server problem, and it all went away. But the thing is, now we've got multi-core computers, so I think it's time to come back and reevaluate this whole concept. So how does Celluloid work? I've got a little example here. This actually comes out of a RailsCast about Celluloid, which I strongly recommend you watch. Basically we've got a normal Ruby class here, and it's got a method there.
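Those three rules (addresses, messages, spawning) can be sketched in plain Ruby as a thread draining a Queue that serves as its mailbox. The names here (`MiniActor`, `send_message`) are made up for illustration; this is not Celluloid's API, just the bare idea:

```ruby
# A minimal actor: a thread draining a mailbox (a Queue).
# Sending a message just means pushing onto the queue; holding a
# reference to the actor is holding its "address".
class MiniActor
  def initialize(&handler)
    @mailbox = Queue.new
    @handler = handler
    @thread  = Thread.new { run }
  end

  # "If you have an actor's address, you can send it a message."
  def send_message(msg)
    @mailbox << msg
  end

  def shutdown
    @mailbox << :shutdown
    @thread.join
  end

  private

  def run
    loop do
      msg = @mailbox.pop          # block until a message arrives
      break if msg == :shutdown
      @handler.call(msg)
    end
  end
end

results = Queue.new
actor = MiniActor.new { |n| results << n * 2 }
3.times { |i| actor.send_message(i) }
actor.shutdown
```

The mailbox is the only point of synchronization, which is what makes the model simple: the actor's state is only ever touched by its own thread.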
The only difference between this and a normal Ruby class is we include Celluloid, and that promotes any object of this class into a concurrent object, simply by including Celluloid. So when you create a new instance of this class, what we get back isn't just an object, it's actually a Celluloid actor that wraps that object. First thing I'm gonna talk about here is synchronous calls. Synchronous calls work just like your normal Ruby method invocations. So you can call that launch method there. What it's gonna do is count down (if you remember, it had a sleep), and then print that "blast off". So here's what's going on with synchronous calls: whenever you call a method on that little handle you get back, which is actually a proxy object, it translates that into an actual message sent to the actor's thread in a fully thread-synchronized way. Those little chevrons there are actual objects being passed back and forth between threads. Everything synchronizes around those Celluloid mailboxes: the actor gets the request, processes it, and sends the result back. Where things start to get interesting is asynchronous calls. You can do .async.launch. If you've seen Celluloid before, this syntax might be new to you; it was introduced in Celluloid 0.12, which is the latest major release. The main complaint I've gotten about Celluloid and its API was that Celluloid used to hijack bang methods to be asynchronous. People got really mad at that, and I kind of agree with them. So in Celluloid 1.0 I'm getting rid of that old syntax, and everything will use the .async style syntax. So when you call .async.launch, that call returns immediately, but even though it returned immediately, that method is still running there in the background, right? Async calls are just fire-and-forget.
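The proxy-and-mailbox round trip described above can be sketched in plain Ruby. Everything here (`CellProxy`, the message tuples) is a simplified illustration under my own names, not Celluloid's actual internals:

```ruby
# Sketch of a synchronous call: a proxy turns a method call into a
# message, posts it to the cell's mailbox, and blocks on a reply queue
# until the actor thread sends the result back.
class CellProxy
  def initialize(subject)
    @mailbox = Queue.new
    @thread = Thread.new do
      loop do
        meth, args, reply = @mailbox.pop
        break if meth == :__shutdown__
        reply << subject.send(meth, *args)  # run the real method in the actor thread
      end
    end
  end

  def method_missing(meth, *args)
    reply = Queue.new
    @mailbox << [meth, args, reply]         # the request message
    reply.pop                               # block for the response message
  end

  def terminate
    @mailbox << [:__shutdown__, [], nil]
    @thread.join
  end
end

class Launcher
  def launch(n)
    "#{n}... blast off!"
  end
end

proxy = CellProxy.new(Launcher.new)
result = proxy.launch(3)
proxy.terminate
```

The caller never touches the wrapped object directly; every call goes through the mailbox, which is why the object's state stays thread-safe.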
You're just sending a message to the object, and it's gonna process it whenever it gets around to it. Async methods give you this sort of easy parallelism, right? You can create two of these launchers and call .async.launch on both of them, and each of them will run in parallel. So you may be wondering: okay, I can call a method async, but what if I actually care about the return value? What if I want to know what it is? For that there's a separate feature called futures. I think a good metaphor for futures is calling ahead to a restaurant to order your food: when you show up, it's ready, rather than going to the restaurant, ordering your food, and waiting, right? I've got another example here. This is your basic non-closed-form, Ted Dziuba-style Fibonacci function there. What we're gonna do is call .future on that, and that future is gonna return immediately, kind of like that async method, right? It gives you this sort of placeholder object you can use to get the value. So you can call .value on that future; it's gonna block until the computation is complete, and then give you the result. Futures are kind of a more complicated version of a synchronous call; basically you have an extra object there to wait for the value. Putting this all together, we have pools. Pools let you create a thread pool to do work. So with that original class I was using in the futures example, instead of calling .new, you can call .pool instead, and that will give you a pool of, in this case, 16 threads that you can call into. And you don't even have to specify that; by default, it will give you one actor per CPU core. So yeah, it'll just automatically scale to however many CPU cores you have. Then there's the general pattern you're gonna want to use with futures, if you want to compute a bunch of stuff in parallel and then get the results back.
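The future idea, kick the work off immediately and block only when you ask for the value, can be sketched with nothing but Thread#value. `SimpleFuture` is a made-up name for illustration, not Celluloid's Future class:

```ruby
# Your basic non-closed-form Fibonacci, as in the talk's example.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

# A toy future: the computation starts right away in its own thread;
# .value blocks until it's done and returns the result.
class SimpleFuture
  def initialize(&work)
    @thread = Thread.new(&work)  # work starts immediately
  end

  def value
    @thread.value                # blocks until the thread finishes
  end
end

future = SimpleFuture.new { fib(20) }
# ... you're free to do other work here while fib(20) runs ...
answer = future.value
```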
So we can map across a bunch of numbers here, get a future for each of them, and then map across the result of that and get each value. That first map is gonna kick off all the computation, and it'll actually run in parallel, at least if you're on a VM like JRuby or Rubinius, which would be my recommended VMs for using Celluloid, simply because of the GIL on MRI (though you can still do parallel IO there). So basically that first map kicks off the computation, and then when you map across those futures, you get all the results back. Now, something that's really difficult to deal with if you've built multi-threaded programs in Ruby yourself is what to do if any of your threads crash. You can write a multi-threaded program, kick off a thread, it crashes, and then your program is basically broken unless you have some sort of crash handler. This is again where I look to Erlang for the solution to this problem. And the basic idea is: your thing crashed, you just restart it, right? Celluloid builds in fault tolerance. It has this idea of supervisors and supervision trees, so you can model your whole application as a hierarchy of components. Some of them may crash, and when they do, the basic philosophy is just let them crash. You don't do a whole bunch of error handling, you don't try to anticipate all the possible errors ahead of time. You just let it crash, and you can actually link interdependent components together and ensure they all restart together in a clean state. So this talk is called the Celluloid ecosystem. I just wanted to give you a crash course on Celluloid there, but what I actually want to talk about are these three Celluloid sub-projects, and here's what each of them is. There's DCell, which is distribution across a network, so you can have several VMs running Celluloid talking to each other. There's Celluloid::IO, which is kind of an alternative to EventMachine, I like to think.
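The let-it-crash supervision idea described above can be sketched with plain threads: a monitor watches a worker, and when the worker dies with an exception, the monitor simply boots a fresh one. `ToySupervisor` is a made-up illustration, far simpler than Celluloid's real supervisors:

```ruby
Thread.report_on_exception = false  # keep the deliberate crashes quiet

# "Let it crash": watch a worker thread and restart it in a clean state
# whenever it dies, instead of handling every possible error inline.
class ToySupervisor
  attr_reader :restarts

  def initialize(&work)
    @work = work
    @restarts = 0
    boot
  end

  private

  def boot
    worker = Thread.new(&@work)
    @monitor = Thread.new do
      begin
        worker.join                 # join re-raises the worker's exception
      rescue StandardError
        @restarts += 1              # worker crashed: restart it fresh
        boot
      end
    end
  end
end

finished = Queue.new
crashes_left = [2]                  # crash twice, then succeed

sup = ToySupervisor.new do
  raise "boom" if (crashes_left[0] -= 1) >= 0
  finished << :ok
end

finished.pop                        # blocks until a run finally succeeds
```

The key point is that the worker's body contains no rescue at all; recovery lives entirely in the supervisor.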
It does evented IO, but it gets you out of callback hell, basically; the whole API is synchronous. And finally, I want to talk about Reel, which is a web server I've written built on Celluloid::IO. The first of these is DCell, which is distributed Celluloid. If you've been tracking this project, you might notice it's been having a little bit of trouble; the build is still failing. I haven't gotten the linking I was talking about for fault tolerance working in a distributed scenario on the latest version of Celluloid. So it is a little bit bleeding edge, but I have pushed a version of DCell to RubyGems. You can install it with --pre to grab this pre-release of DCell, which has everything working, as far as I know, except linking. So if you want to play around with it, you can install it straight from RubyGems. The basic idea is that each of these cells is sort of a service that you should be able to expose to the network if you so desire. This is built on top of ZeroMQ. Everybody heard of ZeroMQ? Probably, okay. ZeroMQ is just a really cool way to do buffering of messages; it's basically a brokerless message queue. DCell maps everything onto push and pull sockets, if you're familiar with ZeroMQ and how it works. And I built a separate gem called celluloid-zmq, which is actually built on top of another gem called ffi-rzmq by Chuck Remes. It's pretty cool. So what would you use DCell for? Here are some basic use cases. The main one: I've talked to a lot of Opscode-type people about using this for, you know, their remote agents. I really like this idea of having agents on remote systems, so you can just tell them what to do and they go do it. The Capistrano approach of SSHing into every box seems a little silly to me. I also want this to be a cool solution for building service-oriented architectures, since I work at LivingSocial and we do that.
I wouldn't actually recommend going out and doing that right away, but that would be the end goal, and I think it would be pretty cool. Also asynchronous background job processing. I think it would be really cool to have a system somewhat similar to Sidekiq, or maybe Sidekiq itself, that can do leader-election type of stuff. So you could have any part of the entire system fail and have no single point of failure in a distributed background job system. And here are some diagrams of how we build applications today. I think Uncle Bob had a post with a very similar diagram to this. Basically, when I look at this, you're doing your work in triplicate, right? You're building a REST client to talk to your REST server, and really all that's trying to do is expose the domain objects, which already know everything you wanna do. So if you have Ruby talking to Ruby, as in this case, this isn't DRY, right? You're tripling the work you actually need to do to build a service. I think it should really work more like this, where you can just talk directly to those domain objects across the network. Here's another one of my favorite quotes. This comes from Mr. Steve Jobs there. So NeXT had a system somewhat similar to CORBA and SOAP. I still think this is a really good idea, and I think Steve Jobs does a pretty good job of explaining it here, but this dream just has not been quite realized yet, right? Distributed objects in practice have largely been a failure. Here's a big list of stuff we probably all universally revile. I mean, I like DRb, DRb is cool, but nobody really uses it in practice. A few people do, but not really. So my question is: why did all of these distributed object frameworks fail? Having gotten a little bit of experience in distributed systems programming, the main thing I've learned is that you need asynchronous protocols.
So if you're building something like, say, Paxos, which is a distributed consensus protocol, you need an asynchronous protocol to build it on top of, and all those protocols on my last slide are synchronous, so you just can't build systems like this with them. They're not built on the actor model. The actor model is asynchronous, right? It's all built around sending messages. So I think the actor model is pretty awesome, and I think it can actually fit all these distributed systems patterns. Even more awesome, it gives you this sort of unifying abstraction. You wanna build concurrent programs, you wanna build distributed programs, you wanna move little parts of the system around, like take one that's running in the same VM and put it on a different server; that should be really easy to do. And this isn't just a dream; it has been pulled off. Distributed Erlang has actually been successful at doing this, unlike any of those other systems, I think. Several things are built on the distributed Erlang protocol; Riak, the distributed database, comes to mind. If you're familiar with Boundary, they built an entire system around an implementation of distributed Erlang on the JVM called Scalang, which is pretty cool. They might have a few gripes with distributed Erlang, but I think it makes a really good command-and-control protocol. Same with Celluloid, right? You wanna use this for command and control; you don't wanna use it for streaming tons and tons of data around. But for command and control, I think this makes a ton of sense. So here's a quick example of DCell. If you grab that pre-release gem, the thing is, you're gonna need ZooKeeper. ZooKeeper is maybe not everybody's favorite tool in the world, so let me just talk about ZooKeeper a little bit. ZooKeeper provides a total ordering of events in a distributed system. This is actually a very hard thing to do, and it's really all ZooKeeper does.
It gives you order in a distributed system. Effectively, ZooKeeper is a way of doing transactions, and it does it in a fault-tolerant manner. So what sort of stuff do you need that for? DCell uses it for its node registry, so it knows where all the nodes in your distributed network are. Also, if you have global data, DCell supports this: if you have configuration data that you wanna share with all the nodes, it will store that for you. You can do distributed locks. And leader election: this isn't really built in, but you can build your own fault-tolerant leader election protocols on top of ZooKeeper. I didn't do that myself; fortunately, there's this really cool ZK gem that will do a lot of that stuff for you. In particular, it implements a leader election protocol, and doing that is somewhat non-trivial, so I'm glad other people are solving some of these problems for me. Installing ZooKeeper really isn't that hard, especially if you use DCell: you can just clone DCell, and there are these little rake tasks that will actually install and start ZooKeeper for you. If you run it, you should see that, and everything should be good to go. So I have a little example here. I'm gonna have two nodes in my DCell network. We'll call one itchy and the other scratchy. Basically, we give these effectively the same configuration. By default, it's gonna use ZooKeeper, and it's gonna connect to your local ZooKeeper, so you don't really need to configure that. But what you do need to configure is a node ID, and an address and port where you want DCell to run. You do the same basic thing on the other node; the only difference is you give it a different node ID and a different port number. Then to find the other nodes in the system, you can go to this DCell::Node class, and it basically works like an array: give it a node name, and it will give you back the node.
You can ask for all the nodes in the system, and you can also ask for the current node, which is .me. To find remote cells, you can register them with a name; I'll go over this in a little bit. But you look up the node, and then you look up the service underneath the node, effectively. By default, every DCell node runs this info service that gives you some cool basic information about the system. So here we're asking this node for its info: we're on JRuby on Java 1.6, on a Core i7 there. It gives you all this basic information, and then you can invoke methods on it just like it's a regular object. But every time you do that, what it's actually doing is going over the network, over 0MQ, talking to that info service, asking what's your uptime, or what's your load average, or whatever, and it sends the response back over the network. It ends up working a lot like DRb in that way. So, to define your own service: here's another really basic Ruby class, right? It includes Celluloid, but all it's really got is that hello method. Celluloid has this actor registry, which again is kind of like an array. You can assign the name "greeter" to a particular instance, a particular cell, a particular actor there, and then invoke a method on it. Let's see, all right. So all the features of Celluloid are supported by DCell, except for linking, which I mentioned earlier and which I will get fixed really soon, I promise. You can do synchronous calls like I just showed you, but you can also do asynchronous calls if you want to build asynchronous protocols; that works just fine, and so do futures. Basically the entirety of Celluloid is exposed through DCell, and you can use any features you normally would for doing stuff in the same virtual machine. The next part I want to talk about is Celluloid::IO. So yeah, this is the thing that's somewhat similar to EventMachine, right?
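The actor registry described above, a name-to-cell mapping with array-style lookup, can be sketched in plain Ruby. The `Registry` and `Greeter` names here are my own illustration of the pattern, not Celluloid's implementation:

```ruby
# A thread-safe name -> cell mapping with array-style lookup, in the
# spirit of Celluloid's actor registry.
class Registry
  def initialize
    @lock = Mutex.new
    @cells = {}
  end

  def []=(name, cell)
    @lock.synchronize { @cells[name.to_sym] = cell }
  end

  def [](name)
    @lock.synchronize { @cells[name.to_sym] }
  end
end

class Greeter
  def hello(name)
    "Hello, #{name}!"
  end
end

registry = Registry.new
registry[:greeter] = Greeter.new          # register under a name...
greeting = registry[:greeter].hello("world")  # ...look it up and call it
```

In DCell the same lookup happens per node, with the method call traveling over ZeroMQ instead of staying in-process.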
So now we have a nice abstraction around threads, but what do we do for IO, right? My canonical response is: just use blocking IO, for the love of God. I used to be an EventMachine contributor, and I've used a lot of EventMachine projects, and they have mostly been a giant pain in my ass. If you want things to be simple and easy to reason about, just use blocking IO, just use threads, it's great. Blocking IO is okay: in Celluloid, you're not gonna block any sort of central event loop like you would in EventMachine or Node or anything, right? Each of these actors is running in its own thread, so you're totally free to block. There are a few cases where this can bite you in the ass, though. Say you're talking to a database, you ask the database to give you a lock on something, and you run into a locking problem in your database; basically, you've got some other locking bug going on in an external service. That can crop up in Celluloid too, so you do need to be careful with blocking IO. The other important thing: if an actor is making a blocking call and you try to send it another request, it's gonna wait until the original blocking call finishes before it will service your request. There is a way around this, though, if you want a combination of IO and actor messages with everything sort of seamlessly multiplexed inside that actor. The way we do this is evented IO. You wanna do this if you have, or anticipate, a large number of connections. If you have a small number of connections, I would definitely recommend doing an actor per connection. But Celluloid::IO lets you service multiple connections from a single thread (I haven't actually tested it with tens of thousands of connections, but that's the idea). And the use case where this really makes sense is if you have mostly idle connections, right?
So a chat server would be the canonical example. And you wanna do this when you have an IO-bound problem, right? You don't wanna do a bunch of computation in this thread, because it's gonna block you from servicing all those other connections; this is sort of a general problem with EventMachine too. Web sockets are a really good example, and I'll talk about them a little bit later. So the basic idea of Celluloid::IO is that these actors, these cells, each of them is an event loop processing messages. This is really similar to the type of event loop you would use in EventMachine or Node, except in those systems you only get one, and Celluloid gives you as many as you want. Here's how your normal actor actually works: you have your Celluloid actor built on this Celluloid mailbox, and inside there's a condition variable, which is what the actor is actually blocking on when it's waiting for work. What Celluloid::IO does (Celluloid has a sort of dependency-injection API where you can swap out the mailbox) is provide its own mailbox with its own reactor, and this thing actually waits for messages using a pipe. By using a pipe, it can multiplex incoming requests: if you wanna do synchronous or asynchronous calls, whatever, it uses the pipe to signal that, and if you wanna do IO, it can wait for that at the same time. So it can multiplex those things together. To do that, it uses this other library I wrote called nio4r. Java has this API called NIO that gives you selectors, and selectors are kind of the heart of a reactor, right? This is the thing that's wrapping system calls like epoll and kqueue and that kind of thing. So yeah, it's available on my GitHub there; it's inspired by Java NIO.
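The pipe trick described above can be sketched with stdlib IO.pipe and IO.select: the reactor blocks in select on both a wakeup pipe and its sockets, so another thread can interrupt the wait just by writing a byte. This is a minimal illustration of the mechanism, not Celluloid::IO's actual reactor:

```ruby
# A reactor can wait on a pipe alongside real sockets; writing one byte
# to the pipe from another thread wakes it up.
reader, writer = IO.pipe

waker = Thread.new do
  sleep 0.1
  writer.write("!")      # signal the "reactor" from another thread
end

# The reactor side: block until the pipe (or any watched IO) is readable.
ready, = IO.select([reader], nil, nil, 5)
signal = ready.first.read(1) if ready
waker.join
```

In the real thing, the byte on the pipe means "there's a message in your mailbox", letting one select call multiplex actor messages and socket IO together.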
I don't like maintaining a ton of code, even though I appear to be writing a bunch of it, as I'm talking about right now, but I try to keep the API really small and really simple. It should be easy for any Ruby implementer who wants to build their own fast version specific to their implementation to do that. I wrote two backends for this. One's built on libev. I don't know if you're familiar, but I wrote another libev library for Ruby called Cool.io, kind of another EventMachine alternative, which I gave up on because I think Celluloid::IO is a better way to go. I hate callback hell. I hate callbacks, I hate deferrables, I hate all that stuff. I like these nice, clean synchronous APIs. So this is the second Ruby binding to libev I've written, as it were. I also wrote a Java extension for JRuby that talks directly to Java NIO to do this stuff for you. The libev extension also covers VMs that support the C extension API, like Rubinius, and there's a pure Ruby version that just uses Kernel#select. So I think I've got all the VMs covered with this, although I would love for VM implementers to work with me on this to get native versions into their VMs. The core part of Celluloid::IO is mainly for doing sockets, so it exposes TCP sockets and UDP sockets. Celluloid::IO's TCPSocket is trying to be an evented duck type of TCPSocket itself. It uses fibers to defer, so it's sort of similar to other fiber-based evented IO libraries, if you're familiar with that approach. It gives you a synchronous API, but underneath it's still evented, non-blocking, all that good stuff, right? And the cool thing is you can hand a Celluloid::IO TCPSocket to anywhere else in the system. You could give it to any other thread, maybe a thread that isn't a Celluloid::IO actor, and it will just seamlessly do blocking IO for you. So the gist of this whole thing is you can have evented IO and threaded IO.
You don't need everything to be evented and non-blocking. You don't need to reinvent the entire world in a non-blocking manner. You can have them both at the same time; you can have your cake and eat it too, right? So really quick, here is an echo server example using Celluloid::IO. The main difference here is that instead of including Celluloid, you include Celluloid::IO. Otherwise this looks pretty much like the same sort of thing you would write with the core TCP socket APIs. You see that little comment there: Celluloid::IO defines its own replacements for the core Ruby socket classes. So when you're asking for TCPServer or TCPSocket or that kind of thing, it's looking in the Celluloid::IO namespace and it sees those. It's not using the core Ruby stuff; it's using its own replacements for all of it. So hopefully this is kind of a drop-in replacement for doing blocking IO; all you gotta do is include Celluloid::IO. Finally, I'm gonna talk about Reel. I wrote a web server built on Celluloid::IO, and it's relatively fast. These numbers are actually a bit dated, but you know, it's relatively fast, relatively low latency. Here are some numbers for comparison: Thin is about 50% faster there, Node is approximately twice as fast, that kind of thing. But I'm still beating Goliath, which is probably the closest analog to Reel. And it's got web sockets, so let me see if I can possibly demo this to you. Might be a little bit hard here, because I can't really see what's going on with this arrangement here. All right, so I'm going to show you the little example here. If you just clone Reel, it comes with a couple... oh god, you can't see that at all. Awesome, let me move my window over here, all right. Everybody see that? There we go. So this is just the web sockets example that comes with Reel. Who went to Aaron Patterson's talk?
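For reference, here is the same echo-server idea written with plain blocking IO and core Ruby sockets, one thread per connection. Celluloid::IO's pitch is that code in this exact style can run evented instead, just by swapping the socket classes:

```ruby
require "socket"

# Blocking-IO echo server: one thread handles one connection.
server = TCPServer.new("127.0.0.1", 0)   # port 0: let the OS pick a free port
port = server.addr[1]

accept_thread = Thread.new do
  client = server.accept                  # block until someone connects
  while (line = client.gets)              # blocking read is fine here
    client.write(line)                    # echo it straight back
  end
  client.close
end

# Exercise it from a client socket.
sock = TCPSocket.new("127.0.0.1", port)
sock.puts("hello")
echoed = sock.gets
sock.close
accept_thread.join
server.close
```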
'Cause this is really, really similar to what he showed. Everything he showed was using core Ruby; this is all using Celluloid's equivalents to the core Ruby stuff. So yeah, maybe I should show you what this does. Really, I'll just tell you: it shows you a clock. That's all it does; it's super trivial. Basically what we have is this time server, and this is gonna be our event source. If you saw Aaron's example, he was publishing data to a bunch of clients, effectively, and his data was the condition of the sausages he was curing, the humidity or whatever. I actually didn't see his talk today; I saw it when he gave it before. But instead of publishing that, I'm just gonna publish the current time. So that little run method there (I should update that to the new syntax; that's the old syntax), basically when we start the server, it kicks off this method asynchronously. So initialize returns immediately, and then we just have this little run loop that sleeps until it's synchronized to the current second. Then, Celluloid has built-in timers, so we can say every(1), and that's one second there, and we're gonna publish that time change event to everyone who's interested in it. If you're familiar with ActiveSupport notifications, Celluloid notifications is practically the same API, just multi-threaded. So we can publish the current time, and these time clients down here can subscribe to it, right? They're using that same API there. Every time a client connects, it's gonna get this web socket here, and then it's gonna subscribe to these time change events. Whenever that time server fires an event by publishing it, it's gonna invoke this notify-time-change method in all these client threads. So it's very similar to what Aaron was doing, just with web sockets, right? And then that web socket is sort of a duck type of IO, right?
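The publish/subscribe pattern just described can be sketched in plain Ruby: subscribers get a queue per topic, and the publisher pushes each event to every subscriber's queue. `Notifications` here is a made-up illustration modeled loosely on the idea, not Celluloid's actual notifications API:

```ruby
# Toy pub/sub bus: subscribe to a topic and get a Queue of events;
# publish pushes the event to every subscriber of that topic.
class Notifications
  def initialize
    @lock = Mutex.new
    @topics = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(topic)
    q = Queue.new
    @lock.synchronize { @topics[topic] << q }
    q    # the subscriber pops events off this queue
  end

  def publish(topic, event)
    @lock.synchronize { @topics[topic].dup }.each { |q| q << event }
  end
end

bus = Notifications.new
client_a = bus.subscribe("time_change")
client_b = bus.subscribe("time_change")
bus.publish("time_change", "12:00:00")
```

In the talk's example, the time server publishes once a second, and each connected web-socket client is a subscriber draining its own queue.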
So we can just write whatever we want to it and it's going to handle it completely transparently. In this case, it's going to call time.inspect and just write that string back. And the only thing we have to worry about is this Reel socket error, which basically says the client disconnected. And then down here is the actual web server itself. So you subclass Reel::Server. Reel is kind of like a web server toolkit, right? It isn't exactly like a Unicorn or anything; it's kind of more like Mongrel. It gives you this class that you can use to build your own web server. So we have this on_connection method that's going to get called every time somebody connects. And then we have this while loop. It seems a little bit confusing at first, but this is how it handles HTTP keep-alive: every time you get a connection, you actually want to loop on that connection and wait for all the possible requests coming off of it, because there may be more than one. And what you're going to get back from connection.request is one of these two classes. It'll either be a Reel::Request, which is a standard HTTP request, or a Reel::WebSocket. So if the client decided to upgrade to WebSockets, you'll get a different object back, and that will be the WebSocket. So we've got our request. There's actually Webmachine; Webmachine is a Ruby web framework written by Sean Cribbs. If you want a real router, if you don't want to write your router like this, which is kind of crude, you can use Webmachine; it has a Reel backend. But for this example, we're just going to examine that URL, and all we're going to do is serve a static page back. And then if we got a WebSocket, what we're going to do is create this new time client class and hand it that WebSocket. And here's just the boilerplate web page I'm going to show you.
And finally, what we're going to do is spin up that time server and the web server, and then sleep the main thread. Sleeping the main thread is kind of important, because in Ruby, if you don't do that, the process is just going to exit. So let me attempt to spin this up here. Okay, so we've got the log there. So I'm going to go to this address. Let's see, we've got a bunch of stuff going on there. All right, so there is that time server going over WebSockets. I can open multiple of these, and they're all basically getting that same signal from that same thread, just streamed over this WebSocket. So hopefully that example is kind of enough to get you going. The Reel documentation isn't great; I definitely want to improve it. Let me try to get back to my slides here. All right, there we go. So finally, I want to talk about Lattice. This is total vaporware, but I want to build a web framework on Celluloid. I don't have time to do it, so I'd love volunteers to step up and do this for me. And the basic goal is to recycle Rails as much as possible. Rails basically has everything good to go, except you can't do scatter-gather requests to other APIs very well, and that's the main problem I want to solve. I don't want to reinvent Action View; I don't want to reinvent Active Record. I think all those things are pretty good. Basically I want to fix Action Controller and give you seamless scatter-gather programming against multiple APIs. So as I mentioned earlier, there is a Webmachine backend for Reel. The idea is you could have Webmachine servicing your actual requests and build an abstraction on top of that. One of the big areas where I think Rails kind of falls down is you don't have a multithreaded development mode.
I mean, in practice, I don't think this is a huge problem, because you're pretty unlikely to run into concurrency bugs if you're just developing locally, doing a single request to your server kind of thing. Where you're going to see that fall down is when you have tons and tons of requests coming into your server concurrently. That's when you're actually going to run into these thread safety bugs. But I think the closer you can get your development environment to the real thing, the better, right? And really the killer app here is this easy scatter-gather for building service-oriented architectures on top of Ruby. So if you're interested in this idea, if you would like to maybe contribute to Lattice, hit me up afterwards; I'd love to talk to you. And that's all I've got. I am @bascule on Twitter. The entry point to all these Celluloid projects is celluloid.io, so if you want the URLs for all this stuff, just go there; everything's linked from there. And I have a blog at unlimitednovelty.com. And that's it. So I have time for questions here. Seems so.

I've got one for you. A couple of years ago I was really fascinated to read about Reia when you were doing that kind of thing. Can you comment a little bit on your evolution of thought, from inventing a new language to now, where it looks like you're figuring out how to bring all this goodness into Ruby itself?

Yeah, so the question was about a programming language I created called Reia. I tried to build a Ruby-like language on top of the Erlang VM, and that's kind of where I got some of the experience that I used to build Celluloid. Basically what happened, if you're familiar with this guy José Valim, one of the Rails core team members, he created another language called Elixir, and he did a much better job than I did.
So Reia is actually probably the open source project I've hacked on the most, with a few thousand commits. I spent about three years on it, and what I learned is that making a new programming language is really, really hard. It's not something you should do as your first open source project, I don't think. I mean, that's seriously what it was; I'd had mild success with Cool.io or whatever. But if you can tap into an existing language ecosystem and give people tools, and let them use a language they already know, instead of completely reinventing the entire universe and making a new programming language, I think that's a lot more pragmatic approach. Celluloid isn't perfect. I'm still kind of interested in making a new programming language, but I don't have time to do that and it's really, really hard. So I think Celluloid is just a better way to go.

So, Reel does have a Rack adapter. The short answer for why I think you shouldn't use it is that Rack middleware and fibers do not play nicely together. The other reason I don't like Rack is that Reel is built with end-to-end streaming. Everything you do in Reel streams: when you're reading the request, it streams; when you're writing the response, it streams. There's no intermediate buffering, and that buffering is an essential part of the Rack specification. If you go read the Rack spec, you need a rewindable input; it's dictated in there at a very fundamental level. So if your request is, say, a multi-gigabyte file, you're going to end up writing that file to a tempfile, even if your whole goal is to stream it to, say, a video transcoder or something, where you have a way to process it in a streaming manner. Rack just doesn't let you do that. So I hope they fix this in the Rack 2 spec.
You know, this is on their big laundry list of things they need to fix in Rack 2. But there is a Rack adapter if you want to use Rack; I wouldn't recommend it.

The next question was about doing a batch IO process. So what do you mean by batch IO, I guess? Make the calls parallel? Yeah, yeah. So definitely, the easiest way to do that would be to use a bunch of actors. You could use pools, for example. You could spin up a pool and just make synchronous calls into it, right? And you've got that concurrency threshold: you're like, well, I only want to make a maximum of, I don't know, 50 outgoing HTTP requests at any given time. You can make a pool and give it size 50, and then every time you call into it, if you've exceeded your concurrency limit, it'll just block until another worker is available. So that is by far the easiest way to use Celluloid to do that kind of thing.

The last question was about the DCell example: when you created and registered the actor on the other node and called it across nodes, is it possible to use a pool on that side instead of a single instance? Yeah, that's a little bit trickier. Celluloid has this supervision group feature, and if you use a supervision group, you can declare a pool with a name, the same way you can supervise a single actor under a name. That registers the entire pool under that name, and then you can call it just like any other actor. In a supervision group, a pool is addressable just like an actor, basically. All right, I think that's it.