So yesterday I got to look around our Twitter account and see what was going on. I saw some speakers doing some last-minute beach-time slide updating. I talked to some speakers who hadn't updated any of their slides or even looked at their slides. I talked to one speaker who freaked out because he thought his session was 30 minutes long and didn't know what to do. And apparently, instead of cutting down slides, he was adding more slides. I'm not sure that really works out very well. But I did run into something special from our keynote speaker, Mr. Patterson. So I was wondering what this meant and what we could really do about this. So I'm guessing he must really want Spam, or he's a really big fan of Spam. So what's the situation here? Did you get to eat Spam yet? No? That's great, because we have one for you right here. Yes. Thank you. So if you can come on up, I'll give you this Spam musubi. Awesome. Thank you. And since this is your time in Hawaii, you also have a lei. Thank you. Yes, Spam. This is Hawaii. We can't let you come here and leave without Spam. All right. Who's ready for day one? Who here wants me to get off stage and let this guy talk? All right, let's do that. Okay. So I'm going to talk about Rails 4 and the future, or as I like to call it, Rails 4 for You and Me. I was told that you're not supposed to introduce yourself when you're giving keynotes, but I'm not very good at speaking. So I'm not going to tell you that my name is Aaron Patterson and that if you want to follow me on Twitter you can. So, just for the record, I'm not telling you this stuff, okay? I'm not introducing myself. I just want to say hi to everyone. First I've got to start out and say thanks to some people. Well, I've got to say thanks to my employer, AT&T. Without them I wouldn't be here. So thank you. I also want to thank the conference organizers for having a conference in a really awesome place. I've never been to Hawaii before. This is my first time.
So I'm super excited, and the reason I'm really excited about it is because I love Spam. So thank you. Also my favorite TV shows are here, like Dog the Bounty Hunter. I was looking around for Dog the Bounty Hunter last night; I couldn't find him. And also Magnum, P.I. I love Magnum, P.I., as you can see by my mustache. If you don't believe me that I love Magnum, P.I., you can see that I name my computers after Magnum, P.I. characters. So I wanted to tell you, I was flying here to Hawaii and I had to go to the bathroom while I was on the plane, and the guy in the same row on my left also had to go to the bathroom. So he went first, and then he came back, and I went over to the bathroom. And I got there, and I looked down at the floor, and there was a $5 bill right in front of the bathroom. So I was like, well, he must have dropped this money. So I pick up the bill, go back to his seat, give him the money, and I'm like, hey, you must have dropped this. So I go to the bathroom and come back, and they're collecting donations on the aircraft for breast cancer awareness, and if you donate money they enter you into a raffle and you can win some prizes on the airplane. So I sit down and the guy says to me, this isn't mine. I didn't drop this. And I was like, okay, well, it's not mine either. And he's like, why don't you donate it to the raffle thing? So I said, okay, that's a great idea. We don't know whose money this is. We'll put it to a good cause. So I donated it figuring, well, I'm not going to win any prizes. It's just going to get donated, so that will be great. And then of course I win. And I was also the first one to choose a prize. Like, I could have chosen a bottle of champagne or chocolates or all this stuff, and I didn't know what to do because it wasn't my money. So I picked chocolates, but now I feel like there's an imbalance in the world. Like, I have these illegal chocolates. I shouldn't have them. So I don't know what to do with them.
But I think I'm just going to eat them in my hotel room and not tell anybody that I won them. But I guess it's too late now. Anyway, I don't know if you can tell, but I am insanely nervous on stage. And one of the things that I do to comfort myself... I told a friend of mine, well, I get super nervous. I love speaking, but I'm so nervous on stage. What should I do about it? And he said to me, well, when you're on stage, you just need to think to yourself, what would Freddie Mercury do? So every time I give a talk, I put this up and I try to think to myself, what would Freddie Mercury do? Now, most speakers think, well, it's common knowledge: just imagine the entire audience is in their underwear. Well, actually for me, it's the opposite. I imagine that I'm on stage in my underwear. I figure that's what Freddie Mercury would do. Anyway, I also want to tell you I have a cat, and his name is Gorbachev Puff Puff Thunder Horse. That's his full legal name. We call him Gorby Puff, though. And I love him a lot. I love him a lot. He's the first cat I've owned, and I thought owning a cat was going to be 99% fun in the sun. We're going to go riding bikes together, get ice cream, go swing in the park and do all that stuff. But it turns out that basically 99% of the time, he's sleeping. So I try to take pictures of him, but the only pictures I can ever get are of him yawning. He's about to go to sleep all the time. And it just kills me. So we never get to go bike riding together. But anyway, if you want to see more pictures of him yawning, you can follow him on Twitter. So, all right. We're going to talk about Rails 4 for You and Me. We're going to look at some features of Rails 4, and we're going to talk a little bit about the future of the web. Now, despite my looks, I'm not a television psychic. So I can't tell you exactly what's going to happen in the future, but I can talk about where I think it's going.
And basically the point of this talk is to get ideas flowing among you guys, talk about where I think we're going, and then hopefully have you pick up the ball and run with it. So what we're going to do is look at some behaviors in Ruby, then look at some changes in Rails, and then look at how these changes in Rails interact with the web. So we're going to start pretty close to the server and then move our way out to the client. So the first thing I want to talk about is concurrency in Ruby, or parallelization. And I like to shorten this down, but I'm not super good at spelling, so it's p56n. That's how I shorten it up. It's possibly misspelled, I'm not sure. And I know most of you probably know about this, or have at least heard about this: MRI has a GIL, or what is known as a global interpreter lock. And this lock prevents concurrent CPU execution. So what that means practically is that we can't schedule two threads to run on two different CPUs at the same time. So if you want an interpreter that's able to do that, I suggest you look at alternatives such as JRuby or Rubinius. These are GIL-free alternatives. But I want to share some good news. And where is Charlie? Is he here? No, he's not. Jerk. Anyway, I want to share some good news with you. The GIL was removed in 1.9. So we can be super happy about that. But I also want to share some bad news with you next, which is that the GIL was replaced with a GVL, which is exactly the same thing. It just has a V instead of an I. So if you go read the source code to 1.9, you'll see many references to the GVL. And that's what that is. So this leads to the question: well, is MRI useless for P32N? And I mean, obviously, yes, it's useless. Your programs don't actually work. It just seems like they work. I don't know if there's really too much of a difference to be made. If your program seems like it's working, does that mean it's actually working?
I think this is a question probably left for philosophers, people smarter than me. But anyway, what I want to do is take a look at the impact of the GVL on MRI and see what that means for our day-to-day Ruby use. And the thing that I like to use for looking at how the GVL impacts MRI is the Fibonacci sequence. And the reason I like to do this is because I used to work in online advertising. I don't know if any of you have worked in online advertising, but online advertising is basically all about doing Fibonacci sequence calculations. That's how they decide what ads to show you. So of course... how many lies am I telling you in this presentation? So of course I like to use this in all of my benchmarks, because of online advertising. So all right, I run this on my machine. Notice it says Higgins; I'm not lying about the Magnum, P.I. thing. I run this and it takes about 5.7 seconds on my machine. Okay, so we're like, oh, this sucks. I need to be able to deliver ads faster. So I'm going to calculate my Fibonacci sequences in threads. I've got four CPUs on my machine, so I'll throw up four threads here and calculate the Fibonacci sequence, hopefully all of them in parallel. And then I run this and it takes just about the same amount of time. And the reason is that time spent in the VM can't be spent in parallel. Hence it's called a GVL, or global VM lock. Whenever we're spending time in the virtual machine, that time can't be executed in parallel. So what do we do about this? To fix this, people use JRuby or Rubinius if they want to actually have threads scheduled on multiple CPUs. But very commonly, what people do is just use multiple processes. And this is what you're doing when you're running your Rails application with, say, Unicorn. Right? You're running a whole bunch of different Ruby processes so that you can actually handle concurrent requests on multiple CPUs on your machine.
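The slide code isn't reproduced in the transcript, so here's a hedged sketch of the benchmark being described: four CPU-bound Fibonacci calculations, run serially and then in four threads. On MRI the two timings come out roughly the same, because VM time never runs on two CPUs at once.

```ruby
require 'benchmark'

# Plain recursive Fibonacci -- pure CPU work inside the Ruby VM
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

serial = Benchmark.realtime { 4.times { fib(25) } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { fib(25) } }.each(&:join)
end

# On MRI these come out about the same: the GVL means time spent
# in the VM is never spent on two CPUs at once, threads or not.
puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

On a GIL-free interpreter like JRuby or Rubinius, the threaded version should approach a quarter of the serial time instead.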
So: just run multiple Ruby processes. Now I want to look at another thing, and this is an example of using a slow web service. We have a slow web service here because, again, online advertising is basically all about Fibonacci sequences and slow web services. So if you look at this, it's super simple. It just prints out hello world. But the important thing is that we have a sleep here for half a second. So each request takes at least half a second to complete. And we have a client, and we just say, okay, go fetch some data from the server. We run this and it takes a little over two seconds, and this makes sense because we're doing four requests, each at half a second; that takes two seconds. Not surprising. So then, well, we've totally forgotten about the previous knowledge about the GVL. Somehow we forgot about it, haven't had our coffee yet. So we decide to throw this into threads. And then we remember, well, this isn't going to work out for us because we can't execute anything in parallel. So what is the point of using threads? Well, we run it anyway, and it actually takes half a second this time. So we were able to make requests to this web service, even in MRI, using threads, and decrease the time spent. Now, how did this work inside of Ruby? Well, there are certain methods inside of Ruby where the interpreter knows, okay, nothing can actually happen inside of the Ruby VM while we're trying to read off of a socket. We can't actually execute anything right there. So what we're gonna do is release the GVL and let other threads execute on the CPU while we're waiting for data off of the socket. And then once we actually get data on the socket, we're gonna acquire the GVL again and enter back into the Ruby VM.
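The client code isn't shown in the transcript either, so here's a minimal sketch of the same effect, with sleep standing in for a half-second web service call. Like a blocked socket read, sleep releases the GVL, so the waits overlap even on MRI.

```ruby
require 'benchmark'

# sleep stands in for reading from a slow web service; like IO reads,
# it releases the GVL so other threads can run in the meantime.
serial = Benchmark.realtime { 4.times { sleep 0.5 } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { sleep 0.5 } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
# serial is about 2 seconds, threaded about half a second:
# the four half-second waits happen concurrently, GVL and all
```

Swap the sleep for a real `Net::HTTP.get` against the slow service and the shape of the result is the same.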
So there are certain things we can do this trick with, which is any type of IO operation. And for those of you who are writing C extensions or looking into MRI's internals, the function you actually wanna look for is rb_thread_blocking_region. This is the thing that actually unlocks the GVL. You give it a function pointer; it'll unlock the GVL, call your function, execute some code, and then reacquire the GVL. And you can actually use this function when you're doing things like, for example, cryptography, where you're doing things that are CPU intensive but you're never actually gonna be in the Ruby virtual machine. You're just doing computations in C. Then you can use this function as well. So what does it mean that we can actually run things in parallel on MRI? Well, from an advertising perspective, obviously it means that we need to build Fibonacci as a Service. Or what I like to call FaaS, which is the next big thing. Yes, I'm looking for investors now, so come talk to me after. I promise to use your VC wisely. But really it means two things. Number one, a block to the VM is a block to the VM. It's always a block to the VM. So if you find some library that's like, well, we're using fibers to make your stuff super duper fast and parallel, it's probably a lie. It's just adding complexity to your code. The VM can't actually magically switch out when you're using fibers. The more important thing is that threads matter. If you're doing IO on MRI, threads matter. And I'm guessing many of you have web apps, probably. Or at least some of your code does IO, I'm guessing. It would be nice if we didn't have to, but you know. So even on interpreters that have a GVL, using threads is important. And I think that threaded web servers like Puma are going to become more important.
Because while we're doing IO operations, we should be able to serve up CPU operations at the same time. So having solutions like multi-threaded servers running multi-process should become more popular, I would hope. Or using GIL-free solutions like JRuby with just a straight-up threaded server would do as well. So the next thing I want to talk about is thread safety in Rails. And I'm going to talk about some of the changes that we've made in Rails to make it more thread safe, what it means to you as developers, and what we had to do. I want to talk about common problems we ran into and how you can fix those things in your applications. So the first thing that we did in Rails was we deleted config.threadsafe!. We didn't actually remove it; it's still there, it's just a no-op. So you can still call it, and it'll probably output a message saying something like, cool story bro, you're already thread safe. So the question is, why should we delete this configuration option? What is the point of deleting this? Well, in my opinion, we should probably just always be writing programs that are thread safe. And if that's the case, then it boggles my mind why we would have some particular flag that's like, okay, now you should run thread safe, or now you shouldn't run thread safe. And if you think about it, does that mean there are branches in the code where it's checking, okay, are we thread safe now? If we are, then let's do it in a thread-safe manner; otherwise, let's do it in a totally crazy manner that could blow up. It just doesn't make sense to me. So this is my opinion on why we should remove this configuration flag: it's just ridiculous. But the other thing is that it simplifies the Rails code base.
So we can actually find those branches where we're saying, okay, now let's do this in a thread-safe way, or let's not do this in a thread-safe way, and we can eliminate those branches and actually simplify the code. So, is it safe to remove threadsafe!? That's the next question you might be asking, and in order to figure out the answer, we need to understand what threadsafe! did. So let's take a look at that. What threadsafe! does is set four different configuration options inside of Rails, and we'll look at what each of those is. But the first thing that I want to say is that loading code isn't thread safe. And I put a star there because it's not actually true: require is now thread safe on trunk. I think require has been thread safe in JRuby for a while; I'm not totally sure about that. But the thing is, it doesn't actually matter, because I'm sure many of you have seen warnings like "circular require considered harmful." You might have seen this somewhere. And the problem is that if we decide to take out a lock and do these requires, a circular require can lead to a deadlock, right? And it would not be very fun if our Rails application just deadlocked while booting. I wouldn't be too excited about that. So in Rails, we just treat all code loading as not thread safe. And we say, okay, we're not gonna do any sort of threading now. We're just gonna load it all in the same thread and get it done. So, the configuration options. The first one: we enable preloading frameworks. What this does is say, okay, we're gonna load up all of Rails. Rails is typically lazily loaded if you don't have this set, which means that when I reference ActiveRecord::Base, it actually goes and loads up Active Record at that point.
And it also does this for all of your code, so it won't actually load your model files until you reference that constant. Why the framework and your code are treated differently, I don't actually know, but that's one of the configuration options. The next thing we do is enable caching classes. This makes sense because, well, if we're gonna load up all of our code, we don't wanna be reloading it. We know it's an axiom that loading code is not thread safe, so we don't wanna be reloading code in production. That would not be a good thing. We'd get deadlocks, our app dies, not super excited about that. The next thing we do is disable dependency loading. This is the option that says, okay, when you reference an application constant, like your User model or whatever, we go load up that user model. We disable that, because we're hopefully preloading all of your code, so it doesn't make sense to go out and find these constants; hopefully we've already found them all. We're done. Now, the next option is we allow concurrency. This is my favorite option. Oh, and that was a lie too; I hate this option. What this option does is remove a middleware called Rack::Lock. What Rack::Lock does is wrap up your requests. It says, okay, when a request comes in, we're gonna obtain a lock. So thread one comes in, it gets the lock, it reads from the socket, and we enter your Rails application and try to process stuff. Well, let's say we have a second thread that comes along. It tries to obtain that lock, but thread one already has that lock taken out, right? So thread two just sits there until thread one is done. Thread one writes to the socket, releases its lock, and then thread two can say, okay, it's my turn. It goes through the entire process and finally releases the lock.
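Put together, the four options being walked through can be sketched as a config fragment. This is a reconstruction from the talk's list, not the slide itself; the exact option names varied a bit between Rails 3.x versions, so check the docs for yours:

```ruby
# Roughly what config.threadsafe! expanded to (Rails 3.x era, from memory):
config.preload_frameworks = true   # load all of Rails up front
config.cache_classes      = true   # never reload code in production
config.dependency_loading = false  # no constant loading at request time
config.allow_concurrency  = true   # drop the Rack::Lock middleware
```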
So with multiprocess setups, like if you're running Unicorn in production, Rack::Lock doesn't make any sense, because you only have one thread in each process. You only have one thread processing a request, so what's the point of taking out a lock? If no other thread can acquire that lock, then what is the point of the middleware? So if you're using a multiprocess setup, this simply adds overhead to your application. The other problem is that, if you noticed in the way we processed requests there, it only allows one request at a time. So if you're running a multi-threaded server, all of a sudden you can only do one request at a time, so what is the point of running the multi-threaded server? You boot up your server and you're like, why is it only running one request at a time? I guess I need to start up multiple processes of my threaded server. And you're like, why is this happening? Well, this is why. So in the best case, it's extra overhead. That's our best-case scenario with Rack::Lock. In the worst case, we can only process one request at a time. So I think default configurations like this are one of the reasons why nobody chooses threaded servers. They start up their threaded server and they're like, why can I only process one request at a time? This web server sucks. I'm gonna move on to something else. So what is the impact of removing threadsafe!? We're just gonna say, all right, we're gonna enable this for everything. It doesn't matter anymore. The impact is that boot time in production will probably increase, because we're gonna preload all of your code. But the thing is, you actually had to pay all that time before; it's just that you paid it over the course of a few requests as your server warmed up. Now it just happens all up front.
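The serializing behavior being described is easy to see in a stripped-down sketch. This is not the real Rack::Lock (that lives in the rack gem); it's a minimal stand-in showing the mechanic:

```ruby
# A stripped-down sketch of what a Rack::Lock-style middleware does:
# serialize every request through one mutex.
class TinyLock
  def initialize(app)
    @app   = app
    @mutex = Mutex.new
  end

  def call(env)
    # Only one thread may be inside the app at a time;
    # every other request queues up right here.
    @mutex.synchronize { @app.call(env) }
  end
end

app    = ->(env) { [200, { "Content-Type" => "text/plain" }, ["hello"]] }
locked = TinyLock.new(app)
status, _headers, body = locked.call({})
```

With one thread per process this mutex is pure overhead; with many threads it throttles you back down to one request at a time, which is exactly the complaint in the talk.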
You also lose a middleware, so you should have a slightly smaller stack, maybe one or two stack frames. But it is slightly smaller. Multi-process servers should stay about the same. So after you boot up, you should see about the same profile, speed-wise, and threaded servers will just work. So now we don't have that anymore. Oh, that reminds me. There was a Rails survey out that asked, what web server do you use? Like, who has the most popular web server? And what I think is funny is that I don't think WEBrick was listed anywhere in there. I don't know if you know what WEBrick is, but it's the web server that comes with Ruby. And if you don't specify the web server that you use when you deploy to Heroku, you use WEBrick. I think many people don't know that. And the other interesting thing about WEBrick is that it is a threaded web server. I'm pretty sure it's the first threaded Ruby web server. Anyway, fun fact: that survey is probably wrong, because I'm guessing there are many people who have pushed applications to Heroku without specifying a web server, and they're running WEBrick in production. Woo, WEBrick! Anyway, removing threadsafe! wasn't the only thing we did in Rails to support threaded applications. We actually had to fix bugs where we were doing unsafe things. And what I wanna do is look at some of the common scenarios where we were having bugs in Rails and what we had to do to fix them. And hopefully you can look for these types of issues in your applications too, and fix threading bugs in your Rails apps or your gems. Now, I guess we're kind of lucky, because 100% of our bugs were race conditions around caching. We didn't actually have any deadlock situations, which was pretty nice. So what we're gonna do is look at a few different caching race conditions and what you have to do to fix them. So, people don't seem to notice this, but ||= is a form of caching.
You're caching the right-hand side of the statement: you do some calculation, do ||=, and assign it to some instance variable. We're lazily initializing that instance variable. And the way this is a problem is that it's a check-then-act race condition. A thread comes in and says, okay, is that instance variable nil? If so, calculate it, set it, and return it. But the problem is, while we're calculating that value, another thread could come along and be like, hey, is this instance variable nil? And yes, it's still nil. So now you have two different threads doing the same calculation twice. Now, one thing to note is that this particular operation is only dangerous when you're sharing the data among threads. So if you see this happening with data that's shared among threads, that's where you have to worry about it. And what you can do is eagerly initialize it. And of course I'm using the Fibonacci sequence here; as I said, Fibonacci sequence, very important. We eagerly initialize this instance variable on the class. And we know that booting the application and requiring files is considered to be not thread safe, so we're guaranteed that this only happens inside of one thread. So we pre-calculate this cache, store it on the class, and then we're good to go. The other fix is that we can lock. This requires that we add a mutex, and we say, okay, we're gonna synchronize on that mutex, calculate the Fibonacci sequence, and then return it. The other thing we can do is move this to instance methods. Class methods are gonna be shared among threads, so depending on the particular problem you're trying to solve, moving to an instance method may be a better solution. You say, well, I'm not gonna share this instance among threads; we can generate a new instance per thread, and we just do the calculation and initialize, and we have a cache there.
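The slides aren't in the transcript, so here's a hedged reconstruction of the racy ||= cache and the two class-level fixes just described, eager initialization and a mutex. The class and method names are mine; fib stands in for any expensive calculation:

```ruby
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

class RacyCache
  # check-then-act: two threads can both see nil and compute fib twice
  def self.value
    @value ||= fib(20)
  end
end

class EagerCache
  # computed when the file is required, which happens in a single thread
  @value = fib(20)
  def self.value
    @value
  end
end

class LockedCache
  LOCK = Mutex.new
  # only one thread computes; everyone else waits at the mutex
  def self.value
    LOCK.synchronize { @value ||= fib(20) }
  end
end
```

The instance-method variant is the same ||= but on an object you deliberately don't share between threads.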
We can also do lazy initialization if we want, using a synchronized block and initializing there. Now, if you really, really, really need this to be a class method, another thing you can do is create a singleton, store it as a constant, and use that. And actually we found that to be fairly handy throughout the Rails source, because we can instantiate that object and test it. Dealing with singletons is kind of a pain, because if they store any type of state you have to reset it, and it's just much easier to test against an instance of something. Now, another problem that we ran into was Hash.new with blocks, and this is kind of an insidious problem, because you don't notice at the method level that this isn't gonna be thread safe. You just look at the method and you're like, well, I'm pulling a value out of the hash. It's totally fine, it's gotta be fine. But actually what's happening is that the hash has the same issue we had in the ||= section. It's a check-then-act: do we have this key? If we don't have the key, then we need to go calculate a value for it. One thing we can do is synchronize around key fetching. The other thing you can do, which I recommend, is get the thread_safe gem from Charles. And I have to say, I wish the type of stuff Charles has in this gem was in Ruby's standard library, because I feel like that's another problem with thread safety in Ruby itself: we don't have a lot of the primitives available to us in the standard library, like thread-safe hashes, thread-safe arrays, even things like, I don't know, futures, latches, barriers. Those types of concurrency data structures just aren't available to us in the standard library, so I feel that that is something keeping Ruby developers from writing thread-safe code. So this is all well and great, but what about at the app level?
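The talk's slide for the Hash.new hazard isn't in the transcript; here's a hedged sketch of the trap and the synchronized-fetch fix (names are mine, squaring stands in for the real work):

```ruby
class SquareCache
  def initialize
    @lock = Mutex.new
    # Looks like an innocent read, but a miss runs the block and
    # writes to the hash -- the same check-then-act as ||=
    @cache = Hash.new { |hash, key| hash[key] = key * key }
  end

  # When the instance is shared between threads, wrap lookups in a lock
  def [](key)
    @lock.synchronize { @cache[key] }
  end
end

cache = SquareCache.new
cache[9] # => 81
```

A ThreadSafe::Cache from the thread_safe gem gives you the same safety without hand-rolling the mutex.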
I'm talking about all this stuff from the perspective of somebody working on the framework itself, and I suspect most people are actually working on Rails applications. So what do we do at the Rails app level? Well, the answer is actually very easy, and you shouldn't be afraid of making your code thread safe. The main thing you need to do is avoid shared data. Once you learn how to spot where shared data is happening in your application, it's actually pretty easy to eliminate it or put locks around it. And the reason I say this is the most important thing to remember is that most people don't actually type Thread.new in their applications. It's very rare. So if you're not actually spooling up new threads, then mostly what you need to do is watch out for shared data, and you need to look for things that are global. We're gonna look at a few things that are global. Like this: obviously, if you're using global variables, it's a global and probably shared among threads. Another global you need to watch out for is constants; constants are gonna be shared among threads. What's kind of annoying is that if you set a constant twice, you're gonna get a warning; Ruby will complain to you. But you can actually mutate a constant's value, like down here on the bottom, and you won't get a warning about that. You're modifying global data, but you don't know it. And I think the most common one is class-level methods, class methods like this. You have to remember that these classes are all shared among threads, and when you set a method on a class, that method is also shared among all of your threads. So you need to be careful about these. So, like I said: avoid global data and add locks. The next thing I wanna talk about is streaming. And to be honest, this isn't a new feature in Rails 4; we just tried to make it easier to use.
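The three kinds of globals being described fit in a few lines. This is my sketch of the pattern on the slide, not the slide itself; note especially that mutating a constant is silent:

```ruby
$counter = 0           # a global variable: shared by every thread

CATS = ["Gorby Puff"]  # constants are globals too
# CATS = []            # reassigning warns: already initialized constant
CATS << "Higgins"      # mutating is silent -- shared state, no warning

class AdServer
  # class-level state: one copy, shared across all threads using the class
  def self.cache
    @cache ||= {}
  end
end
```

Freezing a constant (`CATS.freeze`) turns the silent mutation into a loud FrozenError, which is often the cheapest fix.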
So we're gonna look at streaming: we'll look at how template rendering works today, we'll look at streaming, and we'll look at some features beyond Rails 4. So I wanna look at template rendering from a very, very high-level perspective. When we process our ERB templates, the results are buffered and stored in memory, and as such, all the Rails internals are built around buffering up this template and then spitting it out to the socket, out to the client. Now, what sucks about this is that it means clients are blocked while Rails is working. When somebody makes a request to your web server, they say, give me the index page, and Rails is like, okay, we're gonna calculate the index page, and it's sitting there churning away calculating the index page while the client is just sitting there going, okay, when am I gonna get some data? When am I gonna get some data? When you could be sending data down to the client and letting them fetch assets or process JavaScript in advance. So the client is blocked while Rails is working. The other annoying thing about the way template processing is handled is that we have to fit the entire page into memory before we spit it out to the client. Usually this is fine, but it means we're constantly resizing strings, and we'll see the process growing as it builds up the page, and then hopefully shrinking again, hopefully, and then spitting out to the client. So most people expect that they have to return something from Rack here; they have to return the entire page. And I think Rack encourages buffering. If you look at the Rack API, this is a Rack application; that third element in the array is actually the page body itself, and the simplicity of this API makes it seem like, well, I have to buffer up the entire page before I send it off to the client, because we have to actually return this value up the stack.
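The Rack application being referred to looks roughly like this; the whole body rides along as the third element of the returned array, which is what nudges you toward building the entire page before responding:

```ruby
# A bare Rack app: call(env) must return [status, headers, body].
# Handing the body back up the stack encourages buffering the whole page.
app = lambda do |env|
  page = "<html><body>the whole index page, fully built</body></html>"
  [200, { "Content-Type" => "text/html" }, [page]]
end

status, headers, body = app.call({})
```

The body only has to respond to `each`, so streaming is possible under the spec; the simple array form is just what everyone reaches for.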
So the thing that's annoying about this is that we know that even in MRI, we can do IO and CPU in parallel. So why are we buffering up this page? What is the point? We could be sending data down to the client, actually getting some parallelization, P37N, out of this. So this is where ActionController::Live comes in. What this is is a module that you mix into your controller, and it gives you an IO-like API to send data down to the client. The reason we stick with a module is because today people are expecting that everything gets buffered, and we don't wanna break that assumption in Rails 4. We don't wanna break your applications when you're upgrading, so this is an opt-in feature. So here's an example of using it. You just mix in ActionController::Live, and then you get this stream object on your response object that you can write to. It acts like an IO. That means you have to close it, like an IO. It acts like an IO. So it's natural for us to do computations with IOs. ActionController::Live gives you an object that quacks like an IO. This IO API is important to me because on Unix systems, everything is a file. Everything is a file. So why aren't we treating our output like a file as well? So I wanna look at how this works, at some of the internals and how to actually build this. So this is the API we want: we set the status, we set some headers (X-Whatever equals heart), we write out to our stream, and then we close it. Ideally, that's what we would have our API look like. Now, the problem is, if we look at the Rack API, we don't have that. So how can we accomplish this? Well, what we need to do is wrap this up into a response object and have people write to it. So we wrap it up in a response object, and it looks like this; here's our response.
But the problem is, down there at the bottom, that's our Rack application: it sets the response on the controller, then it calls the index action, and then it returns back up the stack. The problem is that this still doesn't stream. This doesn't solve our problem. We're calling into the controller, waiting for the controller to return, and then returning back up the stack. So how do we fix this? Like, what do we do? Well, we can call the action inside a thread. Ah, now we're seeing why thread safety in Rails is important. We can call the action inside of a thread and then return back up the stack outside of that thread. But this still isn't good enough, because if you're looking at this carefully, you're saying to yourself, ah, man, this could return back up the stack before anything has actually happened inside the controller. Maybe nobody's set the response status. Maybe nobody's set the headers. What are we gonna do? So how do we deal with this? Ideally, what we wanna do is wait. Right here, we'll say, okay, we're gonna wait on the stream, wait until something has actually been written. And the way that we do this is we have a buffer class, and this buffer class has a latch in it. We call wait on the buffer class, and that just blocks there until somebody has actually written. And you'll see down here in the write method that it releases the latch, and then we return back up the stack. Okay, cool. So that's our internal implementation. This isn't the exact code, but it's very similar to what's inside the Rails source code, so you can go look for this and it should seem familiar. So cool, but what can we do with this? What can we do with this streaming stuff? Well, I wanna say this really excites me from a Rails-internals perspective, because we can use this to build streaming ERB. We already have streaming ERB, but this greatly simplifies the process, and we can see that by taking a look at how ERB does its processing.
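A runnable sketch of that thread-plus-latch shape, using a stdlib Queue both as the chunk buffer and as a one-shot latch; the class and method names are mine, not the actual Rails internals:

```ruby
# The action runs in its own thread; the server thread blocks on the
# latch until the first write has happened, then returns back up the
# Rack stack while the action keeps streaming.
class StreamingBuffer
  def initialize
    @chunks = Queue.new # buffered chunks, popped by the server
    @latch  = Queue.new # released on the first write
    @signaled = false
  end

  def write(chunk)
    @chunks << chunk
    unless @signaled
      @signaled = true
      @latch << :ready # release the latch: status/headers are now set
    end
  end

  def close
    @chunks << nil # sentinel: no more chunks are coming
  end

  def await_first_write
    @latch.pop # blocks until write has been called at least once
  end

  def each
    while (chunk = @chunks.pop)
      yield chunk
    end
  end
end

buffer = StreamingBuffer.new
Thread.new do # "call the action inside a thread"
  buffer.write "headers are committed now\n"
  buffer.write "more data\n"
  buffer.close
end
buffer.await_first_write
# ...now it's safe to return [status, headers, buffer] up the stack.
```

The server can then iterate the buffer with `each`, spitting each chunk down the socket as the action thread produces it.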
So here we have an ERB template; we output the source, and this is what the actual compiled source of the ERB template is. So we see that it concatenates a bunch of stuff onto a buffer. But what's cool is that we can control, we can control the way that ERB writes things out, and we can control the variable that it writes to. So in this example, what we're doing is we're saying, okay, I want you to use the write method, and I want you to call the write method on standard out. So now, if we take a look at the template source, we're actually writing to standard out. This is awesome, because we can refactor Rails internals to more easily produce streamed template output. This is the result of that template. So how is this cool on the web? That's awesome for Rails internals, but how is this cool for web applications? Refactoring Rails internals excites me, but maybe it doesn't excite you guys very much. So let's take a look at how we can use this with web applications. One thing we can do is build an infinite-stream API server, similar to Twitter's. But I don't actually think that's as exciting as the other things that we can do with this. The most interesting thing to me right now is using server-sent events. I don't know if you've seen server-sent events, but they're basically infinite streams where the browser will fire a JavaScript function every time it sees an event. This is a JavaScript API. So this is what an SSE response looks like. It has this content type of text/event-stream, and every time the browser receives a particular event, like this one, it'll fire a JavaScript function and pass this data payload to that JavaScript function. So this is the source. This is the JavaScript source to set up an SSE. And you can see here, this actually makes a request back to your server, so it makes a request to /control on the server, and it actually keeps that socket open. Now, we add an event listener on the reload event.
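The buffer-variable trick he's showing can be seen with stdlib ERB's `eoutvar` option; the template text and variable name here are my own, and this only changes the compiled source — wiring that buffer up to a real stream is what the Rails refactoring is about:

```ruby
require 'erb'

template = "Hello, <%= name %>!"

# By default the compiled source concatenates everything onto _erbout
# and returns it at the end -- the buffering he's describing.
default_src = ERB.new(template).src

# But we control the variable the template writes to:
streaming_src = ERB.new(template, eoutvar: "@stream").src
```

Print either `src` and you can watch the concatenation target change from `_erbout` to `@stream`, which is the hook that lets the internals point template output somewhere other than an in-memory string.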
So every time we get an event named reload, the browser will execute a JavaScript function, and we can have that JavaScript function do whatever we want. And in this particular instance, I'm having it just reload the page. And I think this is cool, because we can have the server notify the browser about events and actually have real-time server-to-browser communication. So I wanna show a video of an example of this. We fire up our server, load the page, woo, users, yay. And every time we change a file, we can actually have that notify the server. So it changes it, say, what do I say, OMG, I think. And it'll actually notify the browser: hey, you need to do a reload. It's time to do a reload. We can have it watch assets too, so if you change your CSS, it'll automatically do reloads. Change it down there, we can see the users page background has changed; delete it, and it'll go away. And the other cool thing is we can do it from other processes too. So we can say, well, let's fire up the Rails console, and we're gonna modify the database. We'll create a user here, and as soon as we create the user, it notifies the browser: hey, data changed, you need to reload, or do whatever you want to, whatever we define that JavaScript function to do. So how does this work? When the file system changes, we use FSEvents, and the FSEvents gem notifies our web server, saying, hey, something changed, you need to do something. And then our web server says, okay, cool, we're gonna send an event down on this control socket that we have open with the browser. And then the browser does its thing and reloads. Now, what's interesting is that all three of these boxes are actually within the same process. These are all running within our web server. So yes, all three of these boxes are in the same process. Now, the way that it works with the console that we were looking at, that console example, is a little bit more tricky.
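The wire format the browser is reacting to is very small: each server-sent event is just `field: value` lines terminated by a blank line. A hedged sketch, where the helper name and payload are mine:

```ruby
require 'stringio'

# An SSE frame: the "event:" line names which JavaScript listener the
# browser fires, the "data:" line carries the payload passed to it, and
# a blank line terminates the event. The response itself is served with
# the text/event-stream content type and the socket stays open.
def write_sse(io, event:, data:)
  io.write("event: #{event}\ndata: #{data}\n\n")
end

stream = StringIO.new
write_sse(stream, event: "reload", data: '{"changed":"users.css"}')
```

In the demo, `io` would be the `response.stream` from ActionController::Live rather than a StringIO, so each frame goes straight down the open control socket.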
We actually fire up a DRb server inside of your web server, and it opens up a socket. The console opens up a socket to our DRb server, and across that socket, we can send events to the server and have it send them down to the client. So other possible input sources besides file system changes or database changes, just to get your mind churning: we can use embedded systems. For example, I have my meat-curing box hooked up to a web server. We can use it with telephony, or maybe with other users, for example, chat systems. Like, we could use this to build, say, oh, I don't know, IRC or something like that. So we looked at these three topics, thread safety, parallelization, and streaming, and hopefully I was able to relate them all to each other, so you can see how each of these is related inside of your applications as well as inside of Rails internals. Now, I think that core counts are increasing. I don't think that it's a stretch to say that every time you buy a new machine, the number of cores on it has gone up. I just bought a new MacBook Air, and now I'm up to four cores, and that's crazy. It's really awesome. But the thing is, we need to start utilizing the entire machine, which is why investigating things like parallelization in our virtual machines is so important. We need to understand how we can make the most of these machines. The other thing that I think is changing is that we have high-latency clients. We're starting to get more and more high-latency clients, like people who are out on their really terrible EDGE connections on their cell phones. And we need to get data down to these clients as quickly as possible. Having them wait on our server to process templates is unacceptable. We should be getting data down to them as soon as possible. The other thing that I think is changing is that patience is decreasing. And I mean this among people.
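The console trick can be sketched with stdlib DRb. Here both ends live in one script for illustration (the notifier class and the URI are my own); in the talk, the server side lives inside the web server process and the client side runs in the Rails console:

```ruby
require 'drb/drb'

# Server side: a small notifier object the web server exposes on a socket.
class Notifier
  def initialize
    @events = Queue.new
  end

  def notify(event) # called remotely by the console over DRb
    @events << event
  end

  def pop
    @events.pop
  end
end

notifier = Notifier.new
server = DRb.start_service('druby://127.0.0.1:0', notifier) # port 0: pick a free port

# Console side: open a socket to the DRb server and send an event across.
remote = DRbObject.new_with_uri(server.uri)
remote.notify('data_changed')
```

The web server would then drain the notifier's queue and push each event down the open SSE control sockets, which is how a `User.create` in the console ends up reloading your browser.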
Like, I think everybody's patience is decreasing, and personally, I blame this on Honey Boo Boo. But I think that people are starting to expect more instantaneous responses from their web servers, and we need to figure out how to do this. And I think the way that we need to do this, I was thinking about this, the way that we need to do this is we need to lie. When somebody is asking for a particular calculation, we should be using cached data where we can. We're not actually performing that calculation; we're caching, and we're lying to you about it. We need to cheat. So, updating partials on the page: we don't update the whole thing. You do a request, and we're only gonna update the part of the page that changed. So we're cheating; we're not actually returning the entire page. And we also need to steal. And I mean steal computation from our end users, moving calculations into JavaScript and having them calculated on the client side. So for the future, what I want all of you to do, I want you all to lie, cheat, and steal. In other words, I want you all to be good engineers. Thank you. If you have questions about this, please come see me after. I don't know if we have time for questions, and I'm not sure what the protocol is on questions here, but if we have time, I'm happy to take them. If we don't, come find me. So thank you. Do we have time for questions? Are we gonna do that? Oh, okay. Five minutes, questions, go. And go. So I was wondering, if we don't have questions, I was wondering, like, are there boutique spam places here in Hawaii? Like, I wanna see if I can find some organic, shade-grown, fair-trade spam and try that. Does that exist? Is it a thing? Deep-fried spam? Ooh, that sounds delicious. Questions, questions. Anyone? Yes, Konstantin. So the question is, in template streaming, how do you handle exceptions that happen during streaming?
And the answer is: don't have any exceptions. No. So there are things we can do, like spit out JavaScript to redirect you, or some kind of hacks, but really, they're total hacks. I don't have a good answer for you. Total hacks. Yes, Koby. Anything on timing with the release candidates of Rails 4? Timing, the question is about timing on the release candidates of Rails 4. Any dates that I give you are gonna be total crap. We thought we would have a beta out at the beginning of September, and obviously it's past September, but I think we're mostly unblocked now. We're hoping to get a beta out shortly and a final before the end of the year. Like, soon-ish. Yes. The question is, is there a way to take advantage of the new queue system right now? And let's see. Yes and no. Yes, because it's basically just a queuing API, and really the only thing that's in Rails to support it is a fancy hash where you set a queue-type object. So, if an in-memory queue is good enough for you, then just use the queue object. If you need something that's serialized, say to Redis, Jeremy Kemper is working on a queuing API that wraps around Resque, and you can use that. I'm not sure if you can use it today, but it will be out along with Rails 4. Sidekiq has a queuing API, but it's on an experimental branch. So if you wanna use it today, tweet at me and I will help you get set up. How's that? Yes. People who work with jQuery know about futures, but can I talk about latches and barriers for a second? Yes. All a latch does is allow you to coordinate two different threads. You share the latch among multiple threads, and you say, well, okay, I want this thread to go to sleep until this other thread has done some particular calculation.
When that other thread is finished with its particular calculation, it'll release the latch and let the other thread go, and typically you use something like a countdown latch. So you say, well, I need five different threads to finish their job before I'm gonna continue, and it'll just count down on each of those. And a barrier, I believe a barrier is basically the same thing, except that you can also have cyclic barriers, so you can basically reuse your latch. The thing about latches is that they're one-time use, so you can't reuse them, but if you have a cyclic barrier, you can reset it. Next question, yes. So the question is, let me see if I'm getting this right: is there going to be a plan for standardizing SSEs inside of Rails? Giving you a default for using SSEs when you first get started. So the question is, is there going to be some sort of default for people to be able to use SSEs when they fire up their web applications? The answer is, there's no plan for that right now. And it's also kind of hard to do something like that, because right now, ActionController::Live streaming is an opt-in thing. It's not the default for all of your controllers. My personal plan is, I would actually like to make that the default. I'd like to refactor the Rails internals such that we're always using streams. And I think once we get to that point, then it would make sense to say, okay, we're going to give you easy, sane defaults for doing SSEs. But right now, it's total DIY, basically. I think what would really help, and I'm going to admit I'm a really terrible person, is I've been copying and pasting this stupid little IO object around between all of my applications, and it just spits out SSEs: I write an object to it, and it translates that into a JSON object that gets spit out as an SSE.
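A minimal countdown latch along the lines he's describing, built on MRI's stdlib Mutex and ConditionVariable; the class is my own sketch, and it's one-shot, which is exactly the limitation a cyclic barrier removes:

```ruby
class CountdownLatch
  def initialize(count)
    @count = count
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
  end

  def count_down
    @mutex.synchronize do
      @count -= 1 if @count > 0
      @cond.broadcast if @count.zero? # wake everyone who's waiting
    end
  end

  def wait
    @mutex.synchronize do
      @cond.wait(@mutex) until @count.zero? # sleep until fully counted down
    end
  end
end

# "I need five different threads to finish their job before I continue."
latch   = CountdownLatch.new(5)
results = Queue.new
5.times { |i| Thread.new { results << i * i; latch.count_down } }
latch.wait # blocks until all five workers have counted down
```

Once a latch like this reaches zero it stays there; a cyclic barrier would add a reset step so the same object can coordinate the next round of threads.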
Probably somebody should write a gem for that. Hint: not me. I hope that answered your question. Okay. Do we have time? Anything else? Last question. Am I going to come to the Spam Jam festival next summer? I didn't know there was one. That's awesome. If I'm not somewhere else, yeah, totally. Does anyone else like spam? Am I the only one? Yes. Yeah. All right. Thank you.