Hello, everyone. How's it going? I want to welcome you all to warm, sunny Florida. Yeah, right. And this is Acts As Conference, so make sure you act like you're at a conference as opposed to, like, a pool party. That's okay, too. Acts As Conference. Luckily, no one came in their bathing suits. Good stuff. Yeah, you guys know what we do. We do the Rails Envy podcast on a weekly basis, amongst other things. And in this talk, we thought we'd go over some of the innovations that we see while doing the podcast (we've been doing it for over a year now) and give you a taste of a couple different libraries that we find really useful. So I give you Jason Seifer, which is a great gift. I will turn this conference right around to take you home, whoever did that. Okay. So we're going to take a little trip into the deployment landscape of Rails with my first innovation, and that is going to be Phusion Passenger, also known as mod_rails, from the Phusion guys. But in order to really take a look at this, we have to go back in time. Okay. The Rails deployment landscape has changed considerably since Rails came out. Back when Rails 1.0 was around, the preferred way of deploying Rails applications was mostly FastCGI. This was really, by today's standards, not one of the best ways to deploy Rails applications, as has been graciously pointed out by some members of the Ruby community. Because as we all know, a FastCGI process deploying Rails needs restarting about a thousand times every 10 seconds. Zed Shaw, despite his exit from the Ruby community, did give us Mongrel, which changed the Ruby deployment landscape considerably, and a lot of us are still using it today to deploy our Rails applications. And Mongrel works a little bit like this. You have your web server, and what your web server is going to do is set up a proxy. The proxy is going to go out to all of your different Mongrels that are going to be serving your Rails application.
This saved considerable memory, and it gave you a lot of speed boosts when deploying your Rails applications. After Mongrel, we got Thin and Ebb and those other servers as well, which kind of built on the Mongrel way of deploying your Rails applications. But there was some contention in the community, because everybody thought deploying Rails applications should be easier, and by everybody, I mean DreamHost. I'm sure everybody remembers this drama. DreamHost put out this blog post that said, hey, we would really like to be able to deploy Rails applications much more easily for our customers. And DHH wrote back and said, yes, I agree, do it. And then the Phusion guys came out and said, you know what? We're going to do that. You want the job, you got the job. So they came out with mod_rails, which is Passenger, and that's their website. It's awesome. See, look, you got Rails in a box right there. That's how easy it is. And it's really easy to install and really easy to deploy. All you have to do is basically touch a restart.txt file in your app's tmp directory to restart your application. Passenger has gotten a lot of widespread use now. The 37signals guys are using it. DreamHost is definitely using it. Gregg has a couple of blogs deployed on Passenger, on DreamHost, and the world is good for Rails deployment. Quite innovative. So it changed from that Mongrel setup you saw before, with the web server and your proxy, to basically just having your web server and defining how many different Rails instances you wanted to have. You did this in your Apache configuration. But what would Passenger be without Ruby Enterprise Edition? And what is talking about speed without some benchmarks that I pulled straight from their website? Ruby Enterprise Edition uses a different garbage collector to speed up your application and reduce memory, and it uses tcmalloc, a different malloc implementation. I don't exactly know what that does, but it's awesome, as you can see from the graphs here.
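To make that concrete, here's a rough sketch of what the Apache side might look like. This is not from the slides; the paths and numbers are made up, and the directives are from the Passenger documentation of that era:

```apache
# Hypothetical httpd.conf snippet for a Passenger-served Rails app
PassengerMaxPoolSize 6                 # cap on how many Rails instances Passenger spawns

<VirtualHost *:80>
  ServerName myapp.example.com
  DocumentRoot /var/www/myapp/public   # Passenger detects the Rails app from public/
</VirtualHost>
```

And redeploying really is just `touch tmp/restart.txt` from the application root.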
Lower is better, yellow is the best, see, right there. By using Ruby Enterprise Edition and Passenger, you can get significant speed savings and scalability with your application. That's right, I'm talking about Rails scalability being improved with Passenger, because there are some naysayers out there about Rails scalability, and Passenger helps with that. Those sites are mine, by the way. I just got my first Google ad payment the other day, $104, but that's also from ismurbrails.com as well. It's kind of skewed, since there's three sites and I only put two up here, but thank you, everybody, for clicking on the ads. If you did, talk to me, I'll buy you a beer. Next up, shoulda. I find shoulda to be innovative. It was not the first behavior-driven development testing framework for Rails, and I'm doubtful that it will be the last, but shoulda brought some really nice macros to the table to help you test your code much more easily. Everybody's testing, right? Okay, good. I have to throw that in there every once in a while or people get angry. So here's what shoulda code looks like. It gives you some very, very readable macros to put in your tests. This is a test for a model, which kind of took the principle of don't repeat yourself and applied it to testing. Like I said, there were macros before, but shoulda by the thoughtbot guys packaged it up and made it very easy to do. Here's what a controller test looks like. One thing that you get is different contexts that you can use in your application testing as well. Another thing that makes testing much easier is factory_girl. How many of you like fixtures? I didn't see one hand, for the record. Not one hand, no applause. That's right. Fixtures make your tests really brittle. Hold on, not really brittle, kind of brittle. You have to keep refactoring your fixtures and tests as you go along. So the thoughtbot guys, again taking the Rails community by storm, came out with factory_girl.
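The shoulda macros and contexts mentioned a moment ago read roughly like this. This is a hedged reconstruction, not the actual slide code, using the macro names shoulda shipped with at the time:

```ruby
# Model test: one-line shoulda macros instead of hand-written assertions
class UserTest < ActiveSupport::TestCase
  should_validate_presence_of :name, :email
  should_have_many :posts
end

# Controller test: a context wrapping several one-line expectations
class PostsControllerTest < ActionController::TestCase
  context "on GET to :index" do
    setup { get :index }

    should_respond_with :success
    should_render_template :index
    should_assign_to :posts
  end
end
```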
And what factory_girl lets you do is have fixture factories to define your test data, so you can go through and create all sorts of fixtures. Well, not fixtures, but you can actually create your data on the fly. It is maybe negligibly slower than fixtures, to the point that it really doesn't matter. I've converted all of my applications to using factories instead of fixtures, just because it's that much easier. And you can really keep all of your different logic and test data in the different contexts. So here's how you would define a factory with factory_girl. You can see right here you get some easy ways to get the next value for an email address, and that's how you would use it in your application. This will create it in the database for you while running your tests. So next up, Cache Money, on a scaling roll. This is not the Cash Money you guys are probably thinking of. Ten Years of Bling, Volume One, I would just like to point that out. Volume One, like there's going to be a Ten Years of Bling, Volume Two. Do they have to wait another ten years? And how much more bling did they accumulate? But I digress. So what is Cache Money? I took this straight from the Cache Money website. It's a plugin for Rails that basically gives you transparent write-through and read-through caching in your Rails applications to help you scale. We saw the scaling slides before, right? Can Rails scale? Okay. It can scale. Cache Money is used by Twitter right now; a version of it is used by Twitter. And basically, one of the big problems when you have a really large website where you need a lot of data replicated across your databases is something called replication lag. So what can happen is, let's say you have three database servers, a master and two slaves. A user goes into their account and updates, say, their login.
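For reference, the classic factory_girl syntax from that era looked something like this. It's my sketch rather than the slide, but the sequence-for-emails idea is the one described above:

```ruby
# A sequence hands out the "next" unique email each time it's used
Factory.sequence :email do |n|
  "user#{n}@example.com"
end

Factory.define :user do |u|
  u.name  "Gregg"
  u.email { Factory.next(:email) }   # lazy block: evaluated on each factory call
end

# In a test, this builds and saves a User record on the fly:
user = Factory(:user)
```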
Now, as soon as they update their login, there's a chance that by the time they actually get back to that page and refresh the action, the data will not have propagated into the other slave databases. This is where Cache Money comes in. What you do is, you have a separate memcached cluster (memcached, memcacheD, is that how you say it? Okay), and what Cache Money will do is write that data into your memcached cluster and into your database at the same time. When you go back through, it reads from the cluster. So that provides basically transparent, I don't want to say scalability, but transparent write-through and read-through caching. It helps immensely when you're dealing with these larger-scale websites. Oh, and it uses all of the ActiveRecord functionality. That's why it's transparent. All right. Next up, Webrat. How many people have heard of this or used Webrat? Awesome. Half the room. You guys know why it's awesome. It gives you really, really easy integration testing. This is an example from the Webrat documentation about a sign-up process. Instead of actually doing all the, you know, let's say you were going to hit the sign-up action in an integration test, you don't have to do get sign_up, yada, yada, yada. It provides a really nice DSL for filling in all these fields, doing the submits, following redirects. And you can still do all of your asserts, or your should equals if you're using RSpec or matchy or something like that. Merb actually used Webrat for its testing; it was compatible out of the box with Merb. And as we all know, Merb is being merged into Rails 3. Next up, JRuby. I actually don't have a lot of slides on JRuby, but JRuby was kicking ass this year. JRuby is just about the fastest widespread Ruby implementation out there. 1.9.1 beats it in some benchmarks, but for the most part, JRuby is the fastest Ruby implementation that we have.
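Going back to Cache Money for a second: the write-through/read-through idea can be sketched in plain Ruby. This is a toy illustration, not the real Cache Money internals; two hashes stand in for the memcached cluster and the database:

```ruby
# Toy sketch of write-through / read-through caching (NOT Cache Money's
# actual implementation): the cache and the "database" are plain hashes.
class WriteThroughStore
  def initialize
    @cache = {}   # stands in for the memcached cluster
    @db    = {}   # stands in for the master database
  end

  # Write-through: update cache and database together, so a read that
  # races ahead of replication still sees the new value.
  def write(key, value)
    @cache[key] = value
    @db[key]    = value
  end

  # Read-through: serve from cache when possible, fall back to the
  # database and populate the cache on a miss.
  def read(key)
    @cache.fetch(key) { |k| @cache[k] = @db[k] }
  end
end

store = WriteThroughStore.new
store.write("user:1:login", "gregg")
store.read("user:1:login")   # => "gregg", served from the cache
```

The point is that the cache is updated at write time, so the slow replication path never gets a chance to serve a stale login.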
And they're getting 1.9 compatibility very, very rapidly, if they're not there already. One other thing JRuby did really innovatively this year is give you the GlassFish gem, which I think is production-ready now. I'm not entirely sure, but the new GlassFish gem provides very, very simple and easy JRuby deployment of your Rails applications. Basically, all you have to do is go into a directory and type glassfish, and your Rails app is up and running. All right. Another innovation from the Rails community this year is sample applications. I'm really not too involved with other web development communities, but I have not really seen anything like this anywhere. A common theme among these sample applications is that they include different plugins; restful_authentication is pretty common, and will_paginate. Rails already provides a really good base for your application to get you up and running. The sample applications further that along by including all the different plugins and Capistrano recipes, and basically even take you to the next step with user authentication, a login system, things like that. And they're fully tested. One of the reasons I find this so innovative is that new users can see how you're supposed to code a Rails application when they get into it, because following tutorials doesn't always cut it. So it's really good to see some code in there that's actually used in the wild. Some different sample applications that we have out there: Bort, Blank, and Suspenders. Bort is from Jim Neath. Blank is from, hold on, got it right here, oh, I spoiled the whole thing. Sorry, guys. James Golick does Blank, and the thoughtbot guys do Suspenders. Next up, as you guys have probably guessed, is Workling. Last year was the year of the messaging queue with Ruby and Rails. We have so many different solutions for messaging queues. How do you decide what goes in your application? Well, this is where Workling comes in.
Just like ActiveRecord kind of abstracts away the database, Workling kind of abstracts away your messaging queue. It works with a bunch of different messaging queues, and what you do is define jobs. This is from their documentation. Say we had a cow that we wanted to move. We would just define our worker, which inherits from Workling::Base, and then to call it from our controller, we would just make an asynchronous move call. Who doesn't want that? You can actually download this sample application from the Workling project; there's a link that we'll be providing at the end with links to all of these different projects. Workling supports all these queues: Starling, RabbitMQ, RudeQueue, and Background Job, or BJ. Yes, I said it. You guys. Okay, moving on. Next innovation from the Rails community: open source social networks. I don't know how many of you are freelancers, but there was a time when about every other week someone would call and say, you know, what I need is a social networking website for my startup. It has to combine YouTube, MySpace, Facebook, LinkedIn, and Yahoo. I need it done in about a month, and I don't really have any money. So if you guys want to work for equity, that would be awesome. But the Rails community answered this and said, you know what? We can help you out. You don't need to repeat all this code every single time you're doing a social networking application. So there are open source ones. The first one that came out was Lovd by Less. Let's give them a hand. Stephen Bristol, stand up. Stephen wrote Lovd by Less. Another couple we got: Insoshi and Tog. Tog is a little bit different from Lovd by Less and Insoshi in that it just gives you the different parts of the application as plugins that you can integrate into your own application. And the world was good. You had a really solid base to start with.
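Back to Workling for a moment: the fire-and-forget pattern it gives you can be sketched with a plain Ruby queue and a thread. This is a toy stand-in for the idea, not Workling's actual API:

```ruby
require "thread"

jobs    = Queue.new
results = Queue.new

# The "worker": pops jobs off the queue in the background.
worker = Thread.new do
  while (job = jobs.pop)                  # blocks until a job arrives; nil stops it
    method, args = job
    results << "#{method} done for cow #{args[:id]}"
  end
end

# The "controller": enqueue the work and return to the user immediately.
jobs << [:move, { :id => 42 }]

jobs << nil                               # toy shutdown signal
worker.join

message = results.pop                     # => "move done for cow 42"
```

With Workling, the queue behind this could be Starling, RabbitMQ, or a database-backed queue, and your worker code wouldn't change.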
And finally, the last innovation is not from the Rails community, but the Rails community uses it extensively: GitHub. Yes, I know, the guys who wrote it are Rails users. You guys know the config.gem syntax in Rails 2.2, in your environment.rb? You can use gems straight from GitHub with it. No longer do you have to wait for your gems to be updated by the authors just for your project; you can fork and develop the gem yourself. GitHub has contributed a lot to innovation in the Rails community by providing such an easy platform for developers to work with. And with that, I'm going to turn you over to Gregg. Yeah, let me show you guys. We put up a little page on railsenvy.com. Let me open up. Those are Gregg's kids. It's not stock photography. Yeah. Okay, that's not going to work. Try that. If you go to the following web address, which you'll see in a second, I know you can't see anything. There we go. All right. Sweet. If you go to this web address here, you're going to see a list of links to everything that we talked about. So if you want to follow up, get some more information, got it, got it. So, about how long ago was it? About four or five months ago, Jason and I put out envycasts.com, where we tried to do our own, you know, selling-screencasts thing for $9, just like Geoffrey Grosenbach. And the most recent one that we put out was Scaling Ruby. Can I get a show of hands? How many people here have downloaded it? Yeah, not too many. A couple. Okay, cool. Cool. Cool. I appreciate it. See us later. Yeah. I was going to say get out. And you know, putting out these Envycasts is good, but, you know, not everybody accesses them. They've got a good amount of downloads. Not making a crapload of money. But you know, I'd like to do our own part to help along the community, and it would be really cool if we could release these for free, something like that for free. And so I was trying to think a few months ago, how do we take that and release something like that for free?
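The config.gem trick mentioned above looked roughly like this in environment.rb. The gem name here is just an example of GitHub's user-prefixed naming:

```ruby
Rails::Initializer.run do |config|
  # Pull the thoughtbot fork straight from GitHub's gem server
  config.gem "thoughtbot-shoulda",
             :lib    => "shoulda",
             :source => "http://gems.github.com"
end
```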
And, you know, we do the podcast, right? And we get sponsors for the podcast. And so I thought, well, what if I go to one of the sponsors, like, say, New Relic, and see if they'd be interested in sponsoring the next screencast, which was going to be Scaling Rails. And they were open to it. And what came of it was a bunch of screencasts, which we're actually going to announce today and put the website out today. I created 13 little mini screencasts, and we're releasing the first five today. And I'm going to show you guys the intro video to these screencasts right about now. You'll see this on the website. Over the past few years, as Ruby on Rails has grown in popularity, there have been several big success stories. We've seen websites that allow you to do e-commerce, some that allow you to do social networking, others that allow you to get video on demand, and even some that allow you to stay closer to your family. But unfortunately, there's been some doubt cast, no thanks to these guys, that, well, Rails can't scale. Thanks to the support of New Relic, over the next few weeks we're going to be releasing a series of screencasts which will show you exactly how to scale Ruby on Rails. And I think what you'll find is that Ruby on Rails scales pretty much just like any other web framework out there, with the exception that we use Ruby to do it. The Ruby programming language allows us to be more productive as programmers and write code that's more understandable, and thus more maintainable. And in my opinion, it's just more fun. As you can see here, we've got a great number of topics to cover, starting with the basics with page caching and moving all the way to using memcached, and we're even going to get into reverse proxy caching, some advanced stuff. At the very end, we'll be talking to three Ruby on Rails professionals and asking them how they recommend scaling Rails websites, starting out with Taylor Weibley, who works for Engine Yard.
After we talk to him, we'll be doing a little discussion on database speed, followed by Jesse Newland, who works for Rails Machine, and a discussion on deployment strategies. And finally, we'll be talking to Jim Gochee, who works for New Relic, and we'll be looking at some of the advanced New Relic RPM features and how they allow you to scale your Rails app. I've got two favors to ask of you before we get going. First of all, please subscribe to the RSS feed. Basically, here's the website that we came up with; the URL is down there. It's also on that web page that I gave you guys a minute ago. Yeah, so you can go on here, you can subscribe, we've got lots of links here. I tried to frame it a little bit like Railscasts, because everybody's really familiar with Railscasts and how that format is set up. You can subscribe to it, you can subscribe in iTunes. If I click on one of these links, I'm given code samples straight out of the screencast, so you can have a quick reference. Now, if you scroll down on this page, the one thing I get kind of annoyed at with railscasts.com is, when you go back to those older screencasts, the comments, there's people troubleshooting in there, and some of them are months old, maybe years old now. And so instead of people submitting comments, I changed this so people can actually just submit links. So if somebody watches the screencast and says, oh, hey, I know of this great blog entry that describes how to do this advanced feature that has to do with page caching or performance, they'll submit the link here. Or if you have an opinion after watching the screencast, and you go, oh, he did this thing wrong here, I'm going to write a blog post about it so people know how to do it right, you can then submit the link here. So hopefully what we'll end up with a few weeks from now, when all the screencasts come out, is a centralized resource for all the scaling information you could possibly want with Rails.
So of course what I hope to do with this website is a couple of things. Not only educate Rails developers, but hopefully get enough publicity that it'll make an impact on the perception that Rails maybe can't scale. Because there are still people in the enterprise that have that perception, that maybe it's not the best framework to go to if I need to create a big website. And that's completely not true. And so I'm hoping this will clear up those misconceptions. So, Jason? Where do you think they got that idea, that Rails can't scale? Yeah, I have no idea. So, notice over here on the right-hand side, there's a little promote link. If you want to do me a favor, click on those links. You're not making me any more money by doing it. We're doing the official announcement next Tuesday, but feel free to publicize it and help us get the word out. I'd appreciate it. Of course, nobody's reviewed this on iTunes yet, either. That's another idea right there. But yeah, you can subscribe to it just like any other podcast, which is nice. All right, now back to innovation. The first topic I'm going to talk about is really Rack::Cache. That's where we want to end up: a full understanding of Rack::Cache. But we have to have a base understanding of a couple of different things before we get there. So it's going to be sort of a long road. And rather than just talking about a couple little things, I like to educate. So I want to educate you guys on a couple of topics. Some of this content you'll see again in the screencasts, but I kind of refactored it and added some stuff for this presentation. So here we've got a Rails server. And when we talk about caching, server-side, Rails comes with four different ways to cache right out of the box. You've got page caching, action caching, fragment caching, and then sort of a low-level object caching.
But with Rails 2.2, we were given a couple of different ways to control the cache on the client side, which we didn't have before, with the power of three different headers: the max-age header, ETags, and Last-Modified. Who here feels like they have a solid grasp of what ETags are? OK, a good couple of people. So I'm not going to spend too much time on it, but I do want to review these different headers. So here we've got our common controller. And to use max-age, we basically add expires_in 10.minutes. What this is going to do is set a header, which gets sent back to the client: Cache-Control: max-age=600. So it's saying, for the next 10 minutes, this content can be locally served, locally cached. The content's valid for 600 seconds, unless, of course, refresh is pushed. That seems to be a default with browsers: if somebody hits refresh, it's going to go to the server anyway. But if a client clicks off of that page and comes back to that page within the next 10 minutes, it's going to load it out of their local cache. Then you've got ETags. An ETag is basically a key we can use to see if the page is the same, in a nutshell. And by default, Rails uses ETags; whether you know it or not, your application's using them. What it's doing is, every time a page is requested, the whole body is being generated. That body is getting turned into a hash using this function here, and that's what gets sent back to the client as the ETag. With Rails 2.2, we were given the ability to specify a custom key for that ETag. Typically, this is where we use our models. So we might call the post.cache_key method, and I'll explain what that is in just a minute if you're not familiar. Let's take a look at an action with ETags. So we get a user, and now we're going to call stale?(:etag => user). What this one line of code does is two things.
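Putting both helpers in one place, the controller code being described looks roughly like this. It's a hedged reconstruction of the slides, with Rails 2.2-era syntax:

```ruby
class UsersController < ApplicationController
  def index
    # Sends "Cache-Control: max-age=600" so clients can serve
    # this page locally for the next ten minutes.
    expires_in 10.minutes
    @users = User.find(:all)
  end

  def show
    user = User.find(params[:id])
    if stale?(:etag => user)   # ETag built from user.cache_key
      # This block only runs when the client's copy is out of date.
      render :xml => user
    end
  end
end
```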
The first thing that it does is call cache_key on the user model when the request comes in. It's going to create a string which looks like this. This is basically the ID of the model and the updated_at field of the model. It's going to take that, turn it into a hash, and use that as the ETag which gets sent back to the user. Now, if the client comes back to that page, it's going to send that ETag back. Rails is going to check to see if the ETag it was given the first time matches the one it just generated the second time the request came in. If it matches, it's simply going to send back a 304 Not Modified, with no body, to the client browser. The client's going to render what it has in its local cache. So we can actually add some code here in the conditional statement. If the ETag doesn't match, if it's stale, we can actually do extra stuff, render out a web page, whatever we need to do. Another way to write this is to use the fresh_when method. The benefit of using the ETag this way is that we don't have to do any extra rendering. By default, Rails out of the box is going to create the ETag by kind of hashing up the entire body of the page, whereas here, we're only hashing the values that came out of the user model using the cache_key method. I hope that makes sense. It's a pretty complex topic, but we need this to get further towards reverse proxy caching, which is where we're headed. So: Rack::Cache, Varnish, Squid, and Akamai. These are reverse proxy caches, and I don't know about you guys, but when I first saw this, I was like, what the hell is that? How many of you guys feel like you know what reverse proxy caching is? OK, a couple of you guys. Well, I'm going to try to define it in very simple terms, which is what I like doing. But before we define what reverse proxy caching is, we need to figure out what a proxy cache is. So here we have Big Corp. This is a corporation. A lot of people from this corporation like to visit wallstreetjournal.com.
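Before we move on, the cache_key string and its hashing can be illustrated with a few lines of plain Ruby. The key format below matches ActiveRecord's id-plus-updated_at scheme; hashing it with MD5 is just an example of turning the key into an ETag value:

```ruby
require "digest/md5"
require "time"

# Build a cache key the way ActiveRecord does: model name, id, updated_at.
def cache_key(model_name, id, updated_at)
  "#{model_name}/#{id}-#{updated_at.strftime('%Y%m%d%H%M%S')}"
end

# Hash the key into a short, quoted ETag value. Comparing two of these is
# much cheaper than hashing the whole rendered page body on every request.
def etag_for(key)
  %("#{Digest::MD5.hexdigest(key)}")
end

key = cache_key("users", 1, Time.parse("2009-02-07 10:00:00"))
key              # => "users/1-20090207100000"
etag_for(key)    # a 32-character hex digest wrapped in quotes
```

Notice that the key changes whenever updated_at changes, which is exactly why a stale client gets fresh content after the record is saved.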
They go out and visit that, and there might be a lot of traffic going in and out. So the boss goes to the sysadmin, or the IT person, and says, hey, we need to reduce network traffic. There are always people visiting websites. Let's save some money. What can you do to reduce network traffic? One solution the sysadmin might have is to put a proxy cache in the middle, between the server and the users. So now when a user goes out to wallstreetjournal.com, the first time that happens, it's going to cache wallstreetjournal.com locally in the proxy cache and send it back. Now, the second time somebody goes to that website, it's going to load it straight out of the proxy, saving network traffic. So now that we know what that is, what is a reverse proxy cache? Here we've got our typical client, browser, and server configuration, as most Rails apps have. Let's see. And again, the boss goes to the sysadmin and says, hey, we need to make our website more responsive, handle more throughput, and save bandwidth, maybe save CPU cycles. So what the sysadmin might do is create a reverse proxy cache on the server side. So a client comes in and requests a URL. That gets pushed through to the server and back to the reverse proxy. The reverse proxy caches it and sends it back to the client. Now, of course, the second time a different user comes to that same URL, it's going to load it at the reverse proxy, rather than going all the way to the server over here. So to review: a proxy cache is an extra layer of caching on the client side. A reverse proxy cache, or gateway cache, as it's sometimes referred to, is an extra layer of caching on the server side. And this is where Rack::Cache, Varnish, Squid, and Akamai come in. These are all reverse proxy caches. Now, with this diagram here, if you went to implement it, there's one thing that I've kind of left out that you need to be able to deal with.
And that is: how do you deal with expiration? It's something we deal with a lot with all types of caching. How do you invalidate the cache? How does that happen? And it turns out that you can do expiration using the headers I just talked about a second ago, maybe using max-age. So how would we deal with expiration using max-age? Here's Bob. Bob's request goes into the reverse proxy, then to the server. The server maybe has this Rails code, which I showed you a second ago. The response gets sent back to the reverse proxy with max-age=600, gets stored in the reverse proxy, and goes back to Bob. Now, if Bob, at any point within 10 minutes, goes back to that same web page, it's going to load it out of his local cache, right? However, if Cindy, a different user, requests that same path, it's going to load it directly from the reverse proxy. It's not going to hit our Rails server anymore. So for the next 10 minutes, all requests to that URL are going to get served from the reverse proxy. The benefit of this, of course, is we avoid lots of server hits. We can handle more throughput. Maybe it's important for your website to stay up to date, but maybe if it's only up to date every minute or so, that's OK. If you can set max-age to 60 seconds, you're going to save a lot of bandwidth. OK, now ETags are another way to deal with this. Bob does a request, which goes through to the server. We set the ETag, that gets sent back to the reverse proxy, which stores the page along with the ETag, and that gets sent back to Bob. Bob then requests the same page. What's going to happen here? It's actually going to go to the server again. The server is going to validate the ETag and send back a 304 Not Modified, and a 304 Not Modified goes back to Bob. So we're saving server-side time here, because we don't have to render out the entire page.
And we're saving client-side time, because the client isn't having to get all this data from the server and re-render the page. It simply renders what's in its local cache. Now here's where it gets really, really cool. A different user, Cindy, comes to the web page and requests a URL. This is where it's really cool. What the reverse proxy is going to do is append the ETag to Cindy's request. So it's going to go to the server with the ETag that it has stored. If the server returns a 304 Not Modified, it's going to return what's in the cache from the reverse proxy. That's really cool. So: max-age, ETags, and Last-Modified. I skipped over Last-Modified because I was trying to keep things short, but Last-Modified works a lot like ETags. You can watch the screencast to learn more about it; I go into it in more detail there. The benefit of using these is that we can decrease server load and increase the speed of our website. However, with a lot of these, as you saw with ETags and Last-Modified, we're still going to hit the server on each request. That might not be too good. So what do we do to deal with that? Well, we might end up using a combination of both: max-age and ETags, or max-age and Last-Modified. Here's where things get really powerful, and you can start serving millions of requests. So what would that look like if we combined the two? Bob comes in again. And now on our server, we're going to use both the ETag and the expires_in max-age header. That gets sent back to the reverse proxy, and back to Bob with the max-age. Now here comes Cindy. Cindy requests the URL, and it's been within that 60 seconds, that minute of time. Well, it's simply going to return the HTML and the ETag back to Cindy. Now Cindy comes back and requests that same URL. If it hasn't been 60 seconds locally, well, it's just going to render out of her local cache.
But if it's been more than 60 seconds, and it's been more than 60 seconds at the reverse proxy level as well, well, it's going to go check with the ETag. I guess I'm missing the ETag there. The server sends back a 304 Not Modified, and a 304 Not Modified goes back again. And now what's in the reverse proxy cache is going to be valid for another 60 seconds. Now, just to give you another perspective on how scalable this really is, let's take a look at a convoluted, hypothetical benchmark. Let's say we have a million page requests at 0 seconds, a million refreshes at 30 seconds, and another million refreshes at a minute. What is this going to look like? First of all, 1 million page requests at 0 seconds. We've got down at the bottom here how many times the Rails server was hit. The first thing that's going to happen is we're going to hit the Rails server for that first request. So we have one server hit. It's going to store all of this, the HTML, the ETag, and the max-age, in the proxy cache. It's then going to serve the remainder of the hits out of the cache. OK, at 30 seconds, what's going to happen? Well, it's going to serve them all out of the proxy cache. What's going to happen at a minute? Well, now it needs to revalidate to make sure that data is valid. So it's going to check to see if the ETag is fresh, and hit our server once. And then it's going to serve the remainder of the requests out of the proxy cache. Now, obviously, this example has some flaws in it, but it just goes to show you how amazingly scalable something like this is, using Rails and a proxy cache. So, Akamai. What is Akamai, and how does that work? Here's our Rails server. What Akamai does is put reverse proxy caches all over the world. So if we were dealing with 3 million page requests, really, that's going to be split up over different proxy caches. So maybe each proxy cache is only handling 600,000 each, if they were evenly distributed.
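That benchmark can be simulated in a few lines of plain Ruby. This is a toy model (one proxy, a 60-second max-age, ETag revalidation) using the request counts from the slide:

```ruby
# A stand-in origin server that counts how many times it gets hit.
class OriginServer
  attr_reader :hits

  def initialize
    @hits = 0
    @etag = '"abc123"'
  end

  def get(etag = nil)
    @hits += 1
    etag == @etag ? [:not_modified, @etag] : [:ok, @etag]
  end
end

# A stand-in reverse proxy: serves from cache while max-age is fresh,
# revalidates with the stored ETag once the copy goes stale.
class ReverseProxy
  MAX_AGE = 60

  def initialize(server)
    @server    = server
    @etag      = nil
    @stored_at = nil
  end

  def get(now)
    if @stored_at && now - @stored_at < MAX_AGE
      :cache_hit                          # fresh: no server hit at all
    else
      status, etag = @server.get(@etag)   # stale: revalidate by ETag
      @etag      = etag
      @stored_at = now                    # fresh again for another 60 seconds
      status == :not_modified ? :revalidated : :fetched
    end
  end
end

server = OriginServer.new
proxy  = ReverseProxy.new(server)

1_000_000.times { proxy.get(0)  }   # t = 0s:  one fetch, 999,999 cache hits
1_000_000.times { proxy.get(30) }   # t = 30s: all cache hits
1_000_000.times { proxy.get(60) }   # t = 60s: one ETag revalidation, rest cached

server.hits   # => 2
```

Three million requests, two hits on the Rails server: that's the whole pitch.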
And each one of those, when it needs to get the page, well, that's going to hit our Rails server. So in this case, our Rails server is going to get hit maybe 10 times. And not only that, but we're going to get a quicker client load, because all of the data is going to be loading from a server closer to our client, because things are locational. So that's the path we took. Now, on to Rack::Cache. So what's Rack::Cache? Rack::Cache, in my opinion, is reverse proxy training wheels for your Rails app. It runs inside your Rails app, and you can run it as middleware in Rails — and I'll show you guys how to do that in a minute. So here's how we do it. We can install the gem, and in our environment, we do something like this. There might be a simpler way to do this, but this is what I had to do to get it working. And here we're using Rails 2.3 syntax, where we're simply saying: use this middleware. So what does that look like when we run it? Basically, what this does is run a reverse proxy inside of our Rails process. So when a request comes in, it's going to hit the reverse proxy first, then it's going to go to Rails. Now, what you might be thinking — what I thought when I first saw this — is: wait a second, what if I run multiple Rails processes? Is each one going to have its own reverse proxy with its own reverse proxy store? Well, no, because it's more intelligent than that. You can use a file store, or you can just put memcached in the middle to store it all. Pretty simple. So I went ahead and tried this. Right, I was like, let's get this working with the configuration you see here, with these different headers. And the first time I tried it with Rack::Cache, it didn't work at all. Totally did not work at all. I was like, what the heck is going on here? Well, it turns out that Rails by default sets a Cache-Control header on all of your responses, which looks like this.
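For reference, the default header he's describing — Rails 2.x's out-of-the-box value, as best I recall — is:

```
Cache-Control: private, max-age=0, must-revalidate
```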
And that private directive right there is saying to proxy caches: don't cache this. Whatever you do, don't cache this. So it's not only telling the reverse proxy not to cache it — the real reason it's there is because we don't want a regular proxy to cache this information either. And you're like, that's annoying — we want things to get cached as much as possible, right? Rails runs faster, let's make everything cached. So I was like, why isn't this public? Why don't they create this header as public? Well, it turns out, think about the scenario where you've got a proxy cache at some big company. If wallstreetjournal.com was coded in Rails, that's fine — we don't care, Wall Street Journal, it's news — it would be fine if the default was public and it cached everything possible. But what if somebody coded a bank application in Rails? Okay, and what if they forgot to set the Cache-Control header? Now it's gonna store your bank information in your company's proxy cache. Not good, right? That's why the Cache-Control header is by default set to private. So how do we deal with this? Well, you gotta add a little bit more code, something like this: in order to make these work, you have to make sure that the header is set to public, and you can do that using these code bits right here. This isn't very pretty — I hope it gets changed. Maybe because Yehuda is sitting here in the audience, it will. But this is how you do it right now. So I thought I'd put together a little screencast so you can actually see this in action. So check this out: here's my little basic blogger Rails app. And we're gonna use Rack::Cache with the new middleware in Rails 2.3. So at the bottom here, I'm going to say that I wanna use the middleware. That's all there is to it in the environment config. You can see here, I've got the typical posts controller, just for a blog — nothing out of the ordinary there.
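Since the slides themselves aren't captured in the transcript, here's a sketch of the environment wiring he's describing. The store URIs and option names are assumptions taken from rack-cache's documented usage, not the slide verbatim:

```ruby
# config/environments/production.rb — wire Rack::Cache in as middleware
config.gem 'rack-cache', :lib => 'rack/cache'
config.middleware.use Rack::Cache,
  :metastore   => 'file:tmp/cache/rack/meta',  # or memcached:// when running
  :entitystore => 'file:tmp/cache/rack/body'   # multiple Rails processes
```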
I'm gonna go ahead and start up my server — oops — and go to the index action. And as you can see up at the top, it's using Rack::Cache, so it's intercepting every request, and right now it's just a cache miss because I haven't set any of these headers. Now I'm gonna add my expires_in piece of code there. So expires in 10 seconds — and I have to set :private => false, :public => true so it'll set the appropriate header — and save that. Now if I go back to my server here and do a refresh, it's gonna be using Rack::Cache. And you can see every time I do a refresh down here, you see these numbers: it's saying it's valid for five seconds, three seconds — it's loading out of the cache — one second, and now I do a refresh. And hey, the cache was stale, so it then stored it for another 10 seconds, did the request down here, and it's storing that in my reverse proxy cache. Now let's do some ETag business. I'm gonna go down to the show method and I'm gonna say stale?, with an ETag — I'm simply gonna use the post model there — to set the appropriate header. Now if I go to the show page, I can see down here that it rendered the page. The second time I do the request, check it out down here, I can see that it sent over a 304 Not Modified. Cool, but that's not the real test. The real test is using another user — remember Cindy? So I've got a little browser up here, I just called it Firefox, so this is a brand new request. Firefox is gonna go, and hey, it also got a 304 Not Modified, and it delivered what was in the cache. So it basically appended the ETag to this new request and sent back to the user what you would expect. And of course you can have some fun with Firebug in Firefox: if I do a refresh on this page, you can see this time it sent a 304 Not Modified, as you would expect, and loaded the content out of my local cache. So Rack::Cache is training wheels — meaning, of course, that it's still in beta, right?
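Put together, the controller changes made during that screencast amount to something like the following sketch, using the Rails 2.3 API (the exact screencast code isn't captured in the transcript; `fresh_when` is the no-block equivalent of the `stale?` check he describes):

```ruby
# app/controllers/posts_controller.rb — sketch, not the screencast verbatim
class PostsController < ApplicationController
  def index
    @posts = Post.all
    # :public => true / :private => false replaces Rails' default "private"
    # Cache-Control, so the reverse proxy is allowed to store the response.
    expires_in 10.seconds, :public => true, :private => false
  end

  def show
    @post = Post.find(params[:id])
    # Sends an ETag and answers 304 Not Modified while it still matches.
    fresh_when :etag => @post, :public => true
  end
end
```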
It's not a full-fledged production application — don't go and throw this on your production app — but it's really good for getting familiar with these conventions and starting to really take advantage of a reverse proxy. So an interesting configuration might be to use this in development and staging, and then in your production infrastructure use one of these, because each of these is going to behave in the same way. It might have a slightly different configuration, but it's going to behave the same way, and you can take advantage of ETags and max-age and whatever. So, okay, there's lots of other middleware that you can install with Rails, although I haven't seen too many people blogging about how to do it — I'd like to see more people do that. There's a list of middleware — the URL to get to this page is of course on that Rails Envy page that I linked you guys to — and there's a couple other options here for middleware. This isn't just for Rails; this is for any Rack application. One of these in the list is CloudKit. CloudKit is awesome. CloudKit makes web services look so easy. I mean, kind of like the first time you looked at Rails and you saw how easy it was to create web apps — I get the same feeling when I look at CloudKit for web services. Basically, with CloudKit, all you have to do is this, and you've got a JSON web service which properly uses REST, as most of us are already familiar with, so that you can add new notes and new projects and manipulate them and so on. This is all you need to create a web service using CloudKit. I'd like to see somebody use this with Rails — I haven't seen anybody write a tutorial on how to do that. It might be useful if, for instance, you've got some models on the back end that you simply want to expose as a web service. You just plug it in, and all of a sudden you've got a JSON web service that goes straight into your models. No-brainer, pretty easy. Why create the scaffold for it?
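The slide isn't captured in the transcript, but as I recall, CloudKit's advertised setup at the time was about this small — a rackup file; the resource names here are assumed from his notes-and-projects example:

```ruby
# config.ru — hedged sketch of CloudKit's minimal rackup example
require 'cloudkit'
expose :notes, :projects
```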
If you don't need all that scaffold — if all you want is a web service — why not do something like this? The next thing I want to talk about with innovation is internationalization. There's been a lot of really great work with internationalization over the past few months, with Rails 2.2 coming out with i18n. I found this page the other week, which is a wiki listing a bunch of great tutorials and a bunch of great libraries so you can very easily make your web application international. It's fantastic, the support we have for that now. One of the most recent libraries we talked about on the podcast this last week is one called Translate. So with the internationalization in Rails 2.2, what you end up with is a bunch of YAML files, right? You've got your English translations, you've got your Spanish translations, each in another YAML file. But when you've got a big website and you have these huge YAML files, they might get a little more difficult to maintain. You might want to have outside people — translators — come in and give you translations, but you might not want them messing around in your YAML files, right? So what these guys did is create a plugin which gives you a web interface that will manipulate your YAML files for you. So you can give them a web address, and they can say, okay, I want to translate from English to Spanish, go — and it'll list out everything in forms, and they can just type in the translations, and it updates the YAML files for you. Awesome. metric_fu is also worth talking about, just because it seems to bring in all of the great code-metric applications; every time a new one comes out, it gets standardized and pulled in. Jason and I do a lot of code reviews, right? Companies come to us and they say, can you take a look at our application? Can you help us refactor it, let us know where we need to improve? What's the first thing that we do?
Run metric_fu. It'll show us exactly where things need to get refactored, where the tests are lacking, where the code smells, you know? So it's a great way to figure out where you need to improve your code, because it runs a bunch of different metrics on it. Also worth mentioning — if we're talking Rails innovations, we have to mention Merb, right? With Merb 1.0 coming out earlier this year, and now merging with Rails, it's taking all the innovation which the Merb team has put into Merb and bringing that into Rails, and Rails is gonna be better because of it. I'm really excited to see where Rails 3 goes. Also, the Ruby on Rails guides. I'm really curious here: how many of you guys have read through one of the Rails guides? Okay, so most of you guys — that's good. Some of you haven't yet. There's some really amazingly useful information in these guides — here's the list. There's a lot of stuff in there. Some really cool stuff: I read through the Securing Rails Applications guide earlier. Oh my gosh, so much information about security — everything you need to know about that. The kind of information that, you know, you go to an enterprise corporation and someone's bound to ask, well, what are the security concerns with running Rails? And this guide goes through everything you need to know. It's awesome. Also, something I was really impressed with that came out recently: Noel Rappin came out with railsprescriptions.com, which is like a blog slash a book slash a PDF. And on there he's got this awesome PDF which shows you how to do test-driven development with Rails. For so long there was not a good guide showing people how to do this. And, you know, he's got stuff like this: test first, and he walks people through — okay, we're creating our test for our application, now we're gonna go implement it, and here's how you do it. It's like a 60-page PDF that shows you how to do TDD. It's just awesome for beginners.
And if you're not entirely sure how to do TDD, it's for you. Lastly, I thought I'd mention HTTParty, by John Nunemaker. John Nunemaker — if you're not subscribed to his blog, RailsTips, please do; he comes out with a ton of great information every week. HTTParty evolved out of some of John's projects, because what he ended up doing, as you see here, is he wrote a Twitter API interface, Last.fm, Ma.gnolia, Delicious — all of these Ruby interfaces, gems that provided an interface to these APIs, right? And what he found is that every time he implemented one of these APIs, he was redoing the same code over and over again to take in the XML, you know, translate it, put it into objects, blah, blah, blah, right? So what he did is he took the common patterns and created HTTParty. So if you ever need to create a web service interface in Ruby — interfacing with anything — this is your starting point. Look at HTTParty and use his gem, because it'll make things a lot easier. And, you know, that's all I've got. If you guys have any questions — you wanna come back up here, Jason? — about any of the things we went over, I'd be happy to take them. And don't forget to check out the screencasts, the free screencasts on Scaling Rails. I hope you guys find them useful. Any questions? Is it too early to call? Yeah. [Audience question, partly inaudible: with HTTParty — if there's already an ActiveResource declaration, you can create objects pretty automatically; is HTTParty more for services that aren't set up for ActiveResource?] That's a good question. Well, ActiveResource —
You know, it's good if a web service has really stuck to the web principles, right — to the RESTful principles, like we all do — but most APIs out there aren't gonna stick to the standardized REST principles. So if you really wanted to use ActiveResource, you could, but you'd probably end up having to hack it up a little bit, and I think you'd be better off just going with HTTParty to interface with it. That's the short answer. Any other questions? Yeah? [Yehuda, from the audience: Just to follow up on what you said — ActiveResource is really intended mostly to talk to your own Rails apps, like for an API between your own applications. We've had a long conversation about extending ActiveResource to make it more viable for APIs that are only kind of ActiveResource-ish, and the conclusion was that other tools — one of which is probably HTTParty — are better for that; ActiveResource is more for talking to your own APIs.] Could everyone hear that? No? I mean, Yehuda is up here, and he was saying that ActiveResource is more for just talking with your own Rails applications, and if you need to interface with other web services, you're probably better off just using HTTParty. Any other questions? No? All right, thanks for your time, guys — we appreciate it.