Okay, hello everyone. Thank you, thank you. Let me be the first to welcome you to RailsConf. Our talk today is "Heroku 2014: A Year in Review." It's going to be a play in six acts, featuring Terence Lee and Richard Schneemann. Of course, this is a year in review, and Heroku measures its years by RailsConf, so this is the standard RailsConf year, from Portland to Chicago. As some of you might know, we're on Heroku's Ruby task force — and that, in fact, makes us Ruby task force members. It was a big year: we're going to talk about app performance, some Heroku features, and community features. So, first up to the stage, I'd like to introduce the one, the only, Mr. Terence Lee. You might have recognized him in some other roles. He hails from Austin, Texas, which has undoubtedly the best tacos in the entire world — them's fighting words, friend — so he's also sometimes known as the Chief Taco Officer, or CTO. And something very interesting about Terence: he was recently inducted into Ruby core, so congratulations to Terence.
All right, so without further ado: Act One, deploy speed. Thank you, Richard. At the beginning of the Rails standard year we focused a lot on deployment speed. We got a lot of feedback and realized deployment was not as fast as it could be, and we wanted to make it faster. The first thing we had to do was a bunch of measurement and profiling, to see where things were slow and how we could make them better — and to gauge the before and after, so we'd know the good stopping points where we could move on to other things, because you'll never be done with performance improvements. After about six months of work we managed to cut deploy speeds across the platform for Ruby by 40 percent, which is a pretty decent improvement. To do this we mainly looked at three ways to speed things up: running code in parallel — running more than one thing at a time; caching — if you cache stuff, you don't have to do it again; and in general just cutting out code that doesn't need to be there. For the parallel code, we worked with the Bundler team on Bundler 1.5. There was a pull request from Cookpad that added parallel installs to Bundler 1.5 — so if you aren't using it yet, I'd recommend upgrading to at least Bundler 1.5. Bundler added a -j (jobs) option, which lets you specify the number of jobs to run. On MRI it forks that number of subprocesses, and on JRuby or Rubinius it just uses threads. The benefit is that when you run bundle install, the dependencies get downloaded in parallel — you're not waiting on network traffic sequentially anymore — and in addition, you're also installing gems in parallel. And this is especially beneficial when
you're running native extensions. If you have something like Nokogiri that takes a long time, you'll often notice the install just hangs and waits for it before moving on to the next gem. Parallel installs let that build in the background while other gems install at the same time. Also in Bundler 1.5, Richard added a feature that lets Bundler automatically retry failed commands. Before this, when bundle install failed because of some odd one-off network timeout, you'd have to push again, no matter where you were in the build process. By default, Bundler will now retry clones and gem installs up to three times, so the deploy process can keep going. Is anyone here actually familiar with the pigz command? Just Richard. So pigz is parallel gzip, and the build and packaging team at Heroku worked on implementing this feature using the pigz command. To understand the benefit of something like this: when you push an app to Heroku, the compile process builds things called slugs — basically a tar of your app directory, everything that's there after the compile phase runs. Originally we were using SquashFS, and then we moved to plain tar files, and we noticed that one of the slowest points in the build process was just compressing everything in that directory and then pushing it up to S3 afterward. So one of the things we looked into was: is there a way we can make that faster?
So if you ever push a Heroku app and wait while it says "compressing" before it goes to "done" — that's the compression of the actual slug — we managed to improve that significantly by using pigz. I don't remember the exact numbers, but it was a pretty significant performance improvement. The only downside was that some slugs got a little bit bigger, but the performance trade-off was worth it at the time. The next thing we started doing was looking into caching. Anyone here using Rails 4? A pretty good amount of the room. One of the things we did, which differed from Rails 3 — thanks to a bunch of work by the Rails core team — was to cache assets between deploys. This wasn't possible in Rails 3, because you couldn't actually reuse the cache: there were times when the cache would get corrupted and you'd get broken assets between deploys. The fix was to remove the assets between each deploy on some Rails 3 builds, but it wasn't consistent — sometimes it worked and sometimes it didn't, and on Heroku that's not something we can rely on in an automated fashion. Luckily a lot of that has been fixed in Rails 4, so we now cache assets between deploys on Rails 4. If we look at Rails 3 — I guess this slide got cut off, but it's supposed to say about 32 seconds for a Rails 3 deploy — we measure the steps in the build process, and on Rails 4 the perc50 was about 14-point-something seconds.
So a pretty significant speed improvement there, due both to the caching and to other improvements to the asset pipeline inside Rails 4. The other thing we looked at was code doing extra work: if we remove it, the build gets faster for everyone deploying every day. One of the first things we did was stop downloading Bundler more than once. Initially, when we did Ruby version detection, we had to download Bundler and run it to get the Ruby version to install for the application — and then download and install it again, because installing your dependencies ran in a separate process. We changed that to cache the Bundler gem, so we don't download it two or three times during the build — cutting network I/O, among other things. We also removed duplicate checks between the detection steps. bin/detect figures out what kind of app you have — is it a plain Ruby app, a Rack app, a Rails 3 app, a Rails 4 app, things like that — and since bin/compile ran as a separate process, we'd have to do it all again. So Richard did a bunch of work to refactor both detect and release, and now detect is super simple: it literally just checks whether you have a Gemfile, and all the other work is deferred to bin/compile. That means we only do those checks once — examining your Gemfile, checking which gems you have — not two or more times. And if you haven't seen it, Richard gave a talk at Ancient City Ruby — I don't know if the videos are up yet — about testing the untestable.
So if you're interested in learning how we test the buildpack, you should go watch that talk. Now I'd like to introduce Richard, who's going to present the next section. Richard loves Ruby so much that he got married to her — I think he got married last year, right before our last RailsConf; I remember that. He's also on the Rails issues team, and he's in the top 100 Rails contributors, according to the Rails contributors site. You might also know him for a gem called Sextant that he released for Rails 3. I remember back in the day, developing Rails apps, when I wanted to verify routes I would run the rake routes command: it would boot up the Rails environment, you'd wait a few seconds, and then it would print out all the routes — and if you wanted to rerun it with grep, you'd keep running it again. A lot of us already have a Rails server running during development, where we're testing things and whatnot. What Sextant does is let you look at the routes that are already in memory, query them programmatically, and it has a view for doing this. This was also merged into Rails 4, so if you're using Rails 4 or higher you don't need the Sextant gem — it's now built in. Richard and I both live in Austin, so when people come visit — or actually when I'm in town, which isn't often — we have meet-ups at Franklin Barbecue. If you're ever in town, let us know and we'd be more than happy to take you to a meet-up. All right. For the first part of this act, we're going to be talking about app speed. But before we talk about app speed, we're actually going to talk about dimensions — the slides were originally written in widescreen, but the screens here are standard. There we go.
So you're actually going to get to see all of the slides, as opposed to having some of them cut off. Okay, on app speed, the first thing I want to talk about is tail latencies. Anybody familiar with tail latencies? Okay — the guys in the Heroku t-shirts, and somebody else. So this is a normal distribution: on one axis we have the number of requests, on the other the time to respond, so the further out you go, the slower it gets. This is the distribution of our requests. Over here, super fast — you love to be that customer; you're super happy. Over here, we have a super slow request — you don't want to be that customer, and you're pretty unhappy. Right in the middle is our average, and I'm sure they talked a ton about why the average is really misleading in the last session, with Skylight.io. Basically, roughly 50 percent of your traffic is going to get a response time at this value or lower. So this is pretty decent: we can say 50 percent of the people who come to our website get a response by then. Moving up the distribution to something like perc95, we say 95 percent of everyone who visits gets a response by now. I'm going to be using those terms — perc50, perc95 — to refer to the percentage of incoming requests we can respond to by a given time. So that's the theoretical picture.
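As a rough sketch — with made-up response times — perc50 and perc95 are just nearest-rank percentiles over your observed latencies:

```ruby
# Nearest-rank percentile over a list of response times (milliseconds).
def percentile(samples, pct)
  sorted = samples.sort
  # index of the value at or below which `pct` percent of samples fall
  rank = (pct / 100.0 * (sorted.length - 1)).round
  sorted[rank]
end

times = [12, 40, 55, 60, 80, 95, 120, 300, 800, 3000]
p50 = percentile(times, 50)  # half of the requests finished at or under this
p95 = percentile(times, 95)  # 95% of the requests finished at or under this
```

Note how the single 3000 ms outlier barely moves the average but completely dominates the perc95 — that's the tail.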
This is an actual application, and one thing you'll notice is that it's not perfectly normal — the two sides aren't symmetrical. It shoots up steeply and then has this really, really long tail, and that's what I'm referring to when I say tail latency. So yes, somebody might actually have gotten a response in zero milliseconds — I doubt it — but somebody for sure did get a response in 3000 milliseconds, and that's a really long time to wait for your request to finish. So even though somebody is getting really fast responses, and your average isn't bad — your average is under 250 milliseconds — one customer might be getting a really slow response and a really fast response, and the net is a bad experience; it's just a very inconsistent experience. So whenever we're talking about application speed, we have to consider individual request speed and the average, but also consistency: how consistent is each request? So how do we help with this? One of the things we launched this year was PX dynos. A typical dyno only has 512 megabytes of RAM, on shared infrastructure. A PX dyno has six gigabytes of RAM and eight CPU cores, which is a little nicer — a little better, a little more room to play. And it's also real hardware.
Or rather: it's not on the same shared infrastructure. So you can scale with dynos, and you can also scale inside of dynos — those are the two important parts we're going to cover. Of course, whenever you have more requests than you can possibly process, you want to scale up and say, "I'm going to have more dynos." But what happens if you're not making the best use of everything inside your dyno? Previously, with 512 megabytes of RAM, you could just throw a couple of Unicorn workers in there and think, "I'm probably using most of this." But if you put two Unicorn workers in a PX dyno, you're not making the most of it at all. Recently I am super in love with Puma — this is Evan Phoenix's web server, originally written to kind of showcase Rubinius. Guess what: it's really nice with MRI as well. Recently we've published some Puma docs, so I'm going to talk about Puma for just a little bit. If you're not familiar — I was totally off on the formatting — Puma handles requests by running multiple processes or multiple threads, and it can actually run in something called a hybrid mode, where each process has multiple threads. We recommend this — or I recommend this: if one of your processes crashes, it doesn't crash your entire web server, which is kind of nice. Multiple processes are something we Rubyists are pretty familiar with — forking processes, Unicorn — but the multiple threads are a little bit different. Even with MRI, even with something like the global interpreter lock, you are still doing enough I/O — you're hitting your database frequently enough, maybe making API calls to Facebook, or to GitHub status, like, "hey, are you still up?"
And this gives our threads time to jump around and let others do work, so you can get quite a bit of extra performance there. So we're actually going to use Puma to scale up inside of our dyno: once we give you those gigs of RAM, we want to make sure you can make the most of them. In general, with Puma, more processes mean more RAM, and more threads mean more CPU consumption — so you want to maximize your processes and maximize your threads without going over. As soon as you start swapping, as soon as you go over that RAM limit, your app is going to be really slow, and that kind of defeats the purpose of adding these resources. Another issue that I had kind of never heard of until I started looking into all of these web servers is slow clients. If somebody is connecting to your website via, like, 2G on a Nokia candy-bar phone, uploading photos or something — that's a slow client. If you're using something like Unicorn, that can effectively DDoS your site, because each of those requests takes up an entire Unicorn worker. Puma, on the other hand, has a buffer, and it buffers those requests, similar to the way nginx does. One other thing to consider with Puma: I'm talking about threads, and we Rubyists are not necessarily known as the most thread-safe community, so a lot of apps just aren't thread safe — and you might take a look at Puma and think, "hey, that's not for me." Well, you can always set your maximum threads to one, and now you're behaving just like Unicorn, except you have the slow-client protection. And whenever you fix that bad gem, or stop mutating your constants at runtime or something, then maybe you can bump it up and try multiple threads. Okay, so I'm talking about consistency and I'm
talking a lot about Puma — how does that all boil down and help? So, does anybody think that sharing distributed state across multiple machines is really fast? Okay, good. What about sharing state in memory on the same machine — is that faster? Okay, all right, I think we're in agreement. So, a little bit of a point of controversy: you might have heard of the Heroku router at some point in time. The router is actually designed — not randomly, but designed — to use a random algorithm, and it basically tries to deliver requests to individual dynos as fast as humanly (or computerly) possible. It gets the request and wants to get it to your dyno as fast as it possibly can, and adding any sort of additional overhead — distributed locks or queues — would slow that down. Once inside your process, Puma or Unicorn has in-memory state for all of its own workers and is capable of saying, "oh hey, this process is busy, this process is not busy" — really intelligent routing, basically for free. It's really fast. It took a little bit of convincing for me.
So, does anybody else need to be convinced? Okay, good, because otherwise I could totally just skip over the next section of slides. This is a graph produced by the fine developers over at Rap Genius: on one axis we see the percentage of requests queued, and on the bottom we see the number of dynos. The goal is to minimize request queuing — this is time your customers spend waiting while you're not actually doing anything — with the smallest number of resources, the smallest number of dynos. The top line is what we've currently got: random routing with a single-threaded server, and it's pretty bad. It starts out bad and doesn't even trend toward zero. That's like using WEBrick in production — so don't use WEBrick in production, or even Thin in single-threaded mode. On the very bottom we have a mythological router: if we could do all of that distributed shared state, without locks and queues, without any kind of overhead, queuing basically just drops down to zero — in their case at about 75 dynos — and then stays flat at zero.
There's no queuing, and it's great, and this would be amazing if we could have it. But unfortunately, there is a little bit of overhead. What was really interesting to me is the second line, which is not nearly as nice as that mythological intelligent router, but is kind of not too far off. This is still our random routing, but done with Unicorn with workers set to two — so basically, once we get the request to your operating system, if one of those two workers is free, it can immediately start working on it. Some interesting things to note: the non-optimal part of the curve is where we basically don't have enough dynos to handle the load — which might happen if, you know, you got on Hacker News, or got Slashdotted, Reddited, Snapchatted, Secreted, I don't know. And it does actually eventually approach the ideal state. It gets even better — unfortunately they kind of stopped at two processes, but it gets better the more concurrency you add. If you have three or four workers, or if you're using something like Puma with each of those workers running, say, four threads, now you have a massive amount of concurrency to deal with all of these incoming requests. So again, we're looking for consistency: we want a request to get to our dyno and immediately be able to be processed, and you can use Puma or Unicorn to maximize that worker number. And again: distributed routing is slow; in-memory routing is relatively quick. Also, in the whole context of speed, Ruby 2.0 came out — this was a while ago — and it's got a GC.
It's optimized for copy-on-write: in Ruby, extra process forks become cheaper. The first process might take 70 megabytes, the second one 20, then 10, then 7, then 6. So if you get a larger box, you can actually run more processes on it — with eight gigs on one box, you can run more processes than with eight gigs across eight boxes. So again, more processes mean more concurrency, and more concurrency means consistency. If you're using background workers, you can also scale out with resque-pool. And if your application is still slow, we rolled out a couple of really neat platform features. One of them is called HTTP Request ID: as a request comes into our system, we give it a UUID, and you can see it in your router log. We've got documentation on how to configure your Rails app so it picks this up and uses that UUID in tagged logs. How is this useful? If you're getting an out-of-memory error, or your request is taking a really long time and timing out — Heroku is returning an error response and you don't even know why — then, if the request ID is tagged, you can follow along between your two logs and say, "oh, it's hitting that controller action; maybe I should be sending that email in the background instead of blocking on it." So you can trace specific errors. We also launched log-runtime-metrics a while ago, which puts your runtime information directly into your logs.
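The request-ID tagging mentioned a moment ago is essentially a one-line Rails config — this sketch assumes a Rails 4 app (the application name is a placeholder, and on later Rails versions the tag is `:request_id`):

```ruby
# config/environments/production.rb
MyApp::Application.configure do
  # Prefix every application log line with the request's UUID, so app
  # log lines can be correlated with the Heroku router's log lines.
  config.log_tags = [:uuid]  # on newer Rails versions, use :request_id
end
```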
You can check it out — Librato will automatically pick those metrics up for you and make these really nice graphs. And again, if you're running something like Unicorn or Puma, you want to get as close to your RAM limit as possible without actually going over. Okay, so the next act in our play — again introducing Terence — is Ruby, on the Heroku stack and in the community. Thank you. So, I know we're at RailsConf, but I've been doing a bunch of work with Ruby, so I want to talk about some Ruby stuff. Who here is actually using Ruby 1.8.7? Wow, no one — that's pretty awesome. Oh wait, one person. You should probably get off of it. Who's using Ruby 1.9.2? A few more people. 1.9.3? A good amount of people here. So I don't know if you were following along, but Ruby 1.8.7 and 1.9.2 were end-of-lifed at one point, and then there was a security incident, and Zachary Scott and I volunteered to maintain security patches until the end of June. So if you are on 1.8.7 or 1.9.2, I'd recommend getting off sometime soon — unless you don't care about security, or want to backport your own patches. And we recently announced that Ruby 1.9.3 is also going end-of-life, in February 2015, which is coming up relatively quickly — a little less than a year away at this point. So please upgrade to at least 2.0.0 or later. During this past Rails standard year, we also moved the default Ruby on Heroku from 1.9.2 to 2.0.0; we believe people should be using at least this version of Ruby or higher. And if you don't know yet: you can declare your Ruby version in the Gemfile on Heroku to get that exact version. We're also pretty serious about supporting the latest versions of Ruby basically the same day they come out — we did this for 2.1.0 and 2.1.1 — and in addition, we also try to support the preview releases, before they become formal releases.
That way we as a community can help find bugs and test things — put your staging app on new versions of Ruby, and if you find bugs, hopefully we can fix them before they make it into the final release. And with regard to security patches: if any security releases come out, we make sure to release them that day as well; we take security pretty seriously. Once a security patch is released and we've patched those Rubies, you have to push your app again to get the update. A lot of people ask why we don't just automatically upgrade people's Rubies in place, and the reasoning is that there might be a regression in the security patch, or maybe the patch level isn't actually 100% backwards compatible — a bug slipped through — and you probably want to be there when your application is deployed, in case something does go wrong. You probably wouldn't want us to deploy something and have your site go down while you're not at your computer at all — you're at dinner somewhere, and it's super inconvenient to get paged. We publish all this information — all the updates to the platform, but also all the Ruby updates, including security updates — to the Dev Center changelog; I think it's devcenter.heroku.com/changelog. If you don't subscribe to it, I'd recommend subscribing, just to keep up to date with what's happening on Heroku — platform changes, in addition to updates to Ruby specifically. And there isn't too much traffic; you won't get a hundred emails a day. So I highly recommend subscribing just to keep up with things like that. The next thing I'd like to talk about is the Matz Ruby team. If you didn't know, back in 2012 we hired three people from Ruby core.
We hired Matz himself, Koichi, and Nobu. As I've gone around over the last few years talking and interacting with people, I've realized a lot of people have no idea who Koichi and Nobu are, besides Matz. So I wanted to take the time to tell people who they are and what they've actually been doing — we've been paying them money — to move Ruby forward in a positive direction. If you run a git log since 2012, since we hired them, you can see the number of commits they've made to Ruby itself. Nobu, who we hired, has basically more commits than the second person by many, many commits, and Koichi is the third-highest committer as well. You're probably wondering why I have six names on a list of the top five. There's someone on the Ruby core team with the handle "svn" — it's not actually a person. I found out the hard way who this "person" was: when I made my first patch to Ruby after joining core, I learned that all the date information is done in JST, and I of course did not know that and put in scumbag-American dates. So there's basically a bot that goes through and fixes your commits for you — it makes another commit, like, "you actually put the wrong date; let me fix that for you." There are 710 of those commits — I checked about a month ago, so these numbers are from a month ago. The first person I'd like to talk about is Nobuyoshi Nakada, also known as Nobu. He's known on Ruby core as the patch monster, and we'll go into why. What do you think is the result of Time.now == ""? I'm sure you thought it was an infinite loop, right?
Or, using the Rational number library from the standard lib — what do you think is the result of this operation? Yeah, it's a segfault. Thank you — thank you for reporting the bug. Eric Hodel actually reported the other bug, the Time one — he found it in RubyGems, I believe — but these are real issues in Ruby itself. If you actually run those two things now, on later patch levels, you should not see them. But they were real issues, and someone has to go and fix all of them, and the person who actually does that is Nobu. He gets paid full time to basically do bug fixes for Ruby — all those 2,700-some commits are bug fixes to Ruby trunk to make Ruby run better. I thanked him when I was in Japan just last week for all the work he's done. It's pretty incredible: there are so many times when things segfault, and he's basically made it better. I was at Oedo RubyKaigi, and someone was giving a presentation with, like, 30 tips on how to use Ruby; somebody was talking about open-uri, there was code on the screen, and Nobu found a bug during the guy's presentation — and committed a patch to trunk during that guy's presentation. So he's pretty awesome. He hasn't done any talks, but I think people should know about the work he's been doing. So the last bug of his that I wanted to talk about: are any of you familiar with the regression in Ruby 2.1.1
with regards to hashes? I'm sure you're familiar with the fact that if you use Ruby 2.1.1 with Rails 4.0.3, it just doesn't work: in Rails we use fetch on hashes, we use objects as keys, and if you override the hash and eql? methods, when you fetch you won't get the right result back. Inside Rails 4.0.4 they actually had to work around this bug, and Nobu was the one who fixed it inside Ruby itself. Those were just the three most interesting bugs I found from the last year or two of his work, but if you look on ruby-core you can find hundreds and hundreds of bugs he's fixed within the last year — segfaults and other things. He's great to work with. The next person I want to talk about is Koichi Sasada, also known as ko1. He doesn't have a nickname on Ruby core, so Richard and I spent a good amount of our talk preparation trying to come up with one for him, and we came up with "the performance pro." This is a picture of him giving a talk in Japanese. If you use Ruby 1.9 at all: he worked on YARV, the new VM that made Ruby 1.9 — I think it was something like 30 percent faster than 1.8 for longer-running processes. More recently he's worked on RGenGC, which was introduced in Ruby 2.1 and allows faster code execution through shorter GC pauses — instead of doing a full GC every time, you can have these minor ones. He spends all of his time thinking about performance in Ruby; that's what he's paid to work on. So if anyone actually cares about Ruby performance, you should thank this guy for the work he's done.
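You can watch RGenGC's minor/major split for yourself — a rough sketch, assuming Ruby 2.1 or later (exact counts will vary by Ruby version and machine):

```ruby
# Churn through short-lived objects and count minor vs. major GCs.
before = GC.stat
200_000.times { "short-lived" + "garbage" }  # objects that die young
after = GC.stat

minor = after[:minor_gc_count] - before[:minor_gc_count]
major = after[:major_gc_count] - before[:major_gc_count]
# Young objects are reclaimed by cheap minor GCs, so `minor` grows
# while `major` stays at or near zero.
```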
If you've looked at the performance of Ruby over the last few years, it's improved a lot, and a lot of that is due to this guy's work. He was telling me that when he was working on RGenGC, he had his breakthrough while walking around a park; he spends a lot of his time, even off work hours, just thinking about this stuff. Other things he's been working on include profiling. If you've used any of the memory-profiling tooling for 2.1, he's been working to introduce hooks into the internal API to make stuff like that work. We think that profiling, being able to measure your application, is super important for Ruby, so if you have comments or suggestions on things you need, it's worth reaching out and talking to Koichi. Some of the stuff he's been working on in this vein includes the gc_tracer gem, which gives you more information about your garbage collector, and the allocation_tracer gem, which shows you how long objects live. And for Ruby 2.2 he's working on making the GC better with an incremental GC patch, plus Symbol GC, which would be super good for Rails security, so we can't get DoSed by the symbol table filling up.

One other thing: when I was in Japan we had a Ruby core meeting and talked about Ruby releases. Releasing Ruby is kind of a slow process, and I wasn't really sure why it took so long, so I asked. Naruse, the release manager for 2.1, told me it requires a lot of human and machine resources: Ruby has to work on many different configurations, Linux distros, OS X, and other things, and to make a release the CI servers have to pass across various vendors and whatnot. There's a lot of coordination and checking to make an actual release happen, which is why releases don't come out super fast. Some of the stuff Koichi, my team, and other people on Ruby core will be working on is infrastructure and services to help test Ruby, hopefully automating that nightly or per commit, so we can get releases out to users faster. If you have ideas for Ruby 2.2, I would love to hear them. We have a meeting next month, in May, about what's going into Ruby 2.2, and I'd be more than happy to talk about ideas you'd like to see there. I'm going to skip this next bit since I covered it earlier and we're running short on time. So here's Schneems to talk about Rails.

Okay, has anybody used Rails? Have we covered that question yet? Okay, welcome to RailsConf. So: Rails 4.1 on Heroku, a lot of things in a very short amount of time. We are secure by default. Have you heard of the secrets.yml file? The secrets.yml file actually reads from an environment variable by default, which is great; we love environment variables. Separate your config from your source. Whenever you push your app, we're going to set this environment variable to literally a random value, and if for some reason you ever need to change it, you can do so by setting the SECRET_KEY_BASE environment variable yourself.
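For reference, this is roughly what the generated wiring looks like in a Rails 4.1 app's config/secrets.yml (check your own generated file, but the production entry reads the environment variable via ERB):

```yaml
# config/secrets.yml -- production reads SECRET_KEY_BASE from the
# environment instead of hardcoding a secret in the repository.
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
```

Because the secret lives in the environment, rotating it is just a matter of setting a new value for that variable and restarting the app.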
Maybe, you know, another OpenSSL bug comes out or something. Another thing that was worked on a bunch is the DATABASE_URL environment variable. This is something we've spent a lot of time looking at. Support for reading from the environment variable has actually been in Rails for a surprisingly long time, but it never quite worked due to edge cases, random rake tasks, and so on. So this December, around Christmas time, we spent a lot of time getting it to work, and I'm happy to announce that Rails 4.1 actually supports the DATABASE_URL environment variable out of the box. Whoo!

The behavior bears going over. If DATABASE_URL is present, we just connect to that database; that's pretty simple, it makes sense. If database.yml is present but there's no environment variable, we use that; that also makes sense. If both are present, we merge the values. Makes sense, right? Okay, that sounds crazy, but bear with me. You want to put your connection information in your DATABASE_URL environment variable, but there are also other values you can put in your database.yml file to configure Active Record itself, not your database: you can turn prepared statements on and off, you can change your pool size, all that kind of thing. We wanted to still enable you to do that, so the results are actually merged. And for somebody like Heroku, or if you're using another container, we don't have to have as much magic. If you didn't know: we actually had to clobber whatever your database.yml was.
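The merge behavior described here can be sketched in plain Ruby. This is a hypothetical helper, not the actual ActiveRecord resolver: connection details come from the URL, while database.yml-only keys like pool survive the merge.

```ruby
require "uri"

# Hypothetical sketch of Rails 4.1's resolution order (not the real
# ActiveRecord code): values parsed from DATABASE_URL win, while
# database.yml can still supply ActiveRecord-level settings.
def resolve_config(database_url, yaml_config)
  return yaml_config if database_url.nil?

  uri = URI.parse(database_url)
  url_config = {
    "adapter"  => uri.scheme == "postgres" ? "postgresql" : uri.scheme,
    "host"     => uri.host,
    "username" => uri.user,
    "password" => uri.password,
    "database" => uri.path.to_s.sub(%r{\A/}, "")
  }.compact

  # URL values override database.yml; yml-only keys survive the merge.
  (yaml_config || {}).merge(url_config)
end

config = resolve_config(
  "postgres://user:secret@db.example.com/myapp_production",
  { "pool" => 15, "prepared_statements" => false }
)
config["pool"]     # => 15
config["database"] # => "myapp_production"
```

The point of the design: a container platform can inject one URL for the connection itself, and the app keeps its Active Record tuning in database.yml without either side clobbering the other.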
We were just writing a file over the top of it, basically "forget that, we're going to write a custom file." So people would put stuff in their DATABASE_URL or their database.yml file and be surprised when it wasn't there, or a different file was there. We no longer have to do that, and Rails plays a little nicer with this containerized style of environment. It also means you can actually start putting your Active Record configuration in that file. Another note: if you were manually setting your pool size or any of those things after reading an article on our Dev Center, please go back and revisit it before upgrading to Rails 4.1. Some of the syntax changed between Rails 4.0 and 4.1, so if you can't connect to a database, maybe just email Schneems and be like, "I hate you, what's the link to that thing?" and I'll help you out.

Okay, I think the last thing we actually have time for is the asset pipeline. Who here, if asked in an interview, would say their favorite thing in the whole world is the Rails asset pipeline? Oh, just Rafael. We have a bunch of Rails core here, by the way, so you should come and thank them afterwards, for other things, not for the asset pipeline. The asset pipeline is the number one source of Ruby support tickets at Heroku: people saying "hey, this worked locally and didn't work in production," and we're like, "yeah, that's just how the asset pipeline works; that's not Heroku." So Rails 4.1 added a couple of things. It will warn you in development if you're doing something that's going to break production. If you've ever forgotten to add something to your precompile list, well, now you get an error. If you're not properly declaring your asset dependencies, you get an error. And this is even better in Rails 4.2.
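The precompile list in question is the standard Rails one; a sketch of what forgetting an entry looks like (file names here are illustrative):

```ruby
# config/initializers/assets.rb
# Any asset referenced directly (e.g. via javascript_include_tag "admin")
# but not pulled in by application.js/application.css must be listed
# here, or precompilation will not emit it and production will 404.
Rails.application.config.assets.precompile += %w( admin.js admin.css )
```

Previously that mistake only surfaced after deploy; the 4.1 checks surface it in development instead.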
Some of these checks aren't even needed anymore; we can just do the right thing for you automatically, but unfortunately those changes aren't in Rails 4.1 yet. In general, I have a personal belief that in programming, or really in life, the only thing that should fail silently is this joke. So thank you all very much for coming. We have a booth, and later on, what time? Three? Yeah, from three to four thirty, we'll have community office hours with some nice people from Rails core and contrib, so come ask basically any Rails questions or anything you want. Schneems will also be doing a signing of his book, Heroku: Up and Running, today and tomorrow at 2:30, so get a free book, come ask questions, and just hang out. Any time you stop by the booth, feel free to ask Heroku questions. Thank you all very much for coming.