Thanks for coming, and welcome to Engine Yard's sponsored talk. I'm Allan Espinosa, and I'm a support engineer at Engine Yard. Today I'll be talking about deploying Rails applications with Docker. My background is in operations: I've been an operations engineer for the past few years, and I've used Ruby for most of my day-to-day work, back from when it was still the popular choice for building systems, before Go came along, and it's still my go-to language. I'm not really a Rails developer; the last time I used Rails was six or eight years ago, I think, but I do a lot of Ruby development beyond the web. I'm also the author of the Docker High Performance book from Packt Publishing. Docker changes a lot, so the book will probably have obsolete content within a few months; when writing it, I tried to focus on the concepts that can last through Docker's updates, and I'll do a bit of that in this session. I'll be drawing from the second chapter of the book, on how to optimize Docker images, and tailoring it so you can figure out how to do your deployments with Rails in mind: optimizing the way you roll out Rails on Docker.

When we talk about optimization, most of us think about faster response times: making your controllers respond quickly, spinning up asynchronous workers, that kind of thing. But if you look at the broader picture, performance is about improving the experience of our customers. Starting from that experience, you trace the value stream, and then you can refactor your controllers and your business logic based on the feedback you receive from production traffic, from interacting with your customers. Another way to optimize down the line is to tune the middleware.
You can configure your Unicorn workers or Puma threads and set the memory allocation so that you're fully utilizing your machine. You can tune your SQL so queries come back fast. None of this tuning happens in a vacuum: it's informed by the instrumentation in your application and on your machines. You put in logging and application metrics that tell you whether your application is okay and how users are interacting with it; even something as simple as Google Analytics can give you a lot of insight. Then you correlate that with system metrics and use it to inform your scaling decisions. You scale your application to keep up with demand from your customers: you adjust the architecture, you add caching, you add capacity. How many of you know when you need to spin up a new instance of your Rails application? How do you know its limits? Having a good story about how you operationalize your application is important when optimizing. You tune your application so that it performs faster, but in the end you need to roll those changes out to your environment. If your deployment process is slow, the tuning you do now might be obsolete by the time it reaches production. So there's also a need to tune the delivery of your software to production, and that's what I'll focus on in this talk. Even though I'll talk mostly about Docker, these concepts are fairly broad and general. Most early adopters rode in on the container hype, and we're starting to get past that point now, but in the end, even though you can package everything into Docker containers, it all boils down to focusing on the value of our application; Docker is only a tool to reinforce the way you deliver.
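As an aside, the middleware tuning I mentioned, sizing Unicorn workers or Puma threads to your machine, usually lives in a small config file. Here's a minimal sketch of a `config/puma.rb`; the worker and thread counts are placeholder values you'd derive from your own instrumentation, not recommendations:

```ruby
# config/puma.rb -- illustrative numbers only; measure your own
# memory footprint and traffic before settling on real values.

# One worker process per core is a common starting point.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

# Threads per worker; bounded by how much your app blocks on I/O.
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

# Load the app before forking so workers share memory via copy-on-write.
preload_app!
```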
I was at our Engine Yard booth earlier, talking to people about how they use Docker, and a lot of the time it's "we're just starting to test it, we're trying to convince people to run it in production, and there's a lot of resistance." As an operations person, I can understand some of that hesitance. Knowing what it actually means to change your stack to a container base will help you convince people, if you really think Docker is for you, because it all boils down to delivering your app.

In delivering the app, we normally just talk about deployment, but there's also a build phase. There's a natural tendency to think of the build phase as compiling code into a binary, like turning your C or Go code into executables. That doesn't seem intuitive at first for Rails and Ruby developers, because Ruby is an interpreted language, but there is an equivalent of binaries in Rails. When I say "binaries," I mean anything that needs to be dropped into an environment, such as production, in order to run the application. It's important to know what gets deployed, so that if I get paged at 3 in the morning, I know where to look. In Rails you have your gem packages, and they're a nice mechanism: you do a gem install in production and you're done. But a gem isn't always the final binary itself, because when you install a gem that depends on native bindings, like Nokogiri or the FFI library, it compiles code and produces shared object files.
The code in your Rails app is part of the binary: the controllers, the models, the routes. If you run ls in your app directory, everything there is part of your binary. Then you have the gem dependencies for the application, which you pull in with gem install -g or bundle install, and finally you have your Rails assets. So there are a lot of binaries involved in making a ready-to-run Rails app. The nice thing about Docker is that it gives you an interface to wrap your brain around this. Docker has the notion of an image, which the container runs from, so all those Rails binaries get merged into one artifact, the Docker image, and that is what gets deployed. You build it on your build server, like Jenkins, and push it to what's called a Docker registry, which is basically an artifact repository like RubyGems: from a git commit, you build the Docker image, push it to the registry, and tell your ops teammates, "my image is ready, you can pull it and deploy it." Only one thing changes in your application: you add a Dockerfile, which defines how the Docker image is built. For those of you just starting with Docker, a basic Dockerfile defines the environment you want: you start FROM ruby:2.2, you ADD your current directory to the build, basically all the files Rails needs, you run bundle install to pull in the dependencies, you compile the assets, and at the end you define how to run your application. Here it runs rails server, although in production you should be running Unicorn, Passenger, or Puma instead of WEBrick.
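A basic Dockerfile along those lines might look like this sketch; the paths and image name are placeholders for your own app:

```dockerfile
FROM ruby:2.2

# Copy the whole Rails app into the image.
ADD . /app
WORKDIR /app

# Pull in the gem dependencies and precompile the assets.
RUN bundle install
RUN bundle exec rake assets:precompile

# WEBrick is fine for a demo; use Unicorn, Passenger, or Puma in production.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

You then build and tag it with something like `docker build -t rails-app .`.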
The build process looks like this: you run the docker build command and give the image a name; here I'm naming it rails-app. You can watch it assemble the image: it adds the Rails directory, then it pulls in the dependencies with bundle install, and you can see it compiling the native bindings to libxml because I have Nokogiri installed. A build like this runs for a few minutes, because you're downloading gems and compiling them; here it took about one and a half minutes. The feature in Docker that helps with the build process is its build cache. If you run the same build again without any changes to the code, the build finishes right away; here it took one second. Behind the scenes, since nothing changed since the earlier build, Docker reuses the cache to rebuild the image, and the same goes for the bundle install step. However, if you make even a small change to your application, say you updated the routes or changed a model, the build takes just as long as the first one. The changed content creates a new image layer at that build step, and all the succeeding steps have to be rebuilt because they depend on that new layer, so bundle install runs again. That's not much of a problem when you're starting out, but once you have a lot of teams, or you're doing a larger refactor, running a full bundle install every time you make a change starts to get painful; those one-minute builds pile up. What you can do is optimize your Docker builds by separating the parts of your application that rarely change from the parts that change often. Here I split my Gemfile from the rest of my application so that I can exploit the cache more often.
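The split I described, copying the Gemfile in first, running bundle install, and only then adding the rest of the app, might look like this sketch:

```dockerfile
FROM ruby:2.2
WORKDIR /app

# Dependency layer: rebuilt only when Gemfile or Gemfile.lock changes.
ADD Gemfile Gemfile.lock /app/
RUN bundle install

# Application layer: editing a route or a model invalidates only this
# layer and the ones after it, so the bundle install above stays cached.
ADD . /app
RUN bundle exec rake assets:precompile

CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```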
It's the same concept as having a separate rake task for your unit tests, which finish right away, versus your integration tests, where you need to spin up a database, a cache, and everything else in your stack. The initial build takes the same one and a half minutes, but if you change only your application code, the build finishes almost as if nothing had changed. Something did change, but the change happened at a later step, when the application code is added; since nothing in the Gemfile or Gemfile.lock changed, Docker reuses the cache from earlier, and that greatly improves the build time. The point of having a build process for your Rails app is getting feedback as fast as you can on whether the artifact, the Rails binary that's ready to deploy, is actually good to deploy. After producing the Docker image, you vet it through a series of tests in your delivery pipeline, your unit tests and your integration tests, to guarantee that it's good to go. And when it's good to go, you're off to deployment. I found a comic on the internet where they substituted "compiling" with "deploying," since deployment takes most of the time, especially on a Friday night. We've had a lot of customers deploying on a Friday afternoon, and supporting them is, well, yes. We've had customers come to support whose deployments take 30 minutes to finish. Without a rapid deployment process, you don't get the valuable feedback of knowing whether what you changed is actually useful for your customers. So I'll show a few items that can delay the deployment process and how they can be improved. As an operations engineer, I don't really like the deployment style where you log into the server, do a git pull of the latest code, and run bundle install. Maybe it's a personal preference, but for one, it's
slow: if a lot changed, you have to pull in a lot of gems and recompile everything, just like in the build process. Sure, you can parallelize it across your fleet; you have Capistrano or some other tool that SSHes in parallel to run the bundle install. But in terms of rolling out changes safely, with the ability to roll back, you want to go little by little, so your parallelization is limited by how much you want to update at a time. If you do a canary deploy, maybe you deploy to one server first, then two, then three, then four, until you finish your whole fleet, and that slows the process down. Contrast that with deploying Docker images. Docker has a command called docker pull, which downloads the image from the Docker registry, so the deployment workflow is just: download the image and run it. That simplifies your deployment process, and you can still do a canary deploy. You're now bound by how fast you can download images from your Docker registry, like Docker Hub, rather than from other sources like RubyGems; I'll talk more about that in a bit. In the end, even though we rely on a lot of community packages to build our applications, it's still us who are ultimately responsible for the availability of our application. There's a site that got popular on Twitter during the npm left-pad incident, but its concept applies to everything: whoownsmyavailability.com. If you keep refreshing it, it serves random articles about availability, reliability, and the notions introduced by human operations, so it's a nice site to check out if you want to read up on operations topics. A typical architecture, following the value stream, goes like this: our customers rely on our application to be up all the time.
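The pull-based deploy described above boils down to a couple of commands per host. A rough sketch, with the registry address, image name, and tag as placeholders:

```shell
# On each host, in canary order (one host first, then the rest):
docker pull registry.example.com/rails-app:v42    # download the prebuilt image
docker stop rails-app && docker rm rails-app      # retire the old container
docker run -d --name rails-app -p 80:3000 \
  registry.example.com/rails-app:v42              # start the new one
```

Nothing is compiled or bundle-installed on the host; the image was vetted in the pipeline, and rolling back is just running the previous tag.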
It serves their business, and conversely, we depend on other services being available in order to build and ship our application: we depend on RubyGems, and if you're using Debian, you depend on an apt mirror. And now we've introduced Docker into our stack, so we depend on Docker Hub to pull our images. I guess this is where your operations teammates get their hesitance: you're adding another dependency that can cause things to break. If you rely on these services, it's good to vendor them rather than have your deployment process depend on them directly, so that even if RubyGems goes down, or Docker Hub goes down, your application can still be deployed and you can still roll out changes when you need to. You don't have to declare a snow day for your team. What I like to do, even on my dev machine, is add proxies everywhere. There's a notion in corporate environments that developers hate configuring proxy settings for their development environment, and there's a lot of friction there. There was a talk yesterday from Jamie Lee Diesel about trying to understand ops teams, where they come from, and what the source of the grumpiness is; trying to understand and show empathy goes a long way, and you can learn a lot from your teammates in other departments. On my dev machine I actually have three proxy servers, one for each type of dependency: a mirror for apt, a mirror for the Docker registry, and a mirror for RubyGems. You can do this on your dev machine or on your Jenkins server. Bundler has a mirror setting, which basically says: if the Gemfile's source is rubygems.org, download from this other endpoint instead. It's like man-in-the-middle attacking your own dev environment.
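The mirror setting I mentioned is a one-line Bundler command; the local address here is just an example of wherever your proxy repository listens:

```shell
# Any Gemfile source pointing at rubygems.org is now fetched
# from the local proxy instead of the public index.
bundle config mirror.https://rubygems.org http://localhost:8081/repository/rubygems-proxy
```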
That produces an entry in your global, user-level Bundler config. I also do the same thing in my .gemrc: I actually remove rubygems.org from my gem sources and add my local mirror instead. Then you install one of these proxy repositories; you have Artifactory and Nexus, and Docker Registry for Docker images. Out of the box, Nexus and Artifactory support different formats, so I just install one, Nexus, since it's free, and that gives me my own proxying RubyGems mirror and my own Docker registry. When I do a bundle install, I'm not dependent on downloading everything from the internet each time. During development I can run git clean -fd, which removes all the cached and compiled gems, then do a bundle install again and have everything installed right away, without depending on external services. The same idea works for databases: if your database is down, the Rails application should be able to degrade gracefully. Your master database may go down, so people can't post updates to their accounts, but if you have a slave database, the Rails application can read from there and still serve requests on a read-only basis. Everything doesn't just fall down at once; it degrades bit by bit. It's like having a hole in the ship: you can start bucketing out the water while another part of your team plugs the hole. It's about handling failure gracefully. In conclusion, even though containers and Rails abstract a lot away from us so that we can focus on actually writing our app, we still need a good, reliable infrastructure to build upon; otherwise it all crumbles on bad foundations. It may be the magic and elegance of Ruby and Rails that attracted us to our careers, but as we grow as engineers, knowing the magic
behind what we're using, and knowing the higher-level first principles, helps us accommodate changes in our stack and our application: when doing a deploy, when debugging things that are on fire in production, when understanding how things fail. Knowing all of this gives us a more operable application environment, so that we can focus on serving the needs of our users. And with that, I'm done with my talk. If you have questions, I can take them, and if you come by our Engine Yard booth, we have limited copies of my Docker book, so pass by, talk to me, and tell me your story of using Docker, or of convincing management to use Docker, and so on. Thank you.