Okay, very quickly. How many of you came to JRuby from Ruby? How many of you came to JRuby from Java? Okay, so those of you who came to JRuby from Ruby may be familiar with the Twelve-Factor App. You may have heard of it before. But today I'm going to talk specifically about how it relates to JRuby. My name is Joe Kuttner. I also go by Codefinger. And I'm the JVM platform owner at Heroku. This is, I'm very proud to say, my fourth year speaking at JRubyConf. One more and I get JRuby Diamond Medallion status and first-class upgrades. But every year I'm here, I basically speak about the same thing: deployment, deployment, deployment. It's kind of become my thing. I just love deployment. What can I say? I love it so much that I wrote a book about deploying JRuby applications. But don't buy this book. Actually, I don't think you can; it's out of print, because it is wrong now. There are lies in this book. But to be fair, I started writing it more than four years ago. And since that time, JRuby has changed a little bit, but really the ecosystem around JRuby (the tooling, virtualization, the cloud) has changed as well. And that has led to different ways of deploying applications. What I write about in the book, primarily, is the traditional model of JVM deployment: you take your code, you package it up into a war file, and then you drop that war file into a running Tomcat or JBoss container. And this model of deployment was great 15 years ago, when we didn't have the cloud, when we didn't have virtualization. When we were deploying our apps onto a piece of metal, we needed that abstraction layer between the app and the underlying platform. But today, modern JVM deployment looks more like this: you take your code, you package it up into a jar file, and then you can deploy that jar file anywhere there's a JVM.
Whether it's the cloud, a Docker container, or Heroku, it doesn't matter. The only coupling between your application and the underlying platform is the existence of a JVM. It's a truly portable application that you can move around. I guess what I'm saying is: make jar, not war. That's right, that's right. I've got more. War is hell. War files, what are they good for? Absolutely nothing. You can deploy your application as a jar file. You saw Gradle this morning; you can also do that with Warbler. Now, you're probably wondering why I'm saying this, since I maintain Warbler. But to be fair, a lot of the work I've done on Warbler, especially recently, has been making it capable of producing an executable war file, which is actually more like a jar in that case. So what does this mean for JRuby? Even if you're not deploying as a jar file, this shift still has implications, and that's what I'm going to talk about today. The move from war files to jar files is not the only change that's happened in the last 5 or 10 years, either. We've also moved from an emphasis on application servers, those big, heavy containers that did a lot of work for us, towards microservices. We've moved away from hot deployment, which was the goal for a long time: being able to deploy your application into one of these containers without ever restarting the virtual machine. But due to memory leaks, system crashes, and system upgrades, that never really came to fruition. So instead, today we focus on continuous deployment: the expectation that your application will be deployed multiple times a day, maybe, that you will restart your application, and that you should be able to turn that around very quickly. And then, of course, 10 years ago I was deploying my applications to a box in a closet down the hallway, but today I think most of us are deploying to the cloud: to Amazon, to Heroku hopefully. I know some of you are.
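To try the executable-war mode mentioned above, Warbler's generated config can be adjusted like this. This is a sketch, so check it against your Warbler version:

```ruby
# config/warble.rb -- generate a starting point with `warble config`
Warbler::Config.new do |config|
  # Embed a launcher so the war runs standalone with `java -jar myapp.war`,
  # no Tomcat or JBoss required.
  config.features = %w(executable)
end
```

Then `warble executable war` builds the artifact in one step.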
So at Heroku, we host a lot of Ruby applications, like millions. I don't know if we're supposed to say exactly. So we see a lot of applications, and we see a lot of problems. We see a lot of solutions to those problems, and a lot of things that work. From these experiences, we've derived a set of principles that we believe lead to better deployments and better applications. And we've compiled that into a thing called the 12-factor app. The 12-factor app is a methodology: a set of principles and best practices that we have found lead to better scalability, better maintainability, and better portability. And it is a truly portable methodology. It's not specific to Heroku; even though it's derived from what we've learned on our platform, it applies to any kind of deployment. The 12-factor app achieves these things through immutability (immutable infrastructure), ephemerality (the idea that your application is not persistent, that it will be restarted and disposed of at some point), declarative setups and configurations, and automating as much as possible. So these are the things I'm going to talk about today: how do you do this with JRuby to make a 12-factor app? Without further ado, here are the 12 factors. Memorize them. There will be a final exam. You will be quizzed. I'll go over these one by one. They kind of come in groups in my mind, though, so I'm going to stop and do a demo in the middle. The first factor is codebase. This is a principle that states you should use version control. Now, I know what you're thinking, but bear with me. You're going to learn things in this talk, but there are some things I have to say, because there are people who do not use version control. And even if you are already using version control, you should be using it in the right way. That is: one version control repository per application, and you deploy that one application to multiple environments.
You should not, of course, manually fork the repository for different environments. That would undermine the version control system. But you should also not, and this is more common, use a single repository for multiple applications. The reason this is a problem is that you start to commingle the commit histories of the applications residing in the same project. You lose the ability to do isolated rollbacks, and you lose track of where the coupling points between the applications are. This isn't very common with pure Ruby applications, because the build and packaging tools for Ruby trend toward a single app per repository. But with Maven and Gradle, I see submodules used for multiple applications within the same repository. Now, the advantage of doing that is that you get to manipulate or automate these multiple projects with a single Maven command. If you need to do that, use Git submodules instead. That way you still retain independent commit histories while using your standard tooling to automate the process. One of the other advantages of using tools like Maven and Gradle to manage multiple projects is that you can manage dependencies without creating your own Maven repository, which is always a pain. That brings me to the next factor: dependencies. In a 12-factor app, dependencies are explicitly declared and isolated, and you should never rely on the implicit existence of system-wide packages. What I'm really saying is: don't check jar files into Git. Now, that one's a "duh" too, but I guarantee you most people working on a Java application have done this at some point. It has been particularly problematic with JRuby, because the dependency manager for Ruby, Bundler, has no concept of jar files. So many times (Warbler does this too) you check a jar file into your repository, because that's the only way to manage it.
But there's a great tool called JBundler. Christian's in here somewhere; we can thank him for it. It's a great solution to this problem. Gradle, as you saw this morning, is also a great solution. JBundler does not replace Bundler; it sits alongside Bundler, and it uses a Jarfile for configuration. So just like you have a Gemfile, in addition to that Gemfile you define in your Jarfile the Maven jars that you depend on. You install the jbundler gem and run jbundle install, and then it allows you to do things like jbundle console, which gives you a REPL with those jar files on your classpath. Once you have that set up, somewhere in your application you require jbundler, and then you can java_import these classes very easily, while keeping those jar files managed in the proper way. And of course you can vendor the dependencies too. As you can see, it puts the jar files under a jars directory, very similar to what we saw this morning with Gradle. All right, so dependencies are a type of configuration, which brings us to the next factor, but they're not the kind of configuration the 12-factor app talks about. The type of configuration the 12-factor app talks about is anything that changes between deployment environments: things like resource handles to the database or Memcached, credentials for Amazon or Twitter, and per-deployment values like the hostname. It does not apply to your dependencies, or to, say, your config/routes.rb: things that stay the same across environments. This type of configuration should be strictly separated from your code. It should not be checked into your repository. Don't check passwords into Git. Now, that's another "duh", but this is probably the most commonly violated principle of the 12-factor app that I see. I'm certain you would not check your own personal password into Git, but you've probably checked in a database password, a Twitter key, or an Amazon key.
That's a problem for security reasons, but it's also a problem for portability, because when you define that type of configuration within the codebase, deploying to a new environment requires a change to your code. You have to actually make a commit in order to set up the new environment. It makes your app less portable. A good litmus test for this factor is: could you open source your application at any moment without compromising the credentials to any of your systems? If you can, then you satisfy this principle. We still need a place to put this configuration, and that place is the environment, specifically environment variables. That's what they're for. So what we're trying to do at Heroku is get people away from storing all of these different credentials in the app itself and instead extracting each credential or resource handle from an environment variable. That way you can stand up a new production environment, with its own database, without changing any of your code. Now, your database is a type of backing service, which brings us to the next factor. Backing services are things like Postgres, Redis, or Memcached, and in the 12-factor app these backing services should be treated as attached resources, attachable via some kind of URL stored in the environment. The same way you attach the database is exactly how you should attach a Memcached or Redis instance. This allows you to swap these resources in and out, say for backups and restores, without disrupting your codebase, and it allows you to easily switch between environments, again increasing portability. The next factor, and the last before I do a little demo, is build, release, run. This is a principle that states that your deployment should execute in three discrete steps.
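Before walking through those steps, here's what the config and backing-services factors look like in code, as a minimal sketch. DATABASE_URL is the conventional variable name; the localhost fallback is just a local-development convenience:

```ruby
require "uri"

# Resolve a backing service handle from the environment, never from the
# codebase. DATABASE_URL is the conventional variable name; the localhost
# fallback here is only a convenience for local development.
def database_config
  url = URI(ENV.fetch("DATABASE_URL", "postgres://localhost:5432/myapp_dev"))
  {
    host:     url.host,
    port:     url.port,
    database: url.path.sub(%r{\A/}, ""),
    username: url.user,
    password: url.password,
  }
end
```

Because the handle lives in the environment, pointing the same code at a new database is just a matter of setting a variable, with no commit required.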
Build, in which you compile any code, maybe prepare some JavaScript assets, and create some kind of artifact that is ready for deployment. Release, in which you combine that build artifact with the configuration for a particular environment (because that configuration is separated from your code) and create some kind of release image or release artifact that represents that release of your application. And finally, you hand that release off to your deployment environment, which knows how to run it. Ideally these three steps are automated, but regardless, there should be three discrete parts. Some examples. Build might be a jrubyc, might be a rake assets:precompile. You might build your war file, you might run Gradle. You're creating those deployment artifacts. Release depends on the environment you're deploying to. If you're deploying a Warbler war to Heroku, it's a heroku deploy:jar; if you're using the Maven plugin, it's a heroku:deploy; if you're using Docker, it might be a heroku push. Again, it depends on the environment, but it should still be a single step. And then run, again, depends on your application, but if you have an executable jar file, it's a java -jar command. If you're using a Ruby server, it's maybe a puma command. The key here is that it should be a single command. It should not be what I used to advocate: starting Tomcat in the background and dropping a war file into it, or starting TorqueBox as a service in the background and dropping a war file into it. But what we're starting to see with TorqueBox, with Immutant (which is sort of the TorqueBox for Clojure), even with WildFly, which is the new JBoss, and WildFly Swarm: even JBoss is moving towards this inverted approach to deployment, creating executable jar files.
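As a sketch, the three steps for a Warbler-based app might look like this. The exact Heroku commands and flags depend on your CLI plugin version, so treat these as illustrative:

```shell
# Build: compile code and create the deployment artifact
bundle exec rake assets:precompile
warble executable war                 # or: gradle build, jrubyc, ...

# Release: combine the artifact with one environment's config
heroku deploy:jar myapp.war           # heroku-deploy CLI plugin (illustrative)

# Run: a single command starts the process
java -jar myapp.war                   # or: ruby server.rb, puma, ...
```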
And of course, if you can automate all of this into a single step, that's great. If you're deploying to Heroku, a git push heroku master does all three. So let's take a look at what that looks like. I don't do live code, because I make a lot of mistakes, so I pre-recorded my demos. You're welcome. All right, I have here a very simple JRuby application. It's actually a JRuby Netty application, and I'll explain what that means in a moment. It has a Gemfile with some gem dependencies, including a dependency on jbundler, so I can use that. It's JRuby 9000. Alongside the Gemfile, it has a Jarfile, which defines my Netty dependency. Netty is a high-performance Java library for creating HTTP and TCP clients and servers; it's used extensively by Twitter. I also have a Procfile, which tells Heroku how to run my application. As you can see, it's a single command to run server.rb. And that server.rb contains a bunch of java_imports, which are made possible by jbundler, and then it contains a Netty HTTP handler. The code for the Netty HTTP handler is kind of complicated, but that's because Netty is more akin to Rack than to Sinatra or Rails. There are frameworks like Finagle and Ratpack that are built on top of Netty; we don't have one of those for JRuby. So get on it. Somebody build this. I need it. I think that would be a great project. So Netty is a much lower-level tool. I have this application, and I can prepare it by running jbundle install, which downloads my Netty dependency and puts it into my local Maven repository. I can then run my application, which requires jbundler and sets up my dependencies. So there it is. It's running on port 8080. Very complicated application. All right. So now I have this thing working, and I can deploy it to the cloud, first by creating a Heroku app. If you have the Heroku Toolbelt installed, you can run heroku create, which provisions a new Git repository for you. You can then push your code using git push heroku master.
I suspect most of you are familiar with Heroku, but what's happening here is that it's downloading my dependencies, including running jbundler, and creating a slug, which is the release image that Heroku runs my app from. All right. I can use heroku ps to check on the status of my process, and you can see the ruby server.rb command is running; there's one instance of it, and it's up. I can use heroku open to view the app running in the cloud. There it is. And I can use heroku logs to check on it. You can see it's starting the Netty server, just like it did locally. So when I push to Heroku, it first executes that build phase, which doesn't have much to do for a Ruby application, and then it creates the release image, which I mentioned is a slug file. Each time I deploy, I get a new one of these slug images, and I can see the history of them with heroku releases. So if I deploy changes that are broken or fail for some reason, I can quickly roll back to a previous slug and then debug the problematic slug in a separate thread. All right, one last thing: I can use heroku run to start a one-off process on a Heroku server, so I can start a jirb session. I should have run jbundle console; I don't know why I didn't. I can require jbundler, then import Java classes and work from there. I'll talk more about that in a little bit. Okay. So the rest of these factors relate less to the process of deployment, which is what the first factors were about, and more to the deployment architecture: how you structure your application for deployment. The next factor is processes. In a 12-factor app, your processes should be stateless. Don't use sticky sessions. Don't keep session information within your process that needs to persist; if that process shuts down, it should not take anything with it.
That's not too common a problem in Ruby applications, but port binding is much more important. This is a principle that states that a 12-factor app should be completely self-contained and should export HTTP as a service by binding to a port. That is, it should not depend on any other piece of infrastructure to bind to a port and handle requests for it. What I'm referring to here is the old model of JVM deployment, where we drop a war file into a container that binds to a port and handles request processing for us. Our application should know how to do that itself, either by embedding Tomcat or Jetty into the application, or by using Netty, as I showed in my example. The application itself should be completely self-contained. And this is what we're seeing with JRuby, but it's also what we're seeing in the rest of the Java ecosystem. I mentioned WildFly Swarm; there's also Spring Boot, Play and Finagle (which both use Netty, as I showed earlier), and Dropwizard. These are all tools that are gaining market share in the Java ecosystem, and they're all containerless. They're using this modern deployment style. All right, the next factor is concurrency. You're probably thinking: I've got this one. JRuby's great at scaling up, right? But you also need to be able to scale out. And you scale out by diversifying your workloads: decomposing your application into different parts that do different kinds of jobs. That way, if you need to handle more web requests, you can scale the part of your application that handles web requests without scaling the part that does background work. Now, this differs from what I used to advocate, which was application servers that do all of these things for you. But it turns out that scaling with an application server is much more difficult. This workload diversity could be interpreted as me advocating microservices; if you take it that way, then great.
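To make the port-binding factor concrete for a moment: here is a minimal sketch of a process that binds its own port and speaks HTTP, using plain TCPServer from the Ruby standard library as a stand-in for Netty or Puma. The default port and response body are illustrative:

```ruby
require "socket"

# A self-contained process that binds its own port and exports HTTP,
# in the spirit of the port-binding factor. No external container binds
# the port for us; the PORT variable is the conventional override.
def start_server(port = Integer(ENV.fetch("PORT", "8080")))
  server = TCPServer.new(port)
  Thread.new do
    loop do
      client = server.accept
      # Consume the request line and headers, then answer.
      while (line = client.gets) && line.chomp != ""
      end
      body = "Hello from a self-contained process\n"
      client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n")
      client.write(body)
      client.close
    end
  end
  server
end
```

Calling start_server and hitting the port with curl is all it takes; there is no container between the app and the socket.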
But whether you embrace microservices or not, there are principles from the microservices architecture that you should embrace regardless. One of those is disposability. This is a principle that states your application should be quick to start up, resilient to failure, and graceful to shut down. Quick to start up meaning, ideally, it should start in under a minute, ideally under 30 seconds. As you saw with my Netty application, it started very quickly. It should be graceful to shut down when it receives a termination signal: within 10 seconds it should release any connections, clean up any resources, and shut down cleanly. And it should be resilient to failure, which actually falls out from the other two: if your application crashes, or you need to scale it down for whatever reason, you should be able to replace those instances very quickly. This really gets back to how we used to think of our servers. We used to treat our servers as pets. When they were sick, we would take them to the doctor; we wanted them to get better; we wanted them to live forever. These were our application servers, the pieces of metal running in the closet that I was talking about. They were very near and dear to us. But today we should be thinking of our servers as cattle. If they're sick, you kill them. If you need more servers, you go to the market and buy them. If you have too many, you get rid of them, because you don't want to pay the overhead of keeping them around. This is possible now because the cloud is cheap and virtualization is good. So yeah, treat your servers as a commodity, not as a precious. That said, application servers are not disposable, but microservices are. So yes, I'm pretty explicitly advocating microservices here, I guess.
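As a quick aside on what graceful shutdown looks like in code, here's a minimal sketch: trap SIGTERM (the signal Heroku sends before disposing of a dyno) and let the main loop wind down. The loop body and timings are illustrative:

```ruby
# Disposability sketch: on SIGTERM, finish the current unit of work,
# release resources, and exit cleanly, ideally well inside the
# 10-second window mentioned above.
$shutdown_requested = false

Signal.trap("TERM") do
  # Keep signal handlers tiny: just flip a flag the main loop watches.
  $shutdown_requested = true
end

def run_worker_loop
  until $shutdown_requested
    # ... handle one unit of work ...
    sleep 0.05
  end
  # Here you would close connections and flush buffers, then return so
  # the process exits with status 0.
end
```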
But again, it's really not about microservices so much as the principles that back up the methodology. Microservices are easy to replace, they're easy to modify, and they're decoupled from external infrastructure. This is something we're seeing embraced by places like Netflix, Twitter, and Gilt. Netflix is using all kinds of JVM languages; Twitter and Gilt are both using Scala. They're all embracing microservices, and they've open sourced a lot of the tools they use, many of them built on Netty. Okay, the next factor is dev/prod parity. This is a principle that states your development environment should be identical to your production environment, and every environment in between. The most common place I see this violated is at the database layer: if you're using Postgres in production, you should be using Postgres in development. I also see it violated at the server layer: if you're using Puma in production, you should be using Puma in development. The idea is to get parity between environments. We desire this for a number of reasons, but it's often mistaken for simply a development nicety: that having this parity allows you to more easily debug and prevents the introduction of new bugs, because you have a system that's exactly like your production environment. That's true, but it also makes it easier to onboard new employees, because if they can stand up their environment, identical to all the other environments, they can get started more quickly. It makes your applications more reproducible. And that also allows you to stand up new test environments, new staging environments, and new production environments. If you can stand those new environments up very quickly, then you can also dispose of them. So dev/prod parity is actually a key step on the way to disposability; these factors are very much tied together. All right, I'm going to skip over logs.
It's a very interesting factor, but it's a little orthogonal to the others, so I'll go to admin processes. Admin processes, jobs like database migrations and one-off tasks, should be run in isolated processes. In short, you should not be logging into a production server to run your database migrations or one-off tasks. That steals resources from an existing process, and it creates the potential for accidentally bringing that server, or that existing process, down. When you run those one-off jobs, you should run them in isolated containers. And that's exactly what was happening when I ran the heroku run command: I was getting not only a new process but a new container for my application, a new isolated environment. It's the same as when I scale my application up and create more web instances, except with the admin instance I'm just not starting the web process. I'm getting another identical environment for my application. Okay, so to recap. Codebase: use version control, with one version control repository per application. Dependencies: explicitly managed; do not check jar files into Git; use JBundler. Config: strictly separated from your code, stored in the environment, not in your Git repository. Backing services: treated as attached resources, attachable via some URL stored in the environment. Build, release, run: your deployment process should happen in three discrete steps. Processes: stateless. Port binding: done by your application, not by some external piece of infrastructure. Concurrency: scale up as well as out. Disposability: your application should be quick to start up, graceful to shut down, and resilient to failure. Dev/prod parity: your development environment should be identical to your production environment. And admin processes: run them in isolated containers.
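On Heroku, that admin-processes factor comes down to one command: heroku run starts the one-off process in its own fresh container. For example (the tasks here are just illustrations):

```shell
heroku run rake db:migrate   # one-off dyno: a new, isolated container
heroku run bundle exec jirb  # or an interactive console for debugging
```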
You can learn more about the 12-factor app at 12factor.net. We have, I guess, a manifesto for the 12-factor app there. It's a language-agnostic methodology, so there are some things specific to Ruby, but nothing specific to JRuby in the manifesto. On my blog, though, I very often write about specifics like the ones I talked about today. So what next? You now know the 12-factor app. Congratulations. What do you do about it? Here's a list of things I would like you to do after leaving this talk today. If you have a JRuby application, try running warble executable war. It'll produce an executable war file that runs like the jar files I talked about. Alternatively, try JRuby-Gradle; just take your application and see if you can build it that way. Install jbundler (oh, there's supposed to be an R on there); even if you don't have any jar dependencies, just install it, create a Jarfile, and see if you can get it working. Remove all passwords from your codebase and find a place for them in the environment. And deploy your application to Heroku or to some other cloud platform. If you deploy to Heroku, you can deploy in one step, and I think that's really the goal: wherever you're deploying, try to get it down to a single command. All right, again, I'm Joe Kuttner. I go by Codefinger, and these slides are on the web. Before we do the mandatory applause, I have some games here today and I'm looking for some people to play them. So if anybody knows how to get into the Great Refractor telescope at the Leibniz Institute, I would appreciate it. Thank you.