Okay, so welcome everyone to the next talk. Harrison Ritz will tell you something about putting your legacy application into deployment. Cool. How many people here are not Red Hat employees? Oh, wow, awesome. That's great. And the people who are Red Hat employees, many of them are on my team. So when you ask me questions, I'm going to go, "Great question," and then I'm going to ask them. All right, so my name is Harrison Ritz. I'm an engineering manager on Atomic OpenShift at Red Hat. And I would like to open this talk by asking you all a very personal question, which is: can you really love an application? How many of you have actually written an application more complex than, like, a hello-world application? Good. Okay, good. This is good. We're in the right place. Excellent. Okay, so this past summer, I was playing around with an app that I did not write, but an app that I use every day. And I started trying to think about: all right, how am I going to take this thing and make it more resilient, make it reproducible and scalable? How do I get this thing into the cloud? I documented the result of that effort in a blog series that I called "that app you love," which is sort of a weird name for a blog series. I called it that because the app I love was ZNC, which is an IRC bouncer.
Not everybody loves ZNC, but I wanted to write this blog series in a way that would be useful for the general case. So I said, hey, forget about ZNC, because we're really talking about that app you love, whatever it is. The same process is going to apply. So today, I'm going to summarize some of that series, but I'm also going to take it one step further. So forget about that app you love, because we're going to be talking about that app that you wrote. A little backstory here. In my leisure time, one of the things I like to do is play tabletop role-playing games like Dungeons and Dragons. Does anybody know what that is? Some people do. They're kind of like, yeah, I know what that is. Well, back in 2010, I happened upon a new tabletop role-playing game called Stars Without Number. You might guess this is a sci-fi-themed tabletop role-playing game. I'm not going to tell you the rules right now, but in the back of the rulebook for this game, the author included dozens of random tables for rolling up every detail about multiple galaxies' worth of information. It was incredible. We're talking planets, alien races, space corporations, space political parties. It was amazing. So one night, I sat down with pencil, paper, dice, and the rulebook, and I started rolling myself up a couple of galaxies. Well, I got about an hour into it before I decided: I think I could do this faster by writing an application. Classic software engineer response to a problem, right? This looks hard; I'll just write a program to do it instead. So at the time, in 2010, I was a Perl developer. I'd been coding in Perl for ten years. And I was also a web application developer.
So I spent one weekend hacking this thing together as a command-line utility, but I knew I wasn't going to be happy until it was actually available online as a tool that anybody could use to randomly generate galaxies' worth of made-up information about places that don't really exist. And so in early 2011, on a server in my friend's basement, I launched the SWN sector generator. Now, this might sound like a really specific tool for a very small group of nerds who are interested in a very, very specific game. But in fact, since I've been tracking usage on this thing, it has been used over 65,000 times. And these days, it gets about 90 uses per day. So as niche as this application seems, I think there are some people out there who would consider it a mission-critical application. If this site's not available, I get emailed. So I was faced with this problem. Let's just reiterate how this thing was built. I've already mentioned that it's written in Perl. It has dozens of CPAN dependencies. If you're not familiar with Perl, think Ruby gems or Python eggs. Same idea. These are third-party package modules that I need in order for my app to work. The whole thing is served over CGI. I'm especially proud of this because in 2010, CGI was already something nobody did anymore. I did it. Let's see. CGI, of course, has to be handled by mod_perl, which is an Apache plug-in. This is as traditional a LAMP-stack application as you can possibly have. And I have this problem: it's running on a server in my friend's basement. I have to figure out how to get that thing running in the cloud so that as the demand for this increases (who is using it? I have no idea), I can make sure this thing is running for them. If you have a legacy application, an application that was designed and constructed before there was a cloud, then you and I have a very similar problem.
I'm going to be talking about what I had to do for this Perl application, but forget about the app I wrote, because we're really talking about the app that you wrote and what you need to do in order to get this cloud deployment happening. I break up this process into three steps, and the first step boils down to basic containerization. You have a lot of options. If you saw Dan Walsh's talk yesterday, he noted that the only really readily available build tool for images is Docker build, and he also mentioned tar. Believe it or not, Docker's documentation still includes information on how to hand-build your images using tar. I didn't feel like that was the right option for me. As it happens, I recently discovered Packer. This is written by the folks who wrote Vagrant, and it's their idea for a single source that can be compiled or constructed into a couple of different formats: containers themselves, VMs, the Vagrant box files, that sort of thing. Finally, you've got what the OpenShift team has been working on, which is called source-to-image. Now, I'm sure you're not going to be shocked to discover that I decided to go with source-to-image, but it's not just because I happen to be on the team that wrote it. There are a couple of things that, as an independent developer of an application, I happened to like. First off, as Dan pointed out yesterday, I don't need a Dockerfile in my code if I am using the source-to-image approach. And that's great, because my application doesn't really care that it's in a container. It's an important application that does important fake-galaxy generating. Another thing I like about source-to-image is that it's going to produce containers that are good cloud citizens, right? Simplest example: the things running inside that container are not going to run as root.
And now, thinking about your situation if you have an enterprise app that you're considering, another nice thing about this approach is that there are opportunities for an operations team to vet what you are doing if you are using source-to-image, as opposed to pulling things directly down from Docker Hub and kind of taking your chances. All right, so that's the why for me choosing source-to-image. Now, here's the how, okay? This was a great experience for me. Over the summer, I sat down with the OpenShift documentation, and I'm reading up on source-to-image, and I discover that Red Hat actually provides a stock Perl builder image. So I'm thinking, this is good. This is going to work out really well. I use source-to-image, I indicate the builder image that I want to use (in this case, that stock Perl builder image), I point it at my source code, I tell it what Docker image tag I want to have when I'm done, and I run it. And I'm feeling good, right? What could possibly go wrong? This is going to be great. Okay, unfortunately there were problems. And I'm going to mention them in the context of the app that I wrote, but this is the kind of thing that you may run into as well, okay? And it really boiled down to those CPAN modules. Two things. First off, some of my CPAN modules want to be compiled before they can actually be used, and some of the development libraries that I needed in order to do that compiling were not available in the stock Perl builder image. That was one problem. The other was the sheer number of dependencies that my application really has. Because you think in terms of, oh well, my application needs these five modules. But each of those modules might need five modules, and the tree begins to expand below you. And I ran into huge issues where I had incompatibilities between different modules. And I realized that what I'd really like to do is take advantage of some of our pre-built Perl modules available from the Software Collections.
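For reference, the basic invocation being described might look something like this sketch; the repository URL, image names, and tag are placeholders, and the exact name of the stock Perl builder image depends on your OpenShift version:

```shell
# Build an application image from source with the s2i tool.
# Repo URL and image names below are illustrative placeholders.
s2i build https://github.com/example/swn-sector-generator \
    registry.access.redhat.com/rhscl/perl-524-rhel7 \
    swn-generator:latest

# Sanity-check the result locally before involving any cluster
docker run -p 8080:8080 swn-generator:latest
```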
That saves me the trouble of having to do some of this compilation work, and it allows me to avoid some of those issues. So in short, I came to the conclusion that I was going to need to build my own builder image. Now, that sounds like a complicated, bad situation, but it wasn't quite as confusing as all that. We actually have some documentation if you want to build your builder image from scratch, but I don't really think you need to do that. In my case, I didn't. Remember, the builder image is itself a Docker image, and you can extend Docker images by adding more layers to them. So rather than trying to start from scratch, I took our stock builder image and used it as the basis for a new custom image. And I honestly think that most people looking at legacy applications are going to need to make a similar decision. Because those legacy apps typically have very specific requirements, I feel pretty comfortable with this approach. The particular thing I needed to do was take that stock image and throw in some additional RPMs. There were some other things that I wanted to do in that image that I needed to do as root, and we've already talked about the idea that you don't want to run as root in a cluster context. So that was something I wanted to handle here. And I knew that I'd be able to use the resulting builder image as a custom image that was going to work for my particular application. This is actually the guts of it; I've taken out a couple of lines just for clarity. You can see that I start with our base image, I switch to the root user, do a number of things that I can only do as root, and then I switch out of the root user. In an enterprise context, this is a document that I can hand to the operations team and say, please build this for me and make it available in the registry so that I can work with it. There's that opportunity for operations to vet what you're doing.
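The Dockerfile being described probably looks roughly like this sketch; the base image and package names here are stand-ins, not the actual ones from the talk:

```dockerfile
# Hypothetical custom builder: extend the stock Perl builder image
# instead of building one from scratch. Package names are examples.
FROM registry.access.redhat.com/rhscl/perl-524-rhel7

# Installing RPMs requires root, which the stock builder avoids
USER root
RUN yum install -y gcc expat-devel perl-DBD-SQLite && \
    yum clean all

# Switch back to the unprivileged s2i user so images built from this
# builder remain good cloud citizens (nothing runs as root)
USER 1001
```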
And in fact, where your RPMs are coming from in that particular scenario might be a Satellite server that's controlled by operations. So there are some good ways to check the work that you're doing and make sure that you are being a good cloud citizen, doing things that are safe for the enterprise, et cetera. In my case, I'm not really doing this for the enterprise. I was doing it on my laptop, so this was sufficient. Net result, right? So now I've got myself a custom builder, and voila, I was able to get that first step out of the way. I can spin up a container that has this LAMP-stack application in it. I can actually run this galaxy generator as a web service on my laptop. A couple of things to keep in mind, though. I talked about some of the benefits that you have if you're trying to be transparent in an enterprise setting. For me, though, one of the biggest wins is that every single one of the Perl CPAN modules that I pre-included in the builder image I did not have to download from CPAN. So this actually reduces my build times from one code push to the next. The trick is, you don't want to lock yourself in too much. If you pre-build a builder image with absolutely everything, then you don't leave room for flexibility as you continue to push your app and make changes to it. So there's a trade-off to be aware of. So that's sort of it; I've just condensed what could be months of work into a 15-minute description. For me, it really took probably about a week to get through this. Yesterday, John Frizel actually gave a talk, and I don't know how many of you saw it, but they took a much more complex application than mine and broke it up into more than one container. That is certainly something that you will want to consider, but I'm not really going to touch on it here. I think this is an app-by-app decision that you're going to have to make.
So now we've got this thing running in a container, but the problem is, and I don't know if you've heard this yet, fewer than 500 times, but containers are immutable. And so if you want to make it possible for your application to be different in any way from one run to the next, you need to figure out some way to reintroduce configurability into your application. When I wrote my blog series, I was talking about third-party applications. I didn't have the opportunity to actually change the behavior of the app code itself, so I described a process that I refer to as config-and-run. In the case of config-and-run, PID 1 inside the container is not the app you want to run. Instead, it's a wrapper script that wakes up, checks to see if the application has an appropriate configuration, creates it if it's not there, and then invokes the application. So that's the config-and-run pattern. However, when you do control the code, you have many more options. I actually have less of an opinion on this one. What I did for that app that I wrote is I hard-coded it. I'm not proud of it, but that's what I did. I hard-coded it. But in fact, in my application there really isn't that much to configure. And I actually think that for some class of applications, it's probably fine to say, I will change the configuration by updating the source code. Because remember, these clusters can use webhooks to trigger rebuilds. So I can push a change to source, get a new image, and in effect, I can change my configuration that way. But I think the more common case will be kind of a flip of that situation I described before. Instead of config-and-run, we'll call this run-and-config. When you control the code, you can make your app sort of self-aware. It starts up, it's PID 1 inside the container, it looks for its own configuration, and it can generate a configuration if it's not there. In this case, and in that config-and-run case, the magic sauce is environment variables.
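A minimal config-and-run wrapper might look like the following sketch. Everything here, the paths, the environment variable names, and the config keys, is invented for illustration; a real wrapper would exec the actual application at the end.

```shell
#!/bin/sh
# Hypothetical config-and-run wrapper (runs as PID 1 in the container).
# APP_CONF_DIR, APP_PORT, and the config keys are illustrative only.
CONF_DIR="${APP_CONF_DIR:-/tmp/myapp}"
CONF="$CONF_DIR/app.conf"

# Generate a configuration from environment variables if none exists.
# With a persistent volume mounted at $CONF_DIR, this happens only once.
if [ ! -f "$CONF" ]; then
    mkdir -p "$CONF_DIR"
    cat > "$CONF" <<EOF
listen_port=${APP_PORT:-8080}
data_dir=$CONF_DIR/data
EOF
fi

# Hand off to the real application, keeping PID 1 semantics via exec,
# e.g.: exec /usr/local/bin/myapp --config "$CONF"
```

With this as the image's entrypoint, setting something like APP_PORT on the container is enough to change the generated configuration, and the same check works for the run-and-config case if the app itself performs it at startup.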
You know that you can present environment variables inside the container, and your startup behavior can test for these environment variables and cause the application to configure itself based on what's in there. So those are two approaches, three really, that we've talked about now for configurability. One involves changing it dynamically in the container. Another involves changing it in the source and pushing new images. The third approach I'm going to talk about is the ConfigMap method. Now, this is an important distinction. Up to this point, everything that I've been talking about does not require you to be working inside an OpenShift cluster. Once you go ConfigMap, you are committing to deploying this thing in an OpenShift cluster. The ConfigMap itself is a Kubernetes object. It's got a name, and it's got a set of key-value pairs. And the recommended usage for a ConfigMap is that you mount it into a container, where the name of the ConfigMap becomes a directory and each of the key-value pairs becomes a file and that file's contents inside of that directory. When you go this route, you are using OpenShift templates, which I'm going to talk about more in the next step. They give you access to a higher level of configurability. The magic sauce here is template parameters, and I'm going to talk about them more in a little bit. In short, when it comes to how you reintroduce configurability into your application, you have options. I've talked about these three things. The net result, of course, is that you will have one image, and you can make each instantiation of it, each deployment, behave a little bit differently depending on what your requirements are. Now we arrive at step three. We are finally ready to start talking about getting this thing deployed on a cluster. I think that's an important thing to point out.
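As a sketch of what that looks like, assuming a made-up app name and key:

```yaml
# Hypothetical ConfigMap: one key-value pair that becomes one file
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.conf: |
    listen_port=8080
```

Mounted into the pod via a `configMap` volume, the key `app.conf` shows up as a file under the mount path that the application can read at startup.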
I've consulted for a couple of different teams inside of Red Hat that are very interested in standing up their own OpenShift systems and doing development work against them. I have advised them that while they are waiting for the OpenShift deployment to be set up for them, they still have plenty of work to do, because that cluster is going to stand idle until you have containers that you're ready to run on it. You can do a lot of that work on your own laptop long before there's actually a cluster available for you to work with. Keep that in mind. We've gone through a lot of this process, and we are finally now at the point where we're talking about cluster deployments. Have any of you run oc cluster up? Have any of you tried that? Not many. Man, this landed around the time that I was writing my blog series, and it was a really cool feature. Speaking from the perspective of an individual developer, it was incredible to take a laptop where Docker is running, download the oc binary, run oc cluster up, and have it stand up an entire all-in-one cluster on my laptop. It pulls down a registry and a router to run in that cluster. This is the fastest way right now that I can think of to deploy an OpenShift cluster that you can work against. I noted here that you can go from zero to OpenShift in 20 seconds. The caveat is, if you've never run it before, the first time you run it, it's going to pull down some Docker images that it needs, but after that it will start up very quickly, I promise. So now that you have the environment to work in, remember the goal of putting your app in a cloud environment in the first place. You want repeatability, you want resiliency, you want to eventually reintroduce state into this thing, and this part of the process is where you're going to be working on those things. The easiest way to get started is with the command oc new-app. The syntax here might seem familiar based on what we were talking about with source-to-image.
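For anyone who wants to try it, the whole thing being described here is just:

```shell
# Stands up an all-in-one OpenShift cluster on a laptop running Docker
oc cluster up

# And tears it back down when you're done
oc cluster down
```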
Very similar idea, right? I'm indicating the builder image I want to use, I'm indicating the source repo that I want to use, and when I run this command, OpenShift is going to go off and do a source-to-image build behind the scenes. It's also going to create a number of other objects that are relevant to managing this application in the cloud. So for me, it was a point of triumph when I was able to run oc new-app and watch my app get rebuilt with fresh pulls from CPAN, downloading all these required modules, and then running in the cloud: the exact same application. That was pretty cool. When you get to this point with that app you wrote, this is where you get to check your work and make sure it's behaving the way you expect it to in this cloud context. Once you're happy with things as they're running using oc new-app, you have already created a number of objects that you can capture in a template. The next instruction we'll talk about is oc export, and the reason I put an asterisk on this one is that unfortunately, using oc export is not the smoothest process ever. There's some disassembly required. oc export actually supports a keyword, all. So if you say oc export all, you'll get every object that exists in the current namespace in your OpenShift cluster. But the problem is, if you try to take that template and run it in a different namespace, it's not going to work. I had to go through a little trial and error, and I determined that there were actually four classes of objects that I wanted to preserve that were going to make it possible for me to repeatedly deploy my application in a cloud environment. So for starters, I needed to do this export. The --as-template argument specifically says: wrap the object definitions in a template, so that OpenShift understands that these things are all going to get deployed together as an application. Unfortunately, there was a little more work to do.
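As a sketch, the two commands look roughly like this; the repo URL is a placeholder, and the four object types in the export are a guess at the four classes mentioned (image streams, build configs, deployment configs, services):

```shell
# Build and deploy directly from source; "perl" names the builder image
oc new-app perl~https://github.com/example/swn-sector-generator

# Capture the generated objects wrapped in a reusable template
oc export is,bc,dc,svc --as-template=swn-generator > template.yaml
```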
This is just a visual example of some of the more metadata-ish things that were included in that export that I had to pull out. I've included a link here to an example template that we publish on GitHub, and I actually found it extremely helpful in guiding me to know what I needed to pull out of my template to make it work. I'm telling you, as an individual contributor, I really wish this process were a little smoother, but I feel like the payoff was still worth the effort. For one, I now have this template that I can use in any namespace on any OpenShift cluster. And think about that: that means the cluster I'm running on my laptop, the cluster that my company is running on their intranet, AWS, Google Compute Engine, it doesn't matter. It's a LAMP-stack application written in Perl. This is a pretty incredible accomplishment. I also have maximum control over the application now. I can very finely tune the way I want this thing to behave, because I have direct access to the template, which is going to let me introduce all sorts of settings. Most notably, going back to when we were talking about ConfigMaps: template parameters not only provide you with points of configuration, you can also provide them with instructions to, like, randomly generate values for you. So if you wanted to make a password a template parameter, you could either supply a password, or, based on the instructions that you used in the parameter definition, it will just make up a new one for you. So that's pretty cool. And then you can also define environment variables that end up getting exposed to the container; this is how you get that end-to-end configuration for that app you wrote. Once you have gone through the process of making it repeatable, it's time to think about resilience. I've actually heard a number of speakers talk about liveness and readiness probes, which look exactly like this in the system. Nothing? Okay. Okay, so let me just quickly explain liveness and readiness.
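A parameter definition along those lines, using the password example (the name and expression are illustrative):

```yaml
# Template parameter: supply ADMIN_PASSWORD at processing time,
# or let OpenShift generate one matching the expression
parameters:
- name: ADMIN_PASSWORD
  description: Password for the generator's admin account
  generate: expression
  from: "[a-zA-Z0-9]{16}"
```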
Liveness is a check that you run, and if your pod (which is to say, your running container on Kubernetes) fails a certain number of liveness checks, Kubernetes is going to kill that pod and start up another one, because that pod just never really got going. In my case, I check for a 200 response on index.html of my web application. I just try to load the home page. If it loads, then I know the thing is at least running. Readiness is a check that we perform to see if the application is going to be capable of actually doing the job it's supposed to do. In my case, I call one of those famous CGI files that I wrote, one of those scripts, and if I get a response from the script, then I know that my application is actually ready to handle requests. Finally, if we want to add state to this, we have to consider persistent volumes and persistent volume claims. Now, again, there are people here at this conference who are experts on this stuff. I want to point out that the persistent volumes are going to need to be set up by the cluster administrator. In my blog series, I actually talk about how you can do this on your own laptop. You need to change some SELinux tags, but you can actually create a temporary NFS shared storage space that you can use. And then the persistent volume claim is the way an application asks for a particular piece of storage. Now, as an interesting note here: when I was talking about the config-and-run approach and the run-and-config approach, which are essentially the same thing, we're talking about a situation where the app wakes up and looks to see if it's been configured. If you are running in a container that does not have any mounted persistent storage, then when it checks for the config, the config will never be there. It's going to wake up, there's going to be nothing in that directory, it's going to generate it, and when that container goes away, that state is going to be lost.
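On the container spec, the two probes described here might look like this sketch; the port and CGI path are assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /index.html          # is the web server responding at all?
    port: 8080
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /cgi-bin/sector.cgi  # can it actually serve a request?
    port: 8080
```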
But when you introduce persistent volumes and persistent volume claims into the mix, then you have an interesting situation: the first time your container starts, it's going to see an empty directory, and it's going to write state to it. But when that container dies and a new one comes up and remounts the same persistent volume, your state is going to be there. It's going to run its run-and-config or config-and-run check, see that the state is already there, and just hand it off to the application. So this is how you begin to use the file system as the place where you're storing state. I don't do that with that app I wrote, but in my ZNC example from the blog series, I actually do introduce persistent storage. I've got some links at the end of this, so we can get into that. Was there a question back there? No? Okay. There will be time for questions. Anyway, the net result of all of this is that, starting by just figuring out how to containerize this thing, then working on reintroducing configurability, and then finally going through the process of building an OpenShift template, I have taken this application from my friend's basement to a point where it's actually ready to be deployed in a cluster environment. And that's pretty cool. There are a lot of science-fiction gamer nerds out there who are going to be really happy about this. So in summary, I talked you through this whole process in the context of this Perl application that I wrote, but I hope that I was able to do it in a way that makes it less about the thing that I did and more about the thing that you need to do when you sit down and look at some of these programs that you've been working with for a long time and try to figure out how to containerize and deploy them.
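A claim for that kind of storage might look like this sketch (the name and size are made up); the cluster matches it against a persistent volume the administrator created:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce        # one node mounts it read-write at a time
  resources:
    requests:
      storage: 1Gi
```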
This slide is here because I'm going to post a link to it, and here are some references: my source code, the custom builder that I generated (which is literally just one Dockerfile's worth of code), and then some links to OpenShift Origin, where a lot of our examples are and where you can download the oc binary. Questions? Lay it on me. Yes, I do have a database in my app. Well, it's a SQLite database, and the entire binary database is actually checked into my source code. It's a little embarrassing, because that's a perfect example of a situation where I could have used that persistent storage trick: check for the existence of the database (I also have the raw SQL for it), and if it's not there, use the SQL to create the database. Then any time a new iteration of my app came up, the database would already be there. Well, yeah, this is the kind of thing you can do with SQLite that you should never do with a real database. Any other questions? Anyone? Yes? What was the most challenging step during this process? The question is, what do I think was the most challenging step? Without a doubt, it was the first step, just getting the thing containerized. Because I knew I wanted to use the source-to-image strategy, but I also knew that the stock builder wasn't going to work. And remember, I'm the manager of the team, so I could have been like, guys, this is broken. But in thinking about it, I concluded that this is a really specific application; people are going to have different modules and things they require. So I concluded that most of the time, it's probably better to create a custom builder, given that it's not that difficult. I didn't have to do it from scratch, though I could have; it's documented. So for me, the process of going through and continuing to do the builds until they actually succeeded, figuring out what modules I needed to have in the builder, that was a long iterative process. I would say that was the tricky part.
Cool, good question. Cool, any other questions? Yes? So, do you see any value in having your local OpenShift cluster running on your laptop for doing actual development of this app, or is it just for figuring out how to run it in an OpenShift cluster, and then you're done locally? I mean, the beauty of OpenShift is how it adds build tool chains to the Kubernetes cluster. For me as an individual developer... sorry, the question was: how long am I going to be doing the development on a cluster on my laptop, versus getting it running in a larger OpenShift environment to continue doing production work? Yeah, I'm waiting for OpenShift Online to move to v3, and as soon as that happens, I'm going to be pushing a lot of this work up into the cluster. But really, just for that basic template setup, it's so fast when you're working against your own computer to build up the templates that I could see a workflow where, up through the point where I have a template, I'm going to go ahead and do the work locally, and then I'll move it to a cloud deployment to figure out the build tool chains. So in the end, you don't have a Docker daemon running on your laptop? I do have a Docker daemon running on my laptop. That's the thing about oc cluster up: it actually pulls down a containerized version of OpenShift, runs it on Docker on my laptop, and it presents as an entire OpenShift cluster. So yes, I do need to run Docker a little bit, but I think this has recently been tested to work on Windows and Mac OS with those Docker instances as well, so you don't just need a Linux machine to do this. Great question. Yes? If you can't use that source-to-image way to build the image, what would be the preferable way? Packer, or, I don't know, do you use Dockerfiles? Okay, good question. So the question was, if source-to-image is not available to me, then what other tool would I want to use? And honestly, I think I would just use
docker build. I mean, that's just a personal preference thing. In fact, in an earlier iteration of this work, I did have a Dockerfile in my source code. But yeah, I would probably do that. The other thing I mentioned is Packer, and it's interesting because it's a way of taking a single source and producing an image format but also a VM format. So it's interesting, but I haven't dug too far into it, and I feel like there's probably a little more overhead than the source-to-image approach. Coffee! You guys are missing out. Bruno! All right, what's that? When is the V&D version shipping? It's funny you should say that, because I opened my talk with a question, and I was going to close it with a question: who's a gamer here, and when are we going to play, right? Good question. Is that okay? Thank you. Two questions. Out of interest for the general audience: was this picked just out of your interest, or were you also trying to do a use-case study of how this might present itself to a single developer? Well, yeah, it's a good question. So my first exposure to OpenShift was as an indie developer playing around with it, so I very much have that mindset when it comes to working with the OpenShift system. So I would say that I really tried to keep the indie developer approach, but obviously, since I'm on the team, a lot of my experience... I definitely was talking to the engineers, like, hey, do you really want it to work this way? So I took advantage of my position on the team to be able to ask those kinds of questions. I don't think they changed anything because I asked; they were like, yes, we're doing it that way, we've considered it. Yes? Yeah, you mentioned back there... I think a lot of the questions... you actually haven't looked too much into it... yeah, I haven't. Was there any thought of less than full containerization, rather than baking the images? Because there's quite a lot of talk about going into containers, like if you look at the corporate deployed applications, and
for some of the applications, it sometimes feels like a bit of overkill to do the full containerization, the removal of state, the externalized application configuration, all of that kind of stuff, rather than just baking an image and deploying it directly. So, the main reason that I want to do it this way is because the author of that game is putting out a new version pretty soon. I've already been in touch with him, and I want to be able to iterate on the app once he's put out the new version, because some of the rules are going to change. So having the opportunity to set up a whole build chain for this thing actually means that once I have that build chain figured out and I get his updated rules, I can continue. That's the thing: legacy app, to me, doesn't mean dead app. There's still work that I'm going to be doing on it, and so I love the fact that, you know, it's this LAMP-stack application, but I can still make an automated build chain where I push my changes to GitHub and I can see the results of that and test it. Otherwise, if he was like, hey man, I'm out, I'm never going to update this, yeah, okay, then I have a Docker version of it that I share on Docker Hub. If people just want it and don't want to deal with the build process, you can actually pull the whole thing straight from my Docker Hub. Cool, good questions. You need a free coffee card.