So, I'm very excited for tonight's party, and now let's talk a little bit about JFrog. We have four products. I'm not going to talk about the products, or about JFrog. We have a booth, well, half a table; we are somewhere over there. We have awesome t-shirts; if you didn't grab yours, you should go there and grab one, and then we can talk about different stuff related to JFrog.

But now I want to do a kind of a poll. Now, it's not exactly a poll, because when you do a poll, you actually care about knowing the results. That's not the case here; I know the results. So it will be more like a social experiment on you. So, okay, let's do that. You've heard about Docker? Hands up, and keep them up, right? Okay. So, I think we have more than 90% of hands up now. How about who has done the tutorial? Most of the hands are still up; I would say we're at 85, maybe 80. How about "I play with it at work"? Almost the same. Now we're down to about half the audience. And how about production? So, we have about five hands up, right? And that's how we went from 90% to about 5%. That was my experiment on you, and it has worked perfectly every time for at least the last year that I've been giving this talk.

And the question that I have for you, and that's basically the question of this talk, is: why don't we take Docker to production? There are a lot of reasons, and everybody has their own story why. And today, what I'm going to talk with you about is this: most of us love Docker. It's a really neat technology, really good stuff, and it solves tons of problems that we used to have. But we don't trust it. And we don't trust it because we really have no idea what's going on inside this opaque black box that added a couple of layers on top of the complexity we already had. That's fine if you feel that way. We feel that way as well; the whole industry feels this way. So what we're going to talk about today is how to fix that, a little bit.

So why do I think I can talk about Docker at all? Well, we have this relationship with Docker. I guess it's a hug; we love Docker, and that's what it was intended to express. And that's Solomon Hykes. Solomon Hykes is of course the creator of Docker, and that's DockerCon last year in San Francisco, the keynote. He asked this question, and that will be another social experiment, because I already know the answer. He asked: who uses Docker and nothing else? And by that he actually means: okay, Docker, you put something inside; you have other technologies, maybe Java, maybe JavaScript, maybe .NET or whatever. Who has a company, or at least a team, that only does Docker, nothing but Docker? No hands. We have 100 people here; it was a couple of thousand there. Same result. No one just does Docker.

And that's because Docker, sorry about that, Docker is a container technology, right? Docker is a container technology, and this analogy works perfectly: no one ships empty containers. That's a pretty stupid business, shipping empty containers. So no one ships empty containers, and no one does Docker and nothing else. And that's why I think that our solutions, which are universal, are at an advantage.

But enough about that; let's talk about Docker container life cycles. How do we build a successful software life cycle, a CI life cycle, a CD life cycle? Docker is relatively new, like three years old. So when we try to build a life cycle, we build on our own experience. We ask: do we have a pattern already? And if we do have a pattern, how does it need to be changed?
And of course we do have a pattern, because CI/CD pipelines are something that we have been doing for years, right? I'm sure for 95% of you, 99% of you, maybe 100% of you, a CI/CD pipeline is not a new thing. And just to make sure that we are on the same page before we start to change this pattern, let's talk a little bit about it.

So this is what I call the promotion pyramid. It's a really strange diagram, because I didn't have enough dimensions to express what I wanted to say. The idea is that the closer you get to production, the fewer artifacts you have to check, in terms of the number of builds, but the longer each build takes. So you have your unit tests, which are extremely fast, but you test everything. Then the survivors of the unit tests go to the next stage, the development integration tests: there are fewer artifacts to check, but the tests are longer. And it goes that way all the way to production. Maybe you have some manual QA at some stage, which will be very long, but by then you will only have a few artifacts to check, et cetera, et cetera. So that's not a new thing.

Another view of the same process is the view of the quality gates. That's from the great book Agile ALM. It's not a new book; I think it's been around for a decade now. But as I mentioned, these are not new things. What we have here are the quality gates. The promotion itself is moving the artifacts from one area, between those quality gates, to another, while at the quality gates the tests take place. Right? So again, I hope this is not new to most of you, or to any of you. And that's a good thing; that's exactly how it should be.

Now let's talk about Docker. You could take this and apply it to Docker, but most people won't do that. And they won't do that because of this: the docker build command, the Docker build concept, is extremely powerful and very easy. And when you have something very powerful and very easy, you want to use it. You want to use this great tool to build your promotion pipeline, your continuous integration and delivery pipeline. So usually you will end up with something like this: instead of promoting binaries, or artifacts, between the quality gates, you promote the Dockerfile between the quality gates. That basically means you just tag it, or branch it, in Git or whatever source control you use, and you say: okay, now this Dockerfile should be built in QA, and then in staging, and then in production. All you need to do is check out the right Dockerfile for the right environment and run the docker build command, and eventually you will get the same image in the right environment, ready to run.

That sounds like a good idea, but fast and cheap builds are not always the way to go. (That's Shanghai, China, by the way. Yeah. No, it was empty.) The problem with docker build, and with trying to recreate the same image in every environment, is this. When I worked on this presentation, I said: okay, I'm going to write the most ridiculous and unstable Dockerfile, which of course no one ever writes, just for the sake of the argument. And then I went to the internet and understood that I don't need to invent anything; most of the Dockerfiles that you will encounter look like that. This is a great Dockerfile. It has two stable lines: run number five, which creates a directory, and run number nine, which runs the app.
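The Dockerfile from that slide isn't reproduced in this transcript, but a sketch of the typical shape it describes might look like this (the packages and file names are made up for illustration):

    FROM ubuntu:latest                        # unstable: whatever "latest" is today
    RUN apt-get update && apt-get -y upgrade  # unstable: the latest of everything
    RUN apt-get install -y python python-pip  # unstable: unpinned system packages
    RUN mkdir -p /opt/app                     # stable: create a directory
    ADD . /opt/app
    WORKDIR /opt/app
    RUN pip install -r requirements.txt       # unstable: latest of every app dependency
    EXPOSE 8080
    CMD ["python", "app.py"]                  # stable: run the app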
All the rest are unstable calls to some kind of dependency managers, or package managers, bringing in the latest version of stuff. And you can understand that the chances that running this Dockerfile over the course of my CI pipeline, which can take hours, days, sometimes weeks, will give me the same image in my production environment as in my development environment are very, very slim. And now, remember those couple of hands taking Docker to production, I think five, and the more of you, about half, who actually played with Docker? This is where you should say: okay, we can fix it, right? We can fix it. So let's try to fix it.

This is how we fix the base image: we nail down the version. Did we fix it? Looks like it, right? Who said no? Good. Why? Yeah, no, we will get to the other lines. Did we fix the base image? No? Why not? Hey, oh, thank you very much; here's someone that actually knows Docker. So, that looks like a fix, at least to me. We know what final versions, or release versions, mean. They usually mean a stable binary, right? An immutable binary. Not with operating system versions, though. And here's why.

It's a dilemma that Canonical had to face. Ubuntu 14.04: when was it released? 14.04, April 2014, right? More than two years ago. And since then, something happened. What happened since April 2014? Yeah, no, I mean, not a version; something that required the attention of the Canonical guys. Heartbleed, yes, Heartbleed, thank you very much. We all remember Heartbleed. So here's the dilemma. On one side, the system administrator guys hate changes. This is what they do. There is no way they are going to upgrade a good version of an operating system which actually works perfectly, and 14.04 was an extremely good version, for a very obvious reason: it screws up the uptime, right? So we cannot upgrade, because it will screw up our uptime. But important stuff happens, like Heartbleed. So the trade-off is: should we stick with the concept of immutable versions, where 14.04 will always mean the same thing, and then chances are that a lot of people who are still on 14.04 won't get those security updates? Or should we push the security updates under the same version, basically mutating a final and immutable release, but guaranteeing that important security patches will be applied even for the existing version? And of course, the right decision was to apply those security updates to the existing version. So the 14.04 that we download today is not the 14.04 that was distributed in April 2014. And that can happen any day. And that means that nailing down the version of the base image like that won't actually solve the problem.

There is one fix that is even better than that. Who can tell me, what can we do? How can we fix it? That's right. Are we stuck? Not so much. Image ID, thank you very much. It's called a fingerprint, or a SHA-256 digest, this guy. Now, this guy is great. This guy basically means that now we refer to a checksum of the image; we will always use the exact same version of Ubuntu. By the way, which version is that? Yeah, I think I just sat on the keyboard, and that's what it ended up with.

Okay, what about those other guys? How do we fix those? Come on, guys, all of you come from ops; you should know how to fix those. Versions, right? Versions, of course. It's apt-get. With apt-get we can nail down the versions all around. And it actually will be just fine. Right? Sounds good.
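Putting those two fixes together, a pinned version of such a Dockerfile might look roughly like this (the digest and the package versions below are made up for illustration):

    # Pin the base image by its digest (a checksum), not by a tag name
    FROM ubuntu@sha256:9c31f6f2d1f18f9799ec1f2b2a4c4b6a8e4f3e2d1c0b9a8f7e6d5c4b3a291807
    # Pin every OS package to an exact version, so apt-get becomes repeatable
    RUN apt-get update && apt-get install -y \
        curl=7.35.0-1ubuntu2 \
        openjdk-7-jdk=7u111-2.6.7-0ubuntu0.14.04.3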
How many of you come from a Java background? Great. Okay, almost none of you. Good, good. How about when I run this? What does that mean? Can I rely on it being immutable when I run it twice? What will it do? Most of you have no idea what this even means. And that's good, right? You don't know. Yeah, there is no way you can run docker build multiple times over any period of time and end up with the same binary. You run docker build in your development environment after you've crafted your perfect Dockerfile, and you end up with a container. In other environments, chances are you will end up with another container. And that's one of the reasons why you don't trust Docker: because you know that you have no idea what is going to run on your production server. I don't know what's more scary than that. Right?

So that's of course one of the reasons. And we can solve it with a pattern. This pattern is of course from Martin Fowler, and here is a nice diagram from the Netflix blog. It's called the immutable server pattern. How many of you have heard about the immutable server pattern? Oh, not that many. Okay. So the immutable server pattern is a very simple concept: when you want to make changes to your production servers, don't. Instead, spin up a new server which is configured differently from the old one. Right? And that frees you from managing state. And managing state is horrible. You know, elections are coming; managing states is horrible. Yeah. And that's true for servers as well. So when you need a new one, just kill the old one and provision a new one. So we apply this immutable server pattern. Give it some reading; it's good stuff.

And then we have the pipeline as well. What I'm actually saying is: when you build a promotion pipeline, do it with immutable and stable binaries, instead of trying to manage state and trying to recreate in the production environment what the developer wanted to do in the development environment. Take what they built, and take it all the way. Right? So here you do a docker build once, you get your image, and then you promote this image all the way to production.

Now, I keep talking about those quality gates all the time, and you're like: okay, what's wrong with this guy? What's his obsession with gates? Well, I have a reason. A good one. QA shouldn't test development images. Non-tested images shouldn't be staged. And of course non-staged, non-tested, or development images shouldn't end up in production. And the only way you can guarantee that is by building strong quality gates, where the environments are completely isolated from each other.

Now, the problem with that is Docker. Remember that? Those were good times. We have something like that in Docker as well, and it's called the Docker tag. The Docker tag is actually kind of the name of the image, right? It has the name, right here. But it also has a prefix which, for some strange reason, describes where this image came from, in terms of the physical registry host. Well, it's quite obvious why it's done this way: you want a very simple way to understand, when you look at an Ubuntu image, whether it's the canonical Ubuntu image or some customized Ubuntu image of yours, and this registry host will clearly mark where it comes from, whether you touched it or not. But still, this brings us more problems than benefits, because now we cannot have more than one registry per host.
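To make that concrete, here is roughly how the full image name breaks down (the host, port, and names are hypothetical, just for illustration):

    # A Docker "tag" is really one string that carries the registry host and
    # port, the image name, and the version tag:
    #
    #   docker-dev.mycompany.com : 5001 / myapp : 1.0
    #   (registry host)            (port) (name)  (version tag)
    #
    # Retagging is how an image gets pointed at a particular registry host:
    docker tag myapp:1.0 docker-dev.mycompany.com:5001/myapp:1.0
    docker push docker-dev.mycompany.com:5001/myapp:1.0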
And that's the complicated part, because when we talk about those quality gates, the best quality gates are separate registries under the same host, and there is no simple way to actually do that. Here is an example: you can use Artifactory, or any other tool, to set up multiple registries. Here are multiple registries inside one repository manager: docker-dev, docker-qa, docker-staging, docker-prod, with, of course, the same image going through all of them. And the question is how we can support that, because in the Docker tag we can only give a host and a port. Where can we express that we have multiple registries on a single host? And that's kind of a problem. "We can work around it", that's of course always the natural reaction. And we can. Anyone know how?

Virtual hosts and virtual ports are what can actually help us. We can use a virtual host, or a virtual port, to map this URL, which is the URL that Docker uses, to the actual URL that our repository manager accepts. And that's how we do it. So here's an example for NGINX; Apache or HAProxy look pretty much the same. What we do here is: every time there is a request on port 5001, take the path it goes to and append it to whatever host, whatever registry, we actually refer to. So this solves it.

Another option is using virtual hosts. Although, with virtual hosts, there is another limitation of Docker: if you want to use a username and password, you have to use HTTPS. That sounds like a great idea when it's actually not, because most of your registries will be behind your firewall and you couldn't care less about HTTPS, but you have to do it, because otherwise Docker won't work. And if you use HTTPS, that means you need a certificate, and if you use virtual hosts, certificates won't work unless you have a wildcard certificate, which probably costs more than your startup. So, anyhow: Docker.

Okay, so you set that up, and now the next question is: okay, now how do I promote? I want to take my images and move them from development to QA and then to production. Should I download the Docker image, which is huge, and retag it, just to upload it to another registry? That's kind of silly, or scary, depends. So what you can do, again using a tool like Artifactory, and it's not the only one, is use the REST API. If you have multiple registries inside this tool, you can actually promote the artifacts within it using the REST API. And that solves the need to download, retag and upload for that purpose as well.

Now let's talk a little bit, we have five minutes, about how we actually build our image. It turns out that if you google for "anatomy of the container" you actually get the anatomy of a real container. It's like, "rear door". Seriously. For our Docker container, we will have three layers. We will have the base image, then we will have some framework, in this case it's Java, but it can be whatever it needs to be, and then we'll have an app that we will deploy inside our Java web server container, Tomcat. The framework build is the build that contains a verified base image plus the system dependencies, like Java and Tomcat, or whatever you need, Ruby, Python, whatever framework you need to build your app. A JDK in this example, and Tomcat in this example as well.
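The framework Dockerfile itself isn't in the transcript, but a sketch for this Java and Tomcat example, using the same made-up digest and version pins as before, could look like this:

    # Framework image: a verified, pinned base plus the system dependencies,
    # and nothing application-specific
    FROM ubuntu@sha256:9c31f6f2d1f18f9799ec1f2b2a4c4b6a8e4f3e2d1c0b9a8f7e6d5c4b3a291807
    MAINTAINER you@yourcompany.com
    RUN apt-get update && apt-get install -y \
        openjdk-7-jdk=7u111-2.6.7-0ubuntu0.14.04.3 \
        tomcat7=7.0.52-1ubuntu0.6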
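And one layer up, the application Dockerfile that the next part of the talk walks through can be as short as this sketch (the registry host and file names are made up):

    # Build FROM our own framework image, in our own registry; this "latest"
    # is trusted because it came out of our verified framework pipeline
    FROM docker-prod.mycompany.com/my-framework:latest
    # Add the single application artifact produced by the application build
    ADD myapp.war /var/lib/tomcat7/webapps/myapp.war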
The most important part: even if you don't need any external dependencies, for example you just rely on the Ubuntu base image, and it already comes with, I don't know, Python or Ruby, whatever it already comes with, and you're good, you don't need any additional dependencies, you still want to have your own framework image, your own base image. And that's because you own it. It can be as minimal as that, if you don't need anything on top of this Ubuntu 14.04; you just add a maintainer. The reason is: you own it. You control when it gets updated, when it gets changed. Remember how this was not good? This is why: when you have your own, you have a really immutable tag, and you decide when you update it.

And then, on top of that, you have your application build. In your application build, you use your framework as your base, you run a Java build, or whatever build you need to run, npm or Ruby or Python or whatever, and then you add one file, your application file, to this base image, going from the framework image to the application image. All it takes is adding one file, and then you're done.

So here's an example of the application Dockerfile. First, you go to your own registry and you use your own framework. That's the start. Then you add one file, in this case a war file, but it can be whatever makes sense for you, and you put it in the right place. And then the question is where it comes from, how it comes from the release. And the answer is: it depends on your pipeline. In my pipeline, I can test my Java code without Docker, so I will have a whole separate pipeline for my Java, and only then will I trigger an additional pipeline for Docker. In a lot of cases you will do it hand in hand: your application pipeline and your Docker pipeline move together. And then of course you will start with development, and on the release... but then you're like: wait, what's that? Why do you use an unstable version? Didn't we just make fun of unstable versions, don't rely on latest and everything, and here you actually rely on latest? And I will say: yes, this is my latest. My latest is fine, because it went through the whole pipeline and I can trust it. Whatever build produced this war file is verified and good, and that's why I can rely on my latest. But also here: okay, now this is ridiculous, didn't we talk about how bad this was? Again, for the same reason: this is not the Ubuntu latest, or any other latest from the internet. This comes from my repository, and it's the latest version of my framework. It was tested, it was verified, and it's fine to rely on this latest.

And now here is another pattern, which maybe none of you have heard of, and I didn't invent it: it's called sandwich testing. Anyone heard about sandwich testing?
I need one hand to verify that I didn't invent it. Not here? Trust me, google it. Not mine. Sandwich testing is a flavor of integration testing that does top-down and bottom-up testing at the same time. And this is a little bit of... oh, here you go, not mine, I have proof.

And this is kind of what we do. We have two pipelines: one is the framework pipeline, which builds our base image, and the other is the application pipeline, in which we build our application. And they effectively test each other. We run our framework against the development environment pipeline, right? So every time we have an application build, we build it against the latest production framework. And when we need to test our framework pipeline, we take the latest production application, the war file in my example, and we try to run it with our new version of Ubuntu, let's say, and our new version of Java, in the very early tests of the framework build. Does it make sense? That's sandwich testing.

Now, there is of course a deadlock that in theory might happen, when the framework is not backwards compatible and the app is not backwards compatible and they break at the same time. Let's say I changed something that doesn't work on the latest Java, because it's not backwards compatible, but I also use features that do not exist in the previous Java. Can that happen? No, that can't happen, because of how we establish our triggers. And the triggers are very simple. Every time we have changes in code, we build the latest application, and it is checked against the latest framework to verify that it's good. If it's not good, we can roll it back, we can make changes. Every time we change something in the framework Dockerfile, we also test it with the last known good application file. And again, if we upgraded the Java in our framework build and suddenly it doesn't work with our production code, we can roll it back. So what we don't do is make changes in both of them at the same time and only check them together. We change one, test it against the other, and then change the other and test it against the first. And this is how we guarantee that this deadlock will never happen.

Okay. And we do all that for one purpose: for you to move faster from development to production, for you to release faster. Faster is about faster releases; automation, of course, brings you to market earlier, and that's exactly what we want. Now, if you liked it: devopsdays Kansas City, I hope that's the correct hashtag, is it? Good. I'm @jbaruch on Twitter, and JFrog is @jfrog. And if you have negative feedback, it's also very important to me. Baruch. I have two minutes for questions.