Thanks for coming. Welcome to Docker and Buildpacks: one app, two options for deploying to Cloud Foundry. First, let's introduce ourselves. I'm James Myers. I'm a software engineer at Pivotal. I've been on the Diego team for about two years. I work out of San Francisco, and I've been on Cloud Foundry for about three years in total. And I'm Jen Spinney. I work for Hewlett Packard Enterprise as a software engineer. I'm also on the Diego team. I work out of Seattle, Washington. I've been on the Diego team for about a year and a half.

Cool. So let's talk about what this talk is about and what we're trying to get out of it. We're going to give you a side-by-side look at deploying one application with a buildpack and with a Dockerfile. That means we're going to look at the user experience, like cf push and all that, as well as what happens in the system under the hood and the consequences of each approach. We're going to examine some of the pros and cons of each deployment strategy, and hopefully give you some insight into when you should use each one.

First we're going to look at buildpacks. Buildpacks are the traditional way of deploying applications to Cloud Foundry. They were popularized by Heroku, and what they really are is just a zip file with three scripts: a detect, a compile, and a release. What a buildpack really does is take a language runtime, some dependencies, and your source code, and produce an executable thing that we like to call a droplet. Essentially, buildpack apps are your source code plus the buildpack, and then they just run on Cloud Foundry.

Next we're going to examine what Docker is. Docker is the popular implementation of containers right now.
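As an aside, the three buildpack scripts just mentioned can be sketched roughly as below, written here as shell functions for readability. This is an illustration, not the real Go buildpack: the file-detection rule, the compile placeholder, and the YAML shape are all simplified assumptions. A real buildpack ships these as executables at bin/detect, bin/compile, and bin/release.

```shell
#!/usr/bin/env bash
# Illustrative sketches of the three buildpack scripts.

# bin/detect: succeed (and print a name) if this buildpack applies
# to the app in the given build directory.
detect() {
  build_dir="$1"
  if ls "$build_dir"/*.go >/dev/null 2>&1; then
    echo "go"
    return 0
  fi
  return 1
}

# bin/compile: combine the language runtime with the app source to
# produce the droplet contents. (Placeholder only; a real compile
# script would fetch the runtime and build the app.)
compile() {
  build_dir="$1"; cache_dir="$2"
  : # e.g. (cd "$build_dir" && go build -o app .)
}

# bin/release: print YAML telling Cloud Foundry how to start the droplet.
release() {
  printf 'default_process_types:\n  web: ./app\n'
}
```

The detect script is what runs against every uploaded app during staging, as described later in the talk.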
So basically a developer will create a container specification in a file of directives known as a Dockerfile, and then they can build and run that container locally or on some distributed system. Deploying Docker apps to Cloud Foundry is actually a relatively new feature that came with the new Diego backend, and we're starting to see some users take advantage of it.

Throughout this talk, we're going to take one sample application and see how it works with both the buildpack and the Docker lifecycle of deployment. The sample application we have here is a super simple Go app. All it does is listen on the port it's been instructed to listen on for incoming HTTP requests. As soon as it gets a request, it echoes back "hello, CF summit." So it's very, very simple. We're using this simple example, but everything we talk about applies to more complicated apps as well.

So let's say we want to deploy this as a buildpack application. Our first step is just to open up our editor and code up that single Go file we just saw. Then we're going to do a cf push. You can accomplish the next two steps with just cf push, but we're breaking them down into a cf push --no-start and then a cf start in order to explain the differences between those two steps. When we start with cf push --no-start, two things happen. First, the application bits, in this case just that main.go file, get uploaded to the Cloud Controller. This is whatever your source code is, not your language runtime and frameworks; it's just the code that you wrote. And an application record is also created for this app.
This application record contains metadata about the app: the environment variables, the desired number of running instances, things like that. Here we're splitting out cf push --no-start and cf start just for illustration purposes, but in the real world you might want to do this as well, in case you want to bind a service after you do the cf push. You want to make sure your app exists in the system, and then you want to bind the service before you actually start the app.

The next step is actually composed of two phases. When you do a cf start, you're actually doing what's called staging and then really starting the app up. As part of staging, we have a component called the Stager. This component is a translation layer between the Cloud Controller and Diego. It's responsible for creating a task that tells Diego: I want you to take these bits, combine them with a buildpack, and create a droplet out of it. So Diego gets this request to go make a droplet. In this case, we haven't explicitly specified the Go buildpack, so when this task is executed, all the buildpacks the system knows about are brought down, and each of them runs its detect script to answer: do I apply to this application? In our example, we have a .go file, so the Go buildpack says, hey, I can handle this one. The Go buildpack is then combined with your app bits to create an executable that can be run anywhere in the CF system. The result is the droplet, which gets uploaded to the Cloud Controller. From there, the next step is that the Cloud Controller wants to actually start this app, so it's going to talk to another translation layer we call nsync.
nsync handles the communication between the Cloud Controller and Diego for actually starting up apps. So the Cloud Controller says to nsync: please go tell Diego to start running this. Diego, being the backend, knows about tasks and long-running processes, and this is a long-running process we want to create here. nsync looks at the metadata on the app and puts together a description of the long-running process, or LRP, that Diego needs to go run. Diego then takes that LRP and places it in containers on various cells, however many instances you asked for, where it actually runs the app. Once it starts running, Diego makes sure it stays running, and if it crashes, Diego will restart it on another cell, for example.

Cool. So we just talked about deploying with a buildpack, but what about deploying as a Docker application? Once again, the first step is creating the app itself in your favorite editor. After that, you need to write what's called a Dockerfile, the container specification. An example Dockerfile is this one here. There are various directives, each of which means something. The first one is the FROM directive. I like to think of this as the rootfs, or root file system. In every example previously, when we talked about containers, the containers were made with a base image specified in the deployment. Here in your Dockerfile, you actually get to change that. You can specify what you want: Ubuntu Trusty, or in this case a Go image that already has Go installed. The next couple of blocks are what I like to think of as the staging process for Docker. Here you have RUN directives and ADD directives, and a bunch of other ones that Docker exposes.
What these do is basically set up your dependencies and compile your application into a runnable app. We also have metadata directives like EXPOSE and USER. You can use these to tell the container engine what you're going to do with your application. For example, I'm going to expose port 8080, or I might say that I want to run as the user vcap. There are a couple of things you can do there. Lastly, there's the CMD directive. This specifies the process you're going to run inside your container. This is what the buildpack would figure out for you; here you're saying it yourself: I want to run my sample app.

Now that we have this Dockerfile, we're going to actually build the Docker image. The two steps I'm sure most of you are familiar with are, first of all, docker build. This step takes the Dockerfile and, for each directive, creates a layer. Once it finishes running, you'll have a local container image that you can run, experiment with, and test out. For the purposes of Cloud Foundry, you're then going to want to run a docker push. With docker push, you upload your container image to a Docker registry that's publicly accessible by the Cloud Foundry deployment.

Now that our Docker image is uploaded to a registry, we can start using Cloud Foundry to run it. Once again, we're going to split cf push, which is normally one command, into two commands to illustrate some of the differences. In this case, I'm going to start with a cf push with the -o flag. The -o flag is what you use to specify the location of the Docker image. In this case, all this does is create the record in the Cloud Controller.
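Pulling those directives together, a Dockerfile along the lines described might look like this. The image tag, paths, and binary name here are illustrative assumptions, not the exact file from the talk:

```dockerfile
# FROM: the base image, i.e. the root file system. Here, a Go image
# that already has the Go toolchain installed.
FROM golang:1.6

# "Staging": add the source and compile it into a runnable binary.
ADD . /app
RUN cd /app && go build -o sample-app .

# Metadata: the port the app listens on and the user it runs as.
EXPOSE 8080
USER vcap

# CMD: the process to run inside the container.
CMD ["/app/sample-app"]
```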
There's actually no reason to upload the application bits, as they're stored in the Docker registry already. Then you're going to run cf start on your application. It's actually surprising, but there is a staging process for Docker applications. The first thing that happens is that the metadata that was uploaded to the Cloud Controller, once again, goes through the translation layer, the Stager, so that it can create a task on Diego. Once this task begins running, the Diego cell communicates with the Docker registry to pull down the metadata you exposed in your Docker image, so that we know how to run your application: the port, the start command, and all that. Once we've done that, we compile this into what we call a Docker droplet. A Docker droplet is not actually an executable thing; it's more of a bag of metadata that we can use to execute your Docker image. Once we have that, we flow it back to the Cloud Controller and save it in the database.

The next step in the start is actually running the application. Once again, we talk to the translation layer called nsync, which creates what's known as a long-running process. This LRP, instead of specifying the rootfs as its base image, specifies the Docker image itself. When this starts running on Diego, Diego communicates with the Docker registry, fetches that Docker image, pulls it down, and runs the start command inside it. Once that's happened, we do the same stuff we do with a buildpack app: we make sure your app stays running, and that's pretty much it.

Some special things to note for Docker and Garden.
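Before that, the Docker flow just described, end to end, might look roughly like the following command sequence. The registry, image, and app names are made up for illustration:

```shell
# Build the image locally from the Dockerfile; each directive becomes a layer.
docker build -t my-registry.example.com/me/sample-app .

# Push the image to a registry the Cloud Foundry deployment can reach.
docker push my-registry.example.com/me/sample-app

# Create the app record in the Cloud Controller; -o points at the image,
# so no application bits are uploaded.
cf push sample-app -o my-registry.example.com/me/sample-app --no-start

# Stage (fetch image metadata into a "Docker droplet"), then run the LRP.
cf start sample-app
```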
Garden is Cloud Foundry's container engine, the system we use to run containers. But Garden can also run Docker containers. What this means is that Garden-Linux and Garden-runC can download the Docker image, replace the root file system with it, and create a container using it as the new rootfs. They can then launch processes inside this container as well as expose ports. One cool thing worth noting is that Garden-runC now runs Open Container Initiative compatible images, so this process is becoming much more standard.

Cool. Now that we've talked about how apps get deployed with both buildpacks and Docker, let's go over some scenarios that developers and operators are going to run into on a daily basis. One of the first ones is a security vulnerability. Let's say you get a CVE in OpenSSL. If a buildpack app gets a security vulnerability, what matters is whether it affects the buildpack or the rootfs. If it affects the rootfs, the developer does not really have to do anything; they just sit around. The operator will notice that a new rootfs release has been made which patches the vulnerability, and all they have to do is upload this rootfs release and redeploy. On the deploy, the cells are redeployed and pick up the new rootfs, so any container created with the new rootfs will be secure and ready to go. After a finished deploy, all buildpack apps will actually be secure, if it's a rootfs vulnerability. The other case is when a buildpack is vulnerable. It's very similar here: the operator will notice that the buildpack is insecure, and they'll have to upload a new release and deploy, or just upload the new buildpack with the CF CLI.
Once they do this, you'll probably have to restage your application to get the compiled bits changed, but then you'll be ready to go. If you went to the V3 talk, pretty soon this might be a zero-downtime deploy, so no downtime for your application. It's really important to point out that, security-wise, buildpacks are great for operators, and they define these user roles well. The operator can patch a CVE without knowing anything about the container or the application running inside of it, and the developer pretty much does not need to be involved.

The same scenario for Docker: the developer would need to notice that the CVE happened, so that's one thing that's different. The other thing is that they're going to have to rebuild their Docker container. They'll need to either rerun docker build so that it pulls from an updated base image, or actually change their Dockerfile itself so that it's no longer vulnerable. Once they do this, they need to push it to the Docker registry again. And then, just to be safe, they should restage their application so that any metadata that might have changed with this Docker image gets pulled back down and saved in the Cloud Controller. Sometimes cf restart might work, but it's always a good idea to just do a cf restage.

Another scenario we want to talk about, in order to compare buildpacks and Docker, is doing local development on your app. In this case, we had a Go HTTP server. If I want to do some quick iteration locally and see whether a little change brings down my app, stuff like that, we want to look at how that works with buildpacks versus Docker.
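Before moving on: the remediation steps for the two CVE scenarios above, expressed as commands, might look roughly like this. The buildpack, app, and image names are illustrative:

```shell
# Buildpack CVE, operator side: upload the patched buildpack,
# then restage so apps pick up newly compiled bits.
cf update-buildpack go_buildpack -p go-buildpack-patched.zip
cf restage my-app

# Docker CVE, developer side: rebuild against a freshly pulled base
# image, push it, then restage so the Cloud Controller re-reads the
# image metadata.
docker build --pull -t my-registry.example.com/me/sample-app .
docker push my-registry.example.com/me/sample-app
cf restage sample-app
```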
With buildpacks, if I'm developing on a MacBook, for example, my local environment isn't going to look exactly the same as the environment where the app is eventually going to run, which will probably be a Linux container up in some public or private cloud. So as I'm doing my local development, small things might be different. I might make assumptions about certain libraries being available that aren't available in the other environment. Or I might have a different version of Go on my system: maybe a really outdated version, and when my app is actually combined with the buildpack, the buildpack uses a more recent version of Go, and I don't realize that. And I have some things in my code that only work because I'm relying on older features of Go, stuff like that. So I'm developing in an environment that isn't exactly the same as how the app will run up in the cloud.

There are a couple of ways to deal with this. Some of you might be familiar with BOSH Lite. This is like a full BOSH deployment, except it sits in one VM on your local machine. You could deploy a full Cloud Foundry and Diego to a VM on your machine and then push to that Cloud Foundry. This gives you a lot of control, and you can see the logs through the entire Cloud Foundry system, but it's quite heavy-handed. When you're just trying to see whether your app works or not, this is a lot of effort to set up, and you're probably not actually saving any time by doing it this way. So we really only recommend BOSH Lite for when you're developing Cloud Foundry itself, or you really want to see the detailed logs. More commonly, we suggest a blue-green style of deployment. The idea being that you make a small change.
You push your app, but you push it to a special space, like a testing or staging space. Your real-world traffic isn't routed to this version of the app, but you know where it is, and you can play around with it and test whether it works. The nice thing is that as soon as you validate that your experimental bits work the way you want, it's trivial to go update the route and have all the real-world traffic routed to the new version of the app. The downside is that every time you do a small update, the push takes some minutes; it might take several minutes just to upload your app bits and so on. So it's a little bit slow if you want to make small tweaks and see, oh, if I just change this one thing, does it bring down the app? Or if you see that your app is crashing and you're trying to figure out why, it can take a while if you have to do a lot of iterations of this.

With Docker, on the other hand, it's a bit easier to do local development. If you're running on Linux, you can use Docker directly. If you're on another OS, there are systems out there that let you run Docker in a virtual machine. This allows you to really quickly test your app locally and make sure it's running exactly the way you want, and then when you actually do the push, you know it's going to run in the exact same environment you tested locally. It just means you can do much quicker local, iteration-based development. This is nice, especially when you're doing more edge-casey kinds of things, when you're troubleshooting because things didn't go super well the first time, or when you're using more advanced features that the buildpacks don't provide for you.

Another scenario we want to talk about is porting the app to different providers, for example.
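Circling back to the blue-green flow for a moment, a minimal version of it with the CF CLI might look like this. The app, domain, and host names are made up for illustration:

```shell
# Push the new version alongside the old one, on a temporary route.
cf push my-app-green -n my-app-staging

# Poke at the staging route until you're happy, then move the real
# route over to the new version.
cf map-route my-app-green example.com -n my-app
cf unmap-route my-app-blue example.com -n my-app

# Finally, retire the old version.
cf delete my-app-blue -f
```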
The great thing about open source is that if you're unhappy with whatever vendor you're using, you can take your app to another vendor and it'll all just work. For both buildpacks and Docker, this is pretty easy to do. For the buildpack scenario: buildpacks originally came from Heroku, so you can run things on Heroku if you want. You can also run on one of the many CF vendors out there. We have the CF certified program, with a number of different vendors that all have their own Cloud Foundry, so you can just move to one of those vendors. There's nothing about your app that's specific to buildpacks. With the buildpack style, your app doesn't have to know it's running inside Cloud Foundry at all. You might be looking for specific environment variables, but other than that, it's not that different from writing an app that runs on a bare-metal machine, so you could also do that if you wanted to. With Docker, there are likewise numerous providers out there that can run Docker images, so it's pretty similar. In both cases, you have a big ecosystem of different platforms you can port your app to.

In summary, we want to go through the pros and cons of buildpacks and Docker. In general, with buildpacks, the idea is that it just works, and it's supposed to work for the 90% use case, not for the weird edge cases where you want to use a super-beta version of Go, for example. For the majority of uses, we expect buildpacks to be a lot faster and easier to develop with. You just worry about your source, you just push your source, and it just works. You get automatic and constant security updates, so as a developer you don't have to worry about that. All you need to do is focus on writing your own code.
The cons: when something goes wrong, troubleshooting locally is difficult, local development takes a bit longer, and there's a bit of a black box with the buildpack. You don't get the output; you don't actually get to play around with the droplet that results from staging, so you don't get to see what it really is. There's a little bit of "we'll handle this for you" going on, so you have a little less control and a little less insight into what's actually happening. With Docker, on the other hand, the basic idea is that you have more control, which means more work, but you can tweak whatever you need to tweak. You can run that super-new version of Go that's experimental and not released yet. You can also do your local development much more quickly. But on the flip side, it means more work for you if, for example, there's a security update. In general, you're just more in control. So it's the typical software engineering trade-off: more control means more work for you.

The sample app we've shown and talked about throughout this talk is available on GitHub. We also have all of our Cloud Foundry repositories on GitHub, under github.com/cloudfoundry. You can talk to us on the Diego team on our Slack channel: it's cloudfoundry.slack.com, and the channel is #diego. With that, are there any questions?

For the purposes of this talk, we're trying to keep it mainly open-source related. Okay. I actually haven't had any personal experience with PCF Dev, but I imagine that, since it's similar to a BOSH Lite environment, it would probably work as well. Yes. I would not imagine so; they're still running on the same execution agents, which are the cells. Yes, so you could do that, but if you, for example, develop it first with Docker and then want to make it run as a buildpack app, there's probably no point in doing it at that point.
Once you've got it working as Docker, I would just use it as Docker, because when you run as Docker, you're using your own custom rootfs that you're bringing; you're saying, I want this to be my base file system, and that's going to have slight differences from how the buildpack-style apps are run. But in a given organization, you might find there are certain apps that are more tailored towards buildpacks and certain apps that are more tailored towards Docker. The apps that are more basic, not touching a bunch of edge cases and not requiring a ton of customization, would generally be better with buildpacks, and Docker is for when you need finer-grained control. But for a given application, I don't think it makes much sense to mix the two strategies. One trade-off where you might maybe want to mix them is that with buildpacks you get security updates automatically, so if you prefer Docker for development but want the security updates for free, using a buildpack might work. The other thing is that in some previous talks at this summit, there have been ideas about combining the two flows into something more unified, so there might be something along those lines in the future, but I haven't heard much.

How does it work when it comes to third-party agents, for example an agent that gets downloaded and used by a buildpack? How does that work in terms of Docker? Do we have to include the agent as part of the Docker container? Yes, you would be responsible for doing that inside your Docker image; you'd specify it in your Dockerfile, most likely. So the question was about agents for logging and things like that, agents that come along for free when you use the buildpack style. Does that happen also with Docker? And the answer is no: when you have a Docker app, you have to take care of all that stuff yourself.
If you want specific agents that deal with metrics and logging or whatever specific thing you need, you have to include those yourself, because those are properties of the buildpacks. Could you say that one more time? That's a good question for the container networking team, but no, I don't think they support that quite yet; it's on the horizon. Sorry, the question was whether Garden supports container linking over the network. There's a container networking team that's responsible for that right now, so it's not finished, but I think it's coming soon. Yeah, so that's one of the cool things about the FROM directive: you can actually specify a base image. If you're worried about skew across Docker images, you can always have your organization create a base image that you'd like people to build off of. That's always a way to do that. Sounds good.