This is a talk about decoupling the Cloud Foundry buildpacks from the Cloud Foundry application runtime. I'm going to talk about CF Local, and I'm also going to talk about the buildpacks model in general: why we like buildpacks and think they're awesome. I'm a software engineer at Pivotal, and I'm currently the project lead for CF buildpacks, CF Dev, and CF Local.

I'm required by the city of Boston to read this fire exit announcement, so very quickly: please note the locations of the surrounding emergency exits. In the event of a fire alarm or emergency, please calmly exit the public concourse area; emergency exit stairwells leading to the outside of this facility are located along the public concourse. For your safety in an emergency, please follow the directions of the public safety staff. Thank you.

Okay, so: why should I use buildpacks, and why are buildpacks important? A big part of this question is: why should I use buildpacks over Dockerfiles, which are really popular right now? Dockerfiles have a lot of purported benefits. They're transparent: it's a really simple model, you can read each line of a Dockerfile and understand what's happening, and people like that. And they provide a lot of control.
In a Dockerfile you can do whatever you want: you can set up your application correctly, or you can set it up to be really insecure and have a lot of production stability problems. Dockerfiles also build images that are really convenient for shipping to production, because they're made of immutable layers that don't change, so you know you're deploying the same thing you tested. Those are real benefits.

But there are a lot of caveats to building applications with Dockerfiles. When you build an app using a Dockerfile, you tend to lock external dependencies beneath your application code, so when you need to patch those dependencies it's really expensive: you have to rebuild all the layers on top of them. That model also makes it really difficult to enforce structure, or opinions about how applications should be built, because you can do whatever you want. There's only a loose contract between the base layers in a Dockerfile and the application layers, so you don't have to configure your application in any particular way. A lot of developers like this, because it means they can configure their app however they want, and it's straightforward. But in an enterprise setting especially, this is difficult for security and for configuring apps consistently for production stability. It makes it hard for an operator to control the contents of the applications that are shipped as images, and it makes it really difficult for enterprise operators to audit Dockerfiles, because there's no particular structure to them: you could have insecure dependencies pretty much anywhere.

So let's examine one example. I'm going to talk about one CVE in OpenSSL from 2016. It was a high-severity CVE: a memory leak in OpenSSL that could result in a denial-of-service attack if your application is fed malicious ciphertext. Say you have a Node.js app that uses OpenSSL, and the app is built using Docker. Usually your app would be built from three Dockerfiles. One Dockerfile might come from Canonical or another operating system vendor, containing Ubuntu packages (or CentOS packages, or whatever), including OpenSSL. On top of that you might have a Node.js Dockerfile that's based on that image and adds Node.js. And on top of that you'd have your application Dockerfile, built from the Node.js layer, containing your application code; in the case of Node, it would install your node modules.

So when that high CVE hits, you have the vulnerable OpenSSL in the very bottom layer, locked beneath all those other layers. Updating it requires rebuilding all the layers on top of it. So if you have 500 Node.js apps, then on September 26th, 2016 you're in a little bit of a pickle. If you don't manage your Dockerfiles well, your developers will all have started their Dockerfiles from different base images, and you'll have to hope that they rebuild all their applications and redeploy them in some reasonable amount of time in order to patch this high CVE quickly. Realistically, that's not gonna happen.
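To make that rebuild cascade concrete, here's a small Python sketch of content-addressed layer chains, where each layer's identity depends on everything beneath it. This is my own toy model (the layer names and version strings are made up), not Docker's actual implementation, but the mechanics are analogous to Docker's chained layer IDs:

```python
import hashlib

def layer_id(parent_id: str, content: str) -> str:
    """A layer's identity depends on its parent's identity plus its own
    content, so patching any layer changes the identity of every layer
    built on top of it."""
    return hashlib.sha256((parent_id + content).encode()).hexdigest()

def build_image(layers: list[str]) -> list[str]:
    """Return the chain of layer IDs for a stack of layer contents."""
    ids, parent = [], ""
    for content in layers:
        parent = layer_id(parent, content)
        ids.append(parent)
    return ids

# Hypothetical three-Dockerfile stack for one Node.js app:
before = build_image(["ubuntu+openssl-1.0.2h", "node-6.x", "app+node_modules"])

# Patch OpenSSL in the bottom layer:
after = build_image(["ubuntu+openssl-1.0.2i", "node-6.x", "app+node_modules"])

# The Node.js and app layer *contents* are unchanged, but every layer
# above the patched one gets a new identity, so every app image must be
# rebuilt -- times 500 apps.
changed = sum(b != a for b, a in zip(before, after))
print(changed)  # -> 3: all three layers changed identity
```

Because identity is derived from the whole parent chain, patching the bottom layer invalidates every image built on it, which is exactly why all 500 apps need rebuilding.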
If their base images aren't maintained, you may never patch the CVE for every app in your organization. If you do Dockerfiles better, you'd use a corporate base image, potentially with a corporate version of Node, that everybody in your organization uses. Then to patch the CVE you just have to rebuild all of the applications, retest them, and redeploy them. That's definitely better than the previous scenario, but it's still not great: you'd have to hope that the pipelines for all of these applications are working, and it could still take a week or more to patch everything.

Let's look at the Cloud Foundry model for applications, though. In Cloud Foundry, we don't use immutable container layers. We build a droplet, which is a separate artifact from the operating-system-level dependencies; it's not linked to them. We rely on something called ABI compatibility, which allows us to update the operating-system-level packages separately from the droplet. So when that OpenSSL CVE hits, you're in a much better position.

I'll talk a little bit about ABI compatibility. ABI means "application binary interface." It's a guarantee from operating system providers that when they patch their operating-system-level packages, they won't break compatibility at the binary level.
So if you have object code that's linked against object code provided by the operating system, it will still work after those packages are patched. In Cloud Foundry, the operating system layer underneath applications is called cflinuxfs2, and it's currently based on Ubuntu 14.04. This month or next month we'll be releasing cflinuxfs3, which is based on Ubuntu 18.04 LTS; that'll come really soon, hopefully.

I want to talk about how Cloud Foundry manages this patching process. When you do a BOSH deploy of Cloud Foundry, we go cell by cell and start new cells with a new root filesystem, so each application starts with new operating-system-level dependencies while the previous version of the application is still running. This happens live in production: your applications are running, you do the BOSH deploy, new application instances come up on new cells with the new rootfs, and we take the old cells down. We do this cell by cell until your whole platform has been patched. The process usually takes a few hours, which is really fast compared to having lots of pipelines rebuild all of your Docker images every time a CVE comes out.

So why limit this to Cloud Foundry? Is this model something that only Cloud Foundry can provide, or can we do it in other places? The answer is: we can. Let's go back to that Node.js app I talked about before. We could translate it into the OCI image world, where we build a droplet layer and a rootfs layer and link them to each other the way Docker images are built. We'd have ABI compatibility between those layers, but we'd still be stuck rebuilding the droplet on top of each new rootfs, so that alone wouldn't give us many benefits.

Recently, though, some new tools have come out that let you rebase Docker images: you can take a previously generated Docker image and point it at a new base image. An example is a tool from Jason Hall at Google called image-rebase, which works really well for this purpose. You can rebase the droplet onto a new rootfs, and with tools like image-rebase you can do this remotely in a Docker registry, without downloading anything: you take a layer in a Docker registry and point it at another base layer with essentially no data transfer, pretty much instantly. So with 500 Node.js apps, you could update a Docker registry to point all of your droplet layers at a new rootfs layer almost instantly, which is very similar to what we do in Cloud Foundry. You don't have to download anything, and you don't have to do a Docker rebuild of those images. You would still have to redeploy the images, however your platform handles that; I'm not picking a particular container platform here, because anything that uses image layers could do this.

So why stop there? If we use this OCI image model, we can extend the way we currently do things to be even more effective. For instance, we could separate the application layer from the dependency layer, so that Node.js and your application are separate layers and the buildpack manages the contract between them. That contract wouldn't be ABI compatibility, but it would be something like: in the case of Node, if your package.json file doesn't change, we can reuse the dependency layer from the last build, to make this even more efficient. In multi-buildpack mode, we could separate the dependency layers generated by different buildpacks, so that we don't have to run every buildpack to restage an application, just the buildpacks whose dependencies actually need to be updated for your application. We can go even further and use a build cache that's shared between buildpacks, so buildpacks can still take advantage of each other's dependencies, but you don't have to re-download droplet layers to restage; you do literally the minimum work needed to generate the new layers. The goal is to minimize build time and data transfer for these builds.

So, we have a few things in progress that work toward building this new interface. One is CF Local. CF Local is a tool I put together that lets you use buildpacks to generate droplets, and its only dependency is Docker. Usually you use a local Docker daemon to do this, but you can actually use a remote Docker daemon too. CF Local will let you build droplets using buildpacks locally.
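Here's a toy Python sketch of what rebasing amounts to at the manifest level. This is not the real OCI manifest format or the actual image-rebase implementation, and the layer contents are hypothetical, but it shows why the droplet blob itself never needs to be rebuilt or re-transferred:

```python
import hashlib

def digest(content: str) -> str:
    """Content-addressed blob digest, like a registry layer digest."""
    return "sha256:" + hashlib.sha256(content.encode()).hexdigest()

def manifest(layers: list[str]) -> list[str]:
    """An OCI-style manifest is, roughly, an ordered list of layer blob
    digests; the blobs themselves live in the registry."""
    return [digest(c) for c in layers]

def rebase(image: list[str], old_base: list[str], new_base: list[str]) -> list[str]:
    """Swap the base-image digests at the bottom of the manifest for new
    ones, leaving the layers above untouched -- roughly what a rebasing
    tool does against a registry, with no blob data transfer."""
    assert image[: len(old_base)] == old_base, "image must sit on old_base"
    return new_base + image[len(old_base):]

old_rootfs = manifest(["cflinuxfs2 (openssl-1.0.2h)"])  # hypothetical contents
new_rootfs = manifest(["cflinuxfs2 (openssl-1.0.2i)"])
droplet = digest("droplet: app + node_modules")

app_image = old_rootfs + [droplet]
patched = rebase(app_image, old_rootfs, new_rootfs)

# The droplet blob is reused byte-for-byte; only the manifest changed.
print(patched[-1] == droplet)  # -> True
```

Since a rebase only rewrites the manifest, doing it for 500 apps is 500 small metadata updates in the registry, not 500 image rebuilds.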
It'll let you pull droplets from Cloud Foundry and run those locally too, or push droplets to Cloud Foundry, whether they're droplets you've generated or droplets you pulled from it. So it's a great tool for debugging and iterating locally on Cloud Foundry apps, and it's much faster than Cloud Foundry itself, because you don't have to upload anything and you're doing the whole staging process locally. It also has special support for connecting to services running in Cloud Foundry. If you're interested, it's up on GitHub in cloudfoundry-incubator, and it's also still on pivotal.io at pivotal.io/cf-local.

The back end for CF Local has just been extracted into images that are independently usable from CF Local. There's an image published to Docker Hub, packs/cf, that will let you, without CF Local or any other tools besides Docker, build droplets, run them, and export them as images. The interface is similar to Google's cloud builders, so you can use these Docker images as base images in platforms like Concourse, or really anything that runs application containers or Docker images. I'm currently working on a Concourse resource that will use these base images and just continuously spit out new versions of applications, using strategies like remote image rebasing and remote layer appending.

Finally, the most exciting announcement here: we are collaborating with Heroku on a new, more uniform buildpack API, so that Cloud Foundry buildpacks and Heroku buildpacks will be more compatible with each other, including in multi-buildpack mode. We're also trying to collaborate on sharing more tools and on optimizing application builds. I'll have more about that later this year.

And that's it. Any questions? Brett, do you want to hand over the mic?

Q: I was just wondering if you could go back a couple of slides, to that initial one. One more. There you go. Thanks.

A: Sure. Any other questions? Over there.

Q: In your talk and the demo you mentioned that CF Local is available only for Mac now, but Windows is coming soon. When is that?

A: So actually, that's CF Dev. The first alpha release of CF Dev, which is a different tool (a whole Cloud Foundry you can install locally), is only for Mac. But CF Local runs on Windows, Linux, and Mac, with the same features across all platforms, as long as you have Docker there.

Q: Cool. And the second thing you showed is the multi-buildpack thing. Is that available today with cf v3-push, or is that something else?

A: Yeah, you can use multiple buildpacks with cf v3-push, and eventually it'll be in the cf push command also; that's coming pretty soon, having talked to the CLI team. It doesn't work quite like this.
It doesn't use images, but it still lets you supply dependencies from different buildpacks to the application.

Q: Yeah. What I've also noticed is that when you do the v3 push, it's overriding some of the core dependencies from the initial buildpack. What I was hoping is: let's say I push a Java app, and then I push a static buildpack or binary buildpack that just overlays the same container with additional files but keeps the original entry point. Is there any way to get that behavior?

A: Let me see if I understood that: you want to provide different dependencies from different buildpacks into one application, but you want multiple processes running at the same time?

Q: No, only one process, the Java one. But let's say it uses the IBM CPLEX runtime, which includes some runtimes and some libraries and things like that. They need to be available in the host, but they don't need to be actively running; they'll be invoked through JNI from the jar files that are already in my first app. So I want to be able to manage the second set of CPLEX libraries independently, rather than packaging them into my original app. That way they can be versioned independently, instead of doing it all as one app and having this nightmare of managing CPLEX versions.

A: Makes sense. Let's talk after; I think it'd be easier to figure out exactly what your use cases are and talk about it offline. Anybody else? Over there in the back.

Q: So that was a very high-level introduction to CF Local. Can you tell us how people are using it to make their lives easier or better?
A: CF Local will let you iterate really quickly on Cloud Foundry apps without needing to re-push them, and that staging process is much faster. So if you want an environment that looks more similar to your production Cloud Foundry environment, you can use CF Local to iterate really rapidly while having that environment available locally. This works across all languages, and I've seen people use it that way. It's especially nice with Java, actually, because you can use Spring Boot DevTools locally with an app running in CF Local to update application source code and see your application update instantly, without even restaging. That's one example of a place where I've seen people use it really effectively. Anybody else?

Q: When you do a CF Local image, is it always going to be a Docker image? Or can you convert it back into the standard syntax where we can do cf push?

A: So actually, CF Local doesn't generate Docker images. It generates droplets that don't include the rootfs. When you do a cf local stage, you get a droplet file in your local directory, and you can get a build cache file in your local directory too. So you can re-export those with a new rootfs, without restaging, whenever you want; it gives you the same model the platform has, but locally in Docker.

Q: So on a Jenkins build server, say, what are the steps? Would it be doing a cf push there, or would it be running CF Local on the Jenkins server and then pushing?

A: So, do you want to use CF Local to push an application to Cloud Foundry that you built locally? Is that the question?
A: Yeah. So you could do a cf local stage, and you'd get a droplet file in your local directory. Then you could do a cf local push, and it would actually push that droplet, without the rootfs part, up to Cloud Foundry and start the application in Cloud Foundry. You wouldn't need a Docker image anywhere in that process; you'd just be uploading the droplet.

Q: What does the Jenkins server have, though? If I'm doing CI/CD, is this just purely for testing, or are there any artifacts from CF Local that are reusable when I check into Bitbucket somehow, to be picked up by the Jenkins server?

A: I'm not sure I totally understood that. I think you want the end result of the CF Local process to be something you can push to an OCI registry, if that's what you're saying. You could, and that's a workflow I've seen people do in Concourse: you generate a droplet and store it in S3, then you re-export an image out of the droplet and upload that to your platform, or upload the droplet to Cloud Foundry. That's a workable workflow. We're also working on a Concourse resource and some other tools that are more effective for CI and less targeted at local development, which should help with things like that. And I think you're going to want things in your CI pipeline that validate the app in ways the dev wouldn't necessarily, so you don't want the devs to push up the final droplet themselves.

All right.
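As a rough sketch of the stage-then-push workflow from the answer above (the command names are as I recall them from the CF Local plugin docs, the app name `myapp` is made up, and `cf local help` is the authority on exact subcommands and flags):

```shell
# Assumes the CF Local cf CLI plugin and Docker are installed, and that
# you're in your app's source directory.

cf local stage myapp      # build ./myapp.droplet locally using buildpacks
cf local run myapp        # run the droplet in a local Docker container
cf local export myapp     # optionally export it as a Docker image

cf local push myapp       # upload the droplet (no rootfs) and start it in CF
```

The droplet file produced by `stage` is the artifact you'd store in S3 or hand to a CI system in the Concourse workflow described above.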
Q: So, I've had some people express concerns about the typical CF development pipeline, where you push an app through development and it works, and then a pipeline pushes it through dev, then staging, then into production, but each time you have a different droplet. Does this give us a much better toolset for saying: yes, we have one droplet that is then moved through different pipelines, or different stages in a pipeline, and validated? Or is there some other path to realize that?

A: Yes, you can use CF Local to do that. Because it has the ability to pull droplets from Cloud Foundry and push them up to Cloud Foundry, you can download a droplet from one application, once you've validated that that application works, and upload it to another application in a different space or org, or wherever. So you can use CF Local to enable that kind of workflow. There are also some features coming up in the platform that will let you do that without needing to use CF Local as a separate tool.

Q: Is that like the better push stuff they showed in the keynote, where you have multiple droplets and you're selecting which one?

A: Yeah, I think so.

Q: Sorry, are there plans for this buildpack process to be extended to things other than CF Local?

A: So, do you mean: do we plan to have other tools that use the same kind of model besides CF Local?
Yeah. So the packs Docker images I was talking about before are a tool you can use independently in CI, in Concourse, or with whatever your CI system is, as long as it supports container images. Those are a little more flexible: you can fit them into different kinds of pipelines in different ways, to use buildpacks on platforms that support images, and to build droplets on platforms that support images. We're looking at a whole bunch of different options for tools we can build on top of those images: for Concourse, for Jenkins, for the different CI platforms people use, and even for doing builds in Kubernetes, for instance.

Q: Would you mind advancing one slide? I want to take a picture of that one.

A: Yeah, no problem. Any other questions?

Q: Real quick one. You mentioned Jason Hall's work on image rebasing. Where would I go looking for that?

A: I'm pretty sure it's in the google org on GitHub; it's called image-rebase.

Any other questions? All right.