Thank you for coming to this presentation, and good morning. This is a sponsored talk, but I'm not going to try to sell you anything, I promise, except for some new ideas. We're going to learn a little bit about containerization and an easier way to create containers and maintain them. My name is Danielle Adams. I'm the Node Language Owner at Heroku. Heroku is a platform that runs hundreds of millions of applications across multiple languages and data services, including Node, Ruby, Java, Python, Kafka, and Postgres. As a language owner, it's my responsibility to make sure that our Node users have a good developer experience, which includes managing build environments, runtimes, documentation, and more. So like I said, today we're going to be talking about Cloud Native Buildpacks and containerization. Hopefully more seasoned Docker users will be able to learn something new, and beginners will be able to get started today. But first, I'm going to talk about my own experience getting started with Docker a couple of years ago. At a previous job, I was working on some software that was rapidly scaling. The company was growing really quickly, and there were a lot of teams moving at a very fast pace. Because we were growing so quickly, we had to cut our infrastructure costs. All of those fancy platforms that would just configure things for us automatically and magically, we now had to take in-house and manage ourselves. At the time, all of the development teams were siloed, working by themselves with their own processes and deployment pipelines for their respective services. And then it looked something like this, where everyone was forced into new workflows and new processes because of the new tooling that we had to use.
One of the things we were doing was bringing our job scheduling infrastructure in-house, so the developers would have to manage and maintain their own build and runtime environments with help from the DevOps team. This is where we all got introduced to Docker. It wasn't a very positive experience because of the pace we were moving at, like most startups. It was a steep learning curve, so we all had to configure our own Dockerfiles. We already had hundreds of services, applications, and front-end apps, and we had to adapt Docker to all of them. There was a lot of copying and pasting, things weren't working, and it was a fairly hostile time between teams. So this is an example of a Dockerfile. As you can see, it's not doing much, and it's nothing that any Node application wouldn't be doing: installing Node and Yarn, and then running a yarn install. And a Dockerfile has to go into every codebase. So what we were seeing, like I said before, was that we had to have Dockerfiles for all of our JavaScript codebases, and they were all pretty much doing the same thing, but we didn't quite know how it was supposed to work, and we also had to copy and paste these Dockerfiles across all of our repositories. At the same time, Heroku had been iterating on the concept of buildpacks. Buildpacks are a set of execution steps which create a runtime environment for any executable code. At version two, there are two things kind of wrong with the way buildpacks work. A v2 buildpack creates what is called a slug: it takes all of the source code, environment variables, and dependencies like Node and Yarn, and it creates this slug. But this is a proprietary piece: you can't take it out of Heroku and run it in any other environment. You can actually only run it on a Heroku dyno, which makes it really hard for our users to debug things.
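As a rough illustration, the kind of repetitive Dockerfile being described might have looked something like this. The base image, versions, and filenames here are assumptions for the sketch, not the actual file from the slide:

```dockerfile
# Illustrative only: the sort of boilerplate Dockerfile every Node
# codebase ended up copying. Base image and versions are assumptions.
FROM ubuntu:18.04

# Install Node and Yarn.
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://deb.nodesource.com/setup_12.x | bash - \
 && apt-get install -y nodejs \
 && npm install -g yarn

WORKDIR /app
COPY . .

# Install dependencies and define the run command.
RUN yarn install
CMD ["node", "index.js"]
```

Nearly every JavaScript service would carry a near-identical copy of this file, which is exactly the duplication problem described above.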
If they see something in production, they have to debug it in a production environment, because they're not able to duplicate that environment. And also, because this is something that is built internally and lives in Heroku, you're not able to run it on something like Kubernetes, which has a very open ecosystem of ways to run execution scripts and so on. So this is where we meet Cloud Native Buildpacks, which is a project that we've been working on. Cloud Native Buildpacks is really just the sum of those two terms. "Cloud native" is a term that's kind of up for debate, but for our purposes we'll just be talking about it as a way to manage containers in an agile and scalable way. And then buildpacks, which I've just described, are a set of execution steps that take source code as input and output something that's runnable: you add source code, and it outputs a Docker image. These are the steps that happen when source code is put through a Cloud Native Buildpack. First there's a detect script. You can pretty much put any piece of runnable source code through a buildpack, but if it doesn't match the buildpack, the build will fail. For instance, I can't put a PHP app through the Node buildpack, because PHP does not run on a Node environment. So a Node buildpack is going to use detect to look for a package.json or some type of JavaScript file to run. Next is the build step. The build does most of the work: it will install node modules, dependencies like TypeScript, or any scripts that need to be run, and it will run build and compile steps. Then there's an export step. This takes all the artifacts that have been created and exports them into a runnable image. And then there's the caching step. This takes a lot of the reusable artifacts from the build and makes them available for the next build, or for other steps that happen after the build.
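As a sketch of what the detect step described above might look like for a Node buildpack, here is a simplified illustration of the convention (not Heroku's actual script): detect succeeds when it recognizes the source code and signals "pass" otherwise, so the next buildpack can try.

```shell
# A simplified sketch of a buildpack "detect" step for Node; this is an
# illustration of the convention, not Heroku's actual script. Exit code
# 0 means "this buildpack applies"; 100 means "pass, try the next one".
detect_node() {
  app_dir="$1"
  if [ -f "$app_dir/package.json" ]; then
    echo "nodejs"   # report what was detected
    return 0
  else
    return 100      # no package.json here, let another buildpack try
  fi
}
```

Running this against a directory with a package.json matches; running it against anything else (say, a PHP app) falls through, which is why the PHP-through-the-Node-buildpack build fails.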
I'll talk about that a little later. It's really easy to get started using Cloud Native Buildpacks. If you want to do it locally, you have to install Docker, then install the pack command line tool. This is something that came out of the Cloud Native Buildpacks project: a tool both to build images from buildpacks and to build buildpacks themselves. Which is cool. And then you need some source code available on your local machine. For Node, you need source code that has a package.json, because that's what we use to detect Node source code. Creating an image and running a container is also pretty easy; it's only two steps. The first step is to create your image: we have a flag that passes in a buildpack, and then pack build. Here, my-node-server is the name of the output image. The next step is to run the image: this just takes the image that's already been created and creates a container from it. Now, as Node developers, we know that not everything just comes out of Node. We do get a package manager, but there are other package managers we might want to install and use, like Yarn. We also have tools for static typing, and there are too many front-end frameworks to really count. So we want to make sure that if source code has these kinds of extensions on top of Node and JavaScript, we cater to those as well. This is where multi-dependency builds come in. One of the ways we can cater to those environments is by using a builder. A builder.toml is a file that creates a builder image. This is kind of a step above a buildpack: it takes multiple buildpacks and creates an image to be run against source code. As you can see here, these are a bunch of buildpacks that we've created at Heroku that we might need for Node source code.
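The two steps just described look roughly like this; the image name and buildpack ID are illustrative, not the exact ones from the slide:

```shell
# 1. Build the image from local source with a specific buildpack
#    (my-node-server and heroku/nodejs are illustrative names):
pack build my-node-server --buildpack heroku/nodejs

# 2. Run a container from the image you just built:
docker run my-node-server
```

That's the whole local workflow: no Dockerfile in the source tree, just pack build and then a plain docker run.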
Node, NPM, and Yarn, and "might" is the really important word here. Further down in the builder file, you'll see that we have two different groupings. The thing with a buildpack is that if the detection script fails, the whole build fails, because it can't match the source code. But in a builder, if a grouping fails detection, the build just moves on to the next group and uses that grouping of buildpacks instead. For instance, our Yarn buildpack looks for a yarn.lock file. This is how we prioritize: people usually opt out of NPM when they use Yarn, so we detect for a yarn.lock file first, and if there isn't one, the build moves on to NPM, because that's the default package manager Node developers would be using. And this is a visual of a builder image. When you run that builder.toml file through a build, it creates this builder image. At the base, we have a Heroku stack image; we're using heroku-18, which is based on an Ubuntu image, and this is the operating system the code will run on. Then we have all of the dependencies stacked onto each other: Node, Yarn, and NPM. It's just as easy to create an image and run a container with a builder. First, instead of passing in a buildpack flag, you pass in a builder. Next, you just create the container from the image you've built. You'll see here that we have docker run and then the image name, but there's no actual execution script. The same way buildpacks are smart enough to understand the environment they're creating for Node, they're also smart enough to give the image a default run step, because Node, NPM, and Yarn applications only have so many execution steps they're going to use. So it assigns an execution script to the image, and when the image is run, it just starts. For instance, this will just start a Node server.
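A hedged sketch of what such a builder.toml might look like, using the ordered-groups idea described above; the buildpack IDs, URIs, and stack images here are assumptions, not Heroku's actual configuration:

```toml
# Illustrative builder.toml sketch; IDs, URIs, and image names are
# assumptions, not Heroku's actual files.

# Buildpacks available to this builder:
[[buildpacks]]
id = "heroku/nodejs"
uri = "https://example.com/heroku-nodejs.tgz"

[[buildpacks]]
id = "heroku/yarn"
uri = "https://example.com/heroku-yarn.tgz"

[[buildpacks]]
id = "heroku/npm"
uri = "https://example.com/heroku-npm.tgz"

# Detection order: try the Yarn group first (it matches when a
# yarn.lock is present); if it fails detection, fall back to NPM.
[[order]]
  [[order.group]]
  id = "heroku/nodejs"
  [[order.group]]
  id = "heroku/yarn"

[[order]]
  [[order.group]]
  id = "heroku/nodejs"
  [[order.group]]
  id = "heroku/npm"

# The stack (base OS image) the builder sits on:
[stack]
id = "heroku-18"
build-image = "heroku/heroku:18-build"
run-image = "heroku/heroku:18"
```

The key mechanic is the two `[[order]]` groups: a failed detect inside a group only fails that group, so the build falls through from the Yarn group to the NPM one.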
I think it's just node index.js, and the image knows to run this step. So I've said a lot of words that all sound the same, and I wanted to recap them. A buildpack is a set of scripts that run and output a Docker image. Builds take an input of buildpacks and source code, and they output an image. builder.toml files and builders define multi-buildpack build environments. Pack is a command line tool for executing builds locally. And Dockerfiles are what we're trying to avoid here. Okay. So I have a demo. Hopefully that's big enough. I recorded it because I don't want to put you through the torture of having to watch me mistyping. First, we're creating our builder image here; that's what I talked about when we saw the stack of heroku-18, Node, and Yarn. It's taking a builder config file, that's the builder.toml, and it's running with --no-pull. That means it's not actually pulling the Heroku stack image from the remote, because I already have it locally, and it cuts down on build time if I don't have to fetch the stack image, since it's most likely not updated that often. So I've created my image. Okay. Now I'm going to build my image from my source code. I'm building the Node server; I pass in the builder, and I also run with --no-pull. As you can see, it's run the detect scripts and gone through Yarn. There are a couple of steps there from build as well: you can see it's downloaded Node, downloaded Yarn, installed the node modules, and then run the export and cached those layers. The layers we want to make sure we remember are Node, Yarn, and our node modules, because we'll be looking at those later. The next step is to use pack to inspect our image. We get some metadata here, like which buildpacks we used, the run images, and the stack we're running on. And we can see here that we have a list of images we've just created.
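The demo's steps can be sketched roughly like this, using the older pack CLI from around the time of this talk (all names are illustrative, and newer pack releases spell some of this differently, e.g. `pack builder create` and `--pull-policy`):

```shell
# 1. Create the builder image from builder.toml, skipping the remote
#    pull since the stack image is already available locally:
pack create-builder my-builder --builder-config builder.toml --no-pull

# 2. Build the app image from local source using that builder:
pack build my-node-server --builder my-builder --no-pull

# 3. Inspect the image's metadata: buildpacks used, stack, run images:
pack inspect-image my-node-server
```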
So: our Node server, which is from the builder, and then the builder image we just created from Yarn, NPM, and Node. We can run a couple of scripts against our Docker image. We can look at the Node version the image is using, we can run a test script locally, and last but not least, we can run our server. So we saw how the layers are built: they're created in the build, then exported, then cached at the very last step. And this is an example of the layers created from the Docker image we built with this buildpack. Like I said, we have the Heroku base image, and then this multi-buildpack builder has created an image with Node, Yarn, a layer for node modules, and the source code on top. The good thing about layers is that while it might look like a stack, they're layers you can swap out without impacting the lower layers or the subsequent ones. For instance, if I want to swap out Node 12 for Node 13, I don't have to rebuild my node modules, touch my source code, or recompile. I can just update my package.json to say, hey, I want to use Node 13, rebuild my image, and it'll replace that layer. Now, it would be nice if we could just run our production servers on our local computers, but that isn't sane or practical. So another benefit of layers is that when you update them locally, those are the only things that get pushed up. If I'm updating something like an image that I have on Docker Hub, I can rebuild my image locally, and when I push the layers up to the registry, a delta is analyzed between what I have remotely and what I have locally, and it'll just push the updated layers up. Another great thing about layers is that the same way you can use them for caching, you can also use them for subsequent builds locally.
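The package.json change being described would be something like the following snippet, assuming the buildpack reads the standard `engines` field to pick a Node version (the package name is illustrative):

```json
{
  "name": "my-node-server",
  "engines": {
    "node": "13.x"
  }
}
```

Changing `"12.x"` to `"13.x"` here and rebuilding is all it takes to swap the Node layer; the node modules layer and source code layer are untouched.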
So for the first image, everything's going to be slower, because you're building everything for the first time, but then you have a cache available. When I'm running a build, there are three directories I have access to: the first is my application code, then there's the buildpack directory, and then there's a third directory for our layers. The buildpack takes the dependencies I might want to reuse, like node modules, Yarn, and Node, and duplicates them over to the cache so they can be used later. This is configured using a TOML file, which lets the buildpack know: do I want to make this available to the cache? Do I want to make this available to subsequent buildpacks? For instance, if I'm running Node and then the Yarn buildpack, do I want to make Node available to Yarn? Yes, because I need it. And do I want these dependencies available at runtime? If there's a cache available for the next build, we can take those layers from the cache, and the second image will be built a lot faster. Here's my next demo. Right now I'm tagging the node server image that I created, and I'm pushing it up to Heroku's container registry so that I can run it on Heroku. This middle part is going to take forever, and you can see this is really annoying; if we're testing something, we don't want to push all of this every time. I also sped up this video, so it's actually playing at twice the speed of when I was doing this. So then we, oops, there we go. Now we're going to change the version of Node. My video is a bit off there, but if anyone didn't see it, I changed the version of Node from 12 to 13, and then I rebuilt the image with the builder. We can see here that we're re-downloading Node, now using Node 13, but we're reusing our node modules.
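The per-layer TOML being described might look like this sketch, following the Cloud Native Buildpacks convention (at the time of this talk) of a `<layer>.toml` file alongside each layer directory; the layer name is hypothetical:

```toml
# node.toml: flags for a hypothetical "node" layer.
cache = true    # keep this layer's contents for the next build
build = true    # expose it to subsequent buildpacks (e.g. Yarn needs Node)
launch = true   # include it in the final runtime image
```

Each flag answers one of the three questions above: cache it for next time, share it with later buildpacks, and ship it in the runtime image.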
We're going to re-tag the image we just created, and then we're going to push it up to Heroku again. We can see here that there are only two layers being pushed: the new Node layer, and the layer of source code that changed with the package.json. Okay, so to show you how easy this is, I'm going to try to do a live demo, and hopefully it works. This is the server I've been using for the image I just created, and I put it on Heroku. I'm going to change the background color of that page. Okay, well, it's getting messed up because I resized it, so I think I'm just going to skip that; if you have any questions about that step, I'll just tell you what I was going to do. What I was going to do was change the background of this website, push it up to Heroku, and release the container. The only layer that would have changed, unlike the two layers we saw changing before, would have been the top layer, so it would have been a really fast push; the release would have happened on Heroku automatically, and the change would have shown up in seconds. So we have a couple of benefits, like I've talked through, for building containers with Cloud Native Buildpacks. Buildpacks are modular: they take advantage of Docker layers that logically map to source code components and dependencies, and you can chain buildpacks to suit the needs of any container. They're scalable: you can use them across projects that use the same technologies. Source code can remain free of container configuration, and you don't have to maintain that configuration over time, which, as we saw in my story before, can get pretty overwhelming.
It's also efficient, because you can use Docker features to enable an agile and composable development workflow. And buildpacks remove a learning curve for people who really just want to get started with containers easily and don't have the knowledge to do that quickly. So I have some resources here. Oops, I clicked the link. Okay. First, if you go to buildpacks.io, that's the site for Cloud Native Buildpacks, where you can learn more about buildpacks, how to use them, the different options, and how to create your own buildpack. Then there's some more information about deploying with Docker: everything I was doing, I was running as containers on Heroku, and this documentation covers that. And this is all the source code that I used for this demo: Heroku has a couple of Cloud Native Buildpacks that we've been creating, and our builder images are in our pack images repo. The demos I just created, and the slides, if you want to go over them again, are at this URL. Go back, yeah. Great, and that's all I have. Thank you everyone for listening. I'll be at the Heroku booth pretty much from noon until the end of the conference, so I'd love to chat about Cloud Native Buildpacks or Node, or just come say hi. We have buildpack stickers and Node stickers. I also have Open Collective gift cards where you can contribute to open source, which I'd be happy to hand out after my talk. And yeah, that's all I have. Thank you very much. I think I have a couple of minutes for questions, if anyone has any. Yes? That's a good question. So, first of all, I will give you a form to... You can try Heroku for free, we have a free tier if you're curious, and you can deploy with Docker on Heroku, but Heroku is actually... how can I describe this?
You don't need Docker to deploy, because Heroku does things a little differently: if you have a piece of source code and you just push it up to Heroku, which uses Git to deploy code, it will detect the source code the same way buildpacks do, and then it'll run it on Heroku for you. I know that sounds really simple, but that's pretty much how it works. And if you're interested in seeing the different ways to deploy on Heroku, I can show you how to do that. Yeah. Exactly, yeah. So for Ruby, it looks for a Gemfile; for PHP, it looks for a composer file; and it does similar things for the other languages. Any other questions? Cool. Well, thank you.