John Walicki is coming in at 15. Ah, here he is. Welcome. All right, thank you, Matteo, that was fantastic. I'm really delighted to be here. Thank you, John. Have fun now. Of course. So, Ryan, I'm a live-demo guy all the time, and that's what you're going to see. Excellent, that sounds great. I was especially interested in your topic. I know there's a lot in there, but this sounded like a really fun thing that I might want to reproduce afterward and try out as well, so I'm really excited. Yeah, absolutely. I think we're just about ready to go, if you're ready. Hang on one second. During our talk, Ryan, since I can't see the live chat anymore from backstage, I just sent you some links. I was wondering if you could drop them into the chat for our audience, so they can follow along. Yeah, you bet. Definitely. All right, let's do it. So we're going to pray to the demo gods, like Matteo suggested, and we're going to roll. Let me do a quick introduction. My name is John Walicki, and today we're going to talk about containerizing and deploying your Node.js applications using some best practices. At the end, we're going to deploy our containers to IBM Cloud Code Engine, and I'll take you through that as well. If you're just joining us, Ryan's been running the track today. We're in the JavaScript track, with Matteo and Lucas, and after me we've got Ganesh coming up. Just awesome presenters; I'm really humbled and delighted to be here with them. We're going to cover a couple of topics and a little bit of background on why you'll want to spend the next 30 minutes with me. I'm an IBM developer advocate, a senior developer, and I love to teach.
I love to speak at conferences, mentor at hackathons, run workshops and tutorials, and now tons of live streams, so this is second nature. I'm eager to get out on the trade show floors and at the conferences, and next week, hopefully, I'll see some of you at Open Source Summit in Seattle, so look for me there in the IBM booth. So that's what I do. I'm a senior IBMer and I do a lot of different things: I'm an IoT guy, a Node.js application developer forever and ever, and I've had other roles in IBM, including the really fun one of leading the Linux efforts, so we've had a great partnership with Red Hat for two decades plus now. But what we're going to spend the rest of our time talking about is how to build Node.js containers. And I'm going to spend a couple of minutes on my one slide. I promised there would be no slides, and I've got just one, because I want to introduce the topics. So what makes a good container? Where and why would you want to build containers that are trusted, secure, small, and production ready? Let's unpack those, because they're really important. First, number one, you want to start with a trusted base image. You can go to Docker Hub and peruse all the containers that are out there and maybe pick one to build on top of. But let's be honest, Ivan from Moscow doesn't have your best interests in mind, OK? So maybe you shouldn't select his image as your base. What you want to do is select a base image that has a pedigree, that is supported, scanned, and actively maintained. You want to build from recent images that have all the CVEs patched and the right set of infrastructure and rigor underneath them. We're all very passionate and very worried now about our software supply chains and the software bill of materials, and starting from a trusted image gives you the security you're really going to require, especially as you get to production.
You want to know what your application is running on top of. So, number one, start with a trusted base image, and I'm going to introduce you to what are called Universal Base Images from Red Hat. They're free, and I've been building with them for a couple of years now, starting to get good at it, though I rarely call myself an expert. Number two, secure. You can throw the kitchen sink into a container and call it done. It works, but it's got an entire vast attack surface. A secure container is going to be really tight: you understand exactly why every package is in your container, because anything more is a vulnerability waiting to happen. So I always think about how to make the container as tight and small as possible: install only what I need for the particular workload and nothing else, because CVEs are going to happen, new versions are going to happen. It's software; we're all building on the shoulders of giants, and we make mistakes. So the smallest attack surface possible is going to make your container that much more secure as you get to production. Now, as you start to prune and really cull your image, obviously it's going to get smaller. Maybe it gets big first and then gets smaller over time. Very often developers will collaborate with me: they'll give me an enormous Dockerfile and we'll prune it down and get it smaller and smaller. I'm going to show you how we can do that in the next couple of minutes. We'll start with a pretty big container, then a somewhat smaller container, and then as small as we can get it for the particular workload. And then we want to make sure we're production ready. When we pick our image, do we need all the docs? Do we need all the extras that get delivered inside the image? No, right? What we really want is the production harness and the test infrastructure.
You'll see in the next couple of minutes that I build almost everything from a Makefile, locally, then GitHub Actions, and then a variety of Tekton pipelines and so forth as we get to production-ready OpenShift and Kubernetes clusters. We want to push our containers through the pipeline and get consistent results out of it, right? We want repeatable results. We don't want "today it looked like it worked and tomorrow it doesn't quite work" because we chose poorly. So we want to think through how we get to production-ready pipelines: I write Makefiles, then GitHub Actions, then Tekton pipelines, and I pay attention to those things. So those are the four big lessons, the four big takeaways from John, all right? Now, I'm not the only one thinking about this. One of my ex-IBM colleagues, now a Red Hatter, excited for him, is Michael Dawson. He's leading the Node.js application stack work there, and he sits as a community member on the OpenJS technical committee. Michael wrote a blog post about a month ago that really inspired me to come talk today, Ryan, because as I read it I thought, oh, I do all those things, and I learned a couple of new ones from Michael along the way. So, as I read through this post: if you're able to drop this link into the chat, that would be great. Thank you so much for being my right-hand man. Got it, you bet. Thanks. All right, so we're focusing in here on part five, how to build a good container. What makes a good container? He's echoing the same things, right? Best practices, security, size, pitfalls, and how you get your images to production. And he starts by talking about why you should begin with a base image that is trusted.
Because they're supported by the Red Hat team, the Red Hat Universal Base Images are a perfect place to really get started. The second thing, right: don't run your containers as root, avoid the privileged ports, and really think through the security of your container. Only put in the bits that you need, and don't put secrets into your container. What I've been doing is injecting environment variables into my containers. Whether the app is Python or Node, I build the container to read environment variables, and then I inject them through Kubernetes secrets or from the runtime environment. That way I can put my containers up on Docker Hub, and sometimes into private registry services like the IBM Container Registry or Quay, without baking secrets in. Because otherwise someone does a drive-by: they go to Docker Hub, it's public, they pull down my container, bash into it, and go, oh look, there's John's secrets. So I've been very careful not to leak my secrets anywhere in a public container, and I use environment variables throughout. It's a little more work, I know it's a pain, but it's the right thing to do, because the last thing you want is someone hacking your backend because they got hold of a secret, OK? Then Michael works through a more complicated example, and I'm going to do the same thing, but in my own little way rather than Michael's, because if I just picked hello world, well, hello world is so easy it's not going to show us the differences. So I've picked a slightly harder scenario for my little example over the next 25 minutes. I do want to do one more shout-out, to Bobby Woolf, because he also wrote a really good blog post. Let's see if we can get that link into the chat as well.
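A minimal sketch of the pattern John describes, keeping secrets out of the image and injecting them at run time; the variable name and image tag here are illustrative, not from his actual repo:

```shell
# Hypothetical names: TWC_API_KEY and the image tag are illustrative.
# The image itself carries no secrets; they arrive only at run time:
#
#   podman run --rm -e TWC_API_KEY="$TWC_API_KEY" myuser/weather-map:1.0.0
#
# Locally, the app reads the same variable from the shell environment.
TWC_API_KEY="demo-key"
export TWC_API_KEY
# Fail fast if the secret was never provided rather than limping along:
: "${TWC_API_KEY:?TWC_API_KEY must be set}"
echo "TWC_API_KEY is set (${#TWC_API_KEY} chars)"
```

The same variable can be delivered in Kubernetes via a Secret referenced from the pod spec's `env`, so the container image stays identical across environments.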
It's called best practices for designing universal application images. Bobby's one of my colleagues here at IBM, and he posted it on the IBM Developer website. He's an OpenShift expert, so he's in the trenches with our clients and customers deploying their images, and he's got lots of practical experience building trusted, secure images and keeping them tight and small. He's got a great set of lessons here that you can learn from, right? UBI, image management, security updates, two-stage builds. I'm going to show you the difference between a two-stage build and a one-stage build, and you'll be shocked at how much you can squeeze out of a container. Now, why does it matter? Why is it important to build small containers? Well, especially in serverless environments, or a Kubernetes cluster where workloads are transient, or Knative, the platform is going to pull an image and then execute it, and you want those cold starts to be as fast as possible. If a cold start needs to pull a gig and a half, or two gig, or five gig of cruft out of a container registry service, your cold starts are going to be really long. So the smaller your image, the faster the cold starts. All right, so think about finding the right way to squeeze your images, especially as we start to look at the bases, OK? Now, I've been talking about UBI. Actually, do I have this link here? I wanted to explain what a UBI image is with a little blog post. They were announced two years ago at Red Hat Summit, and the idea is they're free and built on top of RHEL 8. There are RHEL 7 images as well, so UBI 7 and UBI 8. And then they started to spin custom builds of them. We're going to focus on the Node.js images because this is a Node and JavaScript track, but there's a set of Java base images too, and they all start tight and they're all well maintained.
Every week they're coming out with whatever CVE fixes, and someone's watching, someone's paying attention to that. So it's a brilliant place to get started, as opposed to the Wild West up on Docker Hub, and good luck with that. Now, some companies post official images, and those are a great place to start as well; an official image from one of the upstream projects is a great way to understand the pedigree of your base images. But be careful about what that official image truly is and how much support is behind it. All right, so there's a great link there. Now, where do you find the UBI images? Jump up to catalog.redhat.com and explore the software containers. I did a search because I'm looking for Node.js 14, the latest LTS release, and I found two of them. So we're going to take a look. This one is the build-and-run-an-application image, and then the minimal one is the stripped-down version. And I'll tell you some stories, Ryan, because there are often scenarios where I have a two-stage build that begins with the first one, and then for my production run image I move what I just built over to the minimal one. That's going to be part of my story, because especially if I've got native binaries, and Node lets you build native binaries, I might need to do a GCC compile with all the kitchen sink that the build toolchains require, but I only need the resulting shared library. So I'm going to copy that shared library and whatever build artifacts from node_modules over to the minimal image, and that's when we're going to see the size dramatically shrink. My attack surface is smaller, and I don't have all this junk that's required for the build but unnecessary for production runs. So these two become my friends. There's actually a micro one too. Oh, here: I jumped up, and I've been doing this now for months and months, and I noticed that just a day ago they came out with a new latest tag.
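The two-stage pattern John describes, build on the full UBI Node.js image, then copy only the results onto the minimal image, can be sketched roughly like this; the tags, port, and file layout are illustrative assumptions, not his exact Dockerfile:

```dockerfile
# Stage 1: full UBI 8 Node.js image, which carries the toolchain needed
# to compile native modules such as bcrypt during npm install.
FROM registry.access.redhat.com/ubi8/nodejs-14:latest AS builder
WORKDIR /opt/app-root/src
COPY package*.json ./
RUN npm install --production

# Stage 2: ship only the app and the already-built node_modules on the
# minimal image; no GCC, no build chain, much smaller attack surface.
FROM registry.access.redhat.com/ubi8/nodejs-14-minimal:latest
WORKDIR /opt/app-root/src
COPY --from=builder /opt/app-root/src/node_modules ./node_modules
COPY . .
USER 1001
EXPOSE 1880
CMD ["node", "index.js"]
```

In practice you would pin both `FROM` tags to explicit versions rather than `latest`, for exactly the caching reason John gets into next.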
A month ago, the image I was running was built on 1.23, and last night I said, I'm going to do this talk and I want to make sure I'm running the latest, but I'm not. OK, so I figured I'd hold off and do it live with you: we're going to upgrade our image from 1.23 to 1.26, just to do it live. And be careful if you're doing a Docker build FROM something and you just say latest; you're not always sure what you're going to get. I'm going to show you why latest is not always what you think. Latest does not mean 1.26 if you already have a latest in your cache; the build doesn't do a check. I got tripped up on this. I kept uploading my image, and a scan up in the cloud would say, you've got security vulnerabilities. And I'm like, I'm running latest. But I was running a latest from way back, because it was still cached on the laptop where I was doing my builds. So I've moved to tagging my images explicitly, because now it's repeatable: I know exactly what's in that container, not latest-yesterday versus latest-today, right? That's not a repeatable process. What I really want is to know my production image is 1.23 and how I'm exposed, because then I can go look at the security of the newest one and compare the packages. I jump over to packages and upgrades, and I can say, ah, these six packages have a CVE related to them, and I know the security exposure. Because look, we all get asked by our boss: does this affect us, right? Does it affect our image? Does it affect production? And I can say, well, we're running 1.23, 1.26 is available, let's go look at the particular CVEs, and then of course push the latest builds through our pipeline and move to production upgrades. All right, we're going to do this live, and then my last little image here.
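The cache trap John hit can be summarized in a short sketch; the image and app names are illustrative:

```shell
# A tag is a mutable pointer. 'FROM ...:latest' in a Dockerfile will happily
# reuse whatever 'latest' is already in the local cache; it does not re-check
# the registry. Pin an explicit tag so the build is repeatable:
#
#   FROM registry.access.redhat.com/ubi8/nodejs-14-minimal:1-26
#
# And on the occasions you genuinely want the newest build of a tag,
# force the base image to be re-pulled at build time:
#
#   podman build --pull-always -t weather-map .
#   docker build --pull -t weather-map .
```

With a pinned tag, "does this CVE affect us?" becomes answerable: you know exactly which base build is in production and can diff its package list against the newest one.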
In addition to the Node.js minimal UBI image, which, very quickly, you should grab with the right tag, there's an even smaller one. It's a little more work, but if you're a Fedora or Red Hat user it's actually not too bad. What they did with UBI Micro is pull out all of the package maintenance toolchains. So you run the dnf install, or the apt-get install on other distros, from your laptop, and you push the results into the mounted container image. It's sort of like a distroless container; think of it that way. So that's UBI Micro, and you'll see some dramatic shrinkage there as well. All right, let's pop out of my browser and jump over to a couple of little terminals. I'm going to do some edits to Makefiles and Dockerfiles in a second, but I wanted to show you first that I built this a couple of different ways, and I'm building a pretty big project. I decided not to take hello world. Let me show you the end result, because it's always good to see what we're trying to build. Part of the idea is that I'm sort of a weather nerd, and I went and talked to my weather colleagues and said, give me an API key, because I want to build my own weather maps, and I've got a weather station on my roof. So I can go and find out what the weather is: I took a base map and the weather radar, and I overlay the current conditions and the heat and the wind. So this is what we're going to create: a Node application that pulls different tiles from The Weather Company and overlays them onto a map. That's the Node-RED TWC weather radar map, and I built it three different ways. So the first one, remember I told you about multi-stage?
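The UBI Micro workflow John sketches, install packages from the host into a mounted container filesystem because the image has no package manager of its own, looks roughly like this. This is a sketch of the general Buildah technique; the target image name is illustrative, and it needs Buildah plus host root privileges for dnf:

```shell
# UBI Micro has no dnf/yum inside, so install from the HOST into the
# mounted container root, then commit the result:
#
#   micro=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
#   mnt=$(buildah mount "$micro")
#   dnf install -y --installroot "$mnt" --releasever 8 \
#       --setopt=install_weak_deps=false --nodocs nodejs
#   dnf clean all --installroot "$mnt"
#   buildah umount "$micro"
#   buildah commit "$micro" my-nodejs-micro
```

Because nothing but the requested packages lands in the image, the result behaves much like a distroless base: no shell-accessible package tooling for an attacker to leverage, and a much smaller pull at cold start.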
Well, I built it with just one stage. Because Node-RED requires bcrypt, which is a native module, it needs the compilers, the GCC toolchain. And when you build that one, it's, you know, 687 meg. Then I said, well, I want to build it using UBI 8 as my first stage. So the second one I built was the two-stage version, still on UBI 8 but with the compilers stripped out of the final image, and it shrank by 275 meg. That's a pretty big difference just from doing a two-stage build, even on full UBI. And then my last step was to do the build and compile in UBI 8 and copy the results into UBI minimal, and you can see I dropped roughly another 75 meg. So, do the math: a little bit of complexity in a Dockerfile, but big results, it made a big difference. And then, remember I talked about micro? Look how small the micro images are. That's sort of clever as well, because this is the type of thing you really want to head towards. Okay, and this was the other thing I wanted to point out. Remember I had cached ubi8/nodejs-14-minimal. I did a docker pull on latest about four months ago, I got an image, and it's still in my cache. That's not the newest, right? The newest was, up until yesterday, 1.23, and today it's 1.26. So be careful when you pull your tags, because this is not always what you get. All right, let's go to a Makefile and a Dockerfile and walk through them, and I'll tell you a little story. Because I'm the slowest typer and I fat-finger everything, I create Makefiles, so I can just do make build, make run, make test, make push, and so forth. I'm going to push both to Docker Hub and to the IBM Container Registry service, and I've got a little namespace there for DevNation.
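The Makefile targets John rattles off can be sketched like this; the variable values, registry, and Dockerfile names are illustrative assumptions, not his actual repo:

```makefile
# Hypothetical sketch of the make targets described in the talk.
IMAGE    ?= docker.io/myuser/nodered-twc-map
VERSION  ?= 1.0.0
DOCKERFILE ?= Dockerfile.ubi8-minimal

build:
	podman build --pull-always -f $(DOCKERFILE) -t $(IMAGE):$(VERSION) .

run:
	podman run --rm -p 1880:1880 -e TWC_API_KEY=$(TWC_API_KEY) $(IMAGE):$(VERSION)

push:
	podman push $(IMAGE):$(VERSION)
```

Encoding the commands this way is what makes the results repeatable: the same pinned Dockerfile and tag get built locally, in GitHub Actions, and in a Tekton pipeline.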
I gave my container a name and a version. Oh, and I'm doing multi-arch builds too, because, remember, I'm an IoT guy; that's my real job. Behind me, I've got Raspberry Pis under the desk and my Jetson Nano, which is ARM64. On the Raspberry Pis I've got Raspberry Pi OS 32-bit, Raspberry Pi OS 64-bit, a Fedora 34 IoT, and a Fedora 35, oh, sorry, Fedora 32 image. So I've mixed up my little cluster to experiment with different architectures, because we know we're deploying out to our clients in IoT scenarios. All right. We can also recompile our little Node.js app for ARM64; I'll show you that too, if I've got time. Ryan's going to have to give me the hook, I think. All right, so I set up my credentials. I'm a Fedora user, so I run Podman, and I'm muscle memory between Docker as a CLI, Podman as a CLI, and Buildah as a CLI, so you'll see me fat-finger between all of them. What we want to do first is just build the one-stage version. Can we build from one stage? We're going to pull in, and I'll make this bigger so you can all see, from a Dockerfile named for the UBI 8 one-stage build, and do a build, okay? Then we'll try again with a two-stage build, still just UBI 8, and then we'll do the minimal one, which starts with UBI 8 and copies into minimal. We'll run a couple of these, and they do take a couple of minutes. Oh, let's actually jump over to the Dockerfiles as well, because there are some really important tips here. So: FROM registry.access.redhat.com/ubi8, tagged at version 8.4, the most recent production release. Of course, now I'm starting from a base image, and I want to install the Node.js bundle, and I know that my particular workload requires a set of node builds that are going to need Python.
So I have to pull in Python, but you'll notice that I say don't install the weak dependencies, so install_weak_deps is false, because otherwise you wind up with extra cruft in your base images that you don't need. Just turn it off; I don't want the recommended stuff you might give me. The next thing I do: there's a whole bunch of warnings because I don't have a RHEL subscription, since I'm a Fedora workstation person, so I disable that subscription plugin to avoid some alerts and warnings. And I do the same thing again; you'll see me pull in GCC, shadow-utils, et cetera, et cetera. I've got to throw a one-minute warning in here. While you're on the topic of warnings... So Ryan's telling me I've got just a minute, so we'll kick off a build, but we won't see it finish. The last thing: that was the one-stage image, and the next one shows how I do it here. Start with the base, and then in my release image I just copy the results over, and I get a much tighter, much smaller image. At the end of the day, I do a make target for Code Engine. And, well, I've logged in here and it's actually logged me out, but there's that app running up on IBM Cloud Code Engine. In five minutes or so I was able to push my container up and run it in IBM Cloud without too much trouble. All right, I hope you learned a few things. I wish I'd been able to give you the live demos, but the compiles for the Docker image builds do take ten minutes or so, so we'll take a pass on that this time. I hope you learned a few things about how to build good containers. Do follow me on Twitter, at John Walicki; I tweet about Linux and open source and IBM Cloud and Node.js, Node-RED, IoT. Come follow me. That was awesome. Thanks, John. I have one quick question for you. Since you're running a lot of this on Raspberry Pi, was there any major difficulty? Those are ARM images instead, right?
Do most of them work for the common image types? They do, Ryan. So what I'm going to show you here: I use Buildah to do multi-arch quite often, and let me just show you in my Makefile. In my Buildah command, there are UBI base images for multiple architectures. Nice. ARM, ARM64, PowerPC, s390x, which is the Z mainframe. I don't know if any others are supported. Cool. So you do a buildah manifest create, and then you override the architecture, and that's the Buildah build-from-Dockerfile command. That's the magic command right there. Okay. Cool. And the other thing is, within the ARM world there are multiple variants. You can do variant v8, which is ARM64, or variant v7, which is ARM32, and that lets you target different Raspberry Pi images. That's great. Cool. Well, thank you for this fully live introduction to minimizing containers and covering all these best practices. This is a really great topic, excellent content. Thank you so much. Great. Oh, and a follow-up: if the audience wants to continue to chat, let's jump over to the Slack channel and I'll be happy to continue. If you're in Slack, meet me in general, that's probably the best place, and we'll start a thread there. Yeah, I'd love to see if you have any repo links to those build scripts; I'd love to take a look after. Oh, I do, of course, Ryan. I'll put it in the chat for everyone. Excellent. Thank you very much.
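The multi-arch manifest workflow John describes can be sketched as follows; the image names and the exact flag spellings depend on your Buildah version, so treat this as an assumption-laden outline rather than his actual Makefile:

```shell
# Create a manifest list, build per-architecture images into it, then
# push the whole list so each client pulls the right variant:
#
#   buildah manifest create weather-map:1.0.0
#   buildah bud --arch amd64             --manifest weather-map:1.0.0 .
#   buildah bud --arch arm64 --variant v8 --manifest weather-map:1.0.0 .
#   buildah bud --arch arm   --variant v7 --manifest weather-map:1.0.0 .
#   buildah manifest push --all weather-map:1.0.0 \
#       docker://docker.io/myuser/weather-map:1.0.0
```

The variant flag is what distinguishes the Raspberry Pi targets: v8 for 64-bit ARM boards like the Jetson Nano or Raspberry Pi OS 64-bit, v7 for the 32-bit Raspberry Pi OS images.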