Hey everyone, I'm Shoubhik, and with me I have Adam Kaplan, a software engineer at Red Hat, and we'll be talking about our journey of building container images on Kubernetes. So, a quick introduction into who we are. I work as an architect in the CI/CD and GitOps space at Red Hat, and Adam works as the OpenShift Build API team lead at Red Hat. We've got good experience here between doing CI and doing builds the old way and the new way. A quick note on what we'll discuss today: we'll go down the history of building apps in general, and then building images after that. We'll do a small case study of our experience with OpenShift builds so far, and from there we'll jump into why we have Project Shipwright, which is the next step after OpenShift builds, which has been quite successful. So, if I may, let me quickly go over the old school way of doing things. The old school way was typically: you have a local dev environment where you have some Java code, you build a jar out of it, you deploy it on a VM, and after that you're good. There's a nice CI system, it all works, and then it's said, hey, this worked on my CI system, which tried out this jar on a VM. Now let's deploy it on test, stage, and prod, which could be other VMs. So it's effectively the journey from your Java file that got turned into a jar somehow, using Jenkins maybe, and then deployed in a couple of different environments on VMs to actually finish your cycle. That was how the old school used to be. And Adam, at any point if you're joining, just raise your hand and let me know and we can get started. But then we have the new school here, which means things are probably robust in a lot of different ways, but at the same time that added a bunch of complexities.
So you do have the same Java code which needs to be turned into a jar eventually, but you don't typically deploy on Kubernetes the same way you used to on a VM, which means you don't take a jar and deploy it in Kubernetes directly. There are things that have to happen before you can actually deploy your jar on Kubernetes: you have to build an image out of it, and you have to ensure it adheres to certain security constraints so that it can actually be deployed in Kubernetes without becoming a nightmare for your admin. This is definitely different from what you saw in the old school, where you could simply take a jar, give it to your ops or DevOps person, and be on your way. Now, in this new school, the most important thing is effectively container images, which means what you eventually deploy on your test, stage, and prod is not a jar file. It never sees a jar file; what it sees is a container image that your application has been built into. So how do you get there? There are a number of ways, of course. If I have my Java code on my laptop, it has to be committed to git, for example, and then something has to turn it into an image. You yourself could build an image out of it, push it to a registry, and then attempt to deploy it in Kubernetes. But a more conventional way would be to have a CI system that takes your code, builds an image out of it, and then deploys it. A key component of that CI system would be the part that builds the images, which means you need something in your system where you can say: hey, I've got my source code here, I need to build an image out of it, and then I want to deploy it on Kubernetes. Now, your CI system could be running on a Kubernetes cluster, or it may not be running on a Kubernetes cluster.
It may even be running on a Kubernetes cluster without any awareness of the fact that it is running on Kubernetes, which means it may not use any Kubernetes-specific APIs at all. But the idea is you need infrastructure when you need to be able to build images, and building images is fairly expensive, so that needs attention in itself. So, a quick case study. The main reason we get into this case study is because we've tried solving this problem for the last three or four years. Here's a quick history of where we at Red Hat have come from in this space; there have been many other attempts, but given that we've tried it with OpenShift builds, I'd like to quickly give an overview of how our experience was. A user would start with source code, say app.java. Then we have OpenShift builds on OpenShift today, which is BuildConfig. There you specify the parameters that tell OpenShift: what do I want to build, how do I want to build it, and where do I want to eventually push it to? So you could say, hey, here is my source code, I want to do a Docker build of it, or I want to do a source-to-image build of it, and then I want to push it to Quay.io or into an image stream; both would work. In this experience, one thing we learned is that building images is expensive. They need a lot of care to ensure that they're secure. And while we were probably one of the first projects to actually provide a robust build experience on a Kubernetes cluster, while doing so there were other tools and other innovation happening in the space around how you build images securely on Kubernetes. We did end up with a few limitations. While OpenShift builds have been massively successful, there were a couple of limitations, namely limited tooling, which means you're constrained to using source-to-image and Buildah.
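To make this concrete, here is a rough sketch of what such an OpenShift BuildConfig looks like; the names, repository, and builder image are hypothetical, and the strategy could be `dockerStrategy` instead of `sourceStrategy` for a Dockerfile-based build:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app                                      # hypothetical name
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git    # hypothetical repository
  strategy:
    sourceStrategy:            # source-to-image; use dockerStrategy for a Dockerfile build
      from:
        kind: ImageStreamTag
        name: java:11          # the S2I builder image
  output:
    to:
      kind: DockerImage        # could also be an ImageStreamTag
      name: quay.io/example/my-app:latest
```

The three blocks map directly onto the what/how/where questions above: `source` is what to build, `strategy` is how, and `output` is where to push.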
We don't expose Buildah as such, but today in OpenShift builds, when you do a Dockerfile-based build, you are potentially using Buildah under the hood. And for those who don't know, source-to-image is a way for you to build images without having to put anything like a Dockerfile into your source code; it takes care of converting source code into an image using a specific framework that you have defined. But it's still limited to the fact that these are the two options you have with respect to building images. It's also a closed ecosystem at the moment: given that we started probably three or four years back, the ecosystem hasn't been as open as it could be. And it's inflexible, which is one of the key concerns we wanted to address, as we'll see in the upcoming slides. If I had to add another way of building images, it is definitely possible today, but it's not super easy to do for an admin or for a user. And of course, it works in OpenShift only, given that the API itself is an OpenShift-specific extension rather than one that any Kubernetes cluster can use. We wanted to take this to the next level: not only build something for OpenShift, but build something for Kubernetes, something we also happen to use for OpenShift, built in a way that any Kubernetes cluster would be able to use it, be it OpenShift or a non-OpenShift Kubernetes. So while we were doing that, we figured out that, hey, we started by solving this problem of building images on Kubernetes, because you eventually need to deploy images on Kubernetes, as we saw with the old school and the new school. We did solve the problem to a great extent; it works. But now we see in the community there is a lot of innovation happening around strategies for building images from source code, and we didn't want to ignore it at all.
We wanted to embrace it, and we wanted to start a project around it to ensure that if a user or an admin has an opinion on what the best way to build images on Kubernetes is, they should not be constrained in what they can use. For example, if a Kubernetes admin decides that, hey, I love Cloud Native Buildpacks, there should be a Kubernetes API to be able to use it. The same goes for Buildah, source-to-image, or a Dockerfile build. The idea is that we should provide an API for admins and users to build images on Kubernetes with well-known strategies, or, if an admin wants to define their own image build strategy, they should be able to do so. And we want to help build out the contract, which is vendor-neutral, which doesn't bless any first-class strategy out of the box. Rather, it says: hey, if your strategy is something that can be defined, we want to embrace it; we want to ensure you're able to run it on Kubernetes. So with that, we want to introduce Project Shipwright. It's a framework for building container images on Kubernetes, which means it doesn't have any first-class opinions about which build strategy you should be using. If you want to come and say, hey, I've invented my new build strategy, excellent: we provide a Kubernetes framework and an API for you to be able to run that on Kubernetes. So with Shipwright builds, you should be able to build container images on Kubernetes. It's a CRD-based project, so you should be able to install its CRDs and controllers on your cluster, and you should be good to go. You can use the tool of your choice. As I mentioned, it could be Buildah, source-to-image, Cloud Native Buildpacks, or Kaniko; I think one of our colleagues has even come up with a build strategy for ko, which is a nice build and release management tool by Google in the Knative space.
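A build strategy is itself just a custom resource that lists the container steps to run. As a rough sketch, assuming the v1alpha1 API shape (the API group changed across early releases, and the parameter-substitution variables shown here are illustrative), a Kaniko strategy might look like:

```yaml
apiVersion: shipwright.io/v1alpha1    # early releases used the build.dev/v1alpha1 group
kind: ClusterBuildStrategy
metadata:
  name: kaniko
spec:
  buildSteps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(build.dockerfile)       # values substituted from the Build CR
        - --context=/workspace/source
        - --destination=$(build.output.image)
```

The point of the contract is that any tool which can run as a sequence of containers can be wrapped this way.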
You should be able to customize them. As a quick example, if your admin wants to reduce the capabilities granted to your build system, because your admin isn't too comfortable with a very expanded set of capabilities, the admin should be able to do that in a very easy way; it shouldn't be harder than modifying a CR and ensuring the same update is available to all users on your cluster. And one of the most exciting things is that it's powered by the Tekton API under the hood. It's a non-leaky abstraction, which means you don't get to interact with Tekton directly, but under the hood we are using the popular open-source project Tekton to actually do all the heavy lifting. A quick look at the different APIs in the Shipwright world. You have a build strategy API; that's where you define how you want to get an image built. The build strategy API would be used by admins to define, hey, I want to offer a Kaniko build strategy on the cluster, or I want to offer a buildpacks build strategy on the cluster. This API is available both at the namespace scope and at the cluster scope, which means if you want to try something risky, a risky build strategy, you can totally do that within the confines of your namespace, and once you're happy with it, you can just make it a cluster-scoped CR and you're good. The build API is where you define what your build needs to look like. That's effectively the one on the left of my screen, where you see that I have defined that I want to build my source code, a Node.js app called foo, for example; here are my credentials for my source code; I want to build it using the buildpacks-v3 strategy, which is effectively a reference to another CR; and I want to send the output of my build, which is an image, to a repository on quay.io, and here are the credentials for it. It's a pretty straightforward, simple API. The goal during the API design phase was to ensure that it should be easy for anyone.
Even though there is YAML in there, you should be able to understand it: it should be a no-brainer to say, here's my source code, here's my output, here are my credentials, and this is the strategy I want to use to build it. Right now we have tried out and hosted a bunch of build strategies in our GitHub repository. We've tried out Buildah, Buildpacks, Kaniko, and source-to-image, and we're really looking forward to you contributing more. The goal is that we would love to maintain popular build strategies in the upstream community, which means the list is definitely not exhaustive the way you see it here. We would want to see more build strategies being contributed, so that they're available for consumption on a Kubernetes cluster. With that, let me jump into a demo to show you that things are actually working. Right, so I'm on an OpenShift cluster right now, and to be clear, this is just because I had an OpenShift cluster around. You could do the same thing on plain vanilla Kubernetes as well, not just the Red Hat distribution of Kubernetes, which is OpenShift. So if I go to my build strategies, you can see that I've got four CRs here: Buildah, Buildpacks v3, Kaniko, and source-to-image. So at this point, the admin has enabled these four build strategies. They are simple CRs, and we can go in and show how they look later on. As you can see, they're pretty diverse, both from a vendor perspective and from a mechanism-of-building-images perspective. Now I'm going to go and do a build. So let me grab some YAML; what's a demo if we don't show you YAML? Right, so there are a few things to note here: I'm in my namespace, and I'm going to create a new instance of the CRD, a new CR. I'm going to name it myNodeApp.
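The Build definition created here might look roughly like the following; this is a sketch assuming the v1alpha1 API shape, and the repository URL, secret name, and image tag are hypothetical stand-ins for what was used in the demo:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: my-node-app
spec:
  source:
    url: https://github.com/example/node-app    # hypothetical repo
    revision: main
  dockerfile: Dockerfile                        # needed for Dockerfile-based strategies
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
    image: quay.io/example/node-app:demo
    credentials:
      name: push-secret                         # Secret holding registry credentials
```

Note how the spec is exactly the source/strategy/output triple described above.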
This builds from a Dockerfile, this one specifically, and let me give it a nice image tag to push to. So I'm going to build the main branch of this GitHub repository. Awesome. So I'm executing a build using Buildah. As a quick recap, what we just did: we saw that we have some cluster build strategies on the cluster; we defined a build, which means we created a build definition to say, hey, this is what we're going to build, this is where we're going to push to, and this is how we're going to build it; and then we decided to execute the build. With that, I can quickly show you what's happening out there. You can see there's a new build pod being initialized. Nothing too exciting here, other than the fact that, yes, it's going to make a lot of things boring for you: it's going to take care of building the image for you and pushing it to quay.io in the most boring way possible, meaning you don't have to think about what happens behind the scenes when an image is being built, because Shipwright is taking care of that for you. So while that's happening: Adam, now that you're here, I think we missed you on a couple of things. We were discussing the old school and the new school, and while discussing that, I wanted to pick your brain, because you got an early head start on the new school of Kubernetes, even before you joined Red Hat. How was that experience, while the build happens? Well, that was a very challenging time when I was working at a startup. Like many startups, and this was back in, I think, 2014 or 2015, we had our application on Amazon. We were using EC2 instances, effectively virtual machines on Amazon's infrastructure. And we found that it was harder for us to scale there. We thought Kubernetes would be a better way to do things and would help not only our company scale as we grew, but also help us as developers get our applications into production.
And we had to dive into the complete ecosystem in order to really get it there. We had to learn Docker. We had to learn how to write a Dockerfile. We had to really inspect the intricacies of our application, because Kubernetes forces you to think about things that you might otherwise overlook, for example, memory allocation, how much memory our applications were using. It took us about six months to get from starting to write our container images to actually having our application running in a stable capacity in production, because we would consistently find, while trying to tune things, that one part of our application wouldn't talk to another part, or we thought we were giving things enough memory, but if a user interacted with the application in a certain way, memory would explode. On our EC2 instances we might have been covered by the fact that we had a lot of memory, and that kind of hid the fact that we were spiking. But on Kubernetes, if we didn't set sufficient memory limits, then the pod would crash. It would get an out-of-memory error, and the user would be left hanging. So even when we got into production and got everything talking to each other, we would still continually run into issues where some things would just crash because of the things that Kubernetes makes you think about. Right. So building an image locally and trying to deploy it wouldn't necessarily work on Kubernetes. Yeah, that was another challenge. Certainly, back then, we had a Python application, so in some sense we had nothing to compile; it was just our Python scripts. But at the same time, assembling it into an image meant codifying all the things that we had on our servers. And we found that some virtual machines were configured one way, and some were configured a different way.
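The memory behavior Adam describes is controlled by resource requests and limits on the pod spec; a minimal sketch (all names and values are hypothetical) of the kind of configuration that has to be tuned:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: quay.io/example/app:latest   # hypothetical image
          resources:
            requests:
              memory: "256Mi"   # what the scheduler reserves for the pod
            limits:
              memory: "512Mi"   # exceeding this gets the container OOM-killed
```

Unlike an oversized VM, a container that spikes past its limit is killed immediately, which is exactly the failure mode described here.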
And so there was a big challenge in bringing those things together, for all the different pieces like cron jobs, for example. If you just have an EC2 instance, you can say, okay, this one EC2 instance has the cron jobs that take care of various batch things. Well, on Kubernetes, you need that in a container image, and then you have to create a CronJob object to do it. So those were the many challenges we had in writing the Dockerfile so that we had everything there. It exposed things that frankly might have been missed, especially as a younger company; we had a bit of turnover, and things that were done weren't necessarily documented. So it revealed a lot to us. So it's definitely not a boring way to deal with deploying applications; there was a lot of involvement from various folks to ensure that you could actually take your source code and get it deployed on Kubernetes. Yes. Thank you for that. My pleasure. I think with that, we've done a walkthrough of how things were, how things are in OpenShift builds, and why we arrived at Shipwright. So let's quickly jump into the build that succeeded here. If you see here, we built an image, and I'm going to deploy it shortly, but first I'm going to show you that we can actually build it using different strategies. So I built my node app using the Buildah strategy. I should have named it properly, but okay. I built it using the Buildah strategy, and the build succeeded. Now I'm going to build using a different strategy that's also named with a B: Buildpacks. So I'm going to do something very simple, which is go here and modify the image tag, and say, hey, let's do the same build with the buildpacks-v3 strategy. And since you don't need a Dockerfile for this, I'll comment it out for now; you don't need a Dockerfile for doing a Buildpacks build.
So everything else remains the same. You specify your output, you specify your credentials to talk to the external registry, you specify the image that you want to push to, you specify your source and your revision, and then you have a strategy, which is buildpacks-v3. With that, I'm going to quickly do a save. And that's done. Now I'm going to go and create a build run out of this, which means I'm going to create an instance of a build execution right here. As you remember, the name of my build definition was my node app. I'm going to call this buildpacks-build as a generate name, and let it come up with its own name. And there you go, I'm going to say create. Let's see whether it takes off. It says pending. Pending is usually good, which means it's trying to pull images. So here you go. And I hope folks watching this are aware of what Buildpacks are. This is a build using Cloud Native Buildpacks. It's a project that lets you create a container image just with your code; you don't need to know the intricacies of Docker or how to write a Dockerfile to get a container image that you can then deploy to production. It's a very cool project, I think it's at buildpacks.io, and there are a lot of different companies providing buildpacks that can detect which kind of application you're running and, based on that, provide the right opinionated set of instructions to get your code to production. Right. And to bring in an older build strategy that we're all well aware of, source-to-image: it's kind of similar to that, in that your source code turns into an image. But like I said, they're different strategies; they handle building images in very different ways. And this project ensures that you can use all of those, be it source-to-image, be it Buildpacks, or be it Buildah.
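The BuildRun created a moment ago is deliberately tiny; sketched against the v1alpha1 API shape (the `generateName` prefix is the hypothetical one used in the demo), it just points back at the Build definition:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: BuildRun
metadata:
  generateName: buildpacks-build-   # Kubernetes appends a random suffix
spec:
  buildRef:
    name: my-node-app               # the Build defined earlier
```

Because each BuildRun is its own object, you can execute the same Build definition repeatedly and keep a record of every run.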
And while we were talking, the Buildpacks build completed. I'm going to quickly see if it shows up in my tags here. You can see the demo-buildpacks tag has shown up. I'm going to go ahead and do a quick deploy of this container image to see if it actually works. I just hope it works; let me cross my fingers. I'm going to do a few things here... sorry, a bit of copy-paste here. All good. And I'm going to say this is a Node.js application. And that's it. So let's see if this gets deployed, and after it gets deployed, you should see a URL serving what the source code is supposed to do. So with that, while this is deploying, I'm going to quickly show you another build strategy. And this is important, because as promised, this is not just going to be one or two build strategies; we've tried a few of them, and I'm going to quickly show you the Kaniko build strategy. We have the Kaniko build strategy on the cluster, so we're good with that, and we'll do the same exercise we did there. Let's use the Kaniko one for this, and we're going to call the image that's going to be pushed kaniko-demo. Let's build the same source code with a third build strategy. All right, let's do it. I'm going to create a build run, which is an instance of a build execution of your definition, and as we know, it's called my node app. Let's do a Kaniko build; I'll skip the generated name here. So Adam, you might have noticed we missed something there, and this is probably going to fail, and that's fine: a Kaniko build needs you to have your Dockerfile defined. Oh, yeah. So let's go and put that in there. And, well, say you don't want to keep around a build execution that you know is not going to work; let me just quickly go and delete this. It was going to fail shortly anyway, so, you know, say, hey, don't bother.
We're going to start a new build with the modified build definition. It's that easy. And I just want to call out that because this is all API-driven, some might be asking: is there an easier way for us to do this? Do I have to go in and edit YAML? And the truth of the matter is that in the future, hopefully you won't. One of the things that we are working on in Shipwright is a command line interface that we're going to call "shp". That will let you create build runs from your build objects. We also have on the roadmap the ability to cancel a build run while it is executing, so when you say "cancel my build", we will take care of gracefully terminating that build and making sure it cleans itself up. And other things, like getting logs out of your builds, are also on that roadmap. And there are actually a couple of projects which have started building tooling on top of these APIs, I think. If you want to build tooling like your own CLI, your own UI, or your own distribution of Kubernetes on top of this, you're free to do so; we're providing the APIs for you to be able to create your own experiences. So yeah, I think the image we just built using Buildpacks on Kubernetes with Shipwright has been deployed successfully, and we can see the running application. Awesome. Look at that. So yeah, with that... I just want to remind you we have about five minutes. Five minutes, and we've got a question in the Q&A. Awesome, let's take the questions. So Eduardo asks: will Builds v2 enhance support for config-change triggers? Right now with Shipwright, we don't have any event-driven mechanisms in it, but that is something we're actively exploring. One of the things that I believe Shoubhik probably mentioned earlier, I may not have been on the session, is that Shipwright is built on top of Tekton. And one of the things that we want to take advantage of is Tekton Triggers.
And that is a more general-purpose way of triggering events in the Tekton universe. So things like firing a build run after your build object has been changed are certainly being considered, and we are starting to work more closely with the Tekton community to make things like that happen. Do you want to walk through the future while you're at it? Yes. So, as a young project, we are absolutely looking for new contributors to come and help out. As I mentioned earlier, the command line interface is one of the things that we are working on, and in fact it's already been bootstrapped. As Shoubhik was showing you, you can install via OperatorHub on OpenShift right now; we have more enhancements coming to that, and the community is also looking to add a Helm chart option for installation. We have a pretty bare documentation website; we would love to have folks contribute to that, especially if you have graphic design skills, web design skills, or any technical writing skills, we would love to have you join us. And finally, as I alluded to earlier in the Q&A, event-driven builds are something we are looking into, such as config-change triggers. Some ideas that we have informally discussed include driving builds based off changes to a base image, which is something that exists in OpenShift but that we don't have in either Shipwright or Tekton just yet, or even some more customized, more open means of firing off a build. Right, and if I could add a more general statement to that: every awesome feature of OpenShift builds that you've loved using in OpenShift will be upstreamed, in a different way or in the same way, into Shipwright builds. And we'll ensure that runs on Kubernetes, of course, as a first-class thing. Yes.