All right, I'm Lydia Schultz for today's webinar. Thanks for joining us. We're here for our live webinar today, "Making Your App Soar Without a Container Manifest." I'm going to read our code of conduct, and then I'll hand over to Jason Smith, customer engineer at Google. A few housekeeping items before we get started. During the webinar, you're not able to speak, but there's a Q&A chat box on the right-hand side of your screen. Drop your questions there; we'll get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page. They are also available via the registration link you used to get in today, and the recording will also be on our online programs playlist on the CNCF YouTube channel. With that, I will hand it over to Jason to kick off the presentation.

Thank you very much, Lydia, and thank you everybody for joining. I see we have quite a full house here, and I'm on the Pacific coast right now, so thank you for waking up early for this. We can have a great start to the day learning a little bit more about ways to make containers without manifest files. So as mentioned, my name is Jason Smith. I go by Jay as well — I will respond to either; if you call me one or the other, I will raise my head and acknowledge you. You can follow me there on Twitter. And there's my dog, which you may or may not hear during this session, so heads up on that one. On today's agenda, we're going to define environments. One of my big things is that I find it very beneficial to recap where we've been and set the stage.
Then we'll talk a little bit about buildpacks, Tekton, a quick demo, and then we'll jump into a Q&A. So I started computing seriously — running my own web apps, running my own PCs — back in, I want to say, around 2004, 2005. And this checklist might look very familiar to a lot of people here. If I wanted to build my own little PHP app to, I don't know, let's say manage my college schedule or something, I had to go through this whole mess of things. I had to first find an old computer — and usually you'd have to go to, like, Goodwill, or hit up some yard sales, get a bunch of components, put it together. Then I would install Linux, usually Ubuntu — but sometimes I would mix it up; I was an early adopter of Ubuntu (which I pronounce multiple ways) Linux. Install Apache, MySQL, PHP; install the various PHP plugins for things like SSL and whatnot; configure Apache; create virtual hosts; create new users; create folders; open firewalls and ports. And then of course, because it was in my house, I had to go and find a way to port forward my network, or use DynDNS or something. And this was all before I even started writing an application. This is literally everything I had to do before I started writing an application. I'm sure we all remember having to do stuff like this, and a lot of it was just setting up infrastructure: like I mentioned, finding an old computer, making sure the ports are forwarded, making sure I have the right hard drive, that I've got things mapped properly, that I'm doing the right security — all of this stuff. Then virtual machines came around, and I felt like my world changed when I started using hosted virtual machines, whether it was a VPS system or something on the cloud or whatever it was in those days. My world changed. It was just like: oh, snap, I don't have to mess with hardware anymore. This is great. I can just install an operating system, configure it, and boom, I'm ready to go.
And then there were a lot of images out there already, and quite a lot of tools out there that existed. Most of these open source tools you've seen before; I'm sure many of us have used them. I could declare how to stand up my VMs, what I want on there, all that stuff — kind of write code — but I'm still configuring the operating system, and patching, and whatnot. Then comes this little thing called containers, which further abstracts away the operating system, and now we're just talking about my application and my dependencies. We all know this; we all use Docker containers; we're all familiar with it. But then comes the question of, well, how do we manage all of it? OK, well, Kubernetes makes the most sense: if I Dockerize all my applications, I can use Kubernetes. It will abstract away all of the infrastructure, and I can declare my workloads using simple YAML. Good to go. Of course, there still comes this little problem of Dockerfiles. Now, I would argue that Dockerfiles are much simpler than a lot of your cookbooks or VM manifests or anything like that. But still, if you're a developer, maintaining a large Dockerfile is not fun. You still have to declare stuff, and you still kind of want to just focus on code. Like, hey, can something just build the file for me? Can I just code — maybe have a little bit of stuff in there that declares what I need — and then all of a sudden it compiles? Well, we do have such a thing, and it is called buildpacks. A buildpack is a set of executables that inspects your app source and creates a plan to build and run your application. In short: build containers without a Dockerfile. It originated at Heroku and Cloud Foundry, and it's now a CNCF project. And it's also great for CI/CD, because, well, what's the point of building containers if I'm not going to actually ship them somehow?
Do I still have to run docker push or something like that in order to get the image up there, or is there a way to automate that? We'll talk a little bit more about that later. So buildpacks allow developers to take advantage of the benefits of containers without needing to understand them. I write the source code, buildpacks turn it into the container, and my CI/CD pipeline will go ahead and deploy it. Why this is important is, again, letting developers focus on the things that are important to them. With all the new libraries that come out, all the new dependencies, trying to just keep up with what's standard in modern technology can sometimes seem very intimidating, quite frankly. What do I learn? What's going to be deprecated? What's going to be important? How do I actually remain competitive? How do I actually build something that my users are going to like? On top of that, I need to keep up to date with how to make sure my code deploys properly into a container, how to make sure these containers work, how to secure them — all that stuff. The point of buildpacks is to remove that friction from the developer. And as I mentioned, it's an incubating project. It was actually a sandbox project — I don't have the exact timeline, but I want to say until about a year, a year and a half ago — but now it has moved up to incubation. It is graduating; it is growing up. And it's kind of a container within a container: buildpacks is actually a container that builds containers without Dockerfiles. It seems a little odd, but essentially what you're doing is deploying a special builder container that will take the code you give it, examine it, and then say, hey, this is what I think the image should look like. Let's look at a few definitions. The builder: that's the part of the buildpacks machinery that will actually build the container, and it is in and of itself an OCI (or Docker) container.
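To make the workflow above concrete, here is a minimal sketch using the `pack` CLI from the Buildpacks project. The app name is a placeholder, and the Paketo builder image is just one commonly used example — treat both as assumptions rather than the only way to do this:

```shell
# Build an OCI image straight from source — no Dockerfile needed.
# `pack` runs the builder container, which detects the language,
# picks matching buildpacks, and produces a runnable image.
pack build my-app --builder paketobuildpacks/builder-jammy-base

# The result is a normal container image; run it like any other.
docker run --rm -p 8080:8080 my-app
```

The point is that the only input the developer supplies is the source directory — the builder decides everything a Dockerfile would normally declare.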
I think a lot of people are trying to move away from the term "Docker container" — not necessarily because there's anything wrong with the term, but because it's more than just that one company; it's about separating the idea from the company. It's like how you might use the term Kleenex for tissue: Kleenex is a company, tissue is the actual thing. So a lot of people will say OCI image — just to set that clear, because you might see both here. So a builder is a composition of buildpack groups and lifecycle binaries. What does that mean? We'll see later. It's basically an ordered set of different buildpacks used to create a complete application image. As you know, your application is not going to be just one simple file, one simple Dockerfile, and hey, I've got a running application — unless it's a hello-world application or something to that effect. You're going to have something more complex. And of course, you'll have a full platform that provides users the information. So basically, what does this mean? I write my code, I deploy the code, and the builder will say: OK, this looks like Golang code — let's go ahead and make this a Golang container. Or: this looks like code that works this way, this is code that works that way. It supports pretty much every language. And then, in the spirit of open source, various companies have taken the primitives of buildpacks and built on top of them so they work better with their cloud provider. Google, Cloud Foundry, VMware, et cetera — they all have different ones. Now, I mentioned buildpack groups. When you create a Dockerfile, you'll notice you have different steps: FROM alpine, then copy this file, then create this entry point. That's essentially what the buildpack groups are — each group is kind of its own step, if you will.
And you group all of these different buildpacks together to get the final product. So it's basically: hey, how do I build this language? It says, OK, I can tell that this is Golang. Or: I can tell that this is Python; in order to make a Python application, we're going to need to put in this, this, this, and this, and the code is in this file. So it's doing a lot of reading and guesswork, if you will — but it's better than just guesswork, because it is more intelligent than that. It gets things right. And then, of course, you have the stack, which is the base image — your Alpine, your Ubuntu, whatnot. Each buildpack supports a set of stacks. Unfortunately — and this could be dated; this might be an older slide — I don't believe you can do things like apt-get installs at build time yet, though that might have changed by now. But ultimately, you have the stack at the base level: OK, this is an Ubuntu stack that can support PHP, or this can support Python 3.9, or this can support Golang with XYZ libraries, so on and so forth. So basically, you define your stack, and that becomes the base image. Then on top of that, the buildpacks build the application using the code, making the guesses — the educated guesses, the very educated guesses — and building out the entire application. And as I mentioned, buildpacks run in a container called a builder, which is pretty straightforward: you don't have to install anything special or add anything special to your system to make it work, which is great. That's the benefit for me, and I've really been liking buildpacks lately, because they just make it so much easier to deploy code. I think a lot of times we're all trying to find different ways to containerize things. I'm sure if I asked everybody here, what's the number one way you containerize things?
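The relationship between stack, buildpack groups, and lifecycle that was just described is what a `builder.toml` wires together when you assemble your own builder. This is a hypothetical sketch — the image names and buildpack IDs are made up for illustration, and the full schema lives in the `pack` documentation:

```toml
# Hypothetical builder.toml: a builder = buildpacks + ordered groups + a stack

# The buildpacks available inside this builder
[[buildpacks]]
  uri = "docker://example/go-buildpack:1.0"   # placeholder buildpack image

# Detection order: groups are tried until one "passes detect"
[[order]]
  [[order.group]]
    id = "example/go-buildpack"
    version = "1.0"

# The stack: build-time and run-time base images
[stack]
  id = "io.buildpacks.stacks.jammy"
  build-image = "example/build:jammy"          # placeholder base images
  run-image = "example/run:jammy"
```

So the stack supplies the OS layer, the groups supply the "steps," and the lifecycle binaries orchestrate detect and build — which is exactly the division of labor described above.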
A lot of it will probably be either docker build, or something included in your CI/CD pipeline, or Kaniko — but it all still requires you to have a Dockerfile, usually. The nice thing here is that this eliminates that: you just speak in the language of code, something you're familiar with. Now, I'm a huge fan of Tekton — and so is my dog, as you can see there. A lot of you might be listening and saying, well, that's great, but where does that builder container live? How does it deploy? Does it live in my Kubernetes cluster? Am I just going to have to run it as its own container on my system? How do I actually integrate it into my workflow? Well, Tekton is the way I would do it, and I always talk about Tekton because I love Tekton — it's so great. It is governed by the CD Foundation; if anybody is not familiar, I would call it a sister foundation to the Cloud Native Computing Foundation, as they're both under the Linux Foundation. Tekton gives you Kubernetes-native components: reproducible, composable event triggers for automating pipelines. We also have a catalog for reusing tasks and pipelines. For very simple, reusable tasks that you will do multiple times — like push to my Git repository, or containerize this application — you can just download a lot of these tasks and pipelines from the catalog to do that automatically, probably tweaking the things you need specifically, like the URL for your Git repository or a security token. But for the most part, it's like 90% of the way there, and that way you don't have to redo a lot of stuff. And it is integrated with other projects such as Jenkins X, Knative, and more. That's part of a slightly complicated history: Tekton actually originated from Knative, but then it was just like, hey, this is such a good product, it should be a standalone thing.
Why should it be a CI/CD solution for one specific environment? It should be a CI/CD solution for everything. So let's define a few basic things within Tekton. There's a pipeline, and that's similar to what you might think of as a standard CI/CD pipeline: essentially the set of actions that take place, that take your code from code to deployed, doing a bunch of stuff along the way — security checks, tests, approvals, all of that. A step is an operation in the workflow, such as running pytest on a Python application — the individual things it's doing. A task is essentially a collection of steps. And then, of course, the pipeline is a collection of tasks. Triggers are the component for eventing. Do I want to have to go in every single time somebody does a git commit, or a branch is merged, and manually trigger the build or the deploy of the code? That doesn't really seem like a great idea — we're trying to be more continuous, especially when we're trying to be more agile developers. So instead we have an event listener, which is essentially a CRD that will listen for a specific JSON payload. The good thing is that for a lot of JSON payloads, there are defaults already there: we know what a GitLab or GitHub or whatever payload looks like, so we already have the schema — you just tell us where it's coming from, and we've got it. Or you can create your own custom one if you want to trigger in your own way; that's fine. A trigger template is the resource the trigger instantiates, and the binding is essentially what binds the trigger to the payload. To give you a good idea, let me go ahead and see if I can expand this a little bit — this looks like an eye chart, almost.
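The event listener, binding, and template pieces just described fit together roughly like this. A hypothetical sketch — the names (`git-push-listener`, `git-push-binding`, `build-deploy-template`, the service account) are placeholders, though the `triggers.tekton.dev` kinds and fields follow the Tekton Triggers API:

```yaml
# Hypothetical EventListener: receives a Git push webhook payload,
# the binding extracts fields from the JSON, the template turns
# them into a PipelineRun.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: git-push-listener
spec:
  serviceAccountName: tekton-triggers-sa   # assumed pre-created SA
  triggers:
    - name: on-push
      bindings:
        - ref: git-push-binding            # TriggerBinding: payload -> params
      template:
        ref: build-deploy-template         # TriggerTemplate: params -> PipelineRun
```

The listener runs as a service in the cluster; pointing your Git host's webhook at it is what closes the loop from "commit pushed" to "pipeline runs."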
I do apologize for that. In short, this is essentially what a task with steps looks like. If you look at it, you have a few parameters that I set: OK, here's my Dockerfile, and this is where my source code lives — the source path. And this is where Kaniko comes in, in this example; I've talked about Kaniko in the past as a way to build containers without ever having to pull anything down to your own machine. It builds containers in the cloud, in your Kubernetes cluster. Then your resources — those are kind of like the variables you'll put in: OK, this is the image I want created, this is a git repo. And then, of course, the actual steps that are executed, such as, you know, run pytest and do the Kaniko build. You take a task, then you line up a bunch of tasks, and now you have a pipeline. So as you can see here, I took the task from the previous slide — the build task, which is actually building the container, taking my code, containerizing it, and pushing it to a registry — and then, of course, the deploy process, which takes that image and pushes it to wherever I'm hosting the application, whatever Kubernetes cluster I'm hosting it in. And of course, I want to bring up the fact that there is the catalog. The benefit of the catalog, as I mentioned earlier, is that there are a lot of pipelines and tasks there that are very common, that you probably will use. All you really do is plug in the 10% of the variables that are relevant to you, such as your git repo or what you want to name your container — whereas the actual steps that do the thing are already created. So why bother reinventing the wheel when you don't need to? It speeds things up for development.
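Since the slide itself was hard to read, here is a hypothetical reconstruction of a Task along the lines described — parameters at the top, then a pytest step and a Kaniko build step. The task name, parameter names, and image tags are assumptions for illustration; the overall shape follows the Tekton `Task` API:

```yaml
# Hypothetical Task: run the tests, then build the image with Kaniko
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-and-build
spec:
  params:
    - name: pathToDockerFile      # where the Dockerfile lives
      default: ./Dockerfile
    - name: pathToContext         # the build context (source path)
      default: .
  steps:
    - name: pytest                # step 1: run the test suite
      image: python:3.11
      script: |
        pip install -r requirements.txt pytest
        pytest
    - name: kaniko-build          # step 2: build the container in-cluster
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(params.pathToDockerFile)
        - --context=$(params.pathToContext)
```

A pipeline is then just a list of tasks like this one, wired together with `runAfter` ordering and shared workspaces.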
And of course, the nice thing here is that the way Tekton works, a lot of the tasks are reusable, and they're kind of plug-and-play too, so I can add new tasks into a pipeline, or new pipelines. One way to think about Tekton that I always like to say is that it's not a traditional, simple CI/CD platform — it's more like building blocks so that you can build a great CI/CD platform. It's not something where I just click, click, click, it's deployed, it's running on my server, and I can do all the great stuff with it. Some assembly required — but it allows you to do a lot more than what we've been able to do historically, largely because you're getting those primitives, that access to the lower-level components of Kubernetes, which lets you build these individual tasks and pipelines to do some pretty cool stuff. Speaking of the catalog: did you know that Tekton has some tasks related to buildpacks? When the slides are available, I'm going to include a slide at the end with links to a bunch of different resources that give you the groundwork to use buildpacks with Tekton — because we want a full solution. If I say, hey, I made your life so much easier, look at this, you can containerize your code however you want — well, that's great, the code is containerized, but how do I actually use it? How do I consume it? How do I get it to do something? We all use CI/CD pipelines — I think most of us are using them, or at least thinking about using them. So by giving you these Tekton tasks, it's that much easier to use buildpacks in your CI/CD pipeline. And in fact, I will go ahead and show you real quick how we do this. Give me one second; I need to move some stuff around here. While I'm doing that, I'm going to look and see if there's a question here. Yeah.
So essentially you have to choose — that's correct — you essentially have to choose the proper stack. Now, granted, you are able to customize stacks. In the same way that there are default Docker containers out there that you can download, or boilerplate code for different pieces of software, et cetera, there are a lot of quote-unquote boilerplate stacks that you can use and build on top of if you need to. If you have a platform engineer on the team, that person can be in charge of it and just pass things along to the developers. But yes, you do need to use a stack. The nice thing is you can build your own stacks — you're not stuck with whatever stacks are provided to you out of the box. Of course, that is extra lift, and I'm not going to say that it's not, but it is something that you can do. And let me get back to sharing here. All right. That's kind of the nice thing about everything CNCF, if I can take a minute here — everything Kubernetes. I've been a huge Linux and open source fan for who knows how long — I think I lost count; since my college days, whenever that was. And the one thing I've always liked is: hey, if there's not a solution there today, we'll give you the basic building blocks so you can build it yourself if you want to. And obviously that's not always the best idea if you're in a hurry, but just having that flexibility is, I think, what has allowed things like containers and Kubernetes and Tekton and whatnot to come into existence — you've had a lot of great people come together and say, hey, let's take this code and make it a little better; let's find a better way to do things. So ends my little aside on the benefits of open source. So let's go ahead and take a look here, so that you can follow along. And I'm not one for reinventing the wheel here.
We're going to go ahead and take a look at the buildpacks Tekton integration code. And like I said, you can easily find this link, but I'm going to put it in the slides later, to make it a little easier to find. All right. I'm using Google Cloud because I work for Google, so it's easy for me to access a Google Cloud environment — but this will work on any kind of Kubernetes cluster. These are open source; they're not Google products. These are actually CNCF and CD Foundation projects. So we pull in the repo, we go into our Tekton integration folder, and then we have a script. Hmm — I do not have a local Docker or registry setup here. That's interesting. OK, let me see if I can still find my YAML; bear with me one second. I did have a custom YAML, but I might have it in a different project. Oh — here, let's look at the script; apologies for that. We're going to just take a look at our sample run script, to give you an idea. Oh, see — so it actually will build the Tekton tasks for you right here, by downloading them from the repo, the CD catalog, and build the actual container. We go to the Tekton Hub... let me see if that's this, or — apologies in advance; I've been having a fun time with demos lately. Well, it's going to apply the build tasks — I think those are the buildpacks tasks, but that might be an older version; I guess we'll find out. In the meantime, if anybody has questions, feel free to ask. Let's see. Interesting. Let's do this. This isn't connected. We're not able to do polling, but feel free — has anybody here actually used buildpacks before this webinar? Just curious. Of course, my technology does not want to work today — I am sorry about that. Demos... let me see if I can find some interesting code here to, I guess, make up for the demo problem. Let's see here.
Let me expand this a little bit. See, I can install Tekton. So I guess we can just step through this. Apologies again. So: install Tekton and the dashboard. Super easy to do — it's just kubectl, and then you apply the file. And there is a GitHub repo link to the actual demo. The dashboard is a pretty cool tool if you want visualization. I wouldn't say it's newer — it's existed for a while, but I think it's a little more mature than it's been in the past; let's call it that. It gives you a nice UI to actually see all the tasks that are being run, but it's not a hard requirement. You install the actual buildpacks task — which, if we wanted to look at it... I'm not going to force you all to look at a wall of YAML. You install the special task for pipelines that will clone the Git repo — and this should be a smaller YAML, so we'll take a look at it. Oh, I lied — it is a large YAML. But as you can see, it's a standard task for cloning a Git repo via Tekton. Let me just move this over here. You apply the pipeline — all very similar YAML; we've all seen creating resources with YAML. Define a persistent volume claim. For authorization, we create a secret — no big deal — for wherever we plan on hosting our Docker containers. Because remember, at the end of the day, it is building a Docker container. Just because you're not creating a Dockerfile doesn't mean you're not creating a Docker container — it is still creating a Docker container, and that container needs to live somewhere. Whether it's Docker Hub or some other container registry is fine, but you do need to give it the access authorization. Then we create a basic pipeline — that's pretty straightforward: you create some workspaces, and you list the different tasks and which workspaces they're a part of.
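The "super easy to do" install described above boils down to applying the release manifests. These URLs are the ones commonly documented on tekton.dev at the time of writing — check there for the current versions before relying on them:

```shell
# Install Tekton Pipelines (the core CRDs and controllers)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Optionally install the Dashboard for a UI over TaskRuns and PipelineRuns
kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml

# Verify the controllers came up
kubectl get pods -n tekton-pipelines
```

After that, Tasks, Pipelines, and PipelineRuns are just Kubernetes resources you `kubectl apply` like anything else.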
So we have a task for cloning the repo, we've got a task for using the buildpack, and then some parameters, which you can think of as variables. You apply the configuration, and then — remember when I mentioned triggers? Triggers automatically kick off a Tekton pipeline on events, such as: I pushed something to a specific git branch, or there's a specific tag or label, or something to that effect, and we want to react to that. However, there are also these resources called PipelineRuns, which essentially allow you to run the pipeline manually. So if I were to deploy this run.yml, it would just go ahead and trigger the build right away. And then you can list the pipeline runs to see the build, you can do some cleanup — it's pretty straightforward. Tekton and buildpacks are both open source. As I mentioned, Buildpacks is a CNCF project, it is open source, and I kind of like to think of it as a primitive. The reason I say that is, just like the earlier question we had about different stacks — because there are multiple stacks, or multiple ways to do things — different vendors will build on top of buildpacks or create their own custom stacks that work great in their environment. But buildpacks in and of itself — those primitives — are open source. On top of that, Tekton is open source too. Tekton is part of the CD Foundation, which I would call a sister foundation to the CNCF, as they're both under the Linux Foundation as the parent organization. Whereas the CNCF focuses on cloud native technologies, the CD Foundation is more specific to continuous delivery — which is important in the cloud native world, but not just a cloud native thing; there's still legacy CD technology and everything like that, hence having its own foundation. But yes, they are both fully open source, so you can contribute if you choose to.
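Putting the pieces just walked through together — clone task, buildpacks task, workspaces, and a manual PipelineRun — a hypothetical version of that pipeline might look like this. The resource names, repo URL, registry path, and PVC name are placeholders; `git-clone` and `buildpacks` are the names these tasks go by in the Tekton catalog, but verify the current parameter names there:

```yaml
# Hypothetical Pipeline: fetch the source, then build it with buildpacks
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-pipeline
spec:
  workspaces:
    - name: source-ws              # shared volume the tasks pass source through
  params:
    - name: repo-url
    - name: image
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone            # clone task from the catalog
      workspaces:
        - name: output
          workspace: source-ws
      params:
        - name: url
          value: $(params.repo-url)
    - name: build-image
      taskRef:
        name: buildpacks           # catalog task wrapping the buildpacks lifecycle
      runAfter: [fetch-repo]
      workspaces:
        - name: source
          workspace: source-ws
      params:
        - name: APP_IMAGE
          value: $(params.image)
        - name: BUILDER_IMAGE
          value: paketobuildpacks/builder-jammy-base   # assumed builder
---
# Manually kick it off with a PipelineRun — this is exactly what a
# trigger template would create for you on a webhook event.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-pipeline-run
spec:
  pipelineRef:
    name: buildpacks-pipeline
  params:
    - name: repo-url
      value: https://github.com/example/my-app        # placeholder repo
    - name: image
      value: registry.example.com/my-app:latest       # placeholder registry path
  workspaces:
    - name: source-ws
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc              # assumed pre-created PVC
```

Applying the PipelineRun starts the build immediately; wiring the same pipeline to an EventListener makes it fire on every push instead.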
In the spirit of open source, we can all use as many contributors as possible, so I would encourage it, if you have the cycles and the desire. What we're seeing with Tekton is a lot more companies actually adopting it. In terms of the primitives, as I mentioned, it's kind of a tool for building tools, in the same way that Kubernetes is kind of a tool for building platforms. It's not the end goal; it gives you the building blocks to build your platform, versus being just a point-and-click solution. Tekton's the same way. So there have been a lot of companies that have taken Tekton and are building open source tools — or proprietary tools, for that matter — using those open source APIs, those open standards of Tekton. That way there's a standard between different environments. You're not stuck going from an environment that uses its own very proprietary stack when you want to be cloud native, be in every environment, deploy on-prem and in the cloud — you don't want to have to learn two different things. So that's a nice benefit: it follows that open standard — that's what I'm looking for. I know Google has a solution, I know IBM has a solution, and I know there are a lot of other companies contributing to it. And as you can see, there's great documentation, and there's this whole list of how to do different things with buildpacks, like how to build an ARM app. I built a Kubernetes cluster on a series of Raspberry Pi 4s recently — well, "recently" as in like a year ago, but I guess in that sense it's still recent. So if you want to build ARM applications, there are setups for that. There's how to build a Windows app, which is great.
Well — if you build Windows containers; personal choice, but I know there are plenty of people who do that, so it's nice to have this option. You're not stuck with some weird third-party solution; you can use the same tooling you're using for your non-Windows machines. You can specify launch processes and project TOMLs — the project.toml is your project descriptor; obviously you're going to have a larger project. So there are a lot of cool things here, and buildpacks.io is where you'll want to go to read all the documentation. But ultimately, the end goal here is just to simplify the build process. I want my developers to be able to just write code, push it to the Git repo or wherever it is you're hosting it, and then know that at the end of the day everything's going to be built — they just have to focus on the code portion. You know, there have been a lot of solutions — Kaniko and whatnot — which will build containers in flight, in the actual Kubernetes cluster. But you still have to create a Dockerfile or something like that, which to some developers is a showstopper. I've seen some developers say — which is understandable — well, we don't know if we can get our entire team to learn the ins and outs of Dockerfiles. This makes life easier: you have buildpacks. But then, of course, how do I use the container? How do I make it one pipeline, one deployment? How do I take my code from code to running, tested, and ready for A/B testing or whatever? Tekton is that middle ground. You put buildpacks tasks into your Tekton pipeline, and you've just enabled your developers to build containerized applications without actually knowing a thing about containers. If there are any more questions, I will be happy to answer them.
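The project.toml descriptor mentioned above is the one piece of declaration a developer might still keep next to their code. A hypothetical sketch — the field names follow the project descriptor spec on buildpacks.io, but the values (and the `BP_GO_VERSION` variable, which is a Paketo-style convention) are assumptions:

```toml
# Hypothetical project.toml: the per-repo descriptor buildpacks can read
[project]
id = "my-app"
name = "My App"
version = "0.1.0"

[build]
exclude = ["tests/", "*.md"]    # keep these out of the build context

# Build-time environment variables individual buildpacks may honor
[[build.env]]
name = "BP_GO_VERSION"           # assumed: Paketo-style Go version pin
value = "1.21"
```

Even with a descriptor like this, it stays declarative and tiny compared to a Dockerfile — the buildpacks still decide how the image is actually assembled.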
And like I mentioned earlier, I will put in a slide that has a bunch of different resources for anybody who wants to learn more. You can also just hit me up on Twitter, LinkedIn, whatever — I love talking to people. But yeah, somebody introduced me to buildpacks a while ago, and I've been having fun playing with them since. I don't mind building a Dockerfile, but it is nice to not have to think about it. Yes — so, yes and no to the question about whether Tekton can also act as an Argo CD or Flux alternative. Yes: you can use event triggers to trigger a pipeline run because something was pushed to a repo. However, I have actually seen people use Argo and Tekton together to build a pipeline: the integration stuff — the CI — is done on the Tekton side, but because of specific use cases, let's say canary analysis or whatnot, people will use Flux or Argo CD to do the actual continuous deployment, the actual CD portion. So you can do the entire CI/CD portion with Tekton, or you can piece it together, using Tekton with Argo CD or Flux or some other tooling, or integrate it into Jenkins or whatever existing pipelines you have. There's no right or wrong way to use it. Any other questions that I may be able to answer? Anyone else? Okay, there we go. Have you seen Tekton being used for non-CI/CD purposes? I personally haven't. I have heard that such uses exist, but I personally have not tried it or seen it used myself in a project. Though I have heard of people using it for other things — because, again, it's not a tool so much as it is — what's the word I'm looking for — the building blocks to build a tool. So in theory you could use it to automate a bunch of different things if you chose to, though obviously CI/CD is the most common use case. I would call the other ones off-label, if you will.
So thank you, Gabriel. Yes. All right — well, if we have no more questions, I think you know where to find Jason if you need to follow up. Thank you, everyone, for joining us on another live webinar. It was great, and be sure to join us for the rest of our online programs this week and next — check them out on CNCF.io. Thank you so much, Jason. Take care. We'll see y'all next time.