Hello, everyone. Welcome, welcome. Thanks for coming to our talk. This is Choose Your Own Adventure: The Treacherous Trek to Development. I'm Whitney. This is Victor. We have a very full 35 minutes for you, so we're not going to introduce ourselves further. If you care about who we are, there are ways to find us. The focus of our time is to get our hero, our source code that's running on a developer's laptop, all the way through to a development environment where it's running, where it's connected to backing services, and where it can continue to be developed on Kubernetes. We're going to do that using all CNCF projects. But here's the thing: we need your help. Every step of the way, there's more than one CNCF project that can do the job. So we want you to vote, and whatever you choose is what Victor is going to build into the live demo as we go. It's an ambitious talk. So I want to encourage you: don't vote for the technology you know. This is not a popularity contest. Vote for what you're interested in seeing demoed. "What is most likely to make me fail?" That's exactly what I was going to say. Yes, let's cause Victor some pain. And there's one more thing I want to say: out of all the projects you see, one of them is not a CNCF project. So don't choose that one, or we will all die. Okay, let's get started. Yeah. How does it work? Here we go. All right. You should all be able to see what everything is, so listen to what things are. The QR code is still up there if you don't have the voting app, so make sure you get in there. Our first choice has loaded. Okay, come on, y'all. So our first choice is Lima. Lima is a VM that is optimized to run containerd on a Mac, and Lima is what underpins Rancher Desktop.
containerd has a CLI called nerdctl, and you can use nerdctl like your regular Docker CLI: you use the Docker commands you might be familiar with, you supply it with a Dockerfile, and under the hood it uses BuildKit. We're talking about it at the Lima level because Lima is a CNCF project, as is containerd. Next up we have Cloud Native Buildpacks. Cloud Native Buildpacks take application source code as input and output an OCI-compliant container image. It does that by spinning up its own application called a builder. If you're an ops person, you can put your company's best practices into that builder. If you're a developer, you can use the builder without having to care about those best practices at all. You do this with no Dockerfile, so you can easily make your container image. It's a reproducible build, and rebuilds are fast because only changed layers are rebuilt. It also has a really good rebase experience: you can rebase quickly. And finally we have Carvel kbld. Now, kbld doesn't build your container image itself; it can use BuildKit, it can use Cloud Native Buildpacks, it can use other build technologies. What it does is this: you feed it your application configuration, it reads the image keys, and it knows where your application source code is. Then, when you run the kbld command, it takes your source code, builds the application image, optionally pushes it to a registry, and then enters the tagged image into your configuration for you. So you can be sure that the image you just built is the one that's in your configuration now. And so now, y'all, please vote for which technology you want to see. As if they were waiting for you to say that. Okay. So Buildpacks it is. Wait, was that enough time? Give them a countdown. I'm not giving them anything. Be fast. It's not my problem. Not my problem. It's Buildpacks.
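To give a flavor of what a Buildpacks build looks like, a pack invocation is typically a one-liner. This is a sketch; the image name and builder below are illustrative, not the exact ones from the demo.

```shell
# Build an OCI image straight from source: no Dockerfile needed.
# The image tag and builder are hypothetical examples.
pack build registry.example.com/demo/app:v0.0.1 \
  --builder paketobuildpacks/builder-jammy-base \
  --path .

# Rebuilds are fast because only layers whose inputs changed are rebuilt.
```

The `--builder` flag is where an ops team's opinionated builder would be plugged in, so developers never touch those details.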
And we're going to demonstrate Buildpacks using the pack CLI. Yeah. So you continue, and I will be doing something with Buildpacks. Okay. So now you can see our little buddy is not a laptop anymore; our hero is now a container image, assuming Victor gets this step right. It's a big assumption. Next up, we need to store our container image in a registry. That container image, once made, is going to live on a server, and that server can serve the container image so it can be transported around. Our first choice is Docker Hub. Now, some of you might be saying, "This is it. This is the one that's not a CNCF project." I know. But the thing is, Docker Hub is underpinned by Distribution, and Distribution is a CNCF project. Distribution is a toolkit that you can use to build container image registries. The GitHub container registry uses it, the GitLab container registry uses it, DigitalOcean's registry uses it. And another project that uses it is our next choice, which is Harbor. Harbor is a container image registry with a lot of additional features. You can scan your images to make sure they're free of vulnerabilities. You can sign your images as trusted. You can replicate your registries, which is good for replicating across geographical regions, or you might want to replicate to avoid Docker Hub rate limits. You can set image retention policies to make sure that images go away after a certain amount of time; they'll be garbage collected for you. And there's one I'm forgetting right now because I'm nervous... oh, access policy. You can be careful about who accesses your image registry; you can do that within Harbor too. And then our third choice is Dragonfly. Now, Dragonfly is not an image registry per se. Dragonfly is a peer-to-peer network system that connects all the nodes in your cluster with the goal of fast image sharing.
This could be for container images or just for large files in general. Part of what makes Dragonfly work well is that it has a sub-project called Nydus. Nydus is a different file format for storing container images in, so it's not OCI (OCI stands for Open Container Initiative). Nydus does two things. With Nydus, you can store your container image in a lot of different pieces, so with that peer-to-peer network system, when the image is stored across a lot of different nodes, you can use a lot of bandwidth to pull it all at once. And the other cool thing Nydus does is it knows which pieces are needed to start the container, pulls those first, and gets the container started. So please make your choice now. Okay. Well, while you're choosing: the previous one was easy. Buildpacks, all you do is execute pack: "I want to build, this is the tag, figure it out based on the source code." You can do more, but that's all you really need to know initially, right? So I built it, and there's the image. You made it very easy with the choices. It could be harder, let me tell you. And the next choice is Harbor. You're really making it easy for me. No, wait, wait. Did you really think that the hardest one would be Harbor? Was that the best choice? Okay. He wants to be punished. Harbor it is. All right. The next step is to define our application. Now, I'm a relatively new learner, and it took me a moment to understand that defining an application is the same thing as authoring manifests is the same thing as writing application configuration. So for other new learners in the room, these all mean the same thing, more or less. There are a lot of things you need to think about at this level. For example, you're going to want to consider your container image updating strategy and how you want to define that.
You're going to want to consider application-specific configuration like environment variables or ports. You're going to want to consider your infrastructure configuration, so that's stuff related to backing services, and also whatever environment your application is eventually going to run in. And of course, you're going to want to consider Kubernetes object configuration. So there's a lot to think about here. And with that, I'll get to the technology choices. The first one is Kustomize. Kustomize is natively built into the kubectl CLI that you use for Kubernetes, with the apply -k command. Kustomize uses a patching strategy. A common example: if you need to deploy something across, let's say, three different environments, you would use the same base manifests for every single environment, and then for the smaller changes that differ per environment, you would make small files called patches. Those patches can be overlaid onto your base manifests to produce the YAML that's going to get deployed into an environment. So that's Kustomize, and that's a patching strategy. Next up is Helm. Helm is a package manager for Kubernetes, and Helm uses a templating strategy. With templating, for the values in your configuration that you think are likely to change between deploys, you use placeholders, and then you define those values in a separate file called, in Helm's case, values.yaml. So with Helm, your application configuration is defined with templates, which are actually written with Go templating, so they're configurable, they're dynamic. You combine your templates with your values file, wrap it all into a Helm chart, and then that Helm chart can be shared. Next up, we have Carvel ytt. ytt stands for YAML Templating Tool. Carvel ytt is cool because it takes the patching strategy of Kustomize plus the templating strategy of Helm and combines them into one tool.
It adds those dynamic, configurable capabilities, so you can add conditionals, that sort of thing. It does that with a language called Starlark, which is a Python dialect, and it adds that logic in through YAML comments. So with Carvel ytt, it stays YAML all the time: YAML in, YAML out, YAML all the way through. It's great for Kubernetes, but it doesn't have to be used for Kubernetes. Because it can be processed and validated as YAML the whole time, it knows YAML structures; you don't need to worry about manual escaping or how many times you pressed tab, or any of that jazz that we all love to care about. And last up, we have cdk8s. cdk8s stands for Cloud Development Kit for Kubernetes. With cdk8s, you can write Kubernetes configuration using the programming language of your choice, assuming that language is JavaScript, TypeScript, Java, Python, or Go, with support for more languages coming. You consume cdk8s like you would any programming library, and you can write your application configuration in your favorite IDE in your favorite language. It also provides the developer with simplified Kubernetes abstractions that only expose the values that need to be set. For example, you don't need to worry about labels and selectors; those can be auto-generated for you, and you never have to think about that sort of thing as a developer. But as an ops person, you can wrap your company's best practices into your own code library. Once that configuration is written, it gets turned into pure Kubernetes YAML with a synth command, a synthesis stage, and during that synthesis stage you can apply policy. And then, like I just said, it finishes as Kubernetes YAML that can be applied to any cluster. That's cdk8s. Oh, it's voting time. Yeah, go, as if you didn't already.
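As a sketch of the patching strategy Whitney described for Kustomize, a typical layout has one base plus one small overlay per environment. The file names here are illustrative:

```shell
# Illustrative layout (not from the demo):
#   base/deployment.yaml            manifest shared by all environments
#   base/kustomization.yaml         lists deployment.yaml under "resources:"
#   overlays/dev/kustomization.yaml references ../../base and a patch file
#   overlays/dev/replicas-patch.yaml  e.g. bumps spec.replicas just for dev

# Render the merged YAML to inspect what would be deployed:
kubectl kustomize overlays/dev

# Kustomize is built into kubectl, so applying an overlay is one command:
kubectl apply -k overlays/dev
```

Each environment's overlay stays tiny because everything common lives once, in the base.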
So what was the previous step? Harbor, yes. I just executed a script, which is just a wrapper around Helm. It installed Harbor with Helm, so Harbor is already in my cluster. I tagged the image to use the address of my registry, which is Harbor, pushed it there, and that's it. Still easy. And now I'm going to do cdk8s. That's my favorite. That's cool. Well, Whitney is telling you what's next. What's next? That's why we have slides. Yes. So now our little friend is YAML, and they have a reference to the image registry and a reference to the image. Next up, we want to add a backing service; in our example, we're going to add a database. We have two completely different choices for how to do this. One is to use Helm to install PostgreSQL, which will run within the Kubernetes cluster. Helm is the de facto package manager for Kubernetes; in fact, Victor just used it to install Harbor. So it's a great way to install third-party apps, and I'm not going to talk more about it because I already went on about Helm a bit. Next up, we have Crossplane. With Crossplane, you can take any API and interact with it through Kubernetes; you can bring it into Kubernetes as a custom resource, any API. This is most commonly used to work with cloud provider offerings, so if you choose Crossplane, we're going to implement a Google Cloud SQL database with it. Part of what's cool about being able to interact with things through Kubernetes is that you can use the Kubernetes control loop to make sure the state of your external resource, in our case a database, is the same as the state we defined in Crossplane. But you can also use the entire Kubernetes ecosystem with your external resources now. You could, for example, use GitOps to define the state of your external database in Git and make sure the two stay in sync, with Crossplane in the middle.
And finally, if it's a Kubernetes resource, it can be packaged up nicely with the rest of our application. Nice and tidy. The last thing I want to say about Crossplane: one thing that's cool about it is you can expose only the knobs the developers need to know about. Your ops people can deal with the complexity of creating the external resources, then hide that complexity from the developers and give them a way to access the external resources they need on their own, through a simplified interface. So what have you done, Victor? How's it going? Okay, still easy, still easy. So what I did is, actually, I already have, where is it? cdk8s, there we go: my Go file that defines some manifests. I'm not going through it because we don't have time. Anyway, everything is defined in Go. And then I executed the command, where is it? Here we go: cdk8s synth, which takes my code, in my case Go (it can be almost any code), and outputs YAML, and that YAML you can see here. This is the output of my Go code, which is just a typical Deployment, Service, whatnot, for an application. And I deployed it with kubectl. Now, if you say, "I don't know what kubectl is," you're in the wrong place. And here's my application saying, "This is a mighty application." And the next one is Crossplane, which is a great choice, because somebody from my company might be watching this talk later, and I would get fired otherwise, by the way. So, thank you for further helping the progress of my career. Okay. Oh, Victor. Okay, Whitney, go. So now our hero has a database. Well, once Victor's done with this step, they'll have a database that's defined in YAML and packaged up with the application. So we have a sidekick: that's a database cat. The next thing we need to do is add schema to this database to make it usable. We have two choices for this. The first one is SchemaHero.
With SchemaHero, you can declare the schema of your database in a declarative and language-agnostic way. You can also do it as a Kubernetes resource; it's Kubernetes-native. So again, you have a control loop, and you can use GitOps to make sure the schema stays in the state you want it to be in. Again, because it's in Kubernetes, you can package it up with the rest of your application in a tidy, tidy way. Also, as you migrate schema from one state to another, SchemaHero will show you those changes before they're applied. It generates the changes as DDL, which is data definition language, so you can see them and run policy on them before they're applied. But if you're working with more sensitive data, that might not be enough, and that's a case where you might want to use Liquibase. Now, Liquibase is not Kubernetes-specific, but it is a more mature technology. Liquibase is still declarative, but you can define how you want your data to change from one state to another to reach the final state you're trying to get to. So you have more control over how your data changes. That one was a fast step for me, but this is a long step for Victor. How are you doing over there? Not good at all. Is your career in jeopardy? No, no, you chose Crossplane. That's okay. Now we're in a safe place. So I'm going to take this. Here's what I'm doing: I already have Go code for my application, and it already includes all the variations of everything you could have chosen, and one of them is the Crossplane definitions. All I'm doing is really changing a value in a file so that my Go code reads it and does some silly if-else statement. And then I'm going to change one more thing: I'm going to say that the database is insecure, simply because nobody from DevSecOps is here listening to me.
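For a sense of what the Crossplane piece Whitney described can look like, a claim is applied like any other Kubernetes object. This is a hypothetical sketch: the `SQLClaim` kind, API group, and fields are made up for illustration, since real claims are defined by whichever composite APIs your platform team publishes.

```shell
# Hypothetical Crossplane claim; kind, group, and fields are illustrative.
# The point: the developer only sees a few knobs, while the composition
# behind it provisions the actual cloud database.
cat <<'EOF' | kubectl --namespace production apply --filename -
apiVersion: example.org/v1alpha1
kind: SQLClaim
metadata:
  name: my-db
spec:
  parameters:
    version: "14"     # engine version exposed to the developer
    size: small       # abstract sizing, mapped to real machine types by ops
EOF
```

Everything else, networking, users, connection secrets, is reconciled by the composition, which is the "hide the complexity" idea from the talk.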
And I'm going to execute the cdk8s synth command just as before, which will again transform the code into YAML manifests. And then I'm going out and executing kubectl, namespace, blah, blah, blah. That seems correct. And now the database is about to be created in Google Cloud. While that is coming, it will take five minutes, so Whitney will have to speak very slowly. And after those five minutes, I will apply the SchemaHero schema, which you chose. I'm really, really disappointed you didn't choose Liquibase, honestly disappointed, simply because I would have had the pleasure to say... We're dead. It wasn't politically correct, what I was about to say, so I'm not going to say anything. Yeah, the database is being created, and I will create the schema in a few minutes. Whitney, go. Before I jump to the next step, it's probably a good opportunity to say that in the hamburger menu of your voting app, you can see a link to a GitHub repo. With that repo, you can do what Victor's doing, but you can choose anything you want at any step of the way. We're throwing so much at you right now; with that GitHub repo, you have the ability to slow down, choose your own projects, and see how they all work together. All right, so, the final step. Let's recap what we have here. We have our application defined in YAML. It references the registry we chose. It references the image we built. We have our database defined, and it will also have a database schema once Victor's done. So now we need to run everything we have going on in a development environment where it can be developed on Kubernetes. What we don't want for our developer experience is for developers, every time they make a change, to have to rebuild a container image, redeploy that image, and go through this whole CI/CD process just to see whatever minor change they've made. That's slow and tedious, and it takes our developer out of the flow state.
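The SchemaHero step Victor just mentioned boils down to applying a Table resource. This sketch mirrors the single `videos` table he deploys later, but the database name and columns are assumptions, not his actual manifest.

```shell
# Hypothetical SchemaHero Table; database name and columns are illustrative.
# SchemaHero's operator diffs this declared state against the live database
# and generates the DDL needed to converge.
cat <<'EOF' | kubectl --namespace dev apply --filename -
apiVersion: schemas.schemahero.io/v1alpha4
kind: Table
metadata:
  name: videos
spec:
  database: my-db        # references a SchemaHero Database object
  name: videos
  schema:
    postgres:
      primaryKey: [id]
      columns:
        - name: id
          type: text
        - name: title
          type: text
EOF
```

Because this is just another Kubernetes object, it packages up and GitOpses alongside the rest of the application, exactly as Whitney described.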
So we have four options for this, and with all four of them, you can develop in a near real-time way. The first one we're going to talk about is Telepresence. With Telepresence, you install a daemon on your machine and a traffic manager in your remote cluster, and a tunnel opens up between the two when you run the telepresence connect command. Then it's as though your laptop is sitting in that remote Kubernetes cluster: you can interact with all the other services, you can interact with your running application. It's really cool. To do more developing than that, you run the telepresence intercept command, and then all the traffic that would be sent to your application in the remote cluster is instead sent to a copy of your application running on your local laptop. So now you can work with your application on your local laptop: you can use your IDE, your debugging tools, logs in your terminal, everything you're used to, but you're getting the exact same experience as though you were developing that application in the remote cluster. They like to call it a remote-to-local experience. Next up, we have DevSpace. With DevSpace, you use a declarative devspace.yaml that you probably do need a Kubernetes expert on your team to set up, but once that YAML exists, you can share it among your team members or across teams. Once you run the command devspace dev, then, assuming you have no environment right now, it will create your development environment entirely from scratch, as defined in that YAML. With DevSpace, you only have the client side; you don't need anything running in your cluster. And you again get that real-time experience, and you can use your local tools, your debugger, the IDE of your choice, that sort of thing. Next up, we have Devfile. Now, Devfile is a specification.
It's not an implementation. A lot of companies were solving this same problem of giving developers a good experience for developing on Kubernetes, and they decided between them: let's have a specification. Those companies include AWS, which I know uses it, and so do Red Hat and IntelliJ products. All of those have products built on top of this Devfile specification. If you choose it today, we'll implement it with a tool called odo, which uses the Devfile. And finally, we have Nocalhost, which is a click-click-click experience. It offers a UI for developers to define what they want their development environment to look like; that development environment gets spun up, and then they can interact with it. It has a lot of features similar to DevSpace, but without the declarative YAML; instead, there's a UI for interacting with your development environment. And those are our four development choices. Keep talking. The database is almost finished; you can see the database instance there. The database server is now running in Google Cloud, but there are some additional things about to happen, like creating a user, creating a database inside the server, and so on and so forth. And yeah, now it's running. The SQL claim, which is the main composition from which all the resources are created, shows True, True (synced and ready). That means I can create the database schema with SchemaHero. What I generated through cdk8s, because you chose it, essentially says: hey, this is the database, take the authentication secrets from the Kubernetes Secret that was created by Crossplane, and create this table. It's only one table, videos, nothing really special. So what I'm going to do is apply it. And, no: kubectl, namespace is there, apply. There we go. That thingy. Now, if everything worked, and it almost never does... what's this? Oh, come on. I would like to blame this on the Wi-Fi, but my company is sponsoring the Wi-Fi. Trying to get fired.
So whatever the reason is, it's not the Wi-Fi. Okay. And I... oh my goodness, I know what's wrong. I'll fix it right away. I knew something would go wrong: I did not install SchemaHero. There we go. My script, I have it already in the script: db-schema, SchemaHero. Now I'm going to install SchemaHero in my cluster. Come on. No, it is already installed. Okay, then I'm going to try this one more time, and then I'm going to blame it on Whitney. If it's not the Wi-Fi, and if it's not me, because I'm never wrong, you're the only one left. There it is. Does that mean I get credit now? Shut up. Are you listening now? Okay. So I'm going to send a request to the application connected to the database. The response is null, which is good, meaning that it did find the schema; the table is there. I'm going to send a couple of requests. Just ignore what it says here, because I don't have HTTPS; I'm too cheap to waste money on a domain, and Let's Encrypt is too expensive for me. Come on. There we go: putting something into the application, retrieving something from the application, which retrieved it from the database. The database is up and running, and what did we choose? DevSpace. Oh my. Oh my. Okay. devspace.yaml. Here's the definition of everything you need for this application, which is relatively simple, even though it doesn't look like it. Deploy stuff. No, we don't have enough time. It would work if I hadn't failed previously. How much time do we have? We have until 1:35. Okay, then let's do it. Which one did we choose? DevSpace. I know what I'm doing, and all I'm really doing is copying and pasting from here. It makes you wonder how I managed to fail in the first place. Develop. DevSpace. That's the one. I created a couple of environment variables. You saw DevSpace; no time to explain. What matters is that I can execute it.
I want to develop something in the namespace dev, and now what it will do is figure out what's running in my cluster related to that specific application. It will keep everything like the Ingress, the Service, this or that, but it will replace the container of the application, in my case deployed through a Deployment, which creates a ReplicaSet, which creates Pods. Anyway, it will replace the container with a container based on Go, because I need a compiler and my Go tools in that remote environment, and at some moment in the future it will start working. That will take a minute, maybe not, I don't know; it will take a minute. Just enough time for Whitney to tell you a joke while we're waiting. Instead of a joke, I'm going to tell you that Victor and I co-host a show on his DevOps Toolkit YouTube channel. It's called You Choose. For our show, we intend to touch every single CNCF project. What you're seeing today is a recap of chapter one, so we've had episodes around each of these design choices. For the episodes, people from each project come onto our show, they get five minutes to describe their project, then we do a Q&A session, and then we open it up to a vote. That vote lasts a couple of days, and whatever gets chosen, we implement at the beginning of the next week's episode. So we do this in a longer format, and we intend to do it for the very long term; we'll be doing this for years. That wasn't a very good joke. You ask for something, and you never get what you asked for. Anyway, I'm now in a container of my application running in some remote cluster, and I just execute go run. My application will be running in two minutes, because I made a silly demo application that contains 5,700 libraries, because that's what really good developers do. It will compile soon, it will be running, and it will show in a browser. So, where can you find a dog with no legs? Right where you left it.
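A minimal devspace.yaml of the sort Victor is using might look roughly like this. It is a sketch under assumptions: the image, port, and commands are illustrative, not his actual file.

```shell
# Hypothetical minimal devspace.yaml; selector, image, and port are examples.
cat > devspace.yaml <<'EOF'
version: v2beta1
dev:
  app:
    imageSelector: registry.example.com/demo/app   # which container to swap out
    devImage: golang:1.21                          # dev container with Go toolchain
    sync:
      - path: ./                                   # stream local edits into the pod
    ports:
      - port: "8080"
    terminal:
      command: go run .                            # rebuild/run inside the cluster
EOF

# Swap the running container for the dev container and start syncing:
devspace dev --namespace dev
```

This matches what Victor described: the Deployment, Service, and Ingress stay as they are, and only the application container is replaced with a tool-equipped dev container that mirrors your local edits in near real time.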
She's seriously telling a joke. It was a rhetorical request. Okay, go, go, go. We need more jokes. That was the end of the joke. What's brown and sticky? A stick. Yeah. Any questions while we're waiting for this to work? No more jokes. Yes, please, anything. (Audience question, inaudible.) If the vote were equal, I would have to do both. Actually, it would be a good choice if it were equal, because with DevSpace and Telepresence there is some overlap, but they're actually very complementary. With DevSpace, you can create an environment running somewhere and synchronize the code; I think Telepresence added that later, but the main advantage of Telepresence is being able to connect your application with the other dependencies running somewhere else. So actually, it would be a good choice if they were equal. But... this is the closest to an orgasm I could have. Look at it. It's running. We did it. Oh my gosh, you guys. Thank you so much. We put up that "we did it" slide. We have another slide. Okay. Yes. That's it. Oh my gosh. Oh, we did it. This is amazing. Thank you so much. Do you have any questions for us? How are we doing on time? We're doing well. This is unbelievable. So this is wonderful. Any questions? You have a minute and a half for one more question. Anybody? No? All right. Thank you. We have You Choose stickers up here that you're welcome to take. Please hang out with us online. If you see us around, we'll have the You Choose stickers on us, so please say hi and ask for a sticker. Thank you. Thank you so much.