Hey, everyone. My name is Nathan Taber and I'm a principal product manager here at Amazon Web Services. I'm really excited to be talking about saying goodbye to YAML engineering with the CDK for Kubernetes. I'm going to be joined today by my friends and colleagues Elad Ben-Israel, who's a principal engineer here at AWS, and Eli Polonsky, who's a software development engineer here at AWS. All of us work on Kubernetes and the CDK project, and we're really excited to share what we've been working on and some of the things you can do using the CDK and the CDK for Kubernetes. All right, so let's go ahead and get started. We have a few things we're going to go over today. This is a hands-on, interactive demonstration session. I'm going to spend a few minutes talking about what's the deal with YAML: how can we do more with the YAML that we have, and how can we use YAML as part of the system? Then we'll talk about CDKs and how they help solve some of the common problems we see people in the Kubernetes community having around YAML, especially as they define and build really complex applications across their organizations. Then we have a really awesome hands-on deep-dive demo that Eli and Elad are going to give, walking all the way through how to deploy an app end to end on a Kubernetes cluster with the CDK for Kubernetes. And finally, we'll have a really quick wrap-up. I'm really excited that you've joined us for this session at KubeCon. Here we are, live from home, so let's get started. So let's talk about YAML. YAML is freaking awesome. I mean, YAML is the beating heart of a Kubernetes cluster: it's how we configure all of the stuff that we run on Kubernetes. And YAML is great because it's really easy for humans to read. Anybody can open up a configuration spec, look at it, and understand exactly what's going on and exactly what we intend for the system to do.
It's universal. It's also declarative, which is great in a system like Kubernetes, because you declare exactly what you want and then you expect the API server and all the other controllers to take care of instantiating that declared state and maintaining it. So YAML is a great language for declaring things on your cluster. It's also really good because it's static, and things that are static are easy to work with: they can be versioned, shared, distributed, and understood at a single point in time. So YAML is a really excellent building block for applications, and it's not too hard to use when you have a few things. If you have a handful of deployments or a handful of services in your cluster, YAML is really great. You can probably just write out a YAML spec really easily and start running some basic applications on Kubernetes. But as you grow, and especially as you start to adopt systems defined in YAML across your organization, you end up doing YAML engineering. There's a lot of boilerplate you start having to add, and that opens up room for errors. Sharing becomes kind of manual and hacky, right? Projects often start from off-the-shelf examples, then teams start forking, copying, and pasting configurations from vendors, from other companies, from community-maintained repositories. And maintaining these files over time takes a lot of work. There are a lot of different tools created by the community. Some, like Helm, are really good at packaging YAML and making it easy to bundle YAML together, but they don't necessarily solve some of the underlying problems around customization and all the different tooling that you have to use. How do you roll out updates across lots of different things?
Especially if you're a developer who's never actually touched Kubernetes YAML, it's this whole other world to begin running your application on Kubernetes, and we find that can be a bit of a learning curve. Most developers are used to working with general-purpose languages, right? General-purpose languages are what we actually build our applications in. They tend to be specialized; they can be functional or imperative; they're dynamic. And there's a whole ecosystem of tooling and workflows around defining applications using these general-purpose languages. So the Cloud Development Kit for Kubernetes (cdk8s) is an open-source framework that lets you define Kubernetes infrastructure using these general-purpose, popular programming languages. cdk8s is really awesome: it lets you go from code to config, defining Kubernetes applications and architectures using popular and familiar programming languages, and it gets rid of a lot of the pain of making sure you get all the boilerplate right, so you generate well-formatted YAML for your applications every single time. Because you're actually defining your applications in code, you can use code libraries: you can define how a particular kind of application within a cluster should work as a code library, and then you can share that and update it easily without any heavy lifting. So you can go and update how a web service is defined at your organization. Let's say tomorrow you decide that every web service now needs to use endpoint slices. You can actually implement that as part of the library that defines how users at your company run Kubernetes web services. And you can import that.
All those developers can import that into their cdk8s applications, and then they can easily begin using those new features without having to understand how to implement them all perfectly in YAML, and without having to deeply understand the Kubernetes API just to get started. I think the most important part of this project, and what's really exciting for us at AWS, is that cdk8s runs everywhere. cdk8s is not a system designed just for AWS. It runs locally on your machine and it generates standard Kubernetes YAML that you can deploy to any Kubernetes cluster running anywhere. And this lets you standardize across on-premises and any cloud. Today cdk8s supports four languages: TypeScript, JavaScript, Python, and Java. And we're looking at supporting Go and .NET and many more in the future. And like I was saying, cdk8s lets you share these best practices as libraries. That makes them easy to maintain, and easier to share than templates. So you can use cdk8s to standardize how you define Kubernetes across your organization and across any environment. And what's really cool is that at the end of the day, what you're deploying is just standard Kubernetes YAML. That means that if your developers, or you, are writing applications in a general-purpose programming language and deploying them through a CD pipeline out to your cluster, then with the CDK for Kubernetes you can now also write your application definition, the specification for how you want that application to run, in the same language, and deploy it out through GitOps or a CD pipeline onto your cluster.
So you can go from having two very different flows for writing your code and getting it into production, and actually unify them: use the same set of tools all the way through the process, from writing your application, to defining how it should run, to using continuous integration and continuous deployment to get it onto your cluster. So let's take a quick look at how this actually works. In cdk8s you have a cdk8s application. This represents the overall application that you're going to run on your cluster. Then you have a series of charts, and charts are the different logical modules within your application for different functions. Within a chart you can have one or more constructs, and a construct defines one or more Kubernetes resources that you want to instantiate and define together. So for example, I may have a deployment and a pod that I'm going to define together as one construct. Then you synthesize that into YAML or a Helm chart, and you use kubectl to apply it to your cluster, or a GitOps CD tool to get it onto your cluster. And when that YAML goes onto your cluster, the Kubernetes resources are instantiated just like normal, just like if you had written that YAML file yourself. And so let's look at the big picture. The cdk8s application is effectively your source code: the source for how you want to define your Kubernetes application. The cdk8s CLI, which is our CLI tool, acts as the compiler. The CLI executes that source and synthesizes YAML or a Helm chart: that is your assembly language, right? And then we deploy that onto the Kubernetes cluster, which is your processor, and it actually instantiates the Kubernetes resources to run your application. And so there are three main components to cdk8s. There's the core framework, which is all the different constructs and the construct library that makes it up. We also have cdk8s+.
And cdk8s+ is a high-level library that defines common constructs in an opinionated fashion. cdk8s+ makes it really easy to get started with cdk8s by giving you the core building blocks you need to start building and running Kubernetes applications. And then we have the cdk8s CLI. The CLI allows you to define which version of the Kubernetes API you want to use as part of your cdk8s app. There's some really nifty functionality in the cdk8s CLI that we're going to talk about: it lets you select which Kubernetes version you're using and then ensures that all of the YAML you synthesize uses the correct format for that version of the Kubernetes API. It also lets you import custom resource definitions and use those as part of your cdk8s app. These three components work together as a really nice system that lets you go from a general-purpose language to Kubernetes YAML. All right, so that's a very brief introduction to cdk8s. Let's go ahead and jump over: we're going to fly halfway around the world to Elad and Eli, joining us from Tel Aviv, and they're going to give you a deep dive into building and running an application using cdk8s. All right, Elad, Eli, take it away. Thanks, Nate. So let's get going and write some code. I guess that's why we're here, and we've got plenty of time. So my name is Elad. I'm a principal engineer at AWS. I've been working on the CDK project for the past three years. And since this is a recorded session, I figured it's going to be very boring if I just speak to myself for an hour, both for me and for you. So I asked my colleague Eli, who's working with me on the CDK for Kubernetes project, to join me, and we're going to do this together. Let me invite him; he's going to tell you a little bit about himself and we can get started. Hi, Eli. Hi, Elad. Hi, everyone. I'm glad to be here. So yeah, my name is Eli.
I've been working with Elad on cdk8s and the AWS CDK for almost a year now. Excited to do this session. We've got a lot to cover, so let's get started. Yeah, we actually sat down and made a plan; there's so much stuff to talk about and so many rat holes to go down. So I asked Eli to keep me honest and pull me out of those rat holes as much as possible, so we can actually get something done. What we said we're going to do is first walk through the basics, to make sure everybody's on the same page. I know some of you have probably used the CDK for Kubernetes, and some of you have never heard of it. Following Nate's introduction, I hope you have a sense of what it is, but I actually want to show you, hands-on, how it feels to use it. And then we're going to try to build a project together and, you know, have some cathartic experience, I think. Yeah, definitely. Cool, so I guess the first thing we need to talk about is the local setup, some kind of Kubernetes cluster we can play around with, right? Yes, yes. I'm going to assume you have kind, because it's great and I use it too. Yeah, for local development we love kind. I love kind. Awesome project, really, really stable. I think one of the things we get asked a lot is whether cdk8s is just for AWS, and the answer is no: the CDK for Kubernetes is for Kubernetes. You can run it against any Kubernetes cluster, whether it's on-prem or in the cloud. As Nate said, it basically just synthesizes YAML manifests. If you think about it, it's kind of like a compiler: you write code, you execute it, and you get a manifest as output. Yeah, and then it's your choice what to do with it. You can deploy it anywhere. And we'll see all of that in a second. The second thing that I prepared in advance is an empty TypeScript project. We're going to use TypeScript.
As Nate said, cdk8s supports multiple programming languages: TypeScript, JavaScript, Python, and Java. Go is coming up, hopefully in the next month. So that's going to be very exciting for the Kubernetes community, right? Yeah, yeah, definitely. OK, but this is basically just a regular TypeScript app. What you see here is a starter application that was created by cdk8s init. cdk8s ships with a CLI called, unsurprisingly, cdk8s. It has a bunch of commands, and one of them is init, which lets you initialize new projects in any of the supported languages. There's nothing fancy about these projects; they're just regular projects. In this case, you can see that it takes the cdk8s dependency, and it comes with a few presets that will help you get started with cdk8s quickly, but you could also start from an empty TypeScript project; the starter isn't really required. OK, and the structure that you get here is what we call the construct tree. We'll talk more about constructs later, but the mental model is a tree, OK? There's a root, and the root of the tree is the app. Within the app, you've got charts, any number of charts, and the reason is that every chart synthesizes into its own manifest. So you can decide what you want to do with this: you can use a single chart and put all your resources in one manifest, you can split them up, you can create different versions of them for development or production or whatever. You control it. And the way the tree is structured is by passing the parent as the first parameter of the construct. So in this case, I create a chart and I pass in the app, and you'll see this pattern repeat itself throughout the programming model. And I see it says "define resources here," so let's define resources. We should start defining resources. Which resource?
So I want to talk about a very important aspect of cdk8s, and of software engineering in general, which is this notion of layering, and I want us to see all the different types of layering that cdk8s offers. Let's start by using the most basic layer to define objects, with a simple config map, right? That's something that's super, super simple to configure. So let's see how you do that with the most fundamental unit in cdk8s. OK. If you look at what cdk8s is supposed to generate, or synthesize, it's manifests, and manifests are structured as a collection of API objects. To that end, cdk8s is bundled with a class called ApiObject. It's also a construct, so it needs to bind to the tree. The first parameter is the scope, and as a rule of thumb I'm always going to pass in `this`, because I want to add this construct to the scope in which I'm actually defining it, so I know exactly what's going on within this scope. You want to keep the locality: I want this to be local to my chart, in that sense, and we'll see what that means maybe later. Then I give my API object a name; I can call it "config-map," let's say. And if I ask the IDE to help me, I see that I need apiVersion and I need kind, because both of those are required for all API objects, wherever they are, so I'm going to pass those in. So cdk8s already enforces these requirements, right? You can't configure any API object without an apiVersion or kind; that's cool. But then, as you can see, I don't get any other help here, because ApiObjects are typeless, right? The library doesn't know that this API object is a config map. Yeah, and sometimes you have a spec, but sometimes you don't. For example, a config map doesn't have a spec property, right? It just has the data. Right. So I can basically just put whatever I want here, right?
I can put data here, I can put "foo: bar" here, and it'll take it. It'll just take whatever I put here, whether it's part of the schema or not. And then what do I do with this? How do I move on? What's the next step? OK, I've got my code written, so let's see how the manifest actually gets created, right? This is supposed to be translated to YAML eventually. Right. And so the way it works is I just run my application. You see the last line in my application calls synth, and you'll see this pattern repeat in other CDKs as well, like the Terraform CDK and the AWS CDK. And so if I run this application... Just like a regular node process, right? Yeah, in this case it's TypeScript on Node, but just a regular node process. There's no magic, right? No magic tricks. You'll see that it created a dist directory, and I've got my manifest here with my config map. Yeah. Well, two things are a bit weird. First, it's invalid, right? This placeholder data isn't real, and when we try to deploy it, it's going to fail. And the other thing is that it created a name for the config map for us. Yeah. And you didn't specify one... I didn't specify it, and it wasn't required, which is actually weird, because in Kubernetes, names are required for resources. But this is actually a unique thing about the CDK for Kubernetes and a very key ingredient of the CDK: I didn't have to specify a name, because cdk8s can allocate a name for this object based on where it is in the construct tree. And so if you look at this name, you can actually identify the path: "hello-kubecon" is the name of my chart, and "config-map" is the name of my construct. And then we append a hash to ensure that the whole thing is unique across the entire application. Yeah.
This is where the construct programming model comes into play. The reason we need a scope and a name for every construct is exactly so that we can allocate these stable names for resources that are generated during execution. Yeah. And an important thing about these names: from my experience, when you want to wire components together, you need access to the name. So what I found myself doing a lot of the time is putting in a specific name and then repeating myself all around the YAML. So I do need some kind of programmatic access to this name, if it's generated for me, so that I can pass it on to other objects. Exactly. And like any object in object-oriented programming, constructs also have an API that you can access after the object is created. API objects have a pretty minimal API; you can access some of their properties, and we actually plan to expand that a little bit on our roadmap. But the interesting one is name. You can see here, this is the name, either specified explicitly via metadata.name (you can still specify names explicitly if you want to), or, if you didn't, you can just use this property as a representation of the actual generated name. Just to show you an example of what this can do, let's create another config map that references the previous one. OK? Let's call it ref-obj1. Yeah. And this is great, because it seems so simple, but you can't really do it inside a manifest, right? There's no inherent referencing mechanism, so you're kind of forced to either template it or just repeat it. Exactly. And there's a very important principle in healthy software engineering, don't repeat yourself, on the one hand. And on the other hand, there's strong binding now between those two things that doesn't exist in plain YAML, where it's actually very loose.
You see this name actually repeating over here. And again, the beauty of this is that if this resource goes away, my compiler will yell at me. It will say, hey, obj doesn't exist; what are you referencing here? So we replace the loose coupling that's very prevalent in configuration files and Kubernetes manifests, and you'll see this quite a lot in cdk8s, with strong binding, strong coupling between things that represent logical connections. And the compiler can help us enforce those connections, which is very powerful. All right, cool. So this is great, but like I mentioned, we have multiple layers, so I want to talk about the next layer of API that cdk8s offers. This is an API that goes beyond just requiring apiVersion and kind; it actually lets you interact with a fully strongly-typed API for all of the Kubernetes core objects. So instead of creating a generic ApiObject, we can actually create a specific resource. Let's see how we do that. Yeah, so the beauty of the Kubernetes ecosystem is that the APIs are well-typed; they're all schematized. Kubernetes itself publishes an OpenAPI specification for all of the Kubernetes API objects, and custom resource definitions are schematized through JSON schemas. So what we can do is read those schemas and automatically generate classes that represent each API object; based on these schemas, they offer rich, object-oriented, strongly-typed APIs for accessing these objects. Yeah, definitely. I could see myself going to these schemas and manually writing the code that's needed, but we have a tool that does that, right? It generates the code based on the specification. Yeah, and this tool is called cdk8s import. It basically accepts a specification, something to import, and it supports importing either the Kubernetes API, from the OpenAPI specification, or Kubernetes CRDs.
And as you all know, CRDs are the standard way to extend Kubernetes. So any CRD that exists in the Kubernetes ecosystem can be automatically imported into a cdk8s application and used through strongly-typed APIs, which is very powerful. We've seen people do really beautiful things with this, and you get a very nice developer experience for working with the whole Kubernetes API ecosystem, not just the core API. In this case, let's start with the core Kubernetes API, just to give you a sense of what that looks like. So I'm just going to do cdk8s import k8s, and that's going to import the Kubernetes API at the default version; you can specify any version and it'll use that version. What import does is create a directory called imports, which becomes part of your project now, and it emits a TypeScript file into it. If you're using Java, it'll be Java classes, or Python, or whatever language you're using. That's very cool. And wait until you see how we use it. So I'm going to import this into my application. I'm going to delete this. And now I get classes for all Kubernetes kinds. Basically, there's a one-to-one mapping between Kubernetes kinds and constructs now. And should we do a config map again, or something else? No, no. Let's do something more interesting. Let's try to create a deployment and see how that feels. So again, you see the construct signature, which binds into the tree. Let's call it dep1, and let's see what it means to define the deployment. Again, I see missing selector and template, which are the two required fields. But now, if I actually ask my IDE to help, you see that it's not just saying, hey, I want a spec; it says exactly what the shape of that spec is. So I can start using the IDE to help me with this thing. OK, so this needs containers. Again, this is courtesy of the specification itself, right? The JSON schema that Kubernetes publishes.
Yeah. And we've actually also seen a few things in the spec that are untyped, and obviously those won't have strong types, but if something has a strong type in the OpenAPI specification, then it's strongly typed here. It still needs a selector. Yeah, this thing. I remember writing Kubernetes YAML manually, and this always kind of bothered me a little bit: why do I have to keep repeating this definition? It feels like it should be implicit. You mean the labels and the match labels? Yeah, because what you're essentially doing here is attaching labels to the pods of this deployment, and then saying to the deployment, hey, please select these pods. And that feels like the normal thing you would expect: you're creating those pods for me, of course I want you to select them. Yeah, obviously, you could do some magic tricks with this loose coupling. There are use cases for it, like gradual deployments and weighted routing; there's a lot of interesting stuff you could do with it. But I think in the common use case, yeah, I just want to deploy these containers. Basically, just deploy containers. Yeah, right. But this is how the Kubernetes API looks, and the L1, the Layer 1 constructs, don't know anything beyond that, because all of this is generated from the schema. So we can just represent the schema through strong typing, which, again, is extremely valuable. There are many IDE extensions and tools and schema validations and linters that people use to make sure that their manifests are correct, but we get all these capabilities from strongly-typed languages, so it's very easy for us to lift this experience into the IDE. OK, so let's deploy some stuff. You know what, Elad? I'm going to play the time card here. OK. We need to speed things up a little bit.
So instead of deploying, let's take it to the next level and look at the next kind of API. The last layer in this whole experience is something we call cdk8s+. cdk8s+ is basically a library that we vend as part of the cdk8s toolchain, and it provides higher-level APIs for the same objects, the same Kubernetes objects. So if you take a look at this API, you can see there's a bunch of resources; they're the same as the Layer 1 ones, but they offer slightly different APIs. So let's see how we can rewrite this deployment using these APIs. OK, so I've got a deployment here. I'll just recap for a second, and then I'll show you exactly how to implement this using what we call L2s, level 2 APIs. As you saw earlier, the cdk8s+ semantics are the same semantics as the core Kubernetes resources. We're not inventing a new world, in a sense. We're just offering a higher-level set of APIs, not a higher-level set of abstractions, if that makes sense. I'm distinguishing between elevating the API versus elevating the mental model, and we'll actually see what that difference means later. In this case, I still have a deployment; I still need to understand the concept of a deployment, and I can actually specify my deployment's configuration here. But there's no concept of a spec: a spec is actually a mechanical detail of how Kubernetes manifests are structured, and we don't need that at this layer. So I can just specify containers, for example. Yeah, from a user's perspective, that's all I want to do: I want to tell the deployment which container to run. Right. And then the other thing that I have here is actual mutation methods, because in cdk8s, the way we think about it is that you can mutate the tree as much as you want until you synthesize.
And when you synthesize, everything becomes immutable and goes into the immutable world of desired-state-based deployment. But as long as you're inside the cdk8s application, during its execution, you can reach out to objects and change them. So it gives you a very, very powerful programming model. You can do things like passing the deployment to some library that will add a sidecar container to it. Right? That's very powerful. And so in this case, I can just call addContainer, and then I can create a container object and specify the image, which is required. I'm just going to use the same image as before. And I don't even have to specify the container name, although it's required over in the raw API, because there's a default that's pretty sane: call it "main," because that's the main container. Oh, what happened here? OK. So now I've got two deployments. And that's it, right? Do I need the labels? No, because... well, it didn't require them. Previously, when we used the lower-level APIs, we had to specify a selector, and if we specify a selector, then we have to specify labels. But here it doesn't say that we need to, so let's assume we don't, and see what happens. So we're executing again. Yeah, now let's see what our manifest actually looks like. OK, so this is my cdk8s+ deployment, and you can see that it actually allocated a label for me, which is pretty nifty: I didn't have to do it. It has the ability to allocate stable, unique names, which comes from the capability the construct tree offers. So I can basically just describe my intent: my intent is I want a deployment, I want to deploy a single container. That's it, done, right? Yeah. Yeah. Awesome. Should we deploy this? I don't know, you seem like you're in a hurry. What's the timing like? Yeah, let's deploy this.
And in the meantime, let's also add prune labels to our chart, which you can talk a little bit about. So for those of you who are not familiar with prune labels: when you deploy manifests to Kubernetes, kubectl doesn't know which resources you want to remove, right? Because those manifests only contain the desired state, and the desired state contains only the stuff that you want to exist. So prune labels are a way to tell kubectl, hey, this is what I want to deploy, and everything else that carries this label but isn't in the list should be erased, because it was left over from a previous iteration, in a sense. Yeah, it fits more nicely into the desired-state workflow: when you remove something from your manifest, you're essentially saying, I want to actually delete this from the cluster. OK, let's see. I've got to see the logs, right? OK, logs. No logs. Oh, because I did bash, yeah. So now I'm changing the code, right? And redeploying. This is kind of like my inner-loop cycle: change my code, synthesize it, deploy. Yeah. And then hopefully, now I've got some stuff running. Yay. All right, cool. Prune labels, right? Well, let's get rid of this deployment. Oh yeah, let's add the prune labels first, and then we will. OK, so the way prune labels work is I'm just going to add --prune to this apply, and then I have to specify a label that's basically consistent across all the resources. So I need to actually label all my resources with the same label so this prune can work. I'll just call it prune, and we'll just make up some name: "boo," my boo-boo prune label. But now I actually need to label all those resources. The nice thing, again, because this is a programming language, and we can do things like traverse the tree and mutate it during synthesis, is that cdk8s offers the ability to specify labels at the chart level.
I can also specify a namespace at the chart level. And it's going to apply the labels to all resources inside that chart — to all API resources inside that chart and inside all of the child constructs within that chart. It's like — I wouldn't know how to do it otherwise if I have 1,000 resources now and I need to apply prune labels. How do I do that? Yeah, I don't know. It's not fun, that's for sure. So now all I have to do is basically say prune: boo-boo. And before I run this, let me just synth it first to see how it looks — I don't trust this thing that much yet. But you can see here that I have this prune label here, and I've got this prune label here, which is great. And so now I'm just going to do it like this. And so it's going to basically configure all my resources to include my label. And I can even do this. Yeah, so it basically applied the prune label to all my resources. And now we can actually get rid of the deployment here. Yeah, let's get rid of this one, because it's too long, and just run this again. And it should prune one of the deployments, of course. I think you can also add this command to your yarn scripts. Yeah, cool. I'm just going to add this command here as a script, call it deploy, and then I can do yarn deploy. So that's our iteration, basically. All right, cool. Another thing that I'm going to do to make my life even easier — I can do this. Perfect. All right, so now we have that. We have our workflow. We have kind of all the layering figured out. And now we're going to start building our application. And this time, we're actually going to create an abstraction — not just an API abstraction, but actually a different mental model. Now we want to create something for our users. And our users are developers who don't necessarily know what deployments are or what services are. They just want to write their code.
And we want to provide some kind of platform for them to do it. So I think one of the simplest yet most powerful use cases for actually deploying live applications is something like a gateway — an API gateway — where you can specify HTTP routes, HTTP paths, that are backed by Docker applications that the user writes. So let's try to implement a simple counter application, where you have /counter to return the current value of the counter, and you can also do a POST on /counter to increment the counter. I'm not sure I fully understand, but I guess let's start with the API, and maybe that will help me understand exactly what you want. So I cleaned this up a little bit while you were describing — maybe that's why I didn't understand. All right, cool. So let's start. The API I want to provide my users, right, is: I want my users to instantiate some kind of API gateway, an API router. Let's call it a router, all right? Let's call it an API router. OK. Or just Router. AppRouter, maybe. Router. Just Router. And then — it's going to be a construct, so it has to look like this. And then install applications on the router — basically map or route different paths, different HTTP paths, to different handlers. So we're going to do /counter, right? Yeah, let's do /counter. And this has to somehow map to, or be implemented by, some Docker application. I don't know what it is, but it's Docker. OK. So I want to point to a directory with a Dockerfile that I can build and just have it run my app. So I can do this. Let's call it counter-app. And I'll show you — I've written these Node.js demos many times. node:alpine. So you said counter. Let me see. So something like this. And then ADD the app. And then run /usr/bin/node app. Every time I see you write this, I'm amazed. And then it's going to be JavaScript. I'm going to do a simple HTTP server. And we have a counter here. Starts at zero, I guess.
The server listens on 8080. And then if it's a GET method, I guess we can just print the counter — response.write, counter equals counter — and plus-plus the counter. Yeah, sure. Right? Yeah. Good enough. And I know you always forget to handle SIGTERM when you write Docker applications. Handled. Do you think it's going to work? Should we test it locally first? Yeah, let's just build it quickly. Right. Counter. Yay. All right. Signal handling? No. Oh, maybe. SIGINT. SIGINT, yes. And I'm going to also add a little log here, because then I'll know that this thing is actually working. And then docker kill, or docker ps. Yeah, this SIGINT handling is not — OK. Great. All right. So now I want to basically point to this directory, right? So counter. Like, that would be the ideal API, right? That would be the ideal. Yeah. Let's try to make this work. OK. So, API-driven design, right? I'm going to create router.ts. And as we said, it's a construct. Creating a construct is actually extremely easy: you just create something that extends the Construct base class, and then it has to accept — and again, that's it. This is a construct. So from that perspective, we can now import it into my app. Now I need to include it. Do you know that TypeScript has this? You can do this. Are you familiar with this? This is a declare method. And I just jump over here. And it declares a method that supposedly — this is the path, right? Yeah, it should be the path. And this should be the directory. Or here. Yeah. OK. Very cool. So this is it from the user's perspective. But underneath, there's nothing yet — that should be the implementation, or the invocation as far as the user is concerned. So for the implementation, I know we'll probably need to use an Ingress, because Ingress has this capability of routing specific HTTP paths to different services, to different Kubernetes services.
So let's start with importing or using the Ingress construct of cdk8s-plus and see what kind of API it has to offer us. So basically, every router would have an Ingress resource. I also have an Ingress controller installed in my kind cluster — Nginx, standard stuff. So basically, I can use an Ingress and it supposedly should work. Let's see the API behind the Ingress. Yeah. So we're going to use addRule. There's a bunch of other stuff here, but the addRule API is basically saying: give me a path, an HTTP URI, and the handler of that path is something that's called an Ingress backend. Essentially, a backend is just a Kubernetes service. But I need to call this from here. So I need to basically store this somewhere as a local member. Yeah. And this is nice — this is the post-instantiation API. As you can imagine, you can pass this router to different components of your code, and each of them installs its own path. Oh, cool. So it's like the addContainer of this resource. Yeah. So the path is here, and the backend — wait. Yeah. Let's look at the API. So one of our patterns is the "from" methods. The "from" methods are for when a resource is configured with something that's like a union type, where you can pass in a few properties but you can only use one of them — they're mutually exclusive. So every time you see this kind of pattern in the CDK, you'll see the "from" pattern. So an Ingress backend — we can create it from a service. So let's do that. It's a static method that returns an instance — basically a factory method. So here, I basically need a service to get there. Well, obviously we need a service. Now I need to create a service. But the service is fronting a deployment first — the service serves a deployment. You don't create a service just like that. So let's create a deployment. And then on deployments there's addContainer.
New container. Oh, wait, but I need to — so what do we do here? So we actually need to build, right? We need to build the directory, and we need to extract the specific digest of that specific build and use it as our image URL, right? So yeah — I actually published a library a few days ago that does exactly this. As luck would have it, right? Let me show you. It's pretty nifty. So I published it to npm. It actually also uses JSII, so we can publish it to all the package managers, like Maven and PyPI. It's called cdk8s-image, and I'll show you how it works. It's an Image construct which takes care of building and pushing Docker images that can be used in cdk8s applications. And so basically, the way it works: you specify a directory, a local directory. You can also specify a registry into which you want to push the image. Yeah, as we mentioned, we have a local registry in my setup, of course. And then you can create a deployment and specify that image's URL through the url property. So it basically gives you the exact URL of the image that it built. So the nice thing about this is that you don't have to separate the image building and pushing flows from your synthesis flow. And it's very common in the Kubernetes world that image building and pushing is done together with building your application, because this is basically the build stage of your application. cdk8s is a build tool, right? So it makes sense for the build tool to actually do this build. Exactly. But you could do anything you want, right? You could plug in any string you want. You could use CI systems to publish your images, wire that information into your cdk8s application, and then pass it into your container. But this is pretty rudimentary. We can obviously evolve it and make it more sophisticated, but it's definitely going to serve the purposes of this demo. And you consume libraries just by installing them, right?
Like any other library that you'd install — yarn add in the TypeScript/JavaScript world, or npm install. So, cdk8s-image? Elad, I'm going to ask you to go a little bit faster now. OK. So I need the image. OK, let's do this. image equals new Image. See, it brought it over from cdk8s-image. And then I'm going to do this. And here it would be the image, and the registry would be this localhost one. And here I'm just going to put the image URL. And now how do I get the service from that? Do I need to create a service now? Yeah, so we have to create a service, right? Because an Ingress backend is a service. But actually, if you look at the API on the Deployment construct, you'll notice something that's very familiar to Kubernetes users, and that's the expose method. It's essentially mimicking the behavior of kubectl expose, right? Where you can pass in a deployment, and expose will actually wire up and create a service that selects that deployment and routes to it. And you can actually get the service by just storing the return value of this. I think I need to specify the container port here, right? Yeah. And then do I need to somehow map the external port and the internal port, or is it all done for me? We'll see it in the manifest — it's done for you, of course. But yeah, you can choose any expose port you want. This is the external port that you want your users to access your service on, basically. The port the service exposes. Yeah. If you want to access the service, then this is the port you're going to use. We're going to be using the Ingress, right? So we have another layer of indirection. But yeah. OK. So this is compiling now. I get a service, I can pass it over to the backend, and then add it. Here's a tenet of the CDK: if it compiles, it should work. OK. Let's see. Should I do -d? So now it should actually build our image. Oh, cool. Building and pushing. Very nice.
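The port wiring the expose discussion glosses over looks roughly like this in the synthesized manifest — the names, label key, and port numbers here are illustrative, not copied from the demo output:

```yaml
# Roughly what expose() synthesizes: a Service whose selector matches the
# Deployment's auto-allocated pod label, mapping an external service port
# to the container's port. (Names and label values illustrative.)
apiVersion: v1
kind: Service
metadata:
  name: router-service
spec:
  selector:
    cdk8s.deployment: router-deployment-abc123  # auto-allocated pod label
  ports:
    - port: 80          # the port clients (or the Ingress) use
      targetPort: 8080  # the containerPort inside the pod
```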
And this is failing because — creating now. Nginx denied the request: the host is already defined in an ingress. Do we have something? It's already defined in an ingress. Oh, maybe it's from a different — let me see if we have some previous thing. Yeah. Again, just some leftovers from a previous test. Surprisingly, the same name. But that's the thing, right? Stable names — so it's always going to be the same name. OK, so this looks like it worked. I've got my router deployed here. Should I just curl? Yeah, let's just curl it. Yay. OK, cool. This is pretty cool. This is the experience we want. So now we can just do a quick recap. OK, I want to emphasize a few things. Let me put this here. So I think the interesting discussion here is about the mental model, what you described earlier. As far as the users of this router are concerned, they don't know about Ingress. They don't know about deployments. They don't know about this image thing. All they know is that they put their code under counter-app and they basically mounted it, or installed it, onto this route. And that's it. And that's the next level of abstraction. Now you can publish this as an npm library or anything, and users can use it without having to know what ingresses are, or services, or deployments. So it's a completely different mental model. Yeah, it's a different universe. It's not the Kubernetes universe; it's the router universe. And it's a universe that's more familiar from our regular runtime code, right — applicative code. This is how you would create an Express application or a Django application. It's a pretty common mental model. OK, but this is still a bit of a toy, I feel. I mean, in most cases you run more than one replica of something, and stuff like that. So let's do some productization for this little router. So the first thing we want to do is actually add a readiness probe, right?
Most pods need a way to make sure that they're up and running before Kubernetes can send traffic to them. Right. My users don't really care about that, because they're just serving HTTP traffic. And so as far as they're concerned, you can hit that endpoint and it should work, right? It's probably fine. We can think about maybe exposing something later on, but for the common, simple case we can just do an HTTP GET on slash. Let's see what API the container has. I think the readiness probe should be on the container, right? Readiness. OK, so the term is readiness — when the container is ready — and it accepts a Probe class, which is, I think, another union-like thing. Probe. Oh, you can see two types here, right? So you can either do an HTTP GET or a command. So that way — slash. And that's it. And it knows the port because I just specified it, right? So there's no reason — that's crazy. Exactly, just a line above you specified it. Now if I deploy this, I should get readiness probes. And you see that the user didn't change anything. So say this router library added readiness probes and released a new version. The user just picks that up, and now they have readiness probes across my entire company, right? Nobody had to do anything. Pretty neat. I want to see the readiness probe, if you don't mind — part of my deployment, right? Oh, it should be part of the pod. Also — oh, I see the other one terminating. That's pretty cool. Pod. OK. readinessProbe, httpGet, slash. Oh, that's pretty neat. OK, cool. What else? All right, so what else? We need replicas. Let's do multiple replicas. Yeah, so if we're going to do replicas, we're going to have to change our implementation a bit, because we're doing this naive in-memory counter. So let's implement a persistent counter — with Redis, for example. Yeah, actually, I think we already have that problem, because we just killed our previous counter, right?
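The pod spec inspected above comes out roughly like this — container name, image, and port values are illustrative:

```yaml
# The kind of container spec the one-line probe generates: the probe's
# port is inferred from the container port declared just above it.
containers:
  - name: main
    image: counter-app
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /
        port: 8080   # inferred; the user never repeats it
```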
Yeah, and we obviously reset it. Obviously, we can't do a counter without persisting. So you want to add Redis. I want to first deploy multiple replicas. Let's say we just deploy three replicas every time. Let's keep it simple, right? So I'm just going to go to my router, and replicas are here, right? Part of the — OK, yeah. All right, so three. And if I deploy this — you know what, let me just add one tiny thing here. Yeah. Now we're getting real. It's just so we can differentiate and see the traffic routing to different pods. Yeah, we've got to do this. OK, and I can already see a few instances — the counters are jumping around, which is pretty cute. And at some point I'm going to need to see the hostnames of those things. OK, we'll just let that run. All right, Redis. Yeah, so I always install everything through Helm, all those things. Definitely, let's try and install it from Helm. And we actually have support for Helm in cdk8s — there's something called a Helm construct, of course. And you can specify which chart you want to install, and the Helm construct will actually call Helm underneath for you, the same way that cdk8s-image calls Docker. So all these build tools are kind of encapsulated, or abstracted away, by these constructs. It's not going to actually install the chart — it's going to use helm template. Yeah, sorry — to generate. Helm allows you to basically synthesize the template from a chart, given a set of values and things like that, right? Yeah. OK, so the required field here is chart. Bitnami Redis, right? Yeah, that's the one. And what else do we have here? I see the Helm executable and Helm flags give me some ability to control the execution. And then the release name is optional, which probably means it's going to allocate one based on where the construct is.
I know that Helm charts — like the Redis Helm chart — actually generate values based on the release name, right? That's the convention: names are derived from the release name. Exactly. For example, we'll see it later — the Redis password is actually extracted from a secret whose name follows a convention. Yeah, we'll see that. And you can access that release name if you store the Helm construct in a constant. Oh, cool. So if I do redis.releaseName — so I guess release names are kind of like the scope, right? Like the construct scope. For the Helm chart, it gives the ability to basically install two Redises in the same cluster with different release names, so the resources don't conflict. But it's basically one level of nesting — there's no tree, it's just a single level of namespacing for the chart. OK, do I need any values? Let's just put in some values for demo's sake to disable the Redis clustering — we're just going to do a single-node, simple Redis. What's the configuration? Yeah, so unfortunately this part isn't typed, so we need to take a look at the chart's configuration. But I can tell you that it's basically: you do cluster, and then enabled: false. That's it. And you know, I can actually derive this value from some configuration of my chart. So I can add something like: this is a dev environment, and in the dev environment I'm going to disable the cluster, and in production — I can actually write the logic that decides what the configuration of my Redis cluster is. Yes, but we're not going to do that now, because we don't have enough time and I'm going to ramble. OK, so now I've got my — so this is it. That's all I need to deploy a Redis cluster. That should be it. Basically, yes. Show me. Now, obviously, we're going to need to change our code, right? The HTTP server is going to have to connect to Redis, and it's going to have to connect with the password.
I know that you've been hacking on this earlier. Can you send me the code? I don't want to spend too much time writing that code. Yeah, I'm going to send you that. OK, so it does seem like something happened here. Let me look at this. Pretty cool. This is already running. That's nice. "Ready to accept connections", Eli. That was magical. I want to see this. This is cdk8s and Helm, right? We have to give a salute to Helm for creating a nice chart. But this is pretty nice, because I think it encapsulates the fact that I'm using Helm behind this construct. And I guess I can actually wrap this into a construct — like a Redis construct — and then do that. And then you can actually provide, for your users, additional typing that isn't available at the lower level — so, for example, these values. Do this again. And since you're already doing that, we should also expose some properties on this construct, because we know that we're going to need the password for Redis in order to connect, right? So it would be nice if we had a property or a method on this class that exposes it. And the same thing for the hostname. The password is going to be stored in — I think, let me see — a secret, right? Yeah, I'm going to tell you, because I've been dealing with this chart for a while now. So here — it's basically the release name, and then redis-password. Yeah. So you're going to do a read-only password. OK, so this type is going to be a secret value. We have this notion of a secret value in cdk8s, which is essentially a combination of a secret and a specific key inside that secret from which to extract the value. I see. So it's basically kind of like a pair. Yeah. And then I guess password equals — so you create a secret. You can reference an existing secret by using Secret.from — again, this "from" pattern. And the secret name is based on the release name — it's the release name plus "redis". And then the key is what we saw here.
I like this — this is what the Helm chart is creating for us. So that's basically a convention that the Helm chart has, and we can codify this convention into the construct. And then from the user's perspective, they just say .password and they get a secret value that they can use opaquely — they don't need to understand it. OK, that's pretty strong. All right. So we have five minutes, I guess, right? We have five minutes, Elad, so we need to — master. This is the host, right? The hostname. Yeah, so the hostname is, again, the release name suffixed with "-master". Yeah, cool. OK. So now I'm going to do this. Oh, sorry. OK, back. Redis. Yeah. All right. So, OK — did you send me? OK, I got your code. I'm just going to copy and paste it into our app to save some time. Let's go through this for just a second so I understand what's going on here. So — we installed Redis. Oh, I need to install the dependency, right? I'm going to add a package.json file with Redis. OK, cool. And don't forget to do this, right — yarn in your app, and then run. This application combines pretty much every tool that I love. Redis — I agree about Redis. Anyway, so it creates a client, and it reads the host and the password from environment variables. So we need to actually somehow pass those in. OK, I'll remember this. And then it creates an HTTP server. Oh, and it has this nice GET — you can just GET the counter, or you can POST and then it'll increment. Nice. OK, that looks pretty straightforward. And it's still 8080. OK, so all we need is to basically pass these values to the container. And if I go into the router, it's here somewhere, right? Environment. Yeah. So these values are not here, right? Because that's not the construct's input. And that's actually OK, right?
It's OK for the user to pass these environment variables, because it's the user's choice to now incorporate Redis, right? So it's basically part of that mental model: the user comes and says, OK, I want to run this container, this image, and pass these environment variables to the image. OK, so it's going to be install options, I guess. And here's the TypeScript syntax for a map — we can use Record, right? So it's basically a string to — remember that it's an EnvValue, right? Because I can also pass a secret, right? There are multiple ways of creating environment variables in Kubernetes, and we're actually going to use two of them. We'll see. OK, so I'm just going to propagate this over here. Yeah, here I'm going to do this. And so the first one is the Redis host, which is just a literal value, right? It's just EnvValue.fromValue — it's just a string. And then the Redis password — I saw that we have fromSecretValue. Cool. Wow, this looks pretty clean. This is basically it. Really? That's it? You're telling me this is actually going to work? Well, I am telling you. You're hoping. No, I'm definitely telling you that, but don't hold it against me. Yeah, so that was it. And I guess we can start wrapping up, actually. Let's give it a few seconds to deploy and see. So I guess what we saw here — we saw basically three aspects of why CDK for Kubernetes is interesting, and why, in general, the CDK is interesting. The first thing is just using programming languages — general-purpose programming languages, object-oriented tools: classes, inheritance, methods, properties, things like that. Very powerful tools, very familiar to most programmers. And so, to that end, I feel at home. I feel like I have all those tools; I know what to do with them. The second thing is the ability to create higher-level abstractions.
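The two env-var styles used here land in the pod spec roughly like this — the variable names, service name, and secret name below are illustrative:

```yaml
# A literal value vs. a reference into the secret the Redis chart created.
env:
  - name: REDIS_HOST
    value: router-redis-master        # EnvValue.fromValue: plain string
  - name: REDIS_PASSWORD
    valueFrom:
      secretKeyRef:                   # EnvValue.fromSecretValue:
        name: router-redis-redis      #   secret name...
        key: redis-password           #   ...plus key inside it
```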
And obviously, that stems from the first point. But the interesting part is this composition model, these constructs, because the power of constructs is the ability to create this consistent and deterministic naming across executions. So when I change something, I know that a thing that was named X in my previous execution is still going to be named X, and I'm able to correlate those things across deployments. It's very hard to implement this desired-state mechanism without this stable naming. And I think the third thing was this ability to basically leverage the existing ecosystem of Kubernetes. We talked about CRDs and being able to import them and use them as L1s. We saw the Helm support, which is kind of magical, I guess, and we wrapped it into a construct — we created an abstraction that even hides the fact that I'm using Helm, and nobody actually needs to go and, you know — yeah. Actually, can you just show for a second how the manifest actually looks now? OK, I'm actually curious. It's enormous, right? But we don't care. We stopped caring about the manifest like 20 minutes ago, because initially I didn't trust it, but now I don't really care, because I feel like it's actually doing what I'm telling it to do. So it's pretty cool. Oh, and the other thing we saw is that we used this image library — which is just a random library by some dude in Tel Aviv — that helped us with building and publishing the image as part of the cdk8s experience, which is also pretty cool. The ability to publish these constructs and use them as class libraries — I can actually publish a whole application as a class library. Yeah, we can actually publish this counter, right? We just created a persistent counter that runs in Kubernetes. Right. OK, this is just zero now. Oh, am I doing a GET? Yeah, I'm just expecting — yeah. Hey, look. And different hostnames, yeah. The cathartic moment, right, is now. Yeah, definitely. I feel cathartic. Cool.
Eli, thanks so much. Is there anything else that you wanted to — yeah, I just want to thank you for inviting me. And I just want to mention that we're currently putting a lot of effort into this, and we really want engagement from the community. There's a Slack channel — you can maybe show that while I talk — there's a Slack channel you can join, and there are monthly community meetings that you can attend, where you can actually brainstorm with us on features, on bugs, whatever you want. And we're really excited to build this together, right? We want as many use cases as possible, and we really want to make these APIs fun. Pleasant, yeah — pleasant, delightful. Yeah. So the homepage for cdk8s is this one. Obviously, GitHub is another home page that we're happy for you to start from. And you can find resources at the bottom of this — there's a mailing list, and we've got a weekly — I'm sorry, a monthly — community meeting that you're more than welcome to join. We have a Slack channel that's part of the cdk.dev initiative. It's actually a community initiative that combines all of the CDKs, the bigger CDKs — there are actually other CDKs starting to pop up — but the Terraform, AWS, and Kubernetes CDKs. There's a Slack workspace that you can join, and there's a cdk8s channel within that workspace that we monitor, and we're happy to talk to you. And I guess that's it. Yeah, we have like 10 minutes, I guess, after this for questions — maybe a little bit more — but feel free to join us and ask questions. Cool, thanks so much for joining me. It's been way more fun to do it with you than doing it alone. Hopefully next year we'll get to see the people we're presenting to, right? Not just the camera. Maybe, yes, hopefully. Cool, all right, thanks a lot. See you later. Bye. Awesome, thank you. Thank you, Eli. That was an awesome demo.
Thank you so much for joining us for this presentation. We hope you're as excited about cdk8s as we are. We encourage you to visit us on the web, to join us on Slack and chat with us about your questions or the things that you're building, and also to join us on GitHub — we really are excited, and we welcome contributions. We have a lot of big things planned for cdk8s, including moving to beta in the near future. And we're excited to see what you build, and we hope that you join us. Thank you.