Thank you very much. The room is much bigger than I thought, so I'm glad that a lot of people are here. Today I want to talk about a CNCF project called kcp, and specifically about how to build a platform engineering API layer with it. We're going to walk through how to use it and why it might be something you want to use in future platforms that you build.

First of all, my name is Marvin. I'm a team lead at a company called Kubermatic. I've been in the Kubernetes space for six years now, mostly working on Kubernetes platforms, and I'm now transitioning to internal developer platforms. For today, I first want to talk about Kubernetes as an API layer, to recap what a lot of you are probably already doing. Then I want to look at some solutions that have emerged in the ecosystem. And then we'll dive into kcp, how it differs from existing solutions, and walk through managing an API with it.

First: Kubernetes as your API layer. A lot of you are probably already doing this, because operators have extended Kubernetes quite a bit. And honestly, that's mostly because the Kubernetes API is kind of awesome. At least, I hope that's the prevalent opinion. Maybe we'll do a show of hands: who of you likes to work with the Kubernetes API? OK, that's not as many as I thought. But personally, I like working with it. It's very consistent, and the declarative approach makes it very easy for people to define what they want. Essentially, it already makes building APIs very easy. And it's amazing that this came out of a workload orchestrator; it's more or less a byproduct of managing containers at scale.

However, I think most of you have already encountered one or more of these problems. Managing CRDs in a cluster with a lot of users can be cumbersome, because teams want to use their own versions of something, and CRDs are cluster-scoped. Other resources might be cluster-scoped too, and running them side by side can create conflicts. And even though the Kubernetes API scales well, it's meant, as I said earlier, as a container orchestration platform; it's not meant to be scaled globally, so at some point scaling it can pose quite a challenge. One other thing: if you're focusing on providing higher-level APIs to your users, you might not even need the workload orchestration part. You might not want to run containers natively on your platform at all, because you want to provide higher-level abstractions. With Kubernetes, the container workload orchestration is always there, even if you never use it.

So let's take a quick look at what some people in the ecosystem have been doing. Some people in the community have run into, I think, similar problems. One of the solutions that people came up with is running many lightweight Kubernetes clusters, because they mostly want the API. So one of the solutions that people have built, which you have maybe seen in some projects, is running many, many Kubernetes control planes in parallel. This makes a lot of sense with lightweight Kubernetes distributions like k3s: it allows you to run all of them on shared infrastructure, an underlying Kubernetes cluster, and spawn k3s instances as pods. You basically use only the Kubernetes API aspect of it, and you get a Kubernetes API for a couple of megabytes of memory and a share of a CPU.
Now, this is great, but I think it's too isolated, because now you have all these different APIs and they don't interact with each other at all. You want to separate teams, but you don't want to separate them too much: teams on a platform should share resources where it makes sense and should be able to consume services that other teams provide.

So what if, instead of running a lot of Kubernetes control planes, we virtualized the API server instead? What if we made Kubernetes API endpoints a commodity? The solution we have in kcp, which we'll look at in more detail in a moment, is a concept we call logical clusters. With logical clusters, you have one control plane, which is kcp, and kcp is capable of exposing many different Kubernetes APIs as something like virtual clusters. That means you are not spinning up new processes to run a new API; you're telling the control plane to give you a new endpoint, and that new endpoint then acts, from the outside, as an independent Kubernetes API. We'll take a look at that a bit more. Essentially, the idea is: what if we shared one control plane that you can scale as a whole, and then break down barriers where you need to? Where you want to share resources, you can do that with a control plane that is aware of all the different Kubernetes API servers, all of what we call logical clusters. And that's exactly what kcp does; that's the architecture the kcp project has come up with.

Essentially, kcp is a control plane for APIs, and the key ingredient is the Kubernetes style. If you are building APIs on top of Kubernetes, you will feel right at home with kcp, because it applies all the same concepts. kcp is a Cloud Native Computing Foundation sandbox project; we joined at the end of 2023, and the project has been open source since 2021. I am one of the maintainers.

The idea behind kcp is that these logical clusters are implemented as something we call workspaces. Workspaces are your way in kcp to instantiate new logical clusters, and workspaces are actual Kubernetes resources: if you want a new Kubernetes API server, a new logical cluster, you just tell the kcp API, please create a new workspace for me.

Workspaces have two characteristics. First, they are organized in a hierarchy; you can see this on screen. You get a tree structure, and you can take your organization's structure and map it onto workspaces, because you can nest them into each other. Second, I said they act as independent Kubernetes APIs to the outside, and for that I've written down some URLs here. Depending on the workspace I want to talk to, I pass a different URL. If I want to talk to what we call the root workspace, which is the start of your workspace tree, you append "root" as the cluster name to the URL, and you build paths up from there. So depending on which workspace you talk to, you get a different Kubernetes API: you get different discovery APIs, so the resource types you can use will be different, and you get different resources.
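As a rough sketch of what that looks like in practice (the workspace name here is made up, and the exact API version depends on your kcp release), creating a new logical cluster is just applying a manifest against whichever workspace you are currently connected to:

```yaml
# Creates a child workspace of the workspace your kubeconfig currently
# points at, e.g. under root this yields the path root:team-a.
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: team-a
```

Once the workspace is ready, it is served under its own URL path, something like https://<kcp-host>/clusters/root:team-a, and clients can treat that endpoint like any other Kubernetes API server.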
The resources themselves are stored in a shared data store; kcp uses etcd, just like Kubernetes, but it separates the data in etcd by workspace. So if you have a resource in one workspace, you won't see that same resource in any of the other workspaces, because they are separated.

And because workspaces are organized in this tree structure, they share a lot of similarities with what you all know as a file system. With a little kubectl plugin that we maintain as part of kcp, you can navigate your workspace tree and talk to each Kubernetes API by switching workspaces, very much like you switch directories with cd on your command line. The plugin simply changes the URL that you're connecting to in the background, and that lets you talk to the different Kubernetes APIs.

OK, now we've laid the groundwork for kcp and how it differs from other solutions that use the Kubernetes API for platform engineering. So now I want to dive into managing APIs with kcp.

I like to call this kind of thing the API marketplace. I don't think that term comes from any of the white papers, but essentially, in platform engineering we have two parties that we want to connect as platform engineers: the infrastructure and service providers, whoever provides something to your users, and the developers and other users, people doing data science or whatever else. The way to connect these two sides at scale is APIs; that's what platform engineering does, right? And this is where platform engineers come in. Infrastructure and service teams, and honestly, I've worked on that side of the equation for some time, have a hard enough job keeping big services up and running; managing infrastructure is hard. Making their APIs discoverable and easily consumable is not something we can also add to their plates, because they are quite busy already. So this is where platform teams come in and say: OK, you have your services, there are consumers on the other side, let us help you connect them and make it as easy as possible for those consumers to use your APIs.

And this is where kcp comes in. I'm going to walk you through creating an API in kcp, how users can consume it, and then also how you can implement it, because, well, all of you know that creating a CRD in Kubernetes is nice, but you need a controller to power it, to actually reconcile it. That's exactly what we're going to cover here.

We start with something called an APIExport, which defines my API. Actually, we start one step before that, with something called an APIResourceSchema. I've put a schema on screen here, and some of you might instantly recognize it, hopefully because you work with it a lot, hopefully not because it traumatized you. The point being: these look like CRDs. So if you can create a CRD, you basically already know how to write an APIResourceSchema. The key difference between APIResourceSchemas and CRDs is that you decouple defining your API from registering it. When you create a CRD, the API is always enabled immediately: as soon as you create a CRD for a specific resource type, you can start creating resources of that type. An APIResourceSchema takes just the definition part; it's a way to define a schema that you can then enable later if you want to. So there's more flexibility here, because you decouple two aspects that are coupled in CRDs.
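To make that concrete, here is a hypothetical APIResourceSchema for the pizza example used later in this talk; the group, the fields, and the revision prefix in the name are all illustrative:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIResourceSchema
metadata:
  # By convention the name is <revision>.<plural>.<group>.
  name: v1.pizzas.restaurant.example.com
spec:
  group: restaurant.example.com
  names:
    kind: Pizza
    plural: pizzas
    singular: pizza
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                toppings:
                  type: array
                  items:
                    type: string
```

Apart from the kind and the revision-prefixed name, the body is essentially the same OpenAPI schema you would put into a CRD.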
Because APIResourceSchemas look like CRDs, you can easily transform a CRD into an APIResourceSchema. You can either use a kubectl plugin that we provide, or you can use the Go SDK that kcp provides, depending on how you want to deliver your APIs.

Once you've done that, you create your APIExport. An APIExport bundles the multiple schemas you've created into a coherent API; this is what you, as a service provider, want to offer to your end users. Let's say that when I created this, I wanted to provide certificates to my end users, and I also wanted to provide the ability to order pizzas, because I was hungry. The APIResourceSchemas define what the resources will look like, and the APIExport defines how they are delivered as one bundle to your users.

What I'm showing is the most basic APIExport you can create, but there are many other options. For example, if you don't want to allow users of your API to delete their resources, maybe because there's a company policy that important resources cannot easily be deleted, the APIExport allows you to limit the permissions that users of your API have. We're not going to get into that, because I only have 25 minutes, but the APIExport is where you define how your API will be consumed in the future.
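A minimal sketch of such an export, again with illustrative names, referencing the two demo schemas by their revision-prefixed names:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: restaurant.example.com
spec:
  # Each entry points at an APIResourceSchema in this workspace.
  latestResourceSchemas:
    - v1.pizzas.restaurant.example.com
    - v1.certificates.restaurant.example.com
```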
And speaking of consumption, let's switch sides. We were on the service provider side, the side that wants to offer an API. Now let's switch to the users, to the developers who want to consume the APIs. Ideally, we enable developers to consume APIs self-service: if we want to scale platforms, we need to let them register APIs whenever they have to. For that, kcp has a resource called APIBinding, and it works the following way.

If you connect to a specific workspace, and you can see up here from the URL that I am connected to one, so this is again a specific Kubernetes API on my control plane, a specific logical cluster, and you look at API discovery, you will see a couple of resources that are always there. You'll recognize them: they are core resources like ConfigMaps, Events, Namespaces. We decided to keep them in because they're generally useful. There are no Pods, though; the core API is scoped down here to what you need. And there's also a resource type called Workspace in here, which is how you stack logical clusters into each other: if you're in a specific workspace and you create another Workspace object through the API, you get a child workspace. That's how you go down the tree. And this tenancy.kcp.io API that you see is delivered in exactly the same way that you, as a platform engineering team, would deliver additional APIs.

The resource in question is the APIBinding. An APIBinding essentially says: I want to reference a specific APIExport, the resource we created earlier. That enables the resource types from the APIExport in the workspace you create the APIBinding in. You can see there are a couple of API bindings that are there by default when you create a new workspace; you always get the tenancy API, for example, which lets you stack workspaces.

But if I went ahead and tried the same thing with the APIExport I created earlier, with my two demo resources, I would not necessarily be able to. Binding an API means you start getting access to a service, and to limit that, kcp adds additional layers of RBAC. You're hopefully all managing your Kubernetes permissions with RBAC already; kcp adds a couple of special verbs that let you limit who can bind to your API. For example, someone might need to sign a EULA or usage guidelines before they are allowed to consume an API, and that is something you can manage with these RBAC permissions. Now, if you are allowed to create the APIBinding, the resource types from that APIExport show up in your discovery API, and you'll see the new kinds of resources you can now create. Certificates and pizzas are my examples, so now I would be able to create certificates and pizzas.
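As a sketch, this is roughly what a consumer would apply in their own workspace; the export path and names are made up for this example, and the exact field layout can vary between kcp versions:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: restaurant
spec:
  reference:
    export:
      # Workspace path where the APIExport lives, plus the export's name.
      path: root:providers:restaurant
      name: restaurant.example.com
```

And the gatekeeping mentioned above is plain RBAC on the provider side, using a kcp-specific verb; something along these lines, granted to a user or group via a ClusterRoleBinding, would allow binding this particular export:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bind-restaurant
rules:
  - apiGroups: ["apis.kcp.io"]
    resources: ["apiexports"]
    resourceNames: ["restaurant.example.com"]
    # "bind" is the special verb kcp checks when an APIBinding is created.
    verbs: ["bind"]
```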
Now we have created APIs, and we have started consuming them, but they're not really implemented yet, because nothing is reconciling them. We'll go through this part a little fast. kcp has a concept called virtual workspaces, which act as proxies in front of your actual control plane. They provide computed views of the data in kcp, depending on who you want to target and who should have access to which data: you can filter by resource types, by labels, basically whatever you want, and kcp provides a framework for building additional virtual workspaces. Here, though, we only care about the APIExport virtual workspace, which is built into kcp. This virtual workspace gives me a view of all the data for a specific APIExport, across workspaces: all the resources that the different teams have created of a specific API. Again, this is guarded by RBAC, so you won't be able to use it unless someone gives you explicit permission. And it is extremely useful for building controllers that reconcile your API. What you've been doing in Kubernetes, you can do in kcp, just at a grander scale, because you get all these different teams in their workspaces creating resources. Writing kcp-aware controllers is fairly simple; there are just three things you need: a kcp-aware client and cache, you want to talk to the virtual workspace I just showed you, and you need to set in your context which logical cluster, which workspace, to talk to. There's a fork of controller-runtime that we maintain that allows you to build kcp-aware controllers.

OK, that's where I'll wrap up, because I think I'm running out of time. kcp is a global control plane that allows you to build API-driven platforms. We're moving past container orchestration while keeping the Kubernetes API, because, even if you don't seem to love it as much as I do, it's a pretty solid API that you can easily extend. Workspaces allow you to construct an organizational hierarchy, so you can model your control plane after the different teams or the structure of your organization, which will look different for each of you. And it's easy to get into, because a lot of the concepts already exist in Kubernetes. Most of you have already implemented something like this; it just needs light modifications, and then it applies at the much greater scale of kcp.

In the end, I want to say again: this is a CNCF sandbox project, it's community-driven, and we welcome everyone who wants to join and shape, and maybe also change, kcp, because it's meant for platform engineering and we need to hear from you. It would be very interesting to hear: does this work, or does it not work? So I hope to see some of you at future community meetings or in our Slack channel. And with that, I'm done. So thank you very much. Does anyone have a question? Yes, back there.

"I'm assuming that one of the limiting factors for scale is the size of the reconcile loop. You get more and more resources, and you can kind of choke up the API. Are there any guardrails you can put in place so that one workspace doesn't starve everyone else?"

Can you come again?

"I can imagine that a single workspace will pile on more and more resources, the reconcile loop can get too large, and you can start choking up the API. Can we put guardrails in place so that a single workspace doesn't hog all the resources from the other workspaces?"

Good question. I think that's a great feature request. I don't think it's possible right now, but again, it's a great point, so I think that's something to be added in the future. Thank you.

"Thanks for the presentation. Working with different clusters also allows us to separate resources, node pools, and networks. Do workspaces expose this kind of differentiation, so that we can split them onto different node pools or have different ingress rules?"

I'm not sure if this addresses your question, but you can shard your kcp control plane, so you can scale it across different regions. Since kcp by default doesn't come with the container management part, you could add these APIs to your workspaces, but it's not a core aspect. So I'm not sure that answered the question, to be honest.

"Fine. Thanks."

"Thank you very much, great presentation. I actually have a question about usability. You said we probably all have been traumatized by CRDs. So it seems like a bit of a tall order to impose CRD-like specifications on app developers if they want to expose their API. Maybe this is a feature request, but is there any way to simplify that?"

It depends, I think, on what you mean by simplifying. At least for me, traumatizing was a strong word; the difficult aspect was managing multiple teams trying to install CRDs and wanting to have APIs available, and that's the part we solve. Generating APIs, I think, is already easier because you can reuse your existing tooling: you can generate a CRD and easily convert it to just the schema instead of registering it directly. But I could definitely see there being tools, maybe internal tools, that let you build APIs more easily, because you have a schema that's already defined and you just add a couple of parameters to generate it.

OK, thank you very much. Any other questions? Thank you, Marvin. Thank you.