All right. So welcome to the last talk, I think, before lunch, about CRDs. Everything today, I guess, was about CRDs; those of you who saw Joe's talk before, that was about CRDs, and this one is as well. I'm Stefan Schimanski, one of the initial authors, heavily involved in CRDs. So a lot of the things you see today, defaulting, structural schemas, blame me for that, I was all in there. Today we want to talk about something which is CRD-based but could be a next step for the ecosystem in how to use them.

This is how it started. A few weeks ago I realized how hard it is to get something like a MySQL claim API into my cluster. There are so many providers offering MySQL, but if I want one, I have to go to some website or some cloud provider UI to create databases. It's still not native: I cannot say kubectl create mysql without installing anything into my cluster. So what I want, and this is the mission, basically the dream of where we want to go as an ecosystem: I don't want operators for that. I want to consume a MySQL from anybody who offers it as a service. I want it native, CRD-based, in my Kubernetes, but I don't want to run anything. I don't want to run an operator just to get a software-as-a-service MySQL. For eight years we have built systems like operators, but in Kubernetes we basically don't support software as a service. That's the rant here. It started as a Twitter thread where I talked about this, and many people answered; I guess some of them are here. There is no SaaS support in Kubernetes, while the whole ecosystem around us builds software as a service. So can we change that?

We have spent years building operators, and I'm focusing here on operators which don't operate anything, operators which don't run local pods. The other class is totally fine, it's a good use case: if you run pods and you want to run your on-premise database in your cluster, that's fine. But if it's software as a service, I don't want operators. I think we as an ecosystem are doing something which is not really helpful; it creates so much complexity when I have to maintain operators for software as a service. Basically, we are building this: everybody knows those pictures, many adapters between different worlds, different standards, different API variants. But we're in Kube here, and we love Kube APIs. So we need something else. And that was basically the result of this Twitter rant: we need to build something which gives us software as a service natively in Kube. That's the topic of today.

So, the future. Let's imagine I want a database. I didn't choose MySQL here, I chose MongoDB, trademarks and everything, so MongoDB for today. We want MongoDBs. What if we had this command in kubectl: kubectl bind against a URL. And mongodb.com, that big MongoDB service provider, implements this protocol, whatever it is in the background. Imagine this exists. What happens? I'm redirected to my browser. I'm a developer, so I want a super easy experience. I'm on GitHub, so I click on authentication with GitHub and say yes, MongoDB may see my identity. Click, and I'm here. I want the MongoDB CRD, basically: the resource I can use to claim a MongoDB from a service provider. Imagine there could be more; some companies have hundreds of different types of objects. MongoDB is really single-focus, they just have MongoDB. I bind to it, I click on bind, and I'm asked about additional permissions.
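To make that flow concrete, here is a minimal sketch of what the command and the resulting claim could boil down to. The URL, API group, and field names are illustrative, not MongoDB's real API, just the shape of the idea:

```yaml
# Hypothetical developer flow (the URL is made up for illustration):
#
#   kubectl bind https://mongodb.example.com/exports
#
# After the browser-based consent, the cluster knows the MongoDB CRD and a
# claim can be applied like any other Kubernetes resource:
apiVersion: mongodb.example.com/v1alpha1
kind: MongoDB
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1        # non-HA developer database
  version: "6.0"
```

The point is that nothing beyond the CRD and this claim lives in the consumer cluster.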
So MongoDB wants to create one secret where they put a token to connect to the database, and they want to create a service in my cluster where I can connect to the endpoint of the MongoDB. And I say yes, they can do that. And here we are: we have a CRD in the system. My Kubernetes knows about MongoDB. Nothing installed, one command, a few clicks, done. There's no MongoDB claim yet, so kubectl get mongodb is empty. But I have one here, my MongoDB database, a very simple, nice API. I apply it. I check: there's nothing running here, just the infrastructure stuff you have in your cluster anyway. There's no MongoDB pod, because the service provider runs that for me. A few seconds later it's running, something provided by the provider, and I'm connected. I can look into it: there's a status, everything we know from normal native CRDs. It's running, the Ready condition is true, looks great. I have my service; I gave the permission to create a service, just this service, nothing else. It's there, and I can connect my application to it. I can check: there's an API binding, because kubectl bind creates a binding. There's a URL, it's connected, age is 11 minutes, and there's a heartbeat between my cluster and the provider, so both sides know the status of the connection. That's it. That's the future: a super simple extension mechanism for Kube. I don't want operators for that. We can do better.

So to look a bit into the details, what have we seen? An API is consumed in a Kube cluster without running anything in that cluster. We have seen more: we have seen API bindings, and I checked the API binding and its status. And this is the sketch of the API; it will become much more concrete later in the talk, here it's just a sketch. A binding on my side and an export on the provider side. On the right side you have the provider persona, the service provider, the database provider, MongoDB as a company. They export the API and I can bind to it. So I just say: the URL is that one, I want those resources. You remember the selection in the binding, so the resource I want is MongoDB in that case. And there's a binding.

Security. I don't want to give anybody access to my cluster. I think we usually share the view that a service provider shouldn't talk to my API server. So in this case, the service provider needs a way to see the binding, that's the first thing, and maybe access some more things like token secrets and the service. But if there's other stuff in my cluster, like other service accounts, deployments, config maps, anything which is not connected to MongoDB, the service provider should not even see it, and of course not access it. There's more than that. Everybody knows those screens: mobile applications can ask for permissions. Can I read your GPS position, can I read your contacts, whatever. Everybody knows that, and there are similar things for OAuth-based authentication; you saw that earlier on the screen when I gave MongoDB the permission to see my identity. We can imagine the same thing in this model: a service provider could claim access to the MongoDB object. This is pretty natural, right? That's nothing special I would have to agree to; if I bind to MongoDB, of course the service provider will see my MongoDB objects. But we also saw services and secrets, very, very slimmed down, a minimal set of access. We call this pattern a permission claim.
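kube-bind does not implement permission claims yet (as noted later in the talk), but to give the idea a shape, here is a rough sketch of how a provider's export could declare them. The group and field names are modeled loosely on how kcp expresses permission claims and are purely illustrative:

```yaml
# Illustrative only: permission claims are not in kube-bind today, and this
# group/version and these fields are hypothetical.
apiVersion: example.kube-bind.io/v1alpha1
kind: APIServiceExport
metadata:
  name: mongodbs.mongodb.example.com
spec:
  # The resource actually being exported: the MongoDB claim API itself.
  group: mongodb.example.com
  resource: mongodbs
  # Additional, narrowly scoped access the consumer is asked to consent to.
  permissionClaims:
  - group: ""                        # core API group
    resource: secrets
    names: ["mongodb-connection"]    # exactly one named connection secret
  - group: ""
    resource: services
    names: ["mongodb"]               # exactly one service for the endpoint
```

During kubectl bind, claims like these would be what the consent screen shows, and the connector would only ever sync those named objects.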
A permission claim could be anything in such a model where you claim access beyond the actual resource you're exporting. There are more variants of that. Who has one of these in the basement? I guess many people know it, right, the fuse box in the basement. There's this thing you cannot access. I mean, technically you can: you can open the anti-tamper seal with any kind of tool and get at it. But the next time the electrician comes to your house, they won't be happy, right? So, similar thing: we could protect access to certain things. One example here: there's a MongoDB object, and it has a status. The user should not touch the status. I mean, if they do, it's your etcd, so you can do it technically, but we can implement measures like admission, for example, which just stop that in the usual case. Again, it's like an anti-tamper seal: you can open it, but there's a reconciler, and the service provider will see what you have done. So there are consequences for the contract with your service provider. There are more things: you could place objects into the cluster which the user maybe shouldn't be able to update, like a resource quota. Again, this is not about high security, it's more about preventing bad user experience. If I only have permission to create five MongoDBs at the provider level, a local resource quota with five as the maximum makes sense, right? Then you get a natural, Kube-native message: this cannot be created because of quota (there's a small sketch of that below). Those things can be built in this model. So, what we have seen, and there are of course more patterns and ideas, you can get creative about this: we have seen exports and bindings, we have seen permission claims, and we have seen those inverse things which act like the anti-tamper seal.

So think about it. The rant from August: there is no SaaS in Kubernetes, but we can build it, right? This future is feasible. That was August 17th, and, well, we took it seriously and just sat down and built it. So there's kube-bind. It's a project on GitHub, go take a look. It implements the binding. It implements a connector which maintains the connection between your cluster and the other side. There are no permission claims yet, so everything you saw about permission claims and inverse permission claims is not there yet. We know how to build it because we have done similar things in kcp, so we have explored this domain and have ideas what it could look like as an API. But this is the next step. kube-bind for plain resources exists today, I will showcase it in a second, and after the talk you will have a chance to try it yourself.

All right. So it's kube-bind, on GitHub at kube-bind/kube-bind. If you go to the repository, you find basically three components. There's a kubectl-bind plugin. There is a vendor-neutral component which does all the logic, so that's the hard part: a connector, a vendor-neutral agent on the cluster, but super minimal, it just syncs the objects back and forth. And there's an example backend. You can run this example backend on any Kube cluster. It uses namespaces for tenant isolation, so you can connect many clusters, and many namespaces in those clusters, and create MongoDBs, for example. Very important: this is an example. There's a protocol, mostly based on APIs, so you can look at the APIs and you will see what the protocol actually is. A lot of the backend is example code; everything about authentication and authorization is just an example.
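Coming back to the quota idea above: Kubernetes' built-in object-count quota is already enough to express it. A minimal sketch, with the resource and group names made up:

```yaml
# A quota the provider (via the connector) could place in the consumer
# namespace, so "too many MongoDBs" surfaces as a normal quota error.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mongodb-claims
  namespace: default
spec:
  hard:
    count/mongodbs.mongodb.example.com: "5"   # object-count quota for a custom resource
```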
Back to the backend: if you want to integrate this with your service provider code, architecture, whatever you already have, like if you are MongoDB and you already have authorization and user databases and everything, there's no need to replace that. Just take this example backend, adapt it to your needs, support the protocol. That's it.

Now the kubectl plugin, kubectl bind. There are essentially two parts to it. The first one is basically what we have seen here: you pass the URL. This is nice for the developer workflow, right? Everything is done with a click in the browser, very easy. But eventually you want to use this in production in some sense, for example with Argo CD and GitOps tools, whatever. And there's a second part: when you call the first command, the second part is implicit, but you can also call it explicitly. You can basically run the bind in a dry-run mode and output the YAML. It gives you a request object. You can then take this request object and persist it; it has everything describing what you want to bind to. And then you can use it as a basis for automation, for example.

And it looks like this. Again, the components I just showed: kubectl bind is here, the connector is running on your cluster on the left, and the backend is running here. It's basically a set of controllers operating on those API objects here, and it uses normal Kube namespaces for isolation. That's why it's a dotted line, because obviously those are namespaces; everybody here in the room knows what namespaces can do and what they can't do. And you can run any kind of operator to do the actual work on those objects. I will show in a second how to use the MongoDB community operator, just as it is offered on GitHub, and just build a service out of it. We will see it in a second.

Okay. So let's build MongoDB for real. In August it was all a rant, nothing else, but we can do it for real. So I found this: MongoDB. Let's implement MongoDB by using a MongoDB operator. That's the repository. I installed this thing in my cluster, and I found out it's actually pretty hard to use. There's an API, obviously, to create MongoDBCommunity objects; they define databases, basically, and they're namespaced. And the first thing I realized: oh, you have to create those other seven objects manually, here on the right side. There are two service accounts, two roles, two role bindings, and so on. And I don't want to give this to users, right? That's far too complicated. I don't want that. It must be simpler. There's a term I like to use here: this mess we have just seen on the slide does not matter if it's in a warehouse, right? Warehouses are ugly, but they are super optimized. So let's try to do warehouse-style computing in Kube for software as a service: messy things you have to maintain, but the customer will never see the mess. A pretty cool change to the operator model we have seen, I think.

So this is what I want to show my users. I want to have this API: a replica count, so it's HA or it's not, and maybe a version. That's it. And I want to map that to MongoDBCommunity. So, Crossplane to the rescue, I thought, preparing this talk; I wanted to show MongoDB directly, but that's tricky and ugly. Crossplane can do this templating of the main object plus those seven objects I also need, right? That's what I did: I want to use Crossplane to make this simple. Okay, that's the architecture. And here you see the power of community: Crossplane is a community project, and I use it as is.
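To give a flavor of that, here is a heavily trimmed sketch of the Crossplane composite resource definition behind such a simple API. The group, kind, and schema are illustrative, not the demo's actual manifests; the matching Composition (not shown) would then template the MongoDBCommunity object plus the service accounts, roles, and role bindings:

```yaml
# Trimmed sketch of an XRD exposing the simple MongoDB claim API.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xmongodbs.example.org          # must be <plural>.<group>
spec:
  group: example.org
  names:
    kind: XMongoDB
    plural: xmongodbs
  claimNames:                          # the namespaced claim users interact with
    kind: MongoDB
    plural: mongodbs
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:                # 1 for non-HA, 3 for HA, for example
                type: integer
              version:
                type: string
```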
We have built kube-bind, and the two of them together give you a service. You, as an internal IT provider in a company, for example, can build that, right? There's no line of code; it's all defined by Kube resources. So what do we see? At the top, in green, you see everything about kube-bind: here's the binding, here's the export, and it exports a CRD. That's a MongoDB CRD, reflecting, basically implementing, the schema of the example API we have just seen. That one: a MongoDB CRD. And Crossplane adds some stuff for templating. Think of Crossplane just as a templating engine, but built from Kube resources. So what do we do? We define something, I can't see it from here, yes, a composite resource definition, that's a term in Crossplane, and out of that comes the CRD we want to expose. And we implement that via a Composition, also a Crossplane term, basically a template for resources in Kube.

So it looks like this: lots of YAML. I'll just give you a glimpse of it; take a look at Crossplane if you haven't seen it, a pretty cool project. Basically, what we have on the left side here is the composite resource definition. It's like a CRD, very, very similar, with some other stuff inside, but basically you see the name of the CRD we want to expose: a MongoDB. And on the right side you see the templating. So the left side is the API definition and the right side is its implementation. You can imagine having more than one implementation. You are the service provider for MongoDB in the company: you run your developer workloads maybe in some software-as-a-service instance of it, because that's fine, it's quick, it's cheap. But your production workloads you want on your own database clusters in your data center, because that's cheaper at large scale. You can do that: you can implement the API on the left side via multiple Compositions. A pretty powerful concept. Anyway, what you see here on the right side is basically a list of resources at the bottom. The first one is the MongoDB object itself, and then there are role bindings, service accounts, all the things we have seen on the slide before. And out of that, if you apply the Crossplane machinery, and you have Crossplane running of course, you create those objects in the cluster. You get a normal CRD, and kube-bind will just pick it up and export it.

Let's do that. So what I've done here: I have a cluster running, obviously, a very simple cluster. Upbound's Crossplane distribution is running here, the MongoDB operator is running, and not much more. I have cert-manager because I have a web frontend here, I have Dex running for authentication, and I have my example backend running. That's basically it. So let's see. I run the bind command against the MongoDB export URL. I open the URL. If the wireless works... I was pre-authenticated because I clicked on the GitHub link before, so that's why you don't see GitHub. I bind, and then I have a CRD. And this is real, it's not faked. So I can see how the connection goes. Ready is true. So, no, it's not ready, it's not ready... oh, it's ready. All right. And now we look at our MongoDB example, like this one, and we apply it: the MongoDB demo. And now something should happen on the right side. You don't have to read it, just see that the resource is created. Crossplane is now templating the stuff out into the cluster. It creates a namespace; there's a namespace for my cluster.
So here you see a namespace called cluster-<something>-default on the right, because I'm in the default namespace here on the left. So I get the namespace here on the right, and everything is created in it. And eventually you see the pod running. That's great, it has already happened: there's a demo-0 pod, it's a non-HA MongoDB, and it's running. And I can also check the status of the thing. Where is it? I don't have it handy. Anyway, the status also says I have my MongoDB. Super powerful. No code, not a single line of code. That's the architecture; I'm happy to talk about it later.

All right, kcp. I'm from the kcp project. We have explored this a little bit in a different context. Everything you have seen here is on Kube; kcp is basically the bigger version of that. If you are MongoDB, the company MongoDB, and you want to build this and you have 20,000 customers, Kube is heavy and namespaces are maybe not enough for isolation. kcp gives you other isolation and efficiency in operating all of that: there is real isolation between workspaces, and everything is very lightweight. That's why kcp plays a role here. That's just a sketch, I'm happy to talk about details, I'm just showing it very quickly. This is the data model behind it. You can see the first part of the talk under this URL; if you want to share it, just do it. And it's an open source project. We have a channel in the Kubernetes Slack, kube-bind. Come to the project, we love collaboration. This is really just the start. If you are interested in this topic and you want to shape it, you want to get your features in there, join us. Tomorrow there's another meeting; we have a room for a birds-of-a-feather session, it's in the schedule, you will find it. So if you want to talk about details, or have questions about code and the API, come there. We are there to talk about it.

Okay, we are nearly at the end. Any good talk needs a task and a t-shirt to take home. I don't have t-shirts here, but we have SaaS, so we have built a SaaS service for you to use. I have uploaded the slides, so go to the Sched page; you don't have to take a photo or remember anything. There's an API called t-shirt.kacio, and you can bind to it from your cluster. Before doing that, of course, install the bind command; we have a krew index, so use krew, the plugin installer, to get it. Then apply your t-shirt claim, and when you do that, check the status: you will see a code and a booth where you can pick it up.

All right, any questions?

This might be a newbie question, but how would you deal with the OAuth or OAuth 2 flow for authentication in a more controlled environment, where a user might not be doing that interactively?

In the export API there's actually a binding provider API which allows different methods to authenticate. So the provider can say: I support OAuth, I support a token flow, I support those other things, password, whatever. We are prepared for that.

Yes, I didn't talk about that. The CRD is really owned by the service provider, so you don't have to care about it. The service provider sees the CRDs; they see the status of the CRD reflected back into the system, so they can see, I don't know, the storage versions, for example. So you don't have to care. Of course, you can touch it, but then the reconciler will take it over again.

Okay, so is this something to merge into upstream? There's nothing forked in Kubernetes; this is purely built on top. Whether kubectl bind becomes a thing which kubectl just ships, I don't know.
I know some colleagues are here who are involved in kubectl; talk to them, red t-shirts over there. Maybe it's not even necessary technically.

So this puts the onus on the service provider now for maintaining CRDs? Yes. And that's no longer a problem for the operator of the cluster? Yes, it gives it back to the people best placed to do it. I would say we fix the personas; I think this was off for a long time. Yes.

Is there some metadata or anything else? Oh, this is part of the CRDs which are exported. I mean, in the example it's a claim, and a claim which, in this example, creates a service and a secret. So you would mount the secret into the application, and you get a service, like a DNS name, to connect to. But this is really part of the exported API, not of the binding mechanism; it's part of modeling which APIs you export. So the service provider has to build something which fits your use case.

Hey, I think this was really cool, thanks for showing us. I was wondering, what are the remaining CRDs that are still in a cluster on the client side? So my mission is to get clusters as empty as possible. And there's another thread around that: give people admin permissions again. Often the reason that people aren't admin in their clusters is that there's so much stuff running in them. So if we take out those operators which are operated by another team, we empower the people who own the clusters again. Of course, and I said it in the beginning, there are operators which run locally and they make total sense. I just think we, as an ecosystem, have built too many operators which are just glue. You remember the adapter chain from the beginning? We have done too much of that.

Is there a question over there? Sorry. I saw the diagram and I was trying to make sense of it quickly. On the service provider's side you had some, I think, red lines delimiting what you call a cluster. Yeah, this one maybe, that's the hard isolation, and I had another one, earlier, somewhere... I don't remember. Is it this one? So is it the case that the modeling is that for each cluster in which there's a consumer... Maybe I show the data structure, that's maybe better, and just very quickly, because people want to go to lunch. Anyway, there's a cluster namespace, so one per cluster, and there are more namespaces, like the one at the bottom here, which are per namespace on that cluster. Here in the example backend we just prefix the lower namespace with the cluster namespace name, but this is totally up to the implementation of the backend. We use namespaces for isolation here because that's the only thing we have in Kube, right?

Right, so the service provider would have to think through how many clusters... Yes, and of course you can also imagine using this model with plain Kube, without kcp. kcp, of course, is much more advanced, but you can even have multiple service clusters: a database team can have 17 clusters running, and the backend, by capacity balancing in some sense, will point one consumer to this service cluster and another one to another, and distribute the load.

Thank you for the talk. My question regards the permissions: you can specify what RBAC permissions you want, you need. And I think it's pretty common that a service is created and you have to read or list the resources and write to the status. So is there any default setting for this? Yeah, so actually RBAC doesn't play a big role here.
The idea behind the permission claims model from the beginning is that this agent thing, the connector, offers just the view that is necessary. And this is not RBAC, right? It's much more powerful, or rather not more powerful, let's say more specialized, for this use case of a service provider: you offer only the view into the cluster that is necessary. It's not RBAC. On the cluster, you can use RBAC as well to limit access. Yes.

My question was whether I need this powerful mechanism, or isn't it standard that I just want to create a service for accessing the service, and to read the custom resource and update the status of the resource? I mean, then you need something in the cluster. The service provider has no access to your cluster, that's very important. It's always calling out: the connector connects to the service provider, and there's no way back. Everything is built into this protocol; I can talk about details later if you like. So it's made in a way that it's secure, it's calling out. You can control RBAC yourself, but you have this additional mechanism to protect against what providers can do.

So I understand the syncing of the spec and the status and the CRDs here, but I was a little confused about how the controller works. Maybe can you go back to that other picture? The controller on the service provider side is looking at the resources on the service provider side, like a MongoDB operator creates deployments and other resources to bring up MongoDB. So I was confused about how the controller that's running on the service provider side does that part, creating the MongoDB resources. It's a stock open source operator, this community operator for MongoDB, unmodified. It will see the MongoDBCommunity object, that's an API they define, it sees it and does its job. That's not so special. But how does it create the MongoDB deployments and other resources on the client cluster? No, no, it's not on the client cluster. It's software as a service: it's running on the service provider cluster. There's nothing in the client cluster. One could think about permission claims to create a deployment, one could think about those models as well, but it's much more complicated. That is also possible in theory with the model, but the value proposition here is about software as a service. I see. So when the client wants to use MongoDB, it just connects using the MongoDB client, or whatever the normal user does, on port 27017 or something. Okay, got it. That makes sense. Thanks.

Probably time for one more question. So how does the credential, say a secret, something that has to exist on the customer's, the consumer-side cluster, get there, so the application can connect and consume the service that's provisioned at the service provider? Which component actually puts that piece of information there? A lot of this is part of the permission system you implement; maybe there are service accounts around, right? Maybe you are admin and you bind to the API, but the application using it will, for example, use a token or something. This is completely part of the modeling of the API you're exporting. That's not solved by the bind mechanism; we don't dictate any authentication model at all for the services we export. That's completely part of the APIs you export.
If you have something like service accounts, service account objects for example, export them, make them available to the user, and just give them a nice API to attach the right identity to their MongoDBs. So it's API modeling which has to happen there, on the service provider side.

Sorry, I didn't quite get that. In terms of MongoDB, what is the credential actually, in the end? When we provision the service, how do you connect to it? You need a piece of... There's a secret created, like I showed in the beginning with the permission claims. There's a secret, with a known name, in your cluster. Which component actually made that happen? Because that's... The service provider. The service provider gets access to sync a secret, to create a secret on your side, but just this one secret with a specific name, nothing else. That happens on the client, I mean the consumer side of the cluster, so the connector does that? The connector just syncs: it will make sure that the secret the provider has created gets into the consumer cluster. That's the connector. So the connector syncs the secret over from the service provider into the consumer side. Yes. And the connector is open source and vendor-neutral. Everybody can see what it does; that's powerful. It's a community component, everybody can review and audit it and everything. So it's safe: the secret really lands on your side, and it cannot do anything else than create that secret. Got it. All right. Thank you. Time for lunch.
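To make that last exchange concrete, here is a minimal sketch of the one consumer-side object the provider would be allowed to write through the connector; the name, keys, and values are made up:

```yaml
# Illustrative connection secret, synced by the connector into the consumer
# cluster. The provider can only create or update this one agreed-upon object.
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-connection
  namespace: default
type: Opaque
stringData:
  # Connection string pointing at the claimed in-cluster service, which in
  # turn fronts the hosted MongoDB endpoint.
  connectionString: mongodb://demo-user:<token>@mongodb.default.svc:27017/demo
```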