Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host today, Whitney Lee. I'm a CNCF ambassador and a developer advocate at Tanzu. Every week we bring new presenters to teach you about cloud native. We'll build things, we'll break things, and we'll answer your questions. Viktor's here, so we're guaranteed to break some stuff. So here we have my good friend Viktor Farcic with us to teach about how to design a database as a service. But before we get to Viktor, this is an official live stream of the CNCF, and as such it's subject to the CNCF code of conduct. That basically means please be nice to everyone and don't do anything disrespectful. Friends who are in the chat and joining us live, please do say hi and where you're joining from. I think it's so cool that we're part of a global community. And with that, I'll kick it over to Viktor. Teach us about building a database as a service. Okay, okay, so database as a service, right? Everybody understands conceptually what it is: you have a service, you're using that service, and you get a database, right? And for the majority of people, database as a service means RDS in AWS, or Google something-something, or Azure something-something, or smaller providers. There is database as a service in Civo or in Linode, right? There's some third-party provider somewhere, and the database is living somewhere else. Or you might be building your own database as a service, right? And it might be on-prem. You might be that good. That's great. Now, I believe that, at least among the existing solutions I've seen, there are always some things missing, and I feel that we can create a better database as a service. Now, what that "better" means, you're yet to discover. Actually, Whitney's about to tell you what that better means and what everything should be, and I will be drawing whatever Whitney says. That's a trick.
How do you do a live session without being prepared? You just tell the host, now you do. That's a very easy way to do it. So what do we need, Whitney, to have a database as a service? A database. Let's start spitballing, right? Kind of anything. Yeah, and from the chat too. By the way, I haven't been warned about this, so I'm going to be speaking completely off the cuff. So, I mean, you would need, first of all, a database server, right? Okay, but before the database server, we probably need some kind of place where we can create some infrastructure, right? Okay, yeah. We're getting to the nuts and bolts of it. Also, hello to all of our friends in chat. We have Berlin, South Carolina, Alabama, and Sierra Leone. How cool is that? Okay, so first we need a place for the database to live. So we need infrastructure. Yeah. Over there, you will be creating database servers, right? Mm-hmm. Data... oh no, that's not data. DB. This is going to get messy really quickly, right? What do you mean, going to get messy? It already is. Compared to what it is now. Huh? Okay. So what else do you need? You need some networking, right? Some storage. Oh, yeah, okay. So you need network and storage, right? Now, how are you going to create those things? You need like an API or something. Okay, cool, cool, cool. Some way to talk to it. So you need some API that you can communicate with, right? Yeah. Now, how will those things happen? You probably need some orchestrator, some scheduler, something that will figure out where to run those databases and how to run them, right? Yes. So Kubernetes. Okay. Or just any kind of, yeah. No, no, we are now talking about, let's say, AWS, right? We're not even getting there yet. Okay. Now, luckily for us, AWS already comes with those things, right? That's how AWS works. Okay, I was going to say, yeah. Right? Now, you need a place from where you will be doing those things.
Your laptop, a server, maybe another Kubernetes cluster, right? Uh-huh. Some sort of management compute. Yeah. I will call it a control plane, but it can be your laptop, right? It doesn't have to be anything special. So, something from where you send requests here. Uh-huh. The API talks with the orchestrator, the orchestrator figures out where to run the database and networking and storage, and it comes back to the orchestrator. The orchestrator goes back to the API and tells you it's done, right? It works, or something like that. Hedy says: you don't need all this, you just need a credit card. Yeah. Yeah. I mean, but the credit card goes both to the hyperscaler like AWS, and also to the army of people who are going to do all that. Yeah. There are humans involved. Okay, there is a human. There is a person here. Uh-huh. I'm going to call them the service consumer. That's the person who comes here and says, I want a database. Yeah. And that's something that gets created, right? And the idea is that there are lots of different people who are going to come through and say, hey, I want a database, and use this system that we're putting into place right now, correct? Correct. Exactly. So this is many people, right? Yeah. These are the other people here, right? It's like a whole school reunion. Right? Now, how do you do this? You don't SSH to the server. We don't live in the 90s anymore, right? There must be some API here as well, right? Uh-huh. With which you will communicate with that control plane, which will do something. It doesn't matter whether it's a server or whatever that something is, right? Now, the reason why you are creating the database is probably not because you just like creating databases. You probably want your applications to use that database, right? Yeah. So we probably need another cluster, or the first cluster, I'm going to call it the app cluster, that will use the database. This is my app, right? Uh-huh. It somehow needs to use the database, send queries to the database, stuff like that.
But now we have a problem, right? How does this application know how to connect to the database? Yeah. Here, I can assume that we created the database and we probably got some secret with the authentication. We need to move that secret here, right? From here to here. To the app, yeah. Now, we could copy the secret manually, right? But that sounds like a silly thing to do. We could store it in a Git repo. With SOPS? That would be brilliant. Yeah, SOPS would be one option. But without saying which is the option, we probably need a secret manager, which can be AWS Secrets Manager or Vault or something like that, where we are going to push this secret so that when you need to connect directly as a human, you can get it from here, right? Yeah. Or you can pull that secret from a cluster, right? And put it here, so that your application can get that secret, find out where the database is, and connect to it, right? Okay. Do we need anything else? Anything else come to your mind? By the way, I don't have an eraser function in this thingy, so. No eraser. Okay. So the user says to the API, I want to make a database. So you're on the control plane, and the control plane is going to help you. It's going to talk with that API, which is going to talk to the orchestrator, which is going to make a database. It's going to return access credentials. It's going to tell you it's made, return a secret, and how to reach the database. And then meanwhile, as a human, if you're making this database, really you want your app to be able to talk to the database; you probably don't care about it for your personal use. So now you have these credentials, and you need to get those credentials to your application, and you do that with the secret manager. Exactly. And now, oh, probably, maybe. In this context, we mean storage, and networking, and all the shenanigans over there, right? The connection details, that's what we mean.
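To make the secret-propagation step concrete, here is a rough sketch of pulling the generated database credentials from AWS Secrets Manager into the app cluster with External Secrets Operator. The store name, secret names, and namespace are invented for illustration:

```yaml
# Hypothetical sketch: pull DB credentials from AWS Secrets Manager into the
# app cluster via External Secrets Operator (ESO). All names are made up.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
  namespace: a-team
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws                # assumes a store already configured with AWS credentials
  target:
    name: db-creds           # the Kubernetes Secret created in this cluster
  dataFrom:
    - extract:
        key: db-creds        # entry in Secrets Manager (username, password, host, port)
```

A mirror-image `PushSecret` resource on the control plane would handle the push direction described above.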
No, no, I meant before: when we say database, we mean more than the database. There is networking, there is storage, and whatnot. Got you. And then what about the database schema? Ah, great, great. Oh, I'm liking this. I'm going to make this bigger. Schema. We need that as well, right? And that schema is going... but where do we apply that schema? I don't know. To what? We need a database inside of this server, right? Yeah, that's it. This server here will have one or more databases. So this is a server now. RDS is just a server; it doesn't have databases yet. We need databases there, and then we apply the schema to a database. Oh, okay. So those are the little databases. So there is a database server, we can have one or more databases in it, and then we apply schemas with tables or whatever people do in those databases, right? Yes. By the way, this is not another human; this is me not knowing how to draw a database. There we go. We probably need at least a user or two. Like an application user? Yeah, an application user or a human user, you know, users with usernames and passwords and certain permissions, or a godlike admin who can do anything. So, these kinds of authentication details. Oh, so more like the administrator. Yeah, exactly. Admin user, application user. For all those users, the generated usernames, passwords, IPs, and ports get stored in this secret, right? Okay. Over there. So that's a good start, right? Yeah. Now, I know that it's hard through comments and stuff like that, but if anybody has any suggestions, we are spitballing here, right? So if you see that Whitney misses something, let us know. Let me know. Now, let's try to figure out what we're going to use for each of those. For this, we can use AWS or Google Cloud or one of those, or whatever you're using on-prem, right? I'm going to put AWS here just to put something, but it can be anything, right?
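The "apply a schema to a database" step can be done declaratively. Here is an approximate sketch using the Atlas Kubernetes operator, which comes up later in the conversation; the secret name and SQL are invented, and the CRD shape should be checked against the Atlas docs:

```yaml
# Rough sketch of declarative schema management with the Atlas operator.
# Treat the field names as approximations, not the exact API.
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: mydb
spec:
  urlFrom:
    secretKeyRef:
      name: db-creds        # connection URL taken from the generated secret
      key: url
  schema:
    sql: |
      create table videos (
        id varchar(50) not null,
        primary key (id)
      );
```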
For this control plane, I guess the answer is obvious, right? Kubernetes. Exactly. Kubernetes, right? And probably, let's say this application cluster will be... this might be Kubernetes, or it could be some managed service like Google Cloud Run or Azure Container Apps or something like that, but let's say Kubernetes, just for the sake of argument, right? The secret manager would probably be... HashiCorp Vault is the big one, but it could be... Vault, exactly. People can choose different options, but let's say this is Vault. And what else do we need? We need something to manage this schema, some application, some tool. Well, once we're getting into Kubernetes, I'm starting to think, okay, we have a declarative model, so we need... Where are we going to store the configuration for anything? Yeah, here. You mean for those created databases? Yeah, uh-huh. Yeah, I mean, I'm biased, right? This is going to be Crossplane, trust me. This is the only thing that is not up for discussion, because nobody wants me to get fired. There's a good reason for that, right? And since we're managing everything from Kubernetes, for the schema we need to choose a tool, like one of the tools that I know that you know, because we... SchemaHero. SchemaHero, okay? Even though this is happening in the cluster, I'm drawing it here. This could be SchemaHero. Or another option is Atlas. So to give some context: one, Viktor's a developer advocate at Upbound, that's his job, so that's why he's talking about getting fired if we don't use Crossplane. Nobody else needs to use Crossplane, but if I don't put it in this diagram, I'm in deep trouble. And number two, Viktor and I host a show together called You Choose; that's on his DevOps Toolkit YouTube channel. So that's partly why this is especially informal, and partly why he's able to say, I know you know about SchemaHero, because we covered SchemaHero together on our other show. We have some...
Normally we are both very formal, but in this setting... That's why I'm not wearing my bow tie today. I don't have my pinkies up, yeah. We have some opinions coming through in chat. Scott Rosenberg. Hi, Scott. That was AWS, Crossplane, Atlas, Vault, and ESO, the External Secrets Operator. Oh, yeah. We need something to push those secrets from here to the secret manager, and to pull them from the secret manager to here. That's ESO, or External Secrets Operator. Yes, good choice. Yes. What's Atlas? Atlas is like SchemaHero, but better. Okay. And then tell us briefly what Crossplane is, for those folks who don't know. So Crossplane allows us to manage... to extend... There are two important parts of Crossplane. One is that when we install what we call providers, we get new controllers and new custom resources to manage stuff, whatever that stuff is. We are extending the capabilities of Kubernetes. So for example, I can have a CRD to manage RDS in AWS, right? A CRD is a Custom Resource Definition, which allows me to create CRs, Custom Resources. Another CRD to manage networking, storage, and anything else, right? So it allows us to extend Kubernetes with additional Custom Resource Definitions and controllers to manage something, and what that something is depends on which provider we install. In this diagram, that would probably be two providers: one for the AWS stuff, and another one, the Kubernetes provider, that would allow us to manage the Atlas schema and ESO and whatnot. And then the second part of Crossplane is creating your own custom resource definitions and controllers that wrap all these things, right? If you tell a developer or some person, hey, you need to create RDS and networking and storage and an Atlas schema and ESO and this and that, they would freak out. So we can wrap all that together into what we call an XRD, or Composite Resource Definition. And you say, hey, this is the interface you can use to manage stuff.
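The two providers mentioned for this diagram would be installed roughly like this. The package references and versions are approximations; check the Upbound marketplace for current ones:

```yaml
# Hypothetical sketch: installing Crossplane providers for AWS RDS and for
# managing objects in other Kubernetes clusters.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-rds
spec:
  package: xpkg.upbound.io/upbound/provider-aws-rds:v1.1.0
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-kubernetes
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.13.0
```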
And we're going to see it in action very, very soon. So, to recap: you can bring in any resource, bring it into Kubernetes, and interact with it via the Kubernetes API. So you can use a Kubernetes control loop to keep your desired state in sync with your actual state, and you can also use it in tandem with other stuff in the Kubernetes ecosystem. And then the second part, that XRD part, is just a way to offer a simplified interface to anyone who's going to interact with a database. Correct. We have another comment I'd like to address: you'll also need backup management, disaster recovery, replication management, et cetera. Should that be part of the design? Yes, yes. So there should be backup, disaster recovery, all those things, exactly. Absolutely. I'm just running out of space; I don't know where to write backup and whatever else. And we have a Crossplane question. Yes, please. So rather than writing custom operators, I would use Crossplane? Yeah, Crossplane will give you custom operators, it's just that it makes creating those custom operators much easier than saying, hey, let me open Visual Studio Code and start typing Go code that will eventually become an operator. So a lot of that code is already in there for you: all the managed resources that you're importing, all the different APIs you can access as part of the providers. Managed resource; there's a whole new set of vocabulary to learn that's relevant to Crossplane. We have another one: we also need events to work and bubble up to the XR, to the Crossplane claim. Oh, I suspect that Scott watched one of my previous videos where I was complaining about that. Not yet working; on it. That's the answer to it. Excellent. Cool, I love all this interaction. Let me slightly correct that: not all events bubbling up, but selected or transformed or completely custom events bubbling up.
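The XRD recapped above, the simplified interface for the database consumer, could be sketched roughly as follows. The group, kind names, and fields are invented for illustration:

```yaml
# Very rough sketch of a Composite Resource Definition (XRD) that exposes a
# simplified "SQLClaim" interface. All names are hypothetical.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xsqls.example.org
spec:
  group: example.org
  names:
    kind: XSQL
    plural: xsqls
  claimNames:
    kind: SQLClaim
    plural: sqlclaims
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    version: { type: string }   # e.g. "14.10"
                    size:    { type: string }   # e.g. "medium"
                    databases:
                      type: array
                      items: { type: string }
```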
We need to figure out how we propagate events, or create user-specific events, and bubble them up to the parent resource. What about an identity provider? Yeah, that could be there as well, right? If you want to, instead of using a username and password to connect to the database, you can go through some identity provider. Depending on whether we're talking about processes connecting to the database, it would be one solution; if it's users, it would be another. But that brings me to one thing that I forgot. I would like the code here, for me, not to have to include anything database-related, like how to connect, various secrets, and stuff like that. Nothing hard-coded into the app itself. Another project here, one of the projects I really like, is Dapr. If we add Dapr, Dapr knows from this secret how to connect to the database, and your application doesn't need to worry about it. Dapr will add a sidecar container to your application, and your application just says, go to the database. I don't know where it is, what it is, how to connect to it. Just go there, do stuff, and come back to me with the results of a query or whatever you're doing. Nice. We also have, regarding backup, this user says it's in the design because it comes as part of RDS. So it's part of AWS's offering. Yes, it all depends; some of those things are easier, some of those things are harder. For example, in this design, you might want to say it's part of RDS, but those people here are using some kind of abstraction to create those databases. You want to expose maybe a weekly or daily schedule, or something like that, for when to create backups, and then that information gets propagated, in this case to RDS, which already has it baked in. Some other databases don't. Or sometimes you might not be happy with how the backup itself works in whichever provider you're using.
And Scott says, we also need a UI to interact with the XRDs, not just an API. Yeah, so for people who don't... I mean, we need the API no matter what. Yeah. I will add to that. Now, the question is whether those people will interact with the API directly by writing YAML and then kubectl apply, if this is Kubernetes, or whether they would push it to Git and then let Argo CD or Flux do the magic of synchronizing that to the cluster, which is a better method. And then we might want to have some kind of graphical user interface, which could be Backstage or Port or whatever people like, that would provide fields and drop-down lists; people fill them in, click a button, and then that button does the same thing: either it goes to the API here or it pushes to Git. So yeah, you don't have to. If you're not into writing YAML, a UI can help with that. For the operational side, right? Which is debatable; I'm not the UI type of person myself for operations. But then for observability, 100% UI. Yes, you need to see statuses and events and logs and whatnot. And regarding the database, the UI really depends on how much knowledge the users have. Do they need their hand held? Do they need a UI? Or a UI could be a way to show them what's available. Yeah, there are a lot of reasons. A CLI could be enough. It's true. Exactly. This could be enough. It depends. It depends on the users. Exactly. Now, in all cases, I assume that these users are not database experts, right? Let's say a developer: I have an application, I want a database. So yeah, those users can use a CLI or a web UI or whatever somebody likes, as long as this interface here, this API, makes it easy, right? Now, I would never ask a developer to write thousands of lines of YAML, but if it's 10 lines or 20 or 30, it should be fine, or a web UI works as well. And we have a question: what does Dapr do? Okay, so with Dapr, we tell our containers, hey, use this Dapr component.
And then Dapr attaches itself to a container of your application, like a sidecar. So instead of your application loading a secret and parsing the username, password, IP, and whatnot, you can use the Dapr client, or directly make requests that say just go to the database, and that request goes to the sidecar container, and the sidecar container's job is to figure out where the database is, how to connect to it, and do the heavy lifting. It's conceptually very similar to service mesh sidecars, where with service meshes you just say, hey, I want to talk with that application, and you don't go into the details of how and why and whatnot, right? The service mesh will redirect those requests wherever, or even not a service mesh, even a Service itself in Kubernetes, right? This is conceptually similar to a Service, except that it works with databases and some other things. Basically, it makes your application not care about the details of the database, or how to connect to it, or many of those things. So you run it as a sidecar next to your application container, and it helps you connect to some external thing. It integrates with all kinds of stuff, and in this case, it's a database. Yeah, let me see, this is me now winging it a bit. Let me... Okay. I just realized that I cannot open a browser without removing the drawing, because it's on a screen behind the drawing for now. Let me try to find an example of a sidecar... We can't see anything but a black screen right now. I don't know if that's intentional. Yeah, that was by design. Here we go. Okay. I found it. Here I have a silly example, let's say main.go. So this is a silly application, but basically, let me find... No, not that one. Let's see. What else do I have here? A cluster function... no. I know I have it somewhere. I'll find it, I promise. Let's go... Where are my files now? Okay, I'm going to remove the drawing for a second because it's easier for me to find it on a bigger screen. Dapr. No. So I'll talk...
Talk about something. Yes. Yeah, we have a question about whether we want monitoring, or whether we want threat management. So now we're getting into full-on production everything. So... Presumably yes, we would want observability. Absolutely. And then threat management: Viktor and I are just wrapping up, pretty soon now, a whole chapter on all the different security tools for you to use. So yes, a threat management tool: probably adding policy at the cluster level, probably managing threats at the kernel level, then we have scanning for vulnerabilities, all sorts of security stuff that we want to think about too. But here we go. Here's Dapr. Yeah. And I'm just saying, hey, new client. You're attached to me; I don't need to tell you anything about the database. Just new client, get the state from this store, and then behind the scenes the operator will figure out, oh, in staging it will be here, and in production over there. Your application is completely oblivious. You just need to initiate a new client and get the state, which is basically attached to your container. It just talks to your container. That's about it. Cool. I really strongly recommend it. I'm not sure what the address is, probably dapr.io or something like that. There we go: dapr.io. Check it out. It's amazing. Does it work alongside Istio? Yeah, it works with anything. Yes. Cool. You can connect it to anything. Our good friend is actually on that project. Shout out to Mauricio. Mauricio Salatino. Salaboy, exactly. Nobody knows him as Mauricio; everybody knows him as Salaboy. Yeah. This stream is supposed to be about an hour long. We're at the halfway point now. How are we doing in terms of whatever your goal is? Nobody knows. I prepared a trick. I prepared that abstraction, a Crossplane claim, that will do some of the things that I mentioned before. Not all. I did not add backup to it, and a few other suggestions. I will. I'm just human.
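The Dapr setup demonstrated a moment ago, where the app just asks for a store by name, is wired up with a component definition on the cluster. A rough sketch, with invented component and secret names:

```yaml
# Approximate sketch of a Dapr state-store component backed by PostgreSQL.
# The connection string comes from the secret that ESO pulled into the
# cluster; names are hypothetical.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: db                   # the store name the app passes to the Dapr client
  namespace: a-team
spec:
  type: state.postgresql
  version: v1
  metadata:
    - name: connectionString
      secretKeyRef:
        name: db-creds       # Kubernetes Secret with the DB credentials
        key: connectionString
```

The app then only ever says "get state from `db`"; swapping staging and production databases is a component change, not a code change.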
I cannot do absolutely everything that fast. But here's an example of a user-facing interface for some of the things that we were drawing, right? I'm a developer, I'm not a database expert. I'm saying I want something called SQLClaim. The name can be anything you want; it's whatever the database administrator thought it should be called. By the way, I haven't shown this to anybody. People might have seen glimpses of this, but this is something I've worked on and it's completely new. You're the first people to see it, parts of it. I want it in AWS and I want it to be PostgreSQL. Some things people might have seen before, like: I want version 14, or 14.10. I have no idea what "medium" is in AWS, but I want it to be medium. Then we come to the more interesting parts, more interesting mostly because I haven't done them or shown them before. Here people can list the databases that they want in their database server. So far, RDS is a server. I want that server to have something called mydb. It could be anything else; it's arbitrary, it's something you're creating. Since it's a list field, I can edit the file. Let's do this. There we go. I want a database mydb, and I want a database called whitney. Cool. Now, what else is there? Normally, in a real-world situation, I would have defaults so people would not necessarily need to write this, but: I want to manage secrets, meaning that I will get the secret with the credentials to the database wherever I'm running this, but I want to be able to push it to an external store, and maybe to pull it into some different cluster. So here I'm referencing something called store name aws. In this cluster I have AWS credentials, and I want to move those credentials and configure the External Secrets Operator, the one that was mentioned before, to use them. And to create the database server, I need a root password.
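Pieced together from the description so far, the claim being shown on screen might look roughly like this. Every field name here is a reconstruction, not the exact API:

```yaml
# Reconstruction of the claim being described; field names and values are
# approximations of what was shown on screen.
apiVersion: example.org/v1alpha1
kind: SQLClaim
metadata:
  name: my-db
  namespace: a-team
spec:
  parameters:
    provider: aws
    engine: postgresql
    version: "14.10"
    size: medium
    databases:
      - mydb
      - whitney
    secrets:
      storeName: aws              # push credentials to this external store
      pullToCluster: a-team       # and pull them into the app cluster
      rootPasswordSecretRef:
        name: db-pass             # root password fetched from the secret manager
```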
That's not the password that we will use to connect to the database, but, at least in RDS, if I'm not mistaken, a root password is mandatory, and I don't want to type it, so I'm pulling it from the secret manager, which could be anything, in this case AWS Secrets Manager. It's over there, it's called db-pass: just get it from Secrets Manager and use it to create the database. Then I have this parameter here that says: whatever secret credentials those two databases produce, and there will be two sets of credentials, whitney and mydb, I want you to push them to Secrets Manager. Push them there. And then there is a different cluster. This cluster is called the a-team cluster; it's a different cluster, somewhere else, and that's where my application will be running. I want you to pull those credentials into that cluster, into this namespace, and I want you to configure all the stuff so that my application is ready to talk to those two databases, right? And finally, I'm saying: here I have some schemas, and I want you to apply this schema to the mydb database. And this is just simple SQL that says create this table, videos, and create this table, comments; those are the fields; go. And that part will be done by Atlas. Right now I'm abstracting all that so that people don't have to configure, for their own database, the external secret store, or Atlas, or Dapr, or deal with RDS and subnets and VPCs and whatnot, right? So it's an abstraction, in a way. Makes sense? So far, any questions? No, it's good to have gone over it visually, so I can picture the management cluster, I can picture the application cluster, I can picture the cloud stuff. Exactly, it makes it easier. I could keep drawing it afterwards. I wasn't sure whether to confuse you first with the diagram or first with this. I think the diagram first is nice. Okay, so, in a real-life situation, never do what I'm about to do now. If you worked with me, you would be fired if you did what I'm going to do now. Never do this: kubectl --namespace a-team apply, applying whatever
is defined in db.aws.yaml, right? And I'm saying don't do this because you should be pushing it to Git. Huh? Push it to Git, to GitHub. Did I not install everything? Wait, let me see. kubectl get managed. This is now live debugging. Maybe I didn't install what I was supposed to install. Let's see. The kubectl apply is now going... So you showed us a simplified interface, like an XRD, I believe. Will you show us the actual stuff that's getting applied, like the real interface behind that simplified one? Definitely. I'm just not sure why this doesn't work. What did I do wrong? Let me run the apply again; this should work. Did I make a typo in the command? Let's do it again. Okay, let me see what I did wrong. I will spend a minute trying to debug it. You know what? CRDs... you're working with a lot of custom resources. Oh yeah, it's not a very helpful message, like which resource's CRDs might not be installed. Yeah, kubectl get crds. These are all the CRDs I have here. Yeah, but the one that I'm looking for is not there, so I will have to install it, and I don't know why it's not there. It should be called sql. It's not. Let's see: code crossplane/packages/sql.yaml. This should be there. kubectl get... I'm sorry for this. Is it installed, or is it not? Oh, it's not installed. Why is it not installed? Probably because it doesn't exist; I probably have a wrong version. Let me see the marketplace. I don't know what it is. SQL configurations. What is the latest version? 0.8.60? That one should exist. I have no idea; I messed it up. So this is a Crossplane CRD that's not installed? Oh, I see the problem. Yes, this is the Crossplane CRD, and what resources is it bringing into Kubernetes? SQL. What resources is it bringing into Kubernetes? Here we go. First things first, I'll answer your question in a second; let me apply this. I just changed... I had a "v" in front of the version instead of having just the version. I got the version wrong. I'm going to apply it again. Where is it? kubectl
apply crossplane/packages/sql.yaml. And now, if I apply this... come on, let's see... this one... it should take a while, probably. kubectl get packages, and I should have it here soon. Now it's unknown; it will soon work. I made a mistake, I should have done better, sorry for that. What was your question while we're waiting for this? Well, first of all, a user said: just imagine how long that would take to debug if you did it the way that would get you fired; it wouldn't have taken longer. And then, I guess, I was just trying to understand which custom resource actually is the one that wasn't working. Make sure your CRDs are installed first. Like, what CRD? It's a Crossplane CRD that you created, so it's a custom CRD. Yes, yes. I can show an example here. So, what I did: I have a repo for that SQL database, and in it I did two things. First, the definition. This is a slight variation of Kubernetes CRDs, and essentially what I'm saying here, this is the schema that should be used. If you remember the claim that I used before, there were parameters, and then there was version and size and databases and all that jazz, right? So that's the schema; that's how we create CRDs in Kubernetes. Now, a CRD by itself is not doing anything except extending the Kubernetes API to accept new types of resources. For something to happen... exactly, for something to happen you need a Kubernetes controller, which in this case, with Crossplane, is called a Composition. And here I'm specifying, this is a short version, okay: I want to work with VPCs, I want a VPC over here, I want some subnets, and more subnets, and more subnets, because you never have enough subnets, and subnet groups, and an internet gateway, and a route table. All those things are required for part of what we spoke about before, right? But this is completely hidden from users. And if I scroll further down, you will even see, okay, here I'm creating resources, but I'm going into that server and creating some number of
databases, and I'm creating the schema, which is a Kubernetes object, and that object should create an Atlas schema based on my input, and stuff like that, right? So these are all the resources combined that are going to be spun up whenever I create a claim, which I hope will work. So what you were just showing was the full thing. The full thing. Maybe we need an air horn for that, You Choose style. Anyway, so you showed us a simplified SQLClaim resource, which was just a simplified interface, and then that long thing you just showed us was what's backing that SQLClaim, what is creating all the resources we talked about in that diagram. Not all, sorry, some of them, right? So not everything that we drew, that I drew and people suggested, is there, but a significant part of it. And we can see the outcome. So, just as a reminder, this is that YAML, and this might sound like a lot of YAML, but it's actually not a lot at all compared to what it does. And the end result, I can show you with crossplane beta trace; this is a feature that has not yet graduated. If I take a look, those are all the things that are being managed right now, right? This is the internet gateway in AWS, and route table associations, subnets, VPCs; and then it will create the Dapr component in that specific cluster, it actually already created it; and it will create the Atlas schema in that cluster; and it will eventually create a secret. It cannot create the secret right away, because the server is not yet operational in AWS; it takes 10 minutes or so. Some of those things are failing, and failing is okay, because, for example, this one cannot do anything since some information is missing from some other resources, but it will be eventually consistent. The main obstacle now that prevents everything from being available is this instance, this thing that I'm marking here: this is RDS itself in AWS. How long does it take to come up? Are we going to see it? Yeah, we should see it in 10
I can draw another diagram, if people might find that interesting: what are all the things that happened when I executed this claim? What's behind the scenes? That sounds great, yeah, let's do that.

Okay, let me clear the screen. So, this is what happened. What is this? Okay, cool, no, that's not what I want. I want this thing. Does it work? Yes, it does. Start again, okay, cool. So this is me, right? I wrote some YAML. I should have sent it to Git, but I'm skipping that part; I assume that people understand how GitOps works, and if not, I can go through that as well. And once I wrote that, I sent that YAML to Kubernetes, which in my case is acting as the control plane, right? So that's a management cluster. Yeah. And that one created what we call a composite claim; that's that YAML itself, right? Because when I created that XRD, it created a custom resource definition in Kubernetes. Kubernetes now understands that there is something called SQLClaim, and I created one. Now, what Crossplane did, and this is a simplified version: so this is Crossplane — I don't know how to write, as you can see — Crossplane detected that and expanded it, not converted, but expanded it into a bunch of other resources. Like, this is a subnet, this is RDS, and, I don't know, an internet gateway, and the Atlas schema. But more than that, there is this application cluster, the app cluster, and it also works there; it doesn't have to work only on resources that are in the same cluster. So this is AWS, right? This created something in AWS, this created something else in AWS, but some resources live elsewhere: let's say the secret with the database credentials was created here, and it also created an External Secrets Operator resource which pushed the secret to Secrets Manager, right? And then Crossplane also created a resource here, in the other cluster, another External Secrets Operator resource, that pulled that secret into this cluster.
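The pull side of that secrets flow — an External Secrets Operator resource in the application cluster — might look roughly like this (store and key names are invented for illustration):

```yaml
# Pulls the database credentials from AWS Secrets Manager into a
# Kubernetes Secret in the application cluster.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-db
  namespace: production
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: aws-secrets-manager       # a (Cluster)SecretStore configured for AWS
    kind: ClusterSecretStore
  target:
    name: my-db                     # the Secret to create in this cluster
  dataFrom:
  - extract:
      key: my-db                    # the entry in Secrets Manager
```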
It also created the Dapr component here. Was the app already running? The app is not yet running; I can run it later. And yeah, so basically Crossplane expands, because Compositions expand into individual resources, and those resources can be something in AWS, can be something in some other cluster; it could be something in Vault, if I used it, but I use AWS Secrets Manager; it could be wherever, right? It can be wherever you tell them to go. And all of that is managed through that Composition, those thousands of lines of YAML I showed; that's where all those things are defined. Now, to make it clear, I did not write thousands of lines of YAML myself; I use CUE or Pkl to simplify that. But nevertheless, a lot of things are happening.

It also made the database server and the database? Yes, let's see that. I mean, here's the console in AWS. You said "server": right here, if everything works fine, I should have RDS... here we go, there's the database being created. Actually, it's already available; it's done. Nice. And inside of that server, it created users, and it created databases as well. So if I list this... here, let me find it. So this is the database that it created. Actually, it didn't create it yet; it will finish soon, but when this is available it will create a database called mydb inside of that database server. And the same thing goes for whatever else I need, right? Cool, makes sense so far. Any questions?

There is a question in the chat, but it doesn't seem related to the matter at hand. I can answer unrelated questions as well, with the special note that my answer could be "I have no idea." Let's see: there's an existing database cluster that hosts 50 gigabytes of data, it's on-prem, and how do you plan for migration? What are the considerations from the architect's side? So, migrations are always complicated; that's why companies normally don't migrate. I'm not sure where you're planning to migrate to; I'm assuming it's on-prem to
cloud, maybe, or something like that. But basically, the short version of it: the easiest way is to stop serving traffic on that database, create the database somewhere else, move the data, reconnect all your applications to that different database, and then enable traffic again. But that's unacceptable in most cases, since it would mean that you're down for some amount of time, right? Then you have specialized tools. One of the options, with additional tools, is to say: I'm going to run another database in parallel; it will eventually get the same data that I'm currently keeping on-prem, and it will be mirroring all the requests coming to the one over to the other. And then, once that other database is fully operational, once it has exactly the same data, with all the transactions, because of that synchronization between them, I'm going to switch my applications, at one moment, to speak to that other database somewhere else.

All right, so this person who asked the question agrees with your answer, and then the interviewer asks: which cloud service would you select, and why? I mean, most companies do not make the decision of which hyperscaler to choose, whether it's going to be AWS or Google or Azure; I'm going to assume that that's already decided for you. And if that's already decided, it depends which one of the three it is. If it's AWS, then again it depends on the type of the database. If it's Postgres, or one of the few others that are supported by RDS, then RDS, which is a managed database service from AWS. If you don't want that, you can also choose to self-manage it yourself in AWS, which can be cheaper, but then you have more work to do on the operational side, and more risk.

All right, I have a couple of quick ones, and then I want to see this application get deployed and connected to everything. So let's do the first: what's your favorite food? Cherries. What's your favorite food? My favorite food? Probably anything Italian. Nice: pizza, pasta, something like that. What about encryption and authorization standards
for the database? Is that being enforced via the CRDs?

Yeah, I mean, anything, right? So there are two components. The CRD is the interface that is going to be presented to end users, right? And then there is the Composition, which lists all the things that should happen. And whatever that is, as long as there is an API endpoint on the other end that Crossplane can talk to — in this case the AWS API — then when I'm creating a database: make it encrypted, right? Or whatever else you want. As long as there is a way for Crossplane to talk to something to do something, it's all about you adding more stuff to your Composition. You can bring any API into Kubernetes with Crossplane.

You might have to build your own, though. Yeah, there is a small note there: it can be any API, but for the most commonly used ones there is already a provider that you just install and use. You might have something exotic, and then there is no provider, and you create your own provider. Yep.

All right, let's see this application get... you can deploy the application, and it's going to know how to connect to the database? Yes. I'm just trying to find... I'm not sure whether I have the application. I can show how it would work without the application. What I'm going to do is connect to this cluster. So now I'm not connected anymore to the control plane cluster where Crossplane is running; I connected to the application cluster. And if I list, in production... get components... no, components.dapr.io, in the production namespace: here is the Dapr component that was created automatically. And if I output it as YAML, you can see that Dapr already knows: hey, get the connection string from this secret. And that secret is pulled with External Secrets, right? From a completely different cluster, right? Actually, this is the secret, this is the connection, the key, and this is now the Dapr configuration, configured. And from there on, the application would just be the code that I showed before.
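A Dapr component wired up this way might look roughly like the following sketch (names are illustrative, and the component type is assumed; the value it points at is typically a single secret key holding a connection string like `host=... user=... password=... port=5432 database=mydb`):

```yaml
# A Dapr component that reads the database connection string from the
# Secret that External Secrets pulled into this cluster.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-db
  namespace: production
spec:
  type: bindings.postgresql         # assumed; could be a state store instead
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: my-db                   # the pulled Secret
      key: connectionString         # the Dapr-specific key mentioned here
auth:
  secretStore: kubernetes
```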
But, looking at my notes now, I didn't prepare for this demo. I know we are at the top of the hour, so I'm not sure that I would have time to prepare it. But also, here, let me show the secret itself. Secrets... now, this is the secret that was pulled from Secrets Manager, AWS in this case, and I also have control, among other things, over what the secret will be. So, apart from outputting it as YAML... when I was designing how this secret would be created... where is it now... yeah, there are the typical things, like endpoint, password, port, but Dapr doesn't like those at all. Dapr asks for a specific format, which is this one. So I actually designed the secrets to, among other things, provide it in the Dapr-specific format. If I output that right now and decode it, you can see that this is the format that Dapr expects: this is the user, the master user, the password. Trust me, this is not the one I use in production, so don't try to copy it. And so on and so forth. So, unfortunately, no secrets leaked there, sorry for that. And sorry, no application prepared.

You'll just have to come back. You just have to come back; you can do another one. I can show the application. Actually, that might be interesting, because in this case... I mean, it could be a normal application, but what would make this absolutely awesome, and it's not directly related to databases, is if that application were some kind of serverless, scaling up and down. We could use Knative for that, to scale the service, with Dapr to provide the connection to the database, and maybe sprinkle Shipwright into the mix, so that we just send the code to the cluster and it's built inside the cluster: the container image is created, pushed to the registry, pulled from the registry, and run in the cluster as serverless, so it scales to zero when it's not used, and connected to the... okay, that's the next one.

When can we do the next Cloud Native Live? We'll get to the schedule. It'll be after KubeCon; we have KubeCon coming up, so I hope to see everyone
there. Absolutely. Victor and I are presenting together at KubeCon; we're doing a choose-your-own-adventure talk where the audience votes for what tech gets implemented in our ongoing demo, so that's going to be super fun. But also, KubeCon's amazing, so, you know, book a trip to Paris, go like two weeks out, and come join us. Please do come hang out with us in Paris.

One last question: what is your setup, with the whiteboard over the terminal? Okay, so there is this... I'm not sure whether this exists outside the Mac, but on the Mac there is an application called... what's the name... let me find the name of the application... Presentify? Presentify, yes. So Presentify allows me to draw anything you see here. You can use lines or squares and stuff like that, change colors, and then you just draw whatever you want. Or there is a thing to make the mouse pointer more visible, and a few other things. It's absolutely awesome, and it's a one-time purchase; I don't remember how much it was. Awesome. Now, full disclosure, I don't draw with the mouse: I mirror my screen on an iPad and then draw things there. I don't like it being bigger like that. Okay.

This has been... yeah, we did it. This has been really, really fun. I always love streaming with you, friend. Thanks, everyone. Yeah, thanks, everyone, for joining today's episode of Cloud Native Live. It's great to have Victor here showing us how to make a database as a service, with everything you could possibly need. The audience interaction today was super great; the questions were fun; even the favorite-food question was fun. Here at Cloud Native Live we bring you the latest in cloud native code, usually on Tuesdays and Wednesdays, at the same time every time. So thanks for joining us today, and for those of you who watch the recording, thanks for watching the recording. Thanks, Victor, for sharing your knowledge. Thank you.