How's everybody doing? I'm between you and the break, so the bar is high. Excited to be speaking here. We heard a lot about how multi-cloud is increasingly becoming a reality. Applications are becoming portable by virtue of all the work that has been done in the Kubernetes and microservices ecosystem. As a distributed database, we are very excited about the promise this brings for the final frontier of multi-cloud portability: data-centric applications. So with that, let's dive right in. We believe there is a fundamental need today to reimagine database architecture, especially now that we're in the multi-cloud era. Two things are happening here. First, there are needs being brought to us by the fine-grained microservices applications that are increasingly being built. Those paradigms lead to two critical requirements for databases. The first is SQL as the API layer for building these applications, because SQL allows us to model relationships, foreign key constraints, and joins, and, last but not least, to model distributed transactions. All of that allows a developer writing an application to be highly agile and to write fast, using data modeling constructs that have been battle-hardened over 40 years of SQL. The second is something the SQL world has never had at its disposal: massive horizontal write scalability, the ability to start small and grow as big as we like, something that was previously reserved for NoSQL databases. So those are the two requirements the applications bring in. Then there are two additional needs that the composable cloud, or multi-cloud as we call it, brings in. The first is the need to tolerate failures in the commodity infrastructure world we live in today. Disks, compute, network, even entire clouds will fail, and as a result, the ability to tolerate those failures becomes very important.
And secondly, today, if we take all the clouds combined, at last count there were close to 200 different regions, or data centers, literally at our fingertips to deploy and program our applications against. Those data centers give us a lot of flexibility: how we build low-latency applications for our users, how we tolerate complete region-level failures, and how we meet various governance and compliance requirements like GDPR. So with these four architectural needs in front of us, we believe the time is ripe for a new database architecture, and we call it distributed SQL. It allows developers to build applications fast. It allows operations engineers to deploy those applications as and where they need to be deployed, with the right resilience constructs built in. And finally, it allows the business to drive home velocity as a competitive advantage. That's what we have tried to build at YugabyteDB. It's a fully open source project that focuses on low-latency applications and allows both the application and the database to be deployed onto Kubernetes and, as we'll see, onto Crossplane. On the open source front, I want to point out that we have actually taken the opposite stance of many of the recent database licensing changes. We are a 100% open source database, including enterprise-grade features such as distributed backups, encryption at rest, change data capture, asynchronous replication across multiple regions, and so on. Now that I have highlighted what the database architecture for the multi-cloud world looks like, I want to walk you through the multi-cloud maturity model. Most applications today are probably at this monocloud level one, where multiple instances of the database are deployed across multiple availability zones, but in a single cloud, and naturally in a single region. That lets us tolerate zone-level failures, but tolerating region-level failures is hard.
The nirvana is actually when we can run the same cluster with a few instances in cloud one, a few instances in cloud two, and a few instances in cloud three, because it allows us to diversify our risk and to exploit whatever each of the clouds gives us from a compute and storage standpoint. However, this last level is hard, because it requires operations engineering to reach a point where the same cluster can be orchestrated across three clouds. There are intermediate stages that many of our users and community members are moving toward, which look like this. First is monocloud multi-region, where you take a single cloud and just start geo-distributing your clusters across multiple regions. The second is the starting point of multi-cloud, where you run app-specific, best-of-breed clusters in different clouds. You can even use features like asynchronous replication to use a second cloud as your failover cloud. Obviously, you have to ensure that asynchronous replication keeps moving the data on a periodic basis, so that when you cut over or fail over to the new cloud, you are able to see all of your data. Otherwise, the application portability that depends on the underlying database layer will not be instantaneous. So where do YugabyteDB and Crossplane fit into this equation? If YugabyteDB is the multi-cloud database that is ready to be deployed in the cloud of your choice, Crossplane naturally becomes the control plane to orchestrate such a move or such a deployment on day one. So that is going to be my demo. Here is what the demo setup looks like. As Bassam pointed out, Crossplane gets deployed onto a Kubernetes cluster; let's call it the control plane cluster. Then you install various stacks onto Crossplane, which extend the Kubernetes objects that are available to you. And then you have the ability to deploy applications in a cloud-neutral manner to any of the three public clouds.
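As a sketch of what installing one of those stacks looks like, a Crossplane stack from that era could be installed into the control plane cluster with a manifest along these lines. This is illustrative only; the exact kind, API version, and package name depend on the Crossplane release in use:

```yaml
# Illustrative only: installs the GCP stack into the Crossplane
# control plane cluster. Kind, apiVersion, and package tag are
# assumptions and may differ by Crossplane version.
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: stack-gcp
  namespace: crossplane-system
spec:
  package: crossplane/stack-gcp:v0.2.0
```

Analogous manifests would install the AWS, Azure, and Rook stacks, each extending the control plane with that provider's managed-resource kinds.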
Specifically for Kubernetes, you have the GCP stack, the AWS stack, and the Azure stack, which allow you to deploy applications to GKE, EKS, and AKS. And if you want to deploy stateful applications, you also deploy the Rook stack onto this control plane cluster. With these prerequisites in place, we are ready to deploy a multi-cloud application onto one of these clusters. And the way we are going to do that is using the KubernetesApplication construct from Crossplane. We'll first deploy a stateful Kubernetes application using the Rook operator for YugabyteDB, which will allow us to instantiate a PostgreSQL instance, because YugabyteDB is PostgreSQL compatible. Then we'll instantiate another, stateless Kubernetes application, an e-commerce bookstore. Through claims, this newly instantiated application can interact with the Postgres instance that was just provisioned. With that, let me dive into the demo. I am starting at a point where we have instantiated the control plane cluster, and we've instantiated the underlying cloud provider clusters where the database is going to be deployed. Now I simply go ahead and install my Rook operator for YugabyteDB; it's as simple as a kubectl apply of the Rook operator manifest. Then I can introspect that Kubernetes application and see that it has been scheduled onto the remote cluster, which in this case is the GKE cluster. I can also see the various underlying Kubernetes constructs that were installed as a result of this operator landing on the cluster: the namespace, the cluster role, the role binding, all the glue code you would need to start managing your cluster on GKE. Now, all we have to do is claim a Postgres instance from this operator so that we can power our actual e-commerce application. And that's what we're doing here.
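The claim itself is a small Kubernetes object. A hedged sketch of what it might look like follows; the kind existed in early Crossplane releases, but the class name and field values here are hypothetical:

```yaml
# Illustrative claim: asks the control plane for a PostgreSQL-compatible
# database, which the YugabyteDB Rook operator satisfies on the target
# cluster. The classRef name is a hypothetical resource class.
apiVersion: database.crossplane.io/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: bookstore-db
  namespace: bookstore
spec:
  classRef:
    name: yugabytedb-standard   # hypothetical class backed by the Rook operator
  engineVersion: "11"
```

The application never names YugabyteDB directly; it binds to whatever satisfies the claim, which is what keeps it portable.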
Now, a real YugabyteDB cluster has been instantiated onto GKE, and that's what we'll see. Let's keep going. Once this PostgreSQL instance is alive, we're ready to bring up our application. And that application does not know which cloud it is running on. Previously, as we saw in Jared's demo, the applications were all using managed services from the same cloud provider. In this case, you get an in-cluster alternative: a fully open source, highly scalable Postgres-compatible database sitting next to your application, so that during all of your development, testing, and even production, you don't have to rely on an expensive proprietary managed service. Now that the database cluster is up and running, I'm ready to instantiate my actual application. And there you go, my application has now been scheduled. I'll find the UI endpoint of that application by introspecting the resources first, where I see the deployment, namespace, and service, naturally. And then finally, I'll look up the IP address of the application. I get an IP address, and all I have to do is open that IP address on the particular port the application is running on. That's it: I now have my database powering this application, all inside a single Kubernetes cluster, without relying on any external managed service as part of the multi-cloud deployment. Now, if I want to move this workload over to Amazon's Kubernetes service or Azure's Kubernetes service, all I have to do is apply the same command again, pointed at the next cluster, and the workload will start moving over. So without doing any expensive operations engineering, and using the common constructs that open source projects like Crossplane have brought forward, we are able to make a stateful application portable across multiple clouds. So as Bassam pointed out, all these open source projects take a village to develop.
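The cut-over works because a KubernetesApplication targets clusters by label selector. Here is a hedged sketch of the shape of such a resource; the labels, field names, and API version are illustrative assumptions, not the exact manifest from the demo:

```yaml
# Illustrative KubernetesApplication: the clusterSelector decides which
# remote cluster the workload lands on. Changing its labels (e.g. from
# cloud: gcp to cloud: aws) and re-applying reschedules the workload.
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesApplication
metadata:
  name: bookstore
spec:
  resourceSelector:
    matchLabels:
      app: bookstore
  clusterSelector:
    matchLabels:
      cloud: gcp          # switch to `cloud: aws` to move onto the EKS cluster
  resourceTemplates: []   # templates for the app's Deployment and Service omitted
```

Because the database is claimed the same way on the new cluster, the application and its data layer move together under one declarative model.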
We would love for all of you to join our community, on the Slack channel and the GitHub pages, and give us feedback. We're working hand in hand with the Crossplane team to get more of the multi-cloud scenarios documented and demonstrated, so that users like you can use them as starting points and build your applications from there. Last but not least, we're at KubeCon tomorrow at the sponsor showcase. Visit us at our booth. Thank you so much.