Welcome, welcome back to the OpenShift TV Coffee Break. Today we have a special edition for our database-as-a-service series. I'm very happy to be here with our special guest, who I will introduce in a moment. But before everything, welcome everyone. For those of you who don't know us, we are OpenShift TV, a web TV talking about OpenShift, Kubernetes, and Cloud Native. My name is Natale Vinto, I'm a Product Marketing Manager for OpenShift, and this show talks mostly about architecture, Kubernetes, and coffee. So I hope you had your coffee to start the journey, because I'm going to introduce my guest today, which is Mike Bookham from CockroachDB. Hello, Mike, how are you? Yeah, I'm good. Yeah, good. Fantastic, Mike. So, we invited Mike, and thank you for joining us today, Mike, because we would like to talk about how to consume databases in Kubernetes, which is a hot topic, and also how to connect Kubernetes to databases as a service. That was the idea of the show, and in OpenShift we have an integration in order to do that. So Mike today is going to do a quick introduction about what CockroachDB is, with some quick context on NoSQL, and then how to connect those databases in Kubernetes for large-scale use cases with intense data throughput. So Mike, do you want to start by telling us what CockroachDB is and why we should use it? Sure, so I've got a bit of a presentation that I'll run through to give us a bit of background. So if you could just bring that up, I'll hit presentation mode here, and we'll go through it from there. So first of all, just a little bit of background about Cockroach Labs: how they were formed, the size of the company, et cetera. Cockroach Labs itself has been around since 2015. We were established by three ex-Google engineers, and basically the idea for CockroachDB was born out of some frustrations that they experienced during their time at Google around the complexities of scaling out relational databases.
But we'll talk about that in a little bit more detail as we go through the presentation. Cockroach Labs itself is around 450 employees in size now, and we're growing rapidly. We're gaining lots and lots of customers across many, many different verticals, particularly banking and finance, gaming, and retail and e-commerce. So why is there a need for another database? There's lots and lots of databases out there, so why do we need another one? Well, if we think about the rise of the internet and online e-commerce platforms and trading, these types of platforms come with some pretty explicit requirements. First of all, they require massive concurrency. But at that massive concurrency, they require consistency and reliable data, and that can be quite a challenge to achieve. Secondly, it's about significant scale and distribution. I often think in this scenario of that Black Friday event: if you've got an e-commerce platform or a gaming platform and there's some kind of massive event that's going to drive a considerably large number of users to your platform, you want that platform to scale out horizontally, seamlessly, ensuring that the service isn't interrupted. Although you've got a large number of users connecting to your platform, you don't want to see any kind of disruption. You want to take advantage of that massive marketing event that you've invested in; you don't want the platform to collapse in a heap as a large number of users flood to your e-commerce or gaming platform. And then thirdly, always-on availability and low latency. I often think about the rise of the iPad and my children in this scenario. When they pick up their iPad, they go on an app, no matter where they are, whether they're at home or on holiday, and they expect that same user experience, no matter where they are in the world. And if they don't get that user experience, we're all the same.
If we see that spinning wheel, if we're waiting for several seconds for a page to be displayed, we'll often abandon that website or that application and we'll just go somewhere else, to somebody who can fulfill that user experience that we're expecting. So how do we deliver on those requirements? We've been able to do that with applications for some time. Those goals haven't changed: we want to be able to deploy applications seamlessly, at scale, across regions, relatively straightforwardly. If I think back to my time working as a platform engineer, we used to find that very easy at the application tier. We used to be able to take advantage of many of the cloud providers' services to deliver those applications at scale. But where the complexity came in was that data tier. We often found it extremely challenging to deliver those same goals at the database level; it often resulted in very complex solutions, trying to replicate data across regions, maybe complex manual sharding, these types of things. So really, the database hasn't kept up with the evolution of the cloud that we've seen in the application tier. So how can we help solve that challenge? With CockroachDB, we basically take the best of the three worlds we see in the database world. We've got the relational database: you still need that consistency that the relational database gives you. If you're dealing with financial transactions or order management systems, you're going to require that reliable, consistent, and familiar language that those relational databases give you. But we also combine that with the scalability, the resiliency, and the flexibility of NoSQL. So as people started to adopt the public cloud, distributing their applications globally, we saw this massive shift to NoSQL databases. But that doesn't really solve the challenge of consistency.
Some of the cloud providers have attempted to achieve that by taking some of these more legacy relational database technologies and trying to put a wrapper around them in their cloud platforms, to try and help with the distribution and orchestration. But they haven't really solved that problem. That's where CockroachDB comes in. We've taken the best of all of those worlds and combined them together to give you those guaranteed transactions, the inherent resiliency and scalability, but all with a familiar SQL front end that people are going to be familiar with and be able to consume easily. May I ask a question, Mike? First of all, good morning, everyone. Sorry for joining late, but... Welcome back, Fabio. I hope you at least have your coffee, you know, somewhere. Look at the color of the cup. May I ask you, when you speak about consistency of guaranteed transactions, and I don't want to anticipate anything from your presentation, how can CockroachDB support guaranteed transactions in a highly distributed environment, which could be in general Kubernetes and more specifically OpenShift? Can you delineate this complex concept a bit more? Yes, so the way that we do that is, although we look and feel like a SQL database at the front end, because we've implemented the Postgres wire protocol, under the hood we're actually a key-value store, and we store the rows and columns of all of the tables in a lexicographically organized key-value store. We then carve that key-value store up into smaller chunks, which we call ranges, and it's those ranges that are written in triplicate across the cluster. So each range is actually part of a Raft group, and we actually use the same Raft algorithm that's used inside etcd, which anyone who's familiar with Kubernetes will be familiar with, and it's at that range level that we use Raft to coordinate the reads and writes of each range.
So if you think about it, when a write comes in to the Raft leader of that range, that Raft leader then coordinates those writes with the other replicas, which would live on other nodes within the cluster. So a transaction would come in to that Raft leader, that Raft leader would commit that write to itself, and it would send it out to the other members of the Raft group; in the default case, that's three replicas. So as long as it got an acknowledgement back from a majority of the Raft group, which would be itself and one other in the instance of three replicas, it would then communicate back to the client that the transaction had been committed successfully. So that's how we coordinate those writes across nodes in the cluster, and also how we keep that latency down: because we're not waiting for all of the replicas to acknowledge that the write's been committed, just the majority, we can go back to that client in a timely fashion, resulting in quick reads and writes and keeping that latency down to a minimum. Okay, thanks for a very clear explanation. No problem. So just to elaborate on that a little bit more, how Cockroach itself is architected is, we are a relational database, and we're made up of a number of nodes. In a small deployment, that would be a minimum of three, like I just talked about; that's to support the Raft consensus algorithm, to make sure that we maintain a majority. But then if you wanted to scale your database out, it's as simple as adding nodes. You know, in the case of OpenShift, that's just increasing the number of pods within the stateful set that CockroachDB runs under. Every single pod within that stateful set is a consistent access point to all of the data in your database. So when a request comes in, it can hit any pod in that stateful set, and the internals within CockroachDB will just redirect the request to the relevant pod where your data is living for the specific query that you've made.
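To make the majority-acknowledgement rule concrete, here is a toy sketch of the arithmetic Mike describes. This is not CockroachDB code, just a small model of why a three-replica Raft group can confirm a commit after the leader plus one follower acknowledge:

```python
# Toy model of Raft-style commit acknowledgement (not CockroachDB code).
# A write is considered committed once a majority of the group has
# acknowledged it; the leader's own local write counts as one ack.

def commit_succeeds(replicas: int, acks: int) -> bool:
    """Return True once a majority of the Raft group has acknowledged the write."""
    majority = replicas // 2 + 1
    return acks >= majority

# With the default 3 replicas, the leader plus one follower is enough:
print(commit_succeeds(replicas=3, acks=2))  # True
print(commit_succeeds(replicas=3, acks=1))  # False: leader alone is not a majority
print(commit_succeeds(replicas=5, acks=3))  # True: a 5-way group needs 3 acks
```

This is also why latency stays low: the leader replies to the client as soon as the majority threshold is crossed, without waiting for the slowest replica.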
So we don't necessarily need to scale out within a single OpenShift cluster or within a single Kubernetes cluster. We can scale out across regions, and although we've scaled out across regions or across Kubernetes clusters, they still appear as one single logical database. The only requirement there is that all of the pods can communicate with each other at an IP level and are also able to resolve DNS queries. So as long as you've got DNS in place and you can have pod-to-pod communication across Kubernetes clusters, then CockroachDB will function as that single logical database, distributing your data across multiple Kubernetes clusters across multiple regions, which is pretty unique in itself. And it doesn't have to stop there. It doesn't have to be a single OpenShift cluster; it could be across multiple different cloud providers. You can have an OpenShift cluster on-premise coupled with a Kubernetes cluster running in GCP and an AKS cluster in Azure. As long as they've got that network connectivity in place, they can function as one single logical relational database serving your data to those apps in those different cloud providers. Next, we talk about being able to survive anything. Because of the way that we are architected, made up of a number of different nodes and with the resiliency that provides, if you've got enough nodes in each of those regions, then you could actually survive a complete regional failure. So as long as you've got the logic built into your app that can accommodate a whole region going down, then your database will still function and operate in that kind of disaster scenario. There are some additional benefits of this architecture as well. Because we can accommodate nodes being removed from the cluster, that allows us to deliver a zero RPO, so there would be no data loss during that time. Also, we're able to facilitate zero-downtime upgrades.
Because we can take nodes out and bring them back into the cluster, we can take them out, upgrade them, bring them back in, and they'll just rejoin the cluster seamlessly. And you won't have to do any kind of disruptive maintenance to perform upgrades, et cetera. Also, another benefit is that you're able to do online schema changes. So where before you may have had to drop databases to add tables and things like this, with Cockroach you don't have to do that; you can do it all online. Mike, about this, we have a question in the chat from Twitch, and Malt8 is asking: can you please tell if CockroachDB supports tracking, versioning, and deployment of database changes? Yes, it does, yes. Okay, I guess it's a log or something? Binary log, okay. Yeah. Okay, cool, thanks. Cool. And then finally, the third pillar that we talk about is being able to thrive everywhere. What this means is, when you start to distribute your data geographically, this can obviously introduce latency; we can't beat the speed of light. So what can we do to overcome that, or to increase the performance to account for the distributed nature of the database? What we can do is look at where data is being accessed from. If there are specific tables that are being accessed from a specific region, then we can move the leader of those ranges to that specific region, ensuring that when those transactions come in from users in that region, those latencies are kept as low as possible. So I often think about, if you've got a globally distributed application, maybe like Uber: if I'm sat at home in the UK looking for a taxi in Uber, querying certain rows and columns in a specific table, I don't need to know about where the taxis are in the USA. So I could pin that data about the taxis in the UK to nodes that are based in the UK or in the near vicinity.
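For readers following along, the pinning Mike describes maps to CockroachDB's replication-zone SQL. A rough sketch, where the table and region names are purely illustrative:

```sql
-- Illustrative only: keep a table's replicas in a given region, and
-- prefer leaseholders (the range leaders serving reads) there too.
-- Region names depend on how your nodes were started.
ALTER TABLE uk_taxis CONFIGURE ZONE USING
    constraints = '[+region=europe-west2]',
    lease_preferences = '[[+region=europe-west2]]';
```

Newer CockroachDB versions also offer higher-level multi-region syntax (for example `REGIONAL BY ROW` table localities), so it's worth checking the documentation for the version you run.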
And that's just gonna help increase my user experience. A side benefit of that as well is it can help you comply with regulations like data protection and GDPR. If you've got specific geographies that don't allow data to leave the country's boundaries, then you can pin those tables to the nodes within that specific locality, ensuring that that data always stays in that locality and helping to ensure your customers' data privacy. We have a comment in the chat, Mike, on what you said: antimatter says that would be a cool feature, using AI/ML for predictive query results to beat the speed of light. Okay. That was a, yeah, you know. Yeah, excellent. So, why is CockroachDB architected for Kubernetes? What benefit do we get from running CockroachDB within Kubernetes? Well, if we think about the inherent resiliency that Kubernetes brings to applications, CockroachDB, because it's been architected to run in Kubernetes, can take advantage of those types of things. In the event of a pod failure, when Kubernetes respawns a pod to replace it, because of the way that Cockroach is architected, that new pod will just seamlessly reconnect to the existing cluster and start participating in the database. And then from a scalability point of view, because we're running CockroachDB in a stateful set inside Kubernetes, if we wanna increase the number of replicas, we can just edit that stateful set, increase the number of replicas, and let Kubernetes or OpenShift take care of spawning the relevant number of pods. So really, we're just piggybacking on the orchestration capabilities of Kubernetes to make managing and administering CockroachDB that much easier. And then if we talk about what Fabio and Natale were talking about earlier, about the database-as-a-service element.
So recently Red Hat launched OpenShift Database Access, or RHODA, which gives you the ability inside your OpenShift Container Platform to create connections out to third-party database-as-a-service partners like CockroachDB. So for example, if you're running some application inside your OpenShift cluster that requires connectivity to a relational database like CockroachDB, then you can now do that through RHODA, reaching out into CockroachDB's cloud platform, creating a database, and connecting that database to your applications that are running inside your OpenShift cluster. So I've got a little demo that I will share with you. That's cool, Mike. And just to summarize what you said, basically people that want to try CockroachDB have two strategies. One, the Kubernetes operator, so they can have this logic that you were talking about, you know, multi-site, and the resiliency, and scaling out and in, and all the high-availability features inside the Kubernetes cluster, inside the OpenShift cluster, because this logic is in the operator, right? Correct, yeah. And the second option is, okay, you have it as a service. So it's on CockroachDB Cloud, but you can connect from Kubernetes. And this is what we will see right now in the demo with RHODA. Cool. Yeah. Perfect, so the first thing that we need to do is create a project, and that's where our application is gonna live, and our configuration. So I'm just gonna give this a name here. And it's all live demos, in the spirit of the OpenShift TV Coffee Break. So thanks, Mike, for interpreting this spirit also today. Yeah, I always like to do a live demo if I can. That's the way. Hopefully everything goes okay and we pray to the demo gods. So next, we switch personas to developer. You see, we've got no resources there within our topology at the moment. So what we're gonna do, we're gonna click on add.
And what we're gonna do is import our application from a Git repository. I'm just gonna grab this URL here, and then I'm just gonna paste that in the box there, and that's gonna go out and validate that that GitHub repo is actually an application that's got the correct components to allow OpenShift to build it and deploy it into our cluster. So I'm just gonna hit create there. So what that's doing now is pulling down the code from that GitHub repo. There's actually a build inside that repo, which is just happening now, which is building our Pac-Man container image. What will happen, though, the first time around, is because we don't have a database at the moment, this will actually fail. So what we need to do next is go through the process of using RHODA to configure our external database in Cockroach Cloud, and then we'll connect our Node.js app to that cloud database, and then we'll be able to use our Pac-Man application. Cool. So this is a Pac-Man game available on this OpenShift cluster, which is a ROSA cluster. And meanwhile, I shared in the chat the link to the repository, if people want to just try it out, and I also shared the link for the Developer Sandbox for OpenShift, which is a free OpenShift cluster that you can use. You can register for free on developers.redhat.com and start using this cluster. So they can run basically the same demo. There is the repo, there's the cluster, they can try it out. So yeah, meanwhile, I think this requires a little time for starting up, Mike. Yeah, we're ready to move to the next stage now. So we can see now that the application has failed. That's because it hasn't got a database to connect to. So if we go back to add here and we scroll down to the bottom, we see we've got this box here, cloud-hosted database. So now we're gonna click on there, and you get three different options. Obviously, in this scenario, we're gonna use CockroachDB Cloud.
So we're gonna click add to topology. And then this is using a set of API keys that are pre-configured. This is now communicating with CockroachDB Cloud. So what I'm gonna do is create a new database instance specifically for our Pac-Man application. We're just gonna give that a name and we'll hit create. And what that's doing now is using those API keys to reach out to Cockroach Cloud and create that database instance for us. And then that will be within our topology. So, okay, it's created the database instance. Now we can add that to our topology. You can see there, it's the one that we asked it to create, and it's gonna pop that into our project. Cool. So you can see now we've got the two different resources within our project. So what we have to do is create a service binding between the two of them. What we'll do is, if we grab this arrow and drag it into this box, then OpenShift's gonna create that service binding between the application and the database. And that's gonna contain all the information required for the application to connect to the database: the connection string, the username and password, all the information that's required for that Node.js application to be able to communicate with that Cockroach database. And that's RHODA doing all that stuff under the covers. That's pretty cool, Mike. So we can see now that the application is not restarting all the time, because now it can connect to the CockroachDB that you created from this OpenShift cluster, thanks to the RHODA integration. I've put in the chat the link to the service binding. So this is a Kubernetes operator that lets you connect the application with databases or other resources, so you don't have to write the YAML manually; it can be done automatically. And the OpenShift topology has this mechanism to help you do this binding. So everything looks connected nicely, Mike. You can create your container.
You can create the database on the CockroachDB cloud, so it's a database as a service, and then you can connect it by pulling a line. Under the hood there's the technology, but it's all Kubernetes technology, and that is a nice UX workflow, isn't it? Yeah, it's lovely, same as... And if we think about it, most of the time, putting some project in production can still fail because there is some issue with connecting the application to the DB whenever you change environments, from dev to integration test to production. The support that we can have from leveraging RHODA and the service binding is great, because not only developers but the Ops team can safely put in production complex, highly distributed applications. Also, those credentials are kind of hidden from those developers as well. They no longer need to know the connection string, the username, and the password. All they need to know is the service binding that they need to use for their application, and it will give them access to the right database. So you're securing those credentials as well and limiting access to them, to prevent anybody who doesn't necessarily need them from getting visibility of them. Right. So now we've got our Pac-Man app, and let's play. Natale has shared the link in the chat as well, so you can all have a go on that. What's gonna happen is, as you play the game, you're gonna generate a score, and that high score is gonna be held in a database table inside your CockroachDB. So I was very poor there and didn't get a very high score, but I'm gonna hit save there. If you hit the high score list, you can see that I'm currently the only one in the high score list. But as you guys... I hope you are all playing. I'm having lots of fun here. Yeah. So yeah.
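For those curious about what the drag-and-drop actually creates, a service binding is just a Kubernetes object you could also write by hand. A sketch of the general shape, where the names and the bound service reference are illustrative examples rather than the exact resource RHODA generates:

```yaml
# Illustrative ServiceBinding sketch (names are examples, not RHODA's output).
# It links a workload to a backing service; the operator then projects the
# connection details (URL, username, password) into the workload as a secret.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: pacman-to-cockroachdb
spec:
  application:          # the workload to inject credentials into
    group: apps
    version: v1
    resource: deployments
    name: pacman
  services:             # the backing service exposing binding data
    - group: dbaas.redhat.com
      version: v1alpha1
      kind: DBaaSConnection
      name: pacman-db
```

The exact group, version, and kind of the service entry depend on the operators installed, so treat this as the shape of the object rather than a copy-paste manifest.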
So now, as people are playing the game, generating those high scores and inserting rows into that table in our database, we can go to Cockroach Cloud. If I hit refresh, we should see the new database that's been created. So we've got our new database instance there, and I'm gonna connect to this. In Cockroach Cloud, when you hit connect, you're given this splash screen here that's got a number of settings, talking you through how to connect your CLI tool, or any other tool that you have, to your database. The first thing that you need to do is download this CA certificate. If you click that down there, we give you just this curl command, and that's gonna put that certificate in the right place on your local machine. So I'm just gonna copy that and I'm gonna download that. Next, because I'm gonna be using the CockroachDB client, which I'll switch the screen share to in a moment, I'm just gonna change that drop-down arrow there, and that's gonna give me the connection string, and the command along with the connection string, to connect my CLI terminal to that database. So I'm gonna copy that, and then I'm gonna switch to my terminal window with the screen sharing; let me know when you can see that. Yes, we can see it. Perfect. So I'm just gonna paste that command in here, and that's gonna connect me to my Cockroach Cloud cluster. I just need to grab the password. Sorry, for some reason it's logging me out; I just need to log back in. So this is the Cockroach CLI that you can download from GitHub or the CockroachDB website? Yeah. Okay. So is this the CLI that you are supposed to use when you implement DevOps pipelines as well? No, this is just like a SQL client for Cockroach. Okay, only a SQL client.
So if you wanna do some queries on a specific table, if you wanna look at how those ranges are distributed, or see if there are any under-replicated ranges, you can do that kind of thing. Just bear with me a second. Yeah, it's trying to connect. Yeah, I just need to find the password. Okay. Well, meanwhile, for the people who are attending: if you have any question for Mike, please send it in the chat and we will pass it on to Mike. I think the flow is really nice, Fabio, Mike. At least from a developer perspective, I can try my stuff with a database that I don't have to set up, so this complexity is not on the developer. I order a database, I deploy an app, I create a container from the source code, and I connect them with a nice UI action. Of course, not everyone loves the full UI experience, but the service binding is available also as a Kubernetes object. You can use it via YAML as well, and it still helps in connecting secrets, config maps, and other objects. Okay, cool. So I found the password, which is just held within a secret inside Kubernetes, which is held within that service binding object. I just clicked on that service binding object; within there, there's a secret, and in that secret, one of the fields is the password for the connection string. So I just copied that, pasted that into my terminal window, and now I'm connected to my Cockroach Cloud DB cluster. I did a show tables, and you can see there that we've got our high score table. So what we're gonna do is select all the rows from that high score table. And you can see that here's my high score that I recorded whilst playing the Pac-Man game. And you can see that our application running inside our OpenShift cluster, using that service binding, has talked out to Cockroach Cloud and inserted that record into the high scores table in our database in Cockroach Cloud. Cool, so that's the demo. Well, Mike, it's fantastic. So, I played a little bit.
I'm sure everyone, I think, is playing, and please keep playing, because this will fill in the data. Again, the repo is the one I shared in the chat. If you wanna try the same demo that Mike did, you can try it by using the repo; I'm gonna put it again in the chat. So the CockroachDB team made this demo, and you can use it with the Developer Sandbox for OpenShift, which is an OpenShift cluster you can use for free. And then you have the CockroachDB operator installed, you have the RHODA operator installed, so you can either test the operator for an in-cluster database or connect to a database as a service. Those are the two options, and you can try it out for free. Mike, CockroachDB has a trial, I guess, a free trial? Okay. Yes, so for CockroachDB we have an open source version, and there's a self-hosted version, which has some additional enterprise features, but you could use the open source version to perform this demo. Cool, and I'm gonna put in the chat the link, Mike, to the trial cluster. So if you wanna try the same demo: get a trial from CockroachDB, get the free OpenShift Developer Sandbox cluster, and get the repository on GitHub for the Pac-Man demo, which is really cool. Thanks, Mike, for that demo. Do you have any other thing to show, Mike, in the slides, or anything else? Yeah, there's just a couple of slides left, just talking a little bit about the different versions that we have. I can talk about... okay, cool, perfect. So just hit on there. Like we just touched on, we've got what we call CockroachDB Core. That's our free open source version that you can download and use completely free. If you want to use some of our enterprise features on top of that, then you would talk to us and get a license for those. And that's what we call CockroachDB self-hosted. So that's the version that you'd run inside your OpenShift cluster.
Maybe if you wanted to run it across different cloud providers, for example, or you've got a hybrid environment where you wanna run some OpenShift clusters on-prem combined with some Kubernetes or OpenShift clusters in the public cloud, then our self-hosted version would be the one you'd do that with. And then we've got CockroachDB Serverless there on the left. This is what we actually used for that demo; that's what RHODA reaches out and connects into, which is our serverless offering currently. And that's basically a pay-as-you-go, multi-tenanted CockroachDB environment where you can deploy CockroachDB in a matter of seconds and get up and running really, really quickly. That's currently in a single region, in a single cloud, at the moment, but as we get further to GA, you'll see the capabilities increase over time. And finally, this is really for production workloads, and that's CockroachDB Dedicated. That's a single-tenanted platform which just belongs to you, made up of a number of nodes running inside a Kubernetes cluster that's fully managed by Cockroach Labs. We've got a team of dedicated SREs that just spend all their time looking after those clusters for our customers. And that's currently deployed in AWS or GCP today. So what would you use CockroachDB for? We're a general-purpose relational database for system-of-record workloads. So whether you're modernizing your existing application estate, migrating to the cloud, or building greenfield applications inside Kubernetes that require a relational database, CockroachDB is a really good fit for that. We're not restricted to one specific vertical, although we do work well in things like financial services, gaming, and retail. We've seen use cases across all different sectors, and that's just a list of some of our current customers there.
Yeah, so then if you wanna reach out to me, there's my email address. If you wanna get in touch with me, then send me an email; we're happy to answer any questions. Thank you, Mike. So the email is bookham@cockroachlabs.com, if you can see it on the screen. And good, you also put our partnership team there, as CockroachDB is a Red Hat partner, and that's why we have this integration that you can find inside OperatorHub, inside OpenShift. But Mike, I was also looking on OperatorHub.io, which is the community upstream hub for operators, and I found the CockroachDB operator based on the Helm chart. So if you have minikube, kind, or any other Kubernetes and you wanna try out CockroachDB, I think you can use this operator, which is basically using the Helm chart for CockroachDB to install the database. This again is for the in-cluster CockroachDB. Correct, yeah. Yeah, so if you wanna deploy Cockroach into an OpenShift cluster, there are kind of three ways to do it, depending on what your configuration is. For example, if you've got a single cluster where you wanna deploy it, you can use the operator. We've got an operator that you can use to deploy it. That's got one custom resource: you define your cluster characteristics in a YAML file, and you can deploy that via the operator. We've got a Helm chart, and then we've just got some Kubernetes manifests as well. Where the Kubernetes manifests come into play is when you're doing things like multi-region. So if you've got multiple Kubernetes clusters and you wanna deploy Cockroach across all of them, then you can use the Kubernetes manifests. That's the easiest way to do that multi-region deployment at the moment. Cool. I think this goes also to the question you asked at the beginning, about how we set up high-availability topologies inside the cluster. Mike, you have shown before some of the topologies for deploying the CockroachDB operator.
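To give a rough idea of the single custom resource Mike mentions, here is a minimal sketch modeled on the public CockroachDB operator examples; the field names and values should be checked against the operator version you install:

```yaml
# Minimal sketch of the operator's custom resource (check your operator
# version for exact fields). The operator turns this into a StatefulSet,
# services, certificates, and so on.
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: cockroachdb
spec:
  nodes: 3              # minimum for a Raft majority, as discussed above
  tlsEnabled: true
  dataStore:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```

Scaling out is then just a matter of editing `nodes` and letting the operator reconcile the stateful set.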
What about multi-site? Say I have two data centers; is there any way to sync the data between the two sites with the operator? Yeah, the first thing to remember is that for CockroachDB, two data centers is a bit of an anti-pattern, because we use the Raft protocol, so you really want either one data center or three. It's not that we don't work in a two-data-center configuration, but it doesn't give you the survivability you're after, because obviously, if you lost one data center, you'd lose the majority of replicas for all of your ranges and you'd lose your whole database. So it really doesn't make sense to have two. So yeah, certainly we say three data centers or three regions: you deploy your nodes across those three regions and that gives you the resiliency. How it does that is that your ranges then have one replica in each of those regions. So if one region went down, you would only lose one replica from each range; you'd still have a majority for all ranges and your data would continue to be served. Those regions could be made up of three different OpenShift clusters in three different data centers. It could be an OpenShift cluster on-prem plus AKS and GKE, for example, or an OpenShift cluster, some virtual machines somewhere else, and an AKS cluster. You're really just limited by your imagination, and by the latency you can tolerate. Yeah, right. That fits perfectly with the multi- and hybrid-cloud approach, right? We are going toward a pattern where you can have OpenShift everywhere and manage that OpenShift from on-premise or somewhere in the cloud. May I ask you just one question? Is there any specific solution from CockroachDB for edge deployments? When you need, for example, to deploy something with a small footprint on a single OpenShift node or on remote nodes, something like that? 
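The Raft-majority argument above comes down to simple counting: a range survives the loss of a failure domain only if the replicas outside that domain still form a majority. A small sketch:

```python
def survives_region_loss(replicas_per_region: list[int]) -> bool:
    """Return True if losing any single region still leaves a Raft
    majority of a range's replicas alive."""
    total = sum(replicas_per_region)
    majority = total // 2 + 1
    # For every region, check the survivors would still hold a majority.
    return all(total - r >= majority for r in replicas_per_region)

# Three regions, one replica each: losing any one leaves 2 of 3 -> survives.
print(survives_region_loss([1, 1, 1]))   # True
# Two data centers holding 2 + 1 replicas: losing the first loses the majority.
print(survives_region_loss([2, 1]))      # False
```

This is exactly why two data centers is the anti-pattern: with a 3-way replicated range, one of the two sites must hold two replicas, and losing that site loses quorum.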
We don't have a specific edge version at the moment, but you could still use CockroachDB across the edge. It would be more about the latency between those nodes. If you wanted to deliver a single database across them, you'd just have to make sure you have sufficient connectivity between all those nodes at the edge to accommodate those writes, and that the latency is low enough to make it practical. Yeah, right, that makes sense. Cool. Yeah, another question, Mike. For the OpenShift and Kubernetes deployment, you said we should have at least three data centers or three sites. What about if we use CockroachDB cloud? Can we select how many availability zones we want? Is that a parameter we can also set up? So from a single-region perspective, it's distributed across the availability zones within that region, but you can also select a multi-region deployment. So if you wanted to be able to survive a region failure, with Cockroach Cloud you can select a three-region, multi-region deployment and distribute your data more widely across multiple geographical regions. Yeah, cool. Cool, interesting. I think it's a very nice feature. Also, as a developer, I can deploy the app, connect it, and select the level of availability I need: for development and tests, maybe single-region is okay; then when things go to prod and the traffic increases, multi-region is maybe better. Another question about the traffic increasing: do we have any relevant metrics we can use inside the Kubernetes or OpenShift cluster? Let's say we have the operator and the database is getting hit hard. How do we measure, how can we understand, that we should scale it up? So CockroachDB has its own admin console, and within there, there are lots of metrics being exposed. 
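Those metrics are also exported in the Prometheus text exposition format (CockroachDB serves them on its HTTP port, conventionally at `/_status/vars`). A minimal sketch of consuming that format, ignoring metric labels for simplicity:

```python
def parse_prom_text(text: str) -> dict[str, float]:
    """Parse a minimal subset of the Prometheus text exposition format
    (metrics without labels) into {metric_name: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and # HELP / # TYPE comments
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            pass  # tolerate lines we don't understand
    return metrics

# Sample payload in the shape CockroachDB's Prometheus endpoint emits;
# the metric names are illustrative.
sample = """\
# HELP ranges_underreplicated Ranges with fewer live replicas than the target
# TYPE ranges_underreplicated gauge
ranges_underreplicated 0
sql_conns 42
"""
print(parse_prom_text(sample))
```

In practice you would point the in-cluster Prometheus at the endpoint via a ServiceMonitor rather than parsing by hand; this just shows what the scraped data looks like.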
So you'd be able to go there and investigate what those metrics are, whether the throughput you have is sufficient, and whether you're seeing any bottlenecks. But we also expose those same metrics via a Prometheus endpoint. So if you're already heavily invested in the cloud-native ecosystem and you're already using Prometheus to monitor your other applications, you can just bring those metrics into the existing monitoring stack inside your OpenShift cluster: create some Grafana dashboards, maybe create some alerts in Prometheus with Alertmanager to say, when I hit this particular threshold, emit an alert, or maybe even do something cleverer, like use a horizontal pod autoscaler and scale out your pods automatically, if you're feeling confident in doing that as well. Nice, that was the question, Mike: whether we can use the existing Prometheus ecosystem we have in the OpenShift cluster and add some of those metrics to a Grafana dashboard. And about that, you mentioned scaling the pods; which pods should we scale, the StatefulSet pods? Correct, yeah. Okay, okay. Yeah, so there are a number of resources that get deployed, like service accounts and those types of things, along with the StatefulSet. But the StatefulSet is really where CockroachDB is running; that's where the application is specifically running. So it would be that resource that you scale out. One thing with scaling is, obviously, it is still a database: there's lots of data involved. So it's not as easy to scale as an application; you need to put some additional thought into it, because every time you scale out the number of pods, there's a data-movement activity that has to go on. 
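That data-movement caveat can be expressed as a simple gate before resizing the StatefulSet: only change the replica count once the previous rebalancing has settled. The inputs here (an under-replicated-ranges count, a rebalance queue length) are illustrative assumptions about what you would read from the cluster's metrics:

```python
def safe_to_rescale(underreplicated_ranges: int, rebalance_queue: int = 0) -> bool:
    """A conservative gate for resizing a CockroachDB StatefulSet:
    only proceed once the previous data movement has settled
    (no under-replicated ranges, empty rebalance queue)."""
    return underreplicated_ranges == 0 and rebalance_queue == 0

print(safe_to_rescale(0))   # True: ranges fully replicated, safe to resize
print(safe_to_rescale(3))   # False: wait for rebalancing to finish first
```

A scaling script or autoscaler hook would poll this condition before issuing the next `replicas` change, which is the "do it less aggressively than the app tier" advice made mechanical.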
So maybe do it a little less aggressively than you would the application tier, giving the database time to redistribute that data across the new nodes as you scale out. You don't want it to still be moving ranges onto those new pods when you decide, oh, we've gone past the peak in traffic and we want to scale back in, and you start scaling in before it has finished moving data around. So just do it with a bit more caution, because of the data that has to be moved around the cluster when you perform those scaling operations. Right, thanks. And I was thinking, Mike, of another use case that could be very common, in which the Kubernetes StatefulSet helps, which is the upgrade. For instance, in OpenShift we have the over-the-air upgrade: it basically upgrades the cluster automatically, draining the nodes automatically; it's all done by this reconciliation logic in OpenShift. When we need to upgrade the nodes where CockroachDB is running, I guess it's important to have the StatefulSet, so Kubernetes knows that this object needs to be moved as a single entity, because there is also data attached. Do you have any recommendations for upgrades? And also a hot question: which kind of storage do you recommend for running CockroachDB? Is any software-defined storage okay, in your opinion, or do you recommend a more bare-metal solution? So we're not really bothered about what storage you run CockroachDB on. You don't need any kind of replicated storage, because we handle the replication inside CockroachDB: we write the data in triplicate across the cluster. So you don't need any fancy storage underneath doing additional replication. 
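On sizing the storage underneath, the docs' IOPS-per-vCPU recommendation turns into simple arithmetic. The ratio used below is a hypothetical placeholder, not the documented value; substitute whatever the CockroachDB production checklist actually recommends:

```python
def required_iops(vcpus_per_node: int, nodes: int, iops_per_vcpu: int) -> int:
    """Back-of-the-envelope sizing: total IOPS the cluster's volumes
    should sustain. iops_per_vcpu comes from the CockroachDB docs --
    the value used below is NOT authoritative."""
    return vcpus_per_node * nodes * iops_per_vcpu

# e.g. 3 nodes x 8 vCPUs, assuming a hypothetical 500 IOPS per vCPU:
print(required_iops(8, 3, 500))  # 12000
```

That per-node figure is what you would check against the PV's storage class and the cloud instance type's advertised IOPS.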
We're more specific about the number of IOPS that storage can provide, and you can find that information in our docs: how many IOPS per vCPU we recommend you have. So it's more about the throughput of the storage, as opposed to what type of storage it is. In Kubernetes, we just use a PV with a PVC. Right, it's just simple, straightforward Kubernetes concepts. We're not really concerned about what the storage underneath is, as long as it provides the required amount of IOPS. Right. So when you select the size of the cluster, I think you should look at which instance flavor, which instance type, you are using, because the number of IOPS depends on the type of instance. Correct. And that's a great point for me to plug our cloud report. We've just released a cloud report where we ran benchmarks against the different instance types from all the different cloud providers, and in that report you'll find which instance types provide the best performance for running CockroachDB. Nice, nice. And cool to mention that in OpenShift, if you need to change the instance type, you can go to the machine sets and add a new machine set for the instance type you want to use. So with the machine API implementation in OpenShift of the Kubernetes Cluster API, it's really easy to change the instance type, add a new one, and scale up the cluster. So if you need to change your cluster topology or size, you can do it with the machine API. Thanks, Mike, for the link to the Cockroach Labs cloud report; I just added it to the chat so anyone can see it. Yeah, Fabio, do you have any additional questions? No, no, no. Mike covered most of the things I was expecting. So yes, very, very cool. I didn't know a lot about CockroachDB, so your presentation was really enjoyable, at least for me. 
Good stuff, glad you enjoyed it. Yeah, Mike, it was really great. I enjoyed playing Pac-Man, I enjoyed seeing it live. Yeah, I understood that I need to improve my old Pac-Man skills. Yeah, many, many hours wasted on Pac-Man over the last couple of weeks preparing for this. Long time ago now. Yeah, yeah. Yeah, so, folks, please go to this link to download the Pac-Man demo for CockroachDB. Go to the Developer Sandbox for OpenShift to try it out, and go to the CockroachDB trial to try CockroachDB cloud, so you can easily run the same demo flow that Mike was doing today. The recording is at the same link you are watching now, either on Twitch or YouTube, if you want to review what Mike did. With this, we're going to end the show with a reminder of what we have today in the schedule, Fabio: today we had this database-as-a-service show, we have The Level Up Hour, and then Ask an Admin. And we will come back next Wednesday with the Red Hat Edge office hour, again talking about RHEL for Edge, MicroShift, and all the open-source technology for the edge, with Andrea and all the Red Hat Edge team. With that, Mike, thank you very much for joining us; it was a real pleasure. I hope to have you back on the show in the near future for another awesome demo with CockroachDB and OpenShift. And Fabio, thank you for joining today, nice background, and thank you everyone for joining and attending today. And see you next Wednesday. Ciao. Yeah, ciao. Thanks guys. Cheers.