Here we have the last session for the day. It's Red Hat OpenShift Database Access, a database service on top of OpenShift. And our presenter is Veda Shankar. He's a Senior Principal Product Manager at Red Hat who has solid experience in storage services. He'll be introducing what the database service is, how to use it, what the benefits are, and a demo. Yeah, thanks, guys. I'm just making sure, because as you switch networks, I want to make sure everything works. Thank you, Nikhil. And it's always hard following Burr's presentation. Anyway, if you guys are getting bored, I might have to break out into a Bollywood song. So how many of you are using databases? Oh, practically everyone, right? Good. So both SQL and NoSQL, I presume. Are you using any managed database, like DBaaS, database as a service? You are? Which one? Oh, Oracle DB. OK, cool. Anybody else, like MongoDB Atlas or DynamoDB? OK, yeah, sure. There's RDS, cool. So today we're going to talk about how to make access to these databases as a service on OpenShift really easy, in five minutes. If you have an OpenShift cluster, then with this OpenShift Database Access product, you should be able to make use of the free tier that most of these vendors offer, not just AWS, but also MongoDB, CockroachDB, Crunchy. Those are the ones that are supported, and we're adding more. So that's what I'm going to show you today. So this is me. We're going to talk about Red Hat OpenShift Database Access. It's a very long name, so we call it RHODA. So if I say RHODA, that's what I mean: OpenShift Database Access. So why do we need it, what does RHODA stand for, and how does it work? I'm also going to show you a few examples based on Spring Boot and Node.js, and then I'll also show GitOps, basically a follow-on to what Burr showed earlier with Argo CD. So something you can do totally based on YAML from the command line.
So why RHODA? RHODA's main purpose is to make it really easy, both for IT ops and for developers, to consume these managed database-as-a-service offerings. So, make it easy. We did a study across many regions and multiple industries, and what we found was that databases and AI/ML, no surprise, are among the most popular workloads. And a lot of Red Hat's ISV database partners all have their own managed database as a service in their cloud. They may be using Amazon or Google, or they give you a choice, but what they do is completely manage the database, relieving the IT organization from all the uptime work and providing that resiliency. So this is a very quick way of getting, say, a Postgres implementation using RDS or CockroachDB. And having a managed database as a service is really cool for developers, because you can simply fire up your browser and, boom, you get access to your favorite database and you can get cranking. You can create your schema, have your tables, and you're productive. Now, there are a few challenges. Number one, there are inconsistencies in the interfaces across the different database vendors. And there is also the aspect of governance from the IT ops perspective. Developers want to get stuff done really fast. They don't want any blockers; they don't want to reach out to IT. But at the same time, data is so important, and databases especially are at the center of a lot of security issues. So IT would like to know who is accessing which service, from which namespace, and what applications are using it. And of course, they also want to make sure that only certain Kubernetes namespaces can access certain managed services. So this slide captures the challenges that developers face, and also the security ops perspective. Red Hat's solution to address these challenges is this OpenShift Database Access product. It's an add-on service.
It is part of the OpenShift managed services, right? The idea of all these add-on services, which sit on top of the managed OpenShift platform, is to make sure that OpenShift is a great, productive tool for development. All these add-ons work very cohesively to give you a very good cloud-native application development environment. Now, that's the product definition for OpenShift Database Access. Essentially, it is: how do I make consumption of managed databases in the cloud really easy, for both IT ops and for development? You can take a moment to just read it. And before I go further, I've also included a Forrester report. This is something we wanted to find out: what are organizations really challenged with? Is this really an issue? With DBaaS, what is their expectation, and what do organizations plan to gain from it? The link is included in the deck. It's a good read; I would encourage you to take a look. So with RHODA, these are the four ISV vendors, ISV Red Hat partners, that we have onboarded. MongoDB provides a NoSQL database. CockroachDB has a Postgres-compatible database, but it is highly resilient. Think of a very recent acquisition by a very bombastic person, right? That company actually uses CockroachDB as a back end. I don't know, I hope this is not being recorded. Same for, I think, one of the big food delivery services; they also use them. Crunchy Data has a great Postgres database. And then, of course, Amazon RDS has a host of databases. And we are going to be adding additional services, like the Azure database service and maybe Couchbase, in the future.
So the idea here is to give IT ops visibility into all the applications that are accessing the databases, but at the same time keep the gatekeeping to a minimum and enable developer self-service, so that without much IT interaction they can go and provision a database from these managed services. Now, one other important thing you will see is that when you have, say, a MongoDB, or a Postgres from different vendors, or a MySQL, each one has its own different set of connection steps, right? But RHODA plays really well with the Service Binding Operator. With the Service Binding Operator, what you get is a very consistent programming paradigm, irrespective of what language you're using. You could be using Node.js, Spring Boot, or Quarkus; they all have extremely good support for the service binding API. So it is just a question of using the service binding API in your code, and you can automatically consume those credentials. You don't have to really get into the nitty-gritty. I mean, some languages do it better than others. For example, Quarkus has very good integration, just like Spring Boot; with the others, you'll have to use the API explicitly. Again, this slide summarizes what RHODA achieves, both from a developer's perspective and from operations. Now, let's get to how RHODA works. As you can see from the slide, RHODA is installed as an operator, actually a meta-operator, that gets installed on OpenShift. And I have a slide later on this: RHODA is available to you as an add-on if you have ROSA, which is our managed OpenShift platform. So just like Burr was showing you OpenShift Data Science, you'll see a tile for OpenShift Database Access. All you have to do is click on that and say install, and the operator will get installed. Now, if you don't have managed OpenShift, you can still install the operator. It's a single YAML file to install it.
It's right here; I have a link in one of the slides. It's a one-step installation for the RHODA operator on your OpenShift cluster. When you install it, what happens is it pulls in all the operators for the different ISVs that we support. So as you can see, we have Crunchy Data, Cockroach, the Amazon ACK operator, and then MongoDB. All those other operators get installed. What RHODA does is bring consistency to the control plane for database access. It's not in the data path. You will go and create a provider account, which we call a DBaaS inventory, for any one of these providers. The application will then consume the credentials for any database instance using service binding. At that point, the application will communicate directly with the respective database cloud. So all your database interactions, your SQL commands, everything will go directly to the cloud. It's not going to go through RHODA, OK? So again, going back to how the communication works, it's all based on custom resources. We'll get deeper into the custom resources as we look at the GitOps example. But basically, we have a DBaaS inventory CR: you populate it with the provider information and the particulars of the provider's credentials. At that point, the provider operator takes over and uses its CR to go and populate the results. If you look at a custom resource, you typically have the spec, which is the section you write, and then you have the status, which is the read-only area where the operator fills in the information. So the status will have the information on what database instances it found in the provider's cloud. And then you can create a database connection to a particular instance in the cloud, in which case the status section will contain the Kubernetes ConfigMap and Secret it gets back. Those will be part of the namespace where you're creating the DBaaS connection.
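To make that spec/status split concrete, a DBaaS inventory CR looks roughly like this. This is a sketch from memory: the API version, field names, and values shown are approximations, so check the RHODA reference guide for the exact schema.

```yaml
apiVersion: dbaas.redhat.com/v1beta1        # API group used by the RHODA operator (approximate)
kind: DBaaSInventory
metadata:
  name: veda-crdb
  namespace: openshift-dbaas-operator      # admin namespace; could also be a developer namespace
spec:                                      # the part you write
  providerRef:
    name: crdb-registration                # which provider operator should handle this inventory
  credentialsRef:
    name: crdb-api-key                     # Secret holding the provider's API key
status:                                    # the part the provider operator fills in (read-only)
  databaseServices:
  - serviceID: "1234-abcd"                 # illustrative values
    serviceName: dev-instance
```

You apply only the spec; the provider operator queries the cloud with the referenced credentials and writes the discovered instances into the status.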
We'll look at it when we get to the Argo CD example. Before I dive into the actual OpenShift demo, any questions so far? Did we all understand what RHODA does? OK, I see a few nods, so I will proceed. Bless you. So when you install RHODA on an OpenShift cluster, you get an additional menu item called Data Services, and underneath that you have Database Access. And again, this is under the Administrator view. So I'm kind of wearing the hat of an IT ops person right now. As an IT ops person, I go to the admin view and I'll see Database Access. You'll see there is a provider account that's already been created. That's a provider account I pre-created using Argo CD, using the GitOps pipeline; we'll come back to it. What we'll do right now is manually create a CockroachDB provider account. And this is something you can all do; there is no credit card required. You can simply go to cockroachlabs.cloud, register with your email, and they will give you a free account. Once you have a free account, you can go to Access Management, Service Account, and then, for example in this case, if I go here, it'll print out the unique API key. Now, this is true for most of the ISVs that we have signed: they all have a free tier, so you should be able to get an account. Some use a single API key; some don't use just one key, they'll have multiple. In the case of AWS, for example, you have both the access key and the secret key, right? So once you have that key, you come back to the console. In the provider selector, you then select which one; as I mentioned, we support these four ISVs now. I'm going to select CockroachDB, paste the API secret key here, and just give it a name. I'm going to call it veda-crdb, for Cockroach database, and I'll say Import.
So at this point, what's happening is I've populated the DBaaS inventory custom resource and handed it over to the Cockroach provider operator, which then goes to the cloud, queries it with those credentials, and says: oh, these are the four database instances available in the cloud under this account. So now if I go back, you'll notice that I have veda-crdb as a provider account. Now, I did this under the OpenShift DBaaS operator namespace, which is the admin namespace. But with RHODA, it's all namespace-scoped. So you, as a developer, can go to your own namespace, like my-namespace, and you can still create a provider account. The only thing is it'll be visible only to you; it won't be visible to the other developers unless you explicitly enable that through the DBaaS policy. The only difference is that as an admin, when I do it under the DBaaS operator namespace, it's visible to everybody. But of course, I can always control that too. So now that we have created a provider account and we have four instances, I'll show you how we can make use of it. I'm going to first create a project, and I'm going to call it veda-postgres. And if you look in your deck, I've included these two tables. These are examples: the first table is the demo applications for MongoDB. They're all open source, so you should be able to go and access them. And I'm sure you'll be provided the deck after this session; Karan is going to send an email. Himanshu, yeah. And there is another table with Postgres service binding demo applications, with Node.js, Spring Boot, Quarkus, Golang, and Python. So we've pretty much covered the whole spectrum; you should be able to pick any. Quarkus users here? Anybody who uses Quarkus? Spring Boot? Node.js? OK, a lot more hands. OK, cool. So today what we'll see is Spring Boot and Node.js. So we created the veda-postgres project.
So I go to the Developer perspective, and what I'm going to do is, as soon as I get... OK, let it take its time. I'm going to go to +Add, and then Import from Git, right? Let's pick the Spring Boot example. There is a Spring Boot DBaaS demo app; we are going to choose this one. I just grab that URL, go back to Import from Git, and paste it. Then I go to the advanced Git options, where I'm going to select the OCP build reference, and under context directory there are two options: one for MongoDB and another for Postgres. Since I'm doing PostgreSQL, I'm going to select the PostgreSQL app. And then let it build. This will automatically create a Kubernetes pod with the application in this namespace. It should take about 20 to 30 seconds; as you can see, it's building. And while it's building, I just want to quickly show that in that code, when you access it, you'll notice there is absolutely no... we're not using the username and password. All we're doing is leveraging the service binding, and that is in the application config, in ServiceBindingConfig.java. You'll notice that we are using the DB binding, and that will automatically go and pull the necessary information from the binding object. Basically, it'll be able to figure out the username, password, and URL for the back-end database and connect to it. So that's the cool thing, right? You're not sprinkling your code with any of those credentials, and you're not saving them in any file. It's all done dynamically at runtime. So let's go back to our namespace. As you can see, the application built, but it is kind of crashing every few minutes. The reason is that it's expecting a back-end database. So I'm going to go back to +Add, and now if I look in the tiles, there's a tile called Cloud-hosted Database.
I'm going to click on that, then click on Cockroach Cloud, Add to Topology, and pick one of these instances. I can pick the dev instance. Now, these are databases I've been playing with; you can also take a brand-new database and initialize it with this application. What you're doing right now is creating the DBaaS connection object in the namespace. When you create that, RHODA reaches back into the Cockroach cloud and creates a new database user with a unique password, and that information gets sent back to the DBaaS connection custom resource. You'll see that there's a ConfigMap and a Secret that get saved in this namespace. Now, to make it even more super easy, we'll use something called a service binding. You just drag and drop this to create the service binding, then say Create, and when you do that you'll notice that the pod gets restarted. What you have done is, even though you have the ConfigMap and the Secret in the Kubernetes namespace, you're actually injecting that information as a mountable file system; you can also inject it as environment variables into the running pod. And now, depending on the framework (in this case Spring Boot is very smart), it's able to recognize: oh, I have service binding information, I can go and extract the username and password from that, and it'll automatically make the connection. As you can see, the pod is not crashing; it's continuing to run. And if I click on the REST API, I should see a simple database of fruits and quantities, and this is getting pulled directly from the Cockroach cloud. So if I go back to the Cockroach cloud, go to the dev instance, and click on Databases, defaultdb, I'll see that there is a public fruit table, that's the schema for it. I can then run a query, and you'll see that as you're adding stuff, it's getting updated.
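Under the hood, the drag-and-drop generates roughly these two custom resources. Again, this is a hedged sketch: the API versions, field names, and IDs shown are illustrative and may not match the current schemas exactly.

```yaml
# DBaaSConnection: asks the provider operator to mint credentials for one
# instance; the resulting Secret and ConfigMap land in this namespace.
apiVersion: dbaas.redhat.com/v1beta1
kind: DBaaSConnection
metadata:
  name: dev-instance-connection
  namespace: veda-postgres
spec:
  inventoryRef:
    name: veda-crdb                        # the provider account created earlier
    namespace: openshift-dbaas-operator
  databaseServiceID: "1234-abcd"           # which instance from the inventory status
---
# ServiceBinding: projects that Secret/ConfigMap into the app pod as files.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-boot-app-to-crdb
  namespace: veda-postgres
spec:
  application:
    group: apps
    version: v1
    resource: deployments
    name: spring-boot-app                  # hypothetical deployment name
  services:
  - group: dbaas.redhat.com
    version: v1beta1
    kind: DBaaSConnection
    name: dev-instance-connection
```

The reference guide linked later in the talk walks through these same resources step by step.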
So of course, when you're in India, mango is the king of fruits, and I want to get 100 kilos of that. Let me save that, and as you can see, mango did get added and you have 100 kilos. OK, so that was Spring Boot. The next one. Any questions? Did you get the flow? So first we created a provider account in the cloud of the provider; this could be MongoDB, we used Cockroach. Then we imported that provider, so we basically created a database inventory on OpenShift. This is typically done by IT ops, who will then enable some of the namespaces to have access to that database inventory. Now you, as a developer, go in, create your namespace, create your application in whatever language, and if you need that database, you simply create a database connection in your namespace, which will be a unique user and password just for your namespace, right? And then you make the connection using service binding to inject those credentials into your application, so it can connect automatically to the database. That's what we did. And I have another cool one. I just want to be cool like Burr, so I'm going to show you another example. Sorry, Burr, I just keep using you. So I'm going to do a Node.js application from a Git repository; this URL is also in the deck. I'd love for any one of you to volunteer and enhance this. This is a Pacman code base which was using an on-prem MongoDB. What I did was change it to use a cloud-based MongoDB, and subsequently I have one that uses CockroachDB. It seems to add a cute factor while you're demoing. So I'm just going to grab that GitHub URL, go back here, put that in the repo field, and as you can see, OpenShift recognizes it's a Node.js application, and then I'm going to say Create. And now we should have a Pacman application that will get built, and that should also be looking for a back-end database.
So it'll take about 20 seconds to build, and while it's building, I just want to quickly show you that on Node.js it's slightly different, right? If I go to lib: in the previous example, if you noticed, there was so much code in config.js when you did not use service binding. But in my new code, I have just database.js, where I'm simply saying: hey, go use the kube-service-bindings library, and then bindings = serviceBindings.getBinding("POSTGRESQL"). That'll automatically populate the user, password, host, port number, SSL mode, all that stuff, into the bindings object, and then I can use that to create a database handle. That's all I have to do, so it's pretty straightforward; you can take a look. Hopefully by now the application should be up. Yes, it is, so it's built. Now, as you can see, it's still crashing because it's waiting for the database. Create a service binding, and that should make it happy. And yes, indeed. So now if we go to the REST interface, we should see Pacman. You can click on that, speaker mode. So this one I've actually put up at a bit.ly; you guys should be able to go and access it. Can anybody check if this is accessible? pacman-dev-nation under bit.ly. OK, I hear somebody playing it already, cool. And yeah, exactly. Then you can save the high scores, and those scores will go back; if you hit the space bar, I'd love for you to save your scores, and the scores actually go into the Postgres database. So it's a simple application to show you something interactive that uses a database. Any questions? Sorry? The link? Oh, OK. As a last demo, I want to show you the Argo CD piece. What I showed you earlier was all manual, going through the OpenShift console, creating the database inventory and the database connection. What I've done over here is an Argo CD setup, and you will find it under RodaLab, GitOps-test; this link is also included in the deck, so you should be able to find it. Everything that we did in the previous examples is done here automatically: I have a MongoDB example, and the MongoDB application gets deployed automatically. We do a few things, right? If you go to the templates, you'll notice that I have a bunch of them. First I create an inventory, meaning I create a MongoDB inventory by registering with MongoDB, using the registration information from MongoDB. Then I create an instance on MongoDB, which is a brand-new database instance. Then I launch an application called MyApp, that's the deployment. Then we create a connection, and finally a service binding. So all that is done for you. Now, the best way to follow this example is to go to our quick start guide. In the quick start guide, you will see there is a link to the reference guide. Go to the reference guide, and in it there's a full explanation of all the individual steps needed to do this whole thing using the API. There are two or three examples: how to connect an application to a known database instance, and provisioning a new free-trial database and then connecting to it. All of that is in the reference, with code, so you should be able to follow it. Finally... oh, can I have some more time? OK, just the last slide: the roadmap. We are going to be adding more ISVs, as I mentioned, and also support for the Azure database service. And we definitely want feedback from users; that basically dictates how we proceed with the product. It's really easy for you to try it on an existing OpenShift-on-Amazon instance, either OSD or ROSA, but at the same time I've put a link here for the manual install; it's a very simple one-YAML command to install RHODA. And again, as the previous speakers mentioned, you can go to the sandbox, and just like you'll see Data Science, you'll also see Database Access. That's all I had, thank you.