Hello everybody, and welcome to another OpenShift Commons briefing. Today we have a new OpenShift Commons member, NuoDB, who's going to give us an overview of their offering and talk about elastic SQL in OpenShift and how to do all that. We've got a couple of folks here, so I'll let Christina Wong introduce her team and we'll get started. Thanks, Christina, take it away.

Thanks, Diane. Welcome, everybody, and thanks for attending and listening in on this briefing. We're really excited to be part of the OpenShift Commons community. My name is Christina Wong; I'm the director of partnerships and product marketing here at NuoDB. I'm joined by Joe Leslie, our senior product manager, who will be giving the bulk of the presentation. The other person we have in the room is Ben Higgins, our DevOps engineer; he's the one who's been doing a lot of the Red Hat OpenShift Container Platform integration. If anyone has questions for the team, the last slide in this briefing has contact information, so feel free to reach out anytime. So, Joe, I'll let you take it from here.

Great, thank you, Christina, and happy Friday to everyone. We look forward to presenting NuoDB running in OpenShift today. A good chunk of our presentation is going to be a live demo, but we'll run through some initial slides first to introduce you to NuoDB: who we are, what we're all about, and the problems we're trying to solve. Feel free to ask questions as we go; we can do this interactively and have a nice conversation. So, to start, here's the plan: we'll introduce NuoDB and the concept of the elastic SQL database, talk through the OpenShift and container integration we've built in a little more detail, then go right into the demo and finish with some Q&A. All right.
So let's first set the stage a little bit. Many companies today are seeking to move to the cloud, or are already in the cloud, and there are certain challenges. At NuoDB, we've looked at these challenges in a couple of areas where there are critical requirements that need to be met. As everyone moves applications to the cloud, we're looking for elasticity. We want to ensure we can run our applications on commodity hardware in the cloud, but it's really important that the applications can scale out and scale in. We want to be able to extend our applications to deliver more transactional throughput when needed, but at the same time scale back when that capacity is no longer needed. And all the while, we want to make sure our applications are continuously available. Even the concept of high availability isn't quite high enough these days; many of our customers come to us seeking zero downtime and continuous availability. In order to achieve all of this in the world of SQL databases, you often have to give things up. The problem we've solved at NuoDB is delivering a true SQL database, ANSI-compliant with full ACID transaction properties, in an environment like this. So you can leverage your existing SQL applications, and you can leverage your investment in your people and the skills they have, and continue to run in a SQL database environment. Traditionally, SQL databases run well in a scale-up environment: you can add more memory, more CPU, and more storage. But it's very difficult to provide a database that runs across a domain of hardware spanning geographical regions while still presenting a single SQL database. That's what NuoDB has accomplished and provides as a product.
Continuing, this next slide looks in a little more detail at some of the common solutions and approaches. There's the shared-disk solution, which can be very complex and expensive once you consider the clustering software and shared disks; a couple of examples here are Oracle RAC and Db2 pureScale. There's also the shared-nothing, or sharding, approach, which has its own challenges, requiring applications to have more knowledge about where to get the data. So while these solutions are out there, none truly meets the desired requirements.

So, introducing the elastic SQL database: NuoDB, and what we've effectively done, shown here in this architectural diagram. On one side we see a legacy architecture, much like I described earlier: a stack of processes running on a single server, where query processing and storage live together. The NuoDB architecture takes this and breaks it into two critical components: the transactional component and the storage manager component. The transactional components are pictured in green. We call these transaction engines, or TEs, and they're responsible for servicing the application's connection requests and SQL requests. A TE is an in-memory version of the database; it runs as a durable cache, so NuoDB delivers high-performance SQL transactions. When the application runs, for example, a DML statement such as an insert, update, or delete and needs to make changes to the database, the transaction is handed off to the storage manager component, pictured here in yellow. This is also a scalable component. It's critical to understand that all the components are scalable, so you can create redundancy.
Should you lose a transaction engine or a storage manager, it's okay, because you have another one running; that's the part of the architecture that delivers the zero-downtime capability. The storage managers, as I mentioned, are the durable part. This is the D in ACID, the durable portion of the transaction. Once a transaction is committed to disk, should there be a failure, you can always reliably go back and retrieve that data. It's a very flexible architecture that deploys nicely across data centers and works very well in a cloud or microservices-type environment, where the database is comprised of processes that you can spread geographically across a domain of hardware.

Here's another slide that diagrams a deployment example of an active-active, true master-master model in a multi-data-center deployment. We can see two availability zones, with our applications being serviced in both. We have some number of transaction engines in each availability zone for quick local connectivity to the database, and you'll notice each zone has a copy of the database: the storage manager represents an on-disk copy. NuoDB also allows larger implementations to effectively spill beyond a single storage manager, so you can have a single large dataset spanning multiple storage managers. We call that concept storage groups. It's a question we get from customers: when we scale out the storage, can it grow beyond a single copy of the database? It absolutely can, and the way we do that is through our storage group abstraction layer. So again, what we're providing here is a single logical database. It spans a geographical area, and within this domain of hardware, the resources themselves can be on-premises or in the cloud; it just doesn't matter.
NuoDB provides that flexibility. So, recapping some of the high-level areas we've talked about and why NuoDB is a good solution for running databases either on-prem or in the cloud: there's the elasticity we've been discussing, the ease of extending your application's transactional throughput by scaling out read and write capability, and the ability to scale in so you're not over-provisioning. That's where this elastic concept comes from. There's the deployment flexibility: on-prem, in the cloud, or in a hybrid environment, NuoDB is quite happy to work in all of those environments, and across them at the same time. And all the while you're not giving up the SQL semantics that so many of our applications require. We've got SQL applications, an investment in SQL, and people who know SQL, and we want to continue to run in a SQL transactional environment. NuoDB does a great job meeting those requirements as well.

Now let's look at some of the details of our OpenShift integration. Here's a stack diagram of an OpenShift environment, showing the routing layer, the persistent storage layer, and the different service layers down to the physical and virtual, private and public environments. We see the master section with application authentication, data storage services, scheduling services, all the pieces that are part of the Red Hat Enterprise Linux platform. But we also see the nodes, and how we've created a resilient environment by running the application and the NuoDB transaction engine and storage manager components in different nodes and pods. We have the resiliency and redundancy such that, should there be a failure, the system will continue to process connections and SQL requests.
And all the while, the OpenShift environment will enforce those pods: should one fail, it will automatically restart the pod. We're going to see that in today's demonstration.

Next, a word on the different deployment models, single versus multiple databases. Your services can be backed by a single large database, or by smaller databases per service, with NuoDB servicing those applications; through a query federation model you can then retrieve data across those different applications. Likewise, if you'd like to roll that data up into a more aggregated view, it's quite simple to create a single, larger logical database view for reporting and analytics. NuoDB provides that flexibility, so you can have dedicated TEs and SMs for individual microservices.

Okay, now a little more detail on what we've built and how NuoDB works within an OpenShift environment. We've used a Jenkins build process, the packaging tools we've built, and the OpenShift oc command, which provides an easy environment for working with and administering the product, as well as the AWS CLI. NuoDB does the monitoring and collection of data, so that information is available for the operational aspects of monitoring and managing a NuoDB system running in OpenShift. Here are more details on the different pieces involved. Again, I mentioned the OpenShift oc command: this is how we can self-deploy and manually deploy the environment. You basically have your choice of running however you like, and as I mentioned earlier, locally or on AWS, on-prem or in the cloud, all via the Docker run environment. We're going to see all this in action today in our presentation.
We do want to talk a little bit, though, about the storage and how it works. With ephemeral storage, we run two storage managers, as I mentioned earlier; we saw in the diagram that we have multiple storage managers for storage redundancy, so that should one storage manager fail, we know our database still has a consistent copy available for our applications. For persistent storage we use AWS EBS, the Elastic Block Store, and for the Red Hat storage solution we're using CNS, Container-Native Storage. We're still in the process of looking at other storage solutions, and we want to be sure we support the storage solutions our customers are seeking to use, but those are the basic ones we're offering today. The last point is an important one: if we lose a container, the new container that replaces it will automatically reattach to the archives. There's no need to re-sync the data, because it reattaches to an existing full copy; once the new storage manager comes into play, it's already a full working copy of the database.

As far as the auto-scaling, monitoring, and Logstash options, here are a few more details on what's available from an application scaling standpoint. Some of the concepts around auto-scaling, the scaling nanny, are something we're currently working on; it's a work in progress. But we've had a lot of requests in this area, and it's a really exciting piece we're looking forward to delivering: the system will effectively auto-detect when to rebalance the transactional and storage manager components, auto-scaling out and in as needed based on the application load, ensuring a certain transactions-per-second rate while keeping latency low.
So the Logstash environment, the Elastic Stack of Logstash and Kibana, is all delivered together in the framework to provide the monitoring detail. Ben, is there anything else you'd like to add on this? We can hold for questions during the demo. That sounds great.

All right, I think we're about ready to go ahead and have some fun and see what NuoDB looks like running in an OpenShift environment, live. I'm going to put away the slide presentation and move over to the OpenShift environment. Okay, I'm going to log into my environment. What we see in OpenShift is our NuoDB deployment, so let's talk about some of the pods that are running. This first pod is called the admin service. This is a very important piece of the NuoDB management tier: it's the part that manages the domain. Several times in our presentation today we've talked about a domain of resources, and the admin is responsible for managing the NuoDB transaction engines and storage managers on the host where it's running; within its process space, it manages the NuoDB engine processes. It's also the one responsible for load balancing. As connection requests come in, it will determine a TE that meets the serviceability requirements of the client. For example, if the client is in a particular region, it will deliver a transaction engine in that region so we can reduce latency while connecting to the database. Another pod we have running today is our app pod. This is just an application pod; we're going to run most of our demo today from it, so when we run the SQL and various other things, we'll be running from this environment.
And down below we see the real workings of the NuoDB system, the two components we talked about earlier: the transaction engine, labeled here as TE, and the storage manager, the piece responsible for the durable on-disk copy of the database, pictured here. So far we have a very basic setup: just one pod of each. Before we scale this environment out, let's take a look at what the system looks like from the NuoDB side. We'll come over to a command-line prompt, and I'm going to run a NuoDB command; it's actually the one already on your screen, but let's run it again. We see we have a live system, and it's showing us these components. Focusing right here, we have this server: this is the one I called the admin service. Its liveness shows it checked in just two seconds ago, its state is active, and it's the leader. You can have multiple admin services; if you had multiple geographies within your domain, you would have multiple admin services. We see this one is active, connected, and running. For our demo today we just have this single admin service. Down below we also have a demo database: it's our hockey database, so for those of you who follow hockey a little, we'll get to see some hockey data, because we want to show an active, running system. We see that it's up and running. It has a storage manager component running on the IP address ending in 103, with a liveness number showing it checked in just four seconds ago, and we have a transaction engine as well, pictured here, on the IP address ending in 104. So let's go ahead and make a connection to our database.
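For readers following along, the domain inspection step described above can be sketched roughly as follows. This is an illustrative sketch only: the pod name is a placeholder, and `nuocmd show domain` is the command in current NuoDB releases, so the exact management CLI in the demo's NuoDB version may differ.

```shell
# Open a shell in the application pod (name is a placeholder, not the demo's).
oc rsh my-app-pod

# List the domain state: admin servers with liveness/leader status, plus
# each database's TEs and SMs and their addresses. Command name per
# current NuoDB releases; older versions used a different management tool.
nuocmd show domain
```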
At this point, we'll notice that it's connecting to the IP ending in 104. Obviously that's going to be the case: we only have one transaction engine, so that's all it can connect to. I'll move this window over a little so we can see it connecting. It continues to process application connection requests, and it reports that, yep, it's delivering them all on 104. Okay. Now that we've seen a little of how it's connecting, we're going to create that hockey database I mentioned. We're just running some SQL here to create the hockey database, loading up some player, scoring, and team data, and for those of you familiar with hockey, you might recognize some of the names we're going to display. We're going to write some standard SQL that you may be familiar with: traditional inner and outer join statements with filters. Our hockey application is called hockey.sql. I'm going to run it, and it runs a couple of SQL statements we can see here on the screen. For those of you who might wonder what the SQL looks like in NuoDB: it looks exactly like the SQL you're used to. It's all ANSI-standard-compliant SQL. In this case, we're joining a FROM clause with a list of tables; all of today's common inner, outer, and left join syntax is supported, as are the SQL functions you'd run in common database products like MySQL, Oracle, and SQL Server. We've taken quite a survey of the arithmetic, date, and string functions, and we support most all of them. What that means is that when you run a MySQL statement against NuoDB, it's going to run quite happily without changing your SQL.
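As one concrete illustration of the kind of ANSI join the demo runs, here is a hypothetical query in the spirit of hockey.sql, sent through nuosql, NuoDB's interactive SQL client. The table and column names are assumptions for illustration, not the actual demo schema.

```shell
# Hypothetical sketch of a join like those in hockey.sql.
# Table and column names (players, scoring, playerid, goals) are assumed.
nuosql hockey@localhost --user dba --password dba <<'SQL'
SELECT p.lastname, SUM(s.goals) AS career_goals
FROM players p
INNER JOIN scoring s ON s.playerid = p.playerid
GROUP BY p.lastname
ORDER BY career_goals DESC
LIMIT 10;
SQL
```

The point of the demo is that nothing here is NuoDB-specific: the same statement would run unmodified against most ANSI-compliant databases.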
Likewise, if you run an Oracle statement, maybe using Oracle's concat with the double bar, NuoDB is quite happy to run that too. The reason that's important is that those SQL dialects tend not to cross over to other databases: run an Oracle double-bar concat against MySQL and it won't work; run a MySQL CONCAT function against Oracle and it doesn't work either. At NuoDB, we've tried to really survey what's out there in the SQL function space and support it. I wanted to share that because it makes moving your applications to NuoDB that much easier.

So we see some hockey data. This last query shows cumulative all-time goals for some popular players, the top ten goal leaders. Gordie Howe is an old-timer; he played quite a long time ago. Wayne Gretzky is another name probably familiar to many of you. We can see their goal counts: Gordie Howe scored 975, and Wayne Gretzky not quite as many, 940, though Wayne Gretzky appeared three times as a season leader in goal scoring. That's just some of the data; we'll look at it in more detail a little further into our presentation.

But now let's scale our environment. We're ready to go back to OpenShift and add more transactional throughput. We're going to open up the transaction engine pod and, using the familiar up-and-down spin control, I'm going to add three more transaction engines. All right, the system is now adding those; we now have four transaction engines. If we go back to our terminal, we can run that same command from before that shows the domain.
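The same scale-out done here through the console's spin control can also be scripted. A minimal sketch, assuming the TE pods are managed by a deployment config named `nuodb-te` with a matching label; the actual resource names in the demo aren't shown:

```shell
# Scale the transaction-engine tier from one replica to four.
# "dc/nuodb-te" and the label below are assumed names for illustration.
oc scale dc/nuodb-te --replicas=4

# Watch the three new TE pods start.
oc get pods -l app=nuodb-te
```

This is the same policy change either way: OpenShift records the desired replica count and works to keep the running pods matching it.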
Now, instead of just one SM and one TE, we can see a whole bunch of transaction engines, all up and running and ready to service our application. If we go back to that SQL statement where we generate a lot of load against the system, each connection reports which transaction engine it connects to. If you recall, previously we only connected to the TE at 104, because that's all that was available. If you look to the left now, you can see all the other transaction engines actively responding: I see 69, 105, 104. They're all actively processing the transactions and SQL requests our application is running.

Now we can have some fun and remove some transaction engines, and watch how OpenShift responds to that sort of event. We'll see that OpenShift enforces the database and quickly replaces any failed processes. Let's do that. I'm going to copy the transaction engine pod names; I'll need to reconnect in another window. Then we'll run an oc delete pod. We're going to be unfriendly today and remove these pods right out from underneath the system. There we go: we'll remove them both at the same time. All right, we just removed a couple of pods, but notice that all the while, to the left, our application continues to run. It has two other transaction engines running, and the other two have already been replaced; in fact, they were replaced before NuoDB even reported that the other two had been removed. If we're quick enough, we can see it's in the process right now: two being removed and two being added. This will just take a moment to update, and we can see there's three, so it's currently working its way back to four. There we are.
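The "unfriendly" step just described can be sketched as follows; the pod names are hypothetical placeholders, not the demo's actual names:

```shell
# Kill two TE pods out from under the running database.
# Pod names here are placeholders for illustration.
oc delete pod nuodb-te-1-abcde nuodb-te-1-fghij

# Watch the replication controller replace them to restore the
# desired replica count (-w streams pod state changes).
oc get pods -w
```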
We've now hit stable: we've replaced two of them, we have four running, and we can see our new ones, 70 and 71. If we look to the left, they're already servicing application requests. This is a great demonstration of how OpenShift enforces the NuoDB database: once you set your policy in OpenShift for the number of transaction engines and storage managers, it will make it so. It continues to monitor the system and enforce those service levels.

Okay, now that we've scaled the transaction environment up, let's scale it back down. We'll bring it back down to two; say we no longer need four, and our application is quite happy running with two transaction engines. It's currently scaling back down; we can see how it's doing. It usually just takes a few moments, and we can see it's already done. We've scaled our environment back down.

Now let's demonstrate the same concept with our storage managers. To do that, I'll break out of my application and reconnect. Whoops, there we go. We lost that window for a second, but now we're back. What we're going to do is show how we can scale the storage environment, but the way we're going to do it is to make a change to the data, and show how that change propagates from storage manager to storage manager. Even in the event of a failure, we'll show that the change persists in the new storage managers being created. So let's walk through this, starting by running that previous command. Let me get that command back, the one that runs hockey.sql. Remember, this is the one that shows us Gordie Howe with 975 goals.
Well, as some of you are aware, Gordie Howe was an old-timer; he played long ago. He's at the top of the all-time goal-scoring list largely because of the number of years he played: 32 years. But it might be interesting to project how many all-time goals Gordie Howe might have scored if he had played the standard 82 games per season. So we're going to run a little projection and update Gordie Howe. We're also going to update Bobby Hull, another old-timer who for much of his career also didn't play 82-game seasons. It would be interesting to compare Gordie Howe and Bobby Hull against Wayne Gretzky if we equalize them all to 82 games. Here are a couple of SQL statements that update our database. Now, if we run our hockey application, we see that Gordie Howe would have been projected to score over 1,000 goals: 1,143 goals. And notice Bobby Hull moved into second; he actually moved past Wayne Gretzky. But for this part of the demo, what I really want everyone to focus on is the new goal count for Gordie Howe, 1,143, because what we're going to do now is scale out our storage manager environment, just as we did with our TEs. Remember, right now we have one copy of the database, one storage manager. Let's create our second storage manager, effectively creating a second copy of the storage, and we're doing that right now. We're going to go back to our NuoDB command line. Hmm, this one seems frozen; it'll come back. This one's still running. Yep, there we go. Looks like I lost my connection, but that's okay; we'll just connect right back in. It's a live demo, that's what you get. That's right, it's fun to watch everything work live. Okay.
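The projection updates might look something like the following sketch. The schema, column names, and scaling arithmetic are all assumptions for illustration; the demo's actual statements aren't shown on screen long enough to transcribe.

```shell
# Hypothetical sketch of the 82-games-per-season projection update.
# Table/column names and the formula are assumed, not the demo's real SQL.
nuosql hockey@localhost --user dba --password dba <<'SQL'
-- Scale each old-timer's career goals up to an 82-games-per-season pace:
-- goals * (82 * seasons_played / games_actually_played).
UPDATE players
SET career_goals = CAST(career_goals * 82.0 * seasons / games_played AS INT)
WHERE lastname IN ('Howe', 'Hull');
SQL
```

The key for the demo is simply that this write lands on the one existing storage manager, so we can later verify it survives that storage manager's destruction.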
I've reconnected to the application pod, and I'm going to run that same command we ran before to see our NuoDB system. Now, remember what we did: we scaled up our storage managers; we added one. And here it is. Here's our old one on the IP ending in 103, and we now have a new storage manager. Our system is now redundant: we can take a failure on any process and know that we're still processing applications and still storing the data, and now we have two copies. What it's done is make a copy of the database. Now we're going to go ahead and kill the first one, the one that originally had the update; remember, when we updated Gordie Howe's all-time scoring record. Again, I'm just going to be mean to the system, and I'm going to delete that storage manager right out from underneath OpenShift. Okay. Ouch. Just deleted it. But since OpenShift is enforcing my environment, we know it's already busy replacing that storage manager: we scaled out to two, that new one was our second, and now it's replacing the first with, effectively, a third. If I show the system, we'll see it's already added it and is in the process of removing the first one. Here's the first one, still showing as part of the system; even though we know it's gone, it just takes a moment to update. We're doing all this live. We can see now that our system is enforced again: we have our two storage managers and two transaction engines, which is exactly what we asked for in OpenShift. So, remembering that the update was made on the first storage manager, we now have two brand-new storage managers. But it doesn't matter, because NuoDB has persistent storage. So we'll now run the query, retrieving the data through the new storage managers, and we'll see: in fact, there it is.
Gordie Howe still shows his 1,143 goals. So what we've been able to demonstrate is NuoDB in an OpenShift environment delivering continuous availability. If you recall, all the while we were running our transactions, not one of them failed, even though we were removing transaction engines and storage managers; the applications continued to run. We demonstrated scale-out as well as scale-in. And we also demonstrated the load-balancing portion of the product. We used round robin in this demonstration, but it's probably worth mentioning that we support region-based load balancing as well, so you can prioritize. Say I had regions in Boston, New York, and Washington. I can tell my load balancer I want to connect with the priority "Boston, New York, Washington, comma, star." That means I've set a priority: connect to a TE first in Boston; if there isn't one, connect in New York; if there isn't one, connect in Washington; and if there isn't one in Washington, just give me any TE you have. The whole idea is to prioritize availability so the applications can always connect, while at the same time offering performance by prioritizing which region you connect to.

So we've shown a lot today of NuoDB in an OpenShift environment. Coming back to our OpenShift main console, we again see two storage managers and two TEs, just as we left it at the end of our demo. That's about what we had prepared as far as show-and-tell today, and we wanted to leave some time for any questions.

This has been great, and I think for a lot of us this is all very new, so there haven't been a lot of questions; I think we're all just digesting it. I wonder if you can go back to your final slide. You said you had some resources.
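The "Boston, New York, Washington, star" priority list described above could be expressed on the client connection, something like the sketch below. This is illustrative pseudocode: the property name and the exact syntax for region-priority load balancing vary across NuoDB versions and are assumptions here, not a documented API.

```shell
# Illustrative only: the connection property name and value syntax for
# region-priority load balancing are assumed, not NuoDB's documented API.
# The intent: try a Boston TE first, then New York, then Washington,
# and "*" means fall back to any available TE.
nuosql hockey@nuodb-admin --user dba --password dba \
    --connection-property "PreferredRegions=Boston,New York,Washington,*"
```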
And if there's a place with some documentation on what you've done with OpenShift, was there a blog or a documentation page?

No, I don't believe it's in this presentation. We do have a documentation page; I guess I neglected to put that in. But what I can do, Diane, is email you some links to where you can find our Red Hat certified container and a blog that tells you how to deploy NuoDB in OpenShift, and maybe we can include that information with the blog. I don't know if you make the slides available separately from the video, but I can also update the presentation with that information and add it into the slides.

That would be great. We'll add that and upload the deck there as well. You also asked for suggestions around persistent storage, and I know there are lots of choices when you're doing persistent volumes with OpenShift. There's some good documentation on docs.openshift.com around the types of persistent volumes you can set up; I think that's the answer to your question. Our answer to your question about storage suggestions is that we pretty much let people choose what they want to use, whether it's EBS on AWS, or Ceph, or Gluster, or NFS, or OpenStack Cinder, or whatever they'd like to use.

Exactly. Yeah, thanks. Great, I saw that link; thank you very much, it's very helpful. And one thing I would like to hear, as people digest the recording and come back and look at the slides and read the blog: I'd love to hear from the OpenShift community what they'd like to see next in the demo. The demo is an evolving demo; we keep adding features and functionality to showcase the different benefits you can derive from NuoDB on OpenShift.
If there are things people would specifically like to see for a database in OpenShift, I'd love to hear about that.

The other thing is the demo itself. It's a database demo; is the code for it in GitHub so that people could run the demo themselves? It sounds like there's a little bit of setup and database work, but is this something you could put up so that somebody who wanted to play could try the demo themselves?

Yeah, that's a really great idea. We're working on something where people can actually play with it and be guided step by step through a demo. This particular demo would be difficult to put up on GitHub as-is because it uses the full enterprise version of our product. We have a free Community Edition that we highly encourage people to download and play around with, so we'd have to adapt the demo to that. It should be very simple to do, though. So stay tuned is what I'd say to that one.

No, it's wonderful, and this has been very helpful for me to understand what you're offering, and I'm sure for the community, and for all the solution architects inside Red Hat who have been looking for a solution like this to offer; lots of our partners are interested in this. It's quite hilarious for me to see all the old-timer hockey players, and in some ways looking at SQL again feels like, oh my God, we're going back in time. But it's a great thing to see, because there are so many people out there still very actively using SQL and making it part of their enterprise offering, so this is really helpful as they navigate to the cloud. Thanks for your time today, and I'll get this up probably in the next day or so, depending on how fast the video editing goes.

Wonderful. Thank you very much, and thank you for the opportunity to reach the community.