Good morning. Can everybody hear me? All right, so my name is Matthew Ward, and I work for Red Hat as part of the vertical partner solutions team. What that means is that we work with partners and help them understand our verticalized strategies. Red Hat doesn't really have any vertical products, so how do we support those strategies? Through partnerships, which become super important. I'll confess that I am not a Couchbase expert. We were supposed to have one with us, but unfortunately he was not able to make it, so you're stuck with me instead. What we're going to do is go through some basic information on Couchbase, talk a little bit about developing this sort of auto-healing database, and cover some of the use cases we've targeted with specific customers. Everything we're going to show you is actually running in production today with particular customers; we just don't have approval to say which ones. So you're writing an app, right? Kubernetes is the answer to everything. That's what our sales teams talk about, and that's what a lot of these meetups and talks are all about: how Kubernetes has changed the landscape, changed everything. But in reality, Kubernetes is really just a small piece of it, and all the supporting CNCF projects around it fit into these tiny little pieces. Where we're looking today is over here at the database section. If you're developing new apps and changing the way you go about doing things, you also need to look at changing the way traditional applications work. Has anybody moved an existing application to Kubernetes or to OpenShift without doing anything? Just moving an application onto OpenShift is not as easy as we like to think it is. Moving a database is even more complicated.
So one of the things we've focused on, and what Couchbase has been a critical piece of, is helping define that data story for us. When people move into these cloud-native databases, as we'll call them, they're looking at optimizing the experience for everyone. How do you get this up and running? What does day two look like? How do you upgrade? How are patches and all the rest maintained? We want it to auto-heal. We want to drive the hybrid story. We want to make sure we can run it wherever we want to run it, not just tie ourselves to Amazon Web Services or Azure and rely on their data services, because data has gravity: getting data into those services is difficult, but getting it back out again is even harder. Couchbase helps us fill in all of those stories. They are a containerized, cloud-natively developed database. They run on and directly interface with all the major cloud providers, and they directly interface with OpenShift; I'll show you a little later how that looks. We've kind of become their on-prem story around OpenShift, but it also lets them deliver the same experience everywhere: when a customer isn't all in on AWS, they can get a similar experience on OpenShift as they would running across all these hybrid clouds. Because the reality we see pretty constantly is that, despite what AWS and Azure have up on their slides, individual companies are not all in on just one cloud. Different divisions of different companies choose different clouds for different reasons. So giving them a common user interface, a common database, and database-as-a-service offerings on top of that has proven to be pretty valuable. The main way Couchbase has done this is by writing a Kubernetes operator.
So if we go back a few years, containerization was basically awesome. Containers came along as a notion that had actually been around for a long time; I used Solaris Zones probably seven or eight years ago, putting WebSphere applications inside of zones and running those. But really Docker popularized the x86 version of containers and created a nice format. The idea was, hey, we have this great way of reproducing these smaller versions of VMs, and we treated them kind of like that. Then it started to evolve: we began writing container-native paradigms and really focusing on leveraging the container for what it's good at. You can create builds, which is really good. You can run them in different runtime environments; that's really good. It has nice packaging; that's good. Then we realized, okay, we have a couple of containers running various applications, usually simple things, and now we want to get more complex. An application is really more than an individual container, more than an individual piece. So how do we take all of these disparate pieces and deploy them together? That's really when Kubernetes came onto the scene: Kubernetes is the way to manage multiple containers. Now we've gotten to the point where we're managing multiple Kubernetes environments. So how do we manage those, and how do we get consistency across them? That's what operators and the Operator Framework help you do. An operator helps define the installation path, and it helps guide upgrades, backups, what happens when we fail, all those different pieces and parts.
We've touched largely on this already, but really what we're trying to do with an operator is take people's inherent understanding and knowledge and codify it. How much of that can we codify? In this particular case, we're going to run a demo where we add a new node to a Couchbase cluster. If I remember correctly, there are probably eight or nine steps to do that manually. What we've done with the operator, though, is codify that process, so all you have to do in an OpenShift environment is change the size from two to three, or three to four, or one to ten; it doesn't matter. The codified logic knows how to add the node to the collective and rebalance all of the data. This slide just highlights some of the things we've done with them: they wrote the operator, and we host it on OperatorHub.io. One of the other things that's really nice about Couchbase (I'm not an expert, as I said before) is that it's a memory-first database, most known for caching and its key-value store. But it also provides a document store, so if you're using something like MongoDB, you'd be familiar with that model. It gives you full-text search, analytics capabilities, and eventing. Eventing is something we've discussed with them, and they're looking to incorporate it into things like the operator: if you add a document to a particular database, you can trigger an event to go run something for you. They also have their own query language, called N1QL (pronounced "nickel"), that gives you SQL-like capabilities. We're not going to delve into any of that today, but it's more than just a key-value store.
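The scaling just described comes down to editing a single field in the cluster's custom resource. As a rough sketch (field names follow the 1.x CouchbaseCluster format and may differ by operator version; the cluster and group names here are made up):

```yaml
# Hypothetical CouchbaseCluster fragment; the operator watches spec.servers[].size.
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  servers:
    - name: all_services
      size: 3          # change 2 -> 3 and the operator adds a node and rebalances
      services:
        - data
        - index
        - query
```

From the operator's point of view, changing `size` and re-applying the resource is the entire "eight or nine step" manual procedure.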
When you look at some of the other databases we work with, they do a more limited set of things, which is part of why Couchbase is pretty popular. They are out on OperatorHub, and we'll show you what that looks like, but they are also a certified container in the Red Hat Container Registry. They were the first certified operator outside of Red Hat, and one of the first certified containers. That helps us keep track of updates, upgrades, and all of those different aspects, and it's documented here. Since it's in the Red Hat catalog and you're pulling from Red Hat registries, you know things are tested and you know how they're built. So let's dive into our live demo. This is my OpenShift environment. We're going to create a new project. Inside this project, we go to Catalogs and then OperatorHub. In here are all of the various operators that you can import into OpenShift today. I've worked with a few of these before: Aqua, Synopsys, Crunchy, and Couchbase. We're going to bring Couchbase in, install it to this specific namespace, and set the approval strategy to Automatic. Then we switch over to Installed Operators and sit here for a moment while the Couchbase operator gets installed. There are a number of different components being brought down and some containers being run, and it all gets pushed out to this namespace. Oh, we have a failure. Let's try that again after this. All right. One more thing: if you want to bring out your phone and pull up Twitter, we're going to be deploying a web application as part of this that's connected to that live database, and it's pulling in these particular hashtags.
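Installing an operator through the OperatorHub console, as shown here, is equivalent to creating an Operator Lifecycle Manager Subscription resource. A sketch of roughly what the console generates behind the scenes (the package, channel, and catalog source names are assumptions and vary by cluster and operator version):

```yaml
# Hypothetical OLM Subscription for the Couchbase operator
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: couchbase-enterprise        # assumed package name
  namespace: cb-demo                # the project created above (placeholder)
spec:
  channel: stable                   # assumed channel name
  name: couchbase-enterprise
  source: certified-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic    # the "automatic" approval strategy from the demo
```

With `installPlanApproval: Automatic`, OLM installs the operator and applies future updates without waiting for a manual approval step.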
Anything running on Twitter that uses these hashtags will get pulled into our web application. And we're going to fail a node while it's running, just to show you how the operator automatically detects the failure, deploys a new node, and rebalances the data. All right, we have successfully installed. Next, we need to create a secret. By injecting this secret into our environment, we're setting the Couchbase admin username and password for the Couchbase console. Now that we have the operator running, we're going to create a new Couchbase cluster. I'm going to make some modifications here; mainly, for speed, I'm removing a few of their services: full-text search, eventing, and analytics. And we're taking down the replica count. So now we're going to deploy this Couchbase cluster and wait for it to finish. As part of their operator, one of the things they have built in is that we can turn on the console for those database nodes. This allows us to sign into that particular database using the web interface they provide. The way it works is that it turns on the web service for all of the nodes, so it doesn't matter which node you sign into, whether you're hitting example 0, 1, 2, 3, or 4, because it will route you to the one running that particular service. We're waiting for the UI service to come up so we can expose that UI container. This particular one is failing. All right, well, I somewhat prepared for this, so we'll use the old Julia Child method, where we pull the finished turkey out of the oven instead of putting a raw turkey into it. So here's what this is going to look like: there is an OpenShift... there we go.
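The secret and cluster creation steps look roughly like the following. This is a sketch against the 1.x CouchbaseCluster API: the credential values, image version, and names are placeholders, and exact field names may differ by operator version.

```yaml
# Admin credentials consumed by the operator and the web console
apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
type: Opaque
stringData:
  username: Administrator
  password: password          # placeholder; use a real password
---
# Cluster with full-text search, eventing, and analytics left out for speed
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  baseImage: couchbase/server
  version: enterprise-6.0.1   # assumed image tag
  authSecret: cb-example-auth
  exposeAdminConsole: true    # enables the web console routing described above
  servers:
    - name: all_services
      size: 3
      services:
        - data
        - index
        - query
```

Because every node can serve the console endpoint, the exposed service routes you to whichever pod is currently running it, which is why it doesn't matter which node you hit.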
So I have a Couchbase cluster that I pre-provisioned and had all of this working nicely this morning. You can see here that it consists of four servers. If we want to, and I can probably get away with doing this, we'll go into Installed Operators, where there's a Couchbase cluster. We'll change this from four to five, save it, and have it reload. What should happen, fingers crossed, is that this deploys a fifth node. There we go. If we go back to the Couchbase console, this should update in a moment; it takes a little while to refresh. What we're going to see is that a fifth node gets provisioned, and then the rebalance process starts. We're also going to skip some steps here, but what we've done is deploy a Twitter analytics engine: a simple web application ingesting data from Twitter using those hashtags I put up earlier. If you were so inclined to send messages to those hashtags, we would see this starting to populate. Now let's go ahead and delete pod zero. If we go back to Servers, we can see the UI finally caught up: it detected that node four was being provisioned and is adding it to the collective. At the same time, we should notice in another minute or two that the zero node falls off. The operator will detect that failure and reprovision it. We'll see if it does this in OpenShift. It's terminating; we should see a new pod popping up soon. It happened to kill the one I was attached to. Okay, so now it's detecting that zero is gone. OpenShift and Kubernetes should detect that we have an inconsistent replica state and provision another pod soon. Once the new node boots, it should also kick off an auto-rebalance, which makes sure all of the data is spread evenly across your nodes.
There are multiple ways you can deploy the architecture for Couchbase, which is a little different from some of the other databases. Each one of the individual services and capabilities I talked about has its own sort of manager. You could choose to have ten nodes all running every service, each consuming different pieces of it. But once you get into more complex and more well-defined workloads, what we see is more like the bottom architecture: a data service tier, which is probably the document store and key-value store side, with specific services like indexing separated out. That way you can have indexing run in batches over certain periods, rather than constantly running and consuming resources from the other workloads. The benefit of doing this with containers rather than VMs is that they can use OpenShift to spin up individual containers and add more to the collective as more of these services are used. So hopefully we should see this new node popping up soon and rebalancing. Container not ready. I've never done this before, but I'm going to try to manually fail over the node. All right, other than watching this thing catastrophically fail on me, does anybody have any questions? All right, well, thank you. If anyone does have a question, please do raise your hand so we can get it on the microphone.
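The "bottom architecture" described above, with separate data and indexing tiers, is expressed in the same custom resource by listing multiple server groups, each with its own size and service list. A hedged sketch (group names and sizes are illustrative; field names follow the 1.x format):

```yaml
# Hypothetical multi-dimensional scaling layout: each group scales independently
spec:
  servers:
    - name: data-tier           # document and key-value workload
      size: 4
      services:
        - data
    - name: index-query-tier    # indexing and query separated out so they can be
      size: 2                   # sized and scaled on their own schedule
      services:
        - index
        - query
```

Because each group has its own `size`, you can grow the indexing tier for a batch window and shrink it afterwards without touching the data nodes.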