Welcome to another demo video of the Red Hat OpenShift Container Platform. My name is Philip Lamb and I'm the DevOps Solutions Architect for Red Hat's Global Partners and Alliances ISV team. Today, I'd like to demonstrate just how easy it is to configure a multi-node Couchbase cluster and deploy an app that consumes tweets from Twitter with the hashtag #vacation. For those of you unfamiliar with Couchbase, it is a fully featured, multi-service NoSQL database.

In this video, we're going to assume that you have already installed the Couchbase operator and created a project, but not done anything else yet. We will deploy a two-node Couchbase cluster, expose the Couchbase admin UI, create a tweets bucket with two replicas, scale the cluster to four nodes, deploy our real-time Twitter ingestion application along with its API service and front-end UI, and finally create an ingestion service to pull tweets from Twitter in order to display them on our front-end UI. Let's get started.

Here we are in the OpenShift Container Platform console view. First, we'll verify that our operator has been installed correctly. I did this prior to recording, so everything is running correctly. I also created an empty project called twitter that we're going to use for the rest of this video.

Now that we've verified the operator is running, the next step is to create a secret in our twitter project. We'll create it from YAML; I have some YAML that I made earlier that I'm just going to paste in. The name of our secret is cb-example-auth, with just a random username and password. We'll click Create, and the secret is created.

Next, we'll create a Couchbase cluster. From Installed Operators, we click CouchbaseCluster on the right, then Create CouchbaseCluster, and we'll use the YAML view. There are a few small changes to this that are in the notes, mainly around the names of the images, so I'm just going to paste those in directly.
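The secret and cluster manifests pasted in the video aren't shown in full; here is a hedged sketch of what they might look like, modeled on the Couchbase Autonomous Operator's standard `cb-example` sample. The image tag and exact field values are assumptions, since the demo's versions came from the accompanying notes.

```yaml
# Sketch only: names follow the operator's "cb-example" sample, and the
# image tag is an assumption; the demo pasted its own values from notes.
apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
type: Opaque
stringData:
  username: administrator   # the demo's "very secure" credentials
  password: password
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  image: couchbase/server:6.6.0        # assumed tag; the video uses its own
  security:
    adminSecret: cb-example-auth
  servers:
    - name: all_services
      size: 2                          # reduced from the sample's 3
      services: [data, index, query]   # search, eventing, analytics removed
  cluster:
    autoFailoverTimeout: 10s           # changed from the default
```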
A few of the changes we're making: the number of servers goes from 3 to 2; we remove the search, eventing, and analytics services, which we don't need right now; and we change the auto-failover timeout to 10 seconds. We click Create, and now we'll just wait a few minutes for the cluster to spin up.

Now we see that the cluster has spun up and is available and balanced, with two workers currently associated with it. If we look at the Resources tab, we can see the services and pods that have been created for the cluster.

Now that we've got the cb-example-ui service, we're going to expose the Couchbase admin UI by creating a route. We'll click Routes, then Create Route. We'll name the route cb-example-ui, select the cb-example-ui service, and choose target port 8091. We'll create that and give it a few minutes to spin up. Now that our route has been created successfully, we'll open it up and take a look at the admin UI. The username and password I've chosen are a very secure administrator/password combo. We can see we currently have two servers up and running, and no buckets.

Let's get a bucket created. We'll go back to Installed Operators, then CouchbaseBucket, then Create CouchbaseBucket, again from the YAML view. I've got some YAML I created earlier that we're just going to paste in. You can see the number of replicas is set to two, which is not going to be satisfiable with the current two-node cluster. We'll create that and wait for it to be created. Hopping over to the Couchbase web console and looking at the buckets, we see the warning: additional active servers required to provide the desired number of replicas. We made that mistake in the previous step on purpose. We need four nodes to support the redundancy that we want, so two replicas. To do that, we'll go back to our console, look at the cluster, and edit the YAML directly.
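For reference, the route and bucket created above might look roughly like the following. This is a hedged sketch: the route fields mirror what was entered in the console, and the bucket's memory quota is an assumption not stated in the video.

```yaml
# Sketch of the admin-UI route and the tweets bucket; the memoryQuota
# value is an assumption, and names follow the operator's sample.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: cb-example-ui
spec:
  to:
    kind: Service
    name: cb-example-ui
  port:
    targetPort: 8091        # Couchbase admin UI port
---
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
  name: tweets
spec:
  memoryQuota: 128Mi        # assumed value
  replicas: 2               # needs more data nodes than our two-node
                            # cluster provides, hence the warning shown
```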
We are looking to increase the number of servers from two to four. We'll just save that, take a look at the Details tab, and give this a moment to scale up. Now we have four members. Looking at the Resources tab, we notice that there are not one, not two, not three, but four cb-example pods running. If we check the Couchbase console, we can see both that the rebalance succeeded and that everything is satisfied, so we are ready to move on to the next step: deploying the Twitter app.

The goal now is to deploy an application that will ingest tweets from Twitter in real time. We've got some oc commands to run from the console, and I'm already logged in. We're going to create a new app using some copied-and-pasted information. While we're waiting on this to build, we can watch the build by going to the twitter-api build. We'll give that a few minutes to finish.

Our build is complete, and a service has been created as part of it. Next, we need to expose the API service with a route. We'll follow the same steps as we did for the Couchbase service: go to Networking, then Routes, then Create Route. We'll call this route twitter-api, select the twitter-api service, choose target port 8080, and click Create. Once it's spun up, I'll click on it. To test the API and make sure it's working, we add /tweetcount to the URL. We currently have zero tweets, which is the correct amount to have.

Now we want to deploy our UI. This command just points to an existing container image on Docker Hub; OpenShift will deploy it and create a service for us. I'll just paste this in. Once this is created, we can see a new service called twitter-ui. We'll create a route called twitter-ui using the same steps as our previous routes: Create Route, name it twitter-ui, select the new twitter-ui service that was just created, and set the target port to 5000, then click Create.
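The pasted oc commands aren't shown on screen; here is a rough sketch of the CLI steps above, with angle-bracket placeholders standing in for the repository URL and Docker Hub image that were pasted from the demo notes. The routes were created in the web console in the video; the CLI equivalents are included here only for illustration.

```shell
# Rough sketch only: <...> placeholders stand in for values pasted from
# the demo notes; the names below are assumed to match the demo's routes.
oc new-app <twitter-api-repo-url> --name=twitter-api   # builds the API from source

# Console steps shown in the video, expressed as their CLI equivalents:
oc expose service twitter-api --port=8080   # test at <route-url>/tweetcount

oc new-app <twitter-ui-image> --name=twitter-ui        # existing image on Docker Hub
oc expose service twitter-ui --port=5000
```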
One more thing we're going to want to do: I'm going to open the UI up in a new tab, and we're going to want the URL for our API. I'll copy that, and then on the Twitter UI itself we need to set the API base equal to the actual API URL. That way we know that we're actually pulling from the right source.

The final step is to deploy the tweet ingester. We don't need to create any routes for that service because it only communicates internally with Couchbase. It's another new app, the code for which I'm going to copy and paste, and then we'll wait for that build to complete. We should shortly start to see tweets flowing into our tweets bucket and then see the Twitter UI presenting data. This video is being recorded on a Tuesday, and not a lot of people are talking about vacation, so I'm going to go ahead and just post a tweet myself: looking forward to a potential vacation at Starbase, Texas sometime in the near future with my buddy Elon. I'll click Tweet, and there you have it. We got a bonus one, too.

To recap: today we deployed a two-node Couchbase cluster, exposed the Couchbase admin UI, created a tweets bucket with two replicas, and then scaled the cluster to four nodes. We deployed our real-time Twitter ingestion application, its API service, as well as its front-end UI. And finally, we created an ingestion service to pull tweets from Twitter with the hashtag #vacation in order to display them on our front-end UI. Thank you for watching this demo video. We'll create others in the future. See you then.