Hi, my name is August Siminelli. I'm a Field Product Manager within the Cloud Platform Business Unit here at Red Hat, and I work with customers around the world on BU-led technical engagements. Today, we're going to look at a new and exciting option for deploying OpenShift into the public cloud: Red Hat OpenShift Managed Services. With OpenShift Managed Services for AWS, you can quickly and easily deploy a production-grade OpenShift cluster into your AWS account. It's easy to provision fully managed clusters in minutes, backed by a rock-solid SLA and 24x7 support from an industry-leading SRE team.

Okay, let's go ahead and take a look. Simply log in with your Red Hat account at cloud.redhat.com. Once in, you'll find many options for interacting with your Red Hat software. Let's open up the Red Hat OpenShift Cluster Manager to begin. Inside this interface, we can see a bit more about our account. Under Subscriptions, you can see the quota allocated for your deployment. This quota is determined by Red Hat and sets the sizing and number of clusters you can deploy; you'll see this in action shortly. There's also an Overview section that, once we deploy our cluster, will provide a single view of our installation. Cluster Manager provides options for all kinds of cluster deployments, from bare metal in a data center to a developer install on a local laptop. It really is your starting place for everything, and a variety of cloud providers are accessible as well. Today, since we're deploying into AWS, we're using OpenShift Dedicated.

There are two billing models for a managed OpenShift deployment. One is a standard subscription, where we deploy into a Red Hat AWS account. The other is the Customer Cloud Subscription, or CCS, model, which allows Red Hat to deploy and manage OpenShift Dedicated in your own AWS account. In that case, you handle the infrastructure billing with your cloud provider, and the quota and OpenShift billing with Red Hat.
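To make the quota idea concrete, here's a minimal Python sketch of how an allowance might gate cluster sizing. The numbers and the function are purely illustrative assumptions; real quota values are allocated by Red Hat and shown under Subscriptions in Cluster Manager.

```python
# Hypothetical quota model -- real quota is allocated by Red Hat and
# visible under the Subscriptions page in OpenShift Cluster Manager.
QUOTA_NODES = 9  # example worker-node allowance

def can_deploy(nodes_per_zone: int, zones: int, nodes_in_use: int = 0) -> bool:
    """Return True if the requested worker count fits the remaining quota."""
    requested = nodes_per_zone * zones
    return nodes_in_use + requested <= QUOTA_NODES

# A multi-zone deployment with 1 worker per zone needs 3 nodes: fits.
print(can_deploy(nodes_per_zone=1, zones=3))  # True
# Asking for 4 per zone (12 total) exceeds the 9-node example quota.
print(can_deploy(nodes_per_zone=4, zones=3))  # False
```

This is the same logic you'll see the UI apply later, when unavailable instance types and node counts are greyed out based on your allowance.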
And of course, when you use your own AWS account, there's some simple initial setup, which is really well documented. I've done that already so we can keep things moving. So I go ahead and supply my AWS credentials. Obviously, don't lose or revoke these, or you will lose access to your cluster. Then give your cluster a name that makes sense; I've used "Test Cluster" today.

Now we select our AWS availability requirements. In OpenShift Cluster Manager, we can choose to deploy clusters into a single AWS availability zone or across multiple zones in most AWS regions. Next, we set the size of the cluster via the scale dialog. Here, we can choose the type of AWS nodes we'd like to use for our cluster. Notice how some are not available: this reflects our quota allowances and what's required for a multi-zone deployment. I've chosen a simple general-purpose instance for ease, but note that you can't change the instance type of the default machine pool; you'd have to create a new pool, which you can do. Next, we set the worker count per zone. The available options here reflect your quota allowance and the instance type you selected. In my case, I've chosen one worker per zone, for a total of three. You can even add node labels for these machine pools directly from OpenShift Cluster Manager.

Now let's take a look at networking. With the managed services offering on AWS, you can get pretty specific about your deployment. You can use your own VPC, and you'll be asked for all the required settings. You can also set address pools for things like the machine network and the service network, and of course for your pods and pod sizing, to suit your specific network requirements. For this demo, I'm going to use a basic install, which sets all the networking defaults and creates the relevant AWS assets and VPC for me. Next, we set the cluster update frequency.
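As a sanity check for the address pools just mentioned, here's a small Python sketch using the standard `ipaddress` module to verify that three example CIDR ranges don't overlap. The specific ranges are assumptions for illustration only; a basic install picks its own defaults.

```python
import ipaddress

# Hypothetical address pools like those requested in the networking step.
machine_cidr = ipaddress.ip_network("10.0.0.0/16")    # machine network
service_cidr = ipaddress.ip_network("172.30.0.0/16")  # service network
pod_cidr     = ipaddress.ip_network("10.128.0.0/14")  # pod network

# The pools must not overlap, or cluster networking breaks.
pools = [machine_cidr, service_cidr, pod_cidr]
for i, a in enumerate(pools):
    for b in pools[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("no overlaps")
```

Running a quick check like this against your own VPC ranges before filling in the dialog can save a failed install.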
Managed services clusters are updated in one of two ways: you can either choose a preferred date and time, or choose to update manually. With the preferred date and time method, if an available update is found, your cluster is updated automatically, close to the time chosen. If you choose manual, you're responsible for applying updates yourself. Of course, critical CVEs are patched automatically regardless of the method chosen; this is done within 48 hours for added assurance. Finally, set the node draining specifics, and it's time to create your cluster.

Cluster installation is easy and entirely hands-off. The installer does all the work for you using the settings you provided. You can follow along in the comprehensive logging, or you can let it run and go have a coffee. My cluster took about 40 minutes to install in my local AWS region, so we'll speed things up to check out the results. And there we go: we've successfully installed, and we're presented with the overview page so we can begin administering our deployed cluster via OpenShift Cluster Manager.

Now that we have a freshly deployed cluster in AWS, the first thing we need to do is configure access to it. OpenShift Dedicated clusters are accessed by configuring an identity provider directly in OpenShift Cluster Manager. As with any OpenShift install, a variety of providers are available. Let's use GitHub for this example. I've created an OAuth application in my GitHub account, so I just need to configure it to allow my new cluster to use it. In my case, I have a GitHub organization and a team set up for my OpenShift Dedicated users, so I'll add them here. With that done, I can also grant elevated rights to a user I might want to administer the cluster. There are two types of administrative roles for an OpenShift Dedicated cluster: dedicated admins and cluster admins.
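The 48-hour CVE assurance described above can be expressed as a tiny Python sketch. The function name and timestamps are hypothetical; what the sketch encodes is the stated policy that critical CVEs are patched automatically within 48 hours, whatever update method you chose.

```python
from datetime import datetime, timedelta

def cve_patch_deadline(found_at: datetime) -> datetime:
    """Critical CVEs are patched automatically within 48 hours,
    regardless of the update method chosen."""
    return found_at + timedelta(hours=48)

# Illustrative timestamp: a critical CVE identified at 9:00 on June 1
# must be patched by 9:00 on June 3.
found = datetime(2021, 6, 1, 9, 0)
print(cve_patch_deadline(found))  # 2021-06-03 09:00:00
```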
As an administrator of an OpenShift Dedicated cluster with the dedicated admins role, your account has additional permissions and access to all user-created projects in your organization's cluster. This is generally enough to run your managed cluster and is distinctly different from the cluster admins role. In my case, because I'm using a CCS subscription (you'll recall CCS means I'm using my own AWS account), I can select the cluster admins role. Use it with caution if you choose to, as it allows additional visibility into the underlying AWS infrastructure. I'm going to assign my user to the dedicated admins role, as it provides enough for me to administer this cluster. With that done, I can open the console, where I'll find the new provider available for login alongside the Site Reliability Engineer login. So I log in, grant permission for the user, and I'm redirected and authenticated into my OpenShift installation.

Now that we've configured authentication and can safely log in, let's see what else we can do with OpenShift Cluster Manager. Remember those machine pools we created during installation? Let's scale them, and for added fun, let's create a new one with an entirely new instance size. When we scale, the node count values offered are based on our quota: given the instance size and the quota we were granted, Cluster Manager determines how many nodes we can add. Let's add another node per zone, which gets us six worker nodes. Next, let's create a new machine pool, perhaps for some special workloads we want to onboard. When creating a brand-new machine pool, we can select from the available AWS instance sizes. Let's create one for memory-optimized instances. OpenShift Cluster Manager will present the number of instances based on the size you choose and your remaining quota.
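The per-zone scaling arithmetic above can be sketched in Python. This is a toy model, not the real OpenShift API: the point is simply that a multi-zone pool with n replicas per zone across three zones yields 3n workers, and that new pools can carry labels for later scheduling.

```python
from dataclasses import dataclass, field

# Toy model of per-zone machine pool scaling as shown in the demo.
# Class and field names are illustrative, not the OpenShift API.
@dataclass
class MachinePool:
    name: str
    zones: int
    replicas_per_zone: int
    labels: dict = field(default_factory=dict)

    @property
    def total_workers(self) -> int:
        # Multi-zone pools scale in steps of one node per zone.
        return self.zones * self.replicas_per_zone

# The default pool started with one worker per zone across three zones.
default = MachinePool("default", zones=3, replicas_per_zone=1)

# Adding one more node per zone, as in the demo, yields six workers.
default.replicas_per_zone += 1
print(default.total_workers)  # 6

# A new pool for special workloads, labeled for later scheduling.
memory_pool = MachinePool("memory-optimized", zones=3, replicas_per_zone=1,
                          labels={"memory": "true"})
print(memory_pool.total_workers)  # 3
```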
I'm going to add just one instance per zone, so three more workers, and I'll label them with memory=true for later use. We can now log back into OpenShift and review the results: the memory pool is still being created, the worker pools are now larger and being added to, and of course the machines are being provisioned.

There are some other great features built into OpenShift Cluster Manager to help you manage your dedicated clusters. You can add additional notification contacts, so people are advised when work is done or issues arise. And you can open support cases directly from OpenShift Cluster Manager: it uses your account, logs you straight into the Red Hat support system, and makes creating a ticket simple and easy so you can get on with your day. There's also comprehensive logging for your managed cluster, with a history of all events that take place. It's sortable, searchable, and downloadable, putting all the information you need to troubleshoot issues or share details right at your fingertips, directly from the UI.

Finally, when it's time to delete your cluster, that too is handled from OpenShift Cluster Manager without any problems. Simply choose Delete Cluster, confirm the cluster name, and let the process kick off. Deletion in my AWS region took just a few minutes, but I've still sped that up for convenience. Once the cluster is deleted, you keep a history of all the clusters you've built, your subscription and quota counts are updated as well, and you're ready for your next cluster.

And that concludes this demonstration. I hope it was useful and helped show you just how easy it is to deploy a managed OpenShift cluster with Red Hat OpenShift Managed Services for AWS. Thanks for watching.