Hi, my name is Andy Cathrill. Today I want to talk to you about Red Hat OpenShift Service on AWS, also known as ROSA. Before I get started, a little bit about me. I'm the Senior Director of Product Management looking after managed services for OpenShift. On the right, you see a little Texas flag — so as you can tell from my accent, I'm from San Antonio. And the two Red Hat fedoras are there because I'm a boomerang: this is my second time at Red Hat, after a five-year gap. So glad to be back in the community. And the all-important technical stack: Fedora on my laptop, Bash, Python, unashamedly Vim, and slightly ashamedly Chrome. My contact details are here — any questions about the slides afterwards, please reach out to me on email or Twitter.

So, Red Hat and managed services — that's not something you expect, so we've got to dig into that. Why is Red Hat doing managed services? We'll talk about Red Hat OpenShift Dedicated, and then the new Red Hat OpenShift Service on AWS, and then we'll go into a demo. We'll be light on slides, light on marketing, I promise, and heavy on the demo. As I said, when we talk about Red Hat and managed services, people often look at me funny: "I thought you guys did software." Well, yes, we do, but we've also been doing managed services since back in 2015. In fact, Red Hat, along with Google, are the only companies that have been operating Kubernetes as a public service since 2015. So we have a lot of history with our own managed service and with our partner services: with Microsoft Azure on ARO, and with IBM on their ROKS offering, now Red Hat OpenShift on IBM Cloud. And we'll talk later about our recent announcement with AWS for the Red Hat OpenShift Service on AWS. So why managed services? We're known for our upstream contributions; we're known for development. Why would we work on a managed service? There are only two reasons.
The first one: it allows us to deliver better software to the community and our customers. Now, my marketing colleagues will say, don't talk about dogfooding, don't talk about drinking your own champagne — and honestly, depending on the day, it could be either, but that's the whole point. We see the software as a service, as you would, before we deliver it to customers. If we can't operate it at scale and reliably, then how can we expect our customers to? Our SREs work closely with support and engineering — it's one big team — so we get to feed that experience back into engineering to improve the product. When we operate a managed service, it's not a different OpenShift; it's the same OpenShift bits that you'd be running if you deployed it yourself. But this virtuous cycle means that you get the benefit of our operational expertise and our experience, and together we build a better product.

So I mentioned two reasons. The first, again, is to let us deliver better software to our customers. The second is to let our customers deliver better software to their customers. If you can spend more time on innovation and development and less time on operations, that's going to improve your products. You don't want to spend time at two o'clock in the morning on a Saturday doing an upgrade, or responding to an alert at 11 p.m. on a Thursday night. Let us do 24/7 operations, and you can do development nine to five — or maybe 10 to 10, you know how developers are. The goal here is to take the burden away from you: we carry the operational load so you can develop better software for your customers. One of the great things to me about OpenShift is that it's OpenShift no matter where you choose to run it. Whether you run on-prem, on IBM's cloud, on Azure, Google, or AWS, it's the same version of Kubernetes, and it's the same stack on top with Prometheus, Grafana, et cetera.
So it's the same stack with the same lifecycle, the same developer tools, and the same operational procedures no matter where you run OpenShift. Sure, Kubernetes is Kubernetes, right? Every cloud vendor has their own native service with upstream Kubernetes, but it's a different version, a different patch version, a different lifecycle. And that's just Kubernetes — you've got to put everything on top to build your application platform; you need more than just the orchestration layer. With OpenShift, everything comes in the box. Sure, you can take out components — not use our logging, use your own, use different monitoring — but you have everything in the box, pre-integrated, supported together, lifecycled and managed together. So you choose where you want to deploy. And nowadays everyone's hybrid, whether that's some on-prem, some on a public cloud, maybe multiple clouds. You're going to have more than one environment, and if we can give you consistency with OpenShift, it's going to improve your developer and operational efficiencies.

With OpenShift, you can deploy and manage it yourself, or you can ask us or one of our many partners to manage it for you. If you're running on Azure, for example, there's ARO — Azure Red Hat OpenShift — currently managed and supported jointly by Microsoft and Red Hat. IBM runs what was known as ROKS, Red Hat OpenShift Kubernetes Service, now Red Hat OpenShift on IBM Cloud. On Google and AWS, we've had Red Hat OpenShift Dedicated, and I'm going to talk in a few minutes about the new Red Hat OpenShift Service on AWS. So, Dedicated — I want to talk a bit about this, because our new Red Hat OpenShift Service on AWS is really built on top of Dedicated; that's a great foundation. Dedicated gives you an OpenShift environment that's fully managed for you. You pick your platform — is that Google or AWS?
Is it gonna be your account we deploy to, or an account that we create and then manage and bill you for the infrastructure? Which region do you want? Multi-AZ or single-AZ? You manage it through OCM, and we do all the management for you. It's not just a managed control plane — it's managed everything: the worker nodes, the upgrades. If something goes bump in the night — at two o'clock in the morning something breaks and an alert fires — our SRE team will respond to that alert and do the mitigation, so by the time you get up, the issue is resolved. In terms of upgrades, we have a very frequent upgrade schedule — you can configure this; you just define the upgrade schedule — probably more frequent than if you were doing them yourself, because you don't have to worry about: is it gonna work? Who's gonna fix it if it doesn't? Our entire fleet is typically not more than six or so weeks behind OCP. New versions are sometimes available the same week as an OCP release, and we keep our customers in a window — we've got an N and N-minus-one release philosophy to give you time to test — so you'll have an always-patched, always-updated, fully managed stack.

Now, to build OpenShift Dedicated — and of course everything we build is on top of open source — we developed something called Hive. Hive is an open source project that delivers an API-driven cluster provisioning and management system. Hive is the foundation of this platform, along with OCM, OpenShift Cluster Manager. It has three resources that it manages: cluster deployments, which — in true Kubernetes fashion — declaratively describe a cluster that we deploy; machine pools, where we have a notion of managed worker pools on each cluster; and sync sets, which allow us to deploy managed resources to the clusters.
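As a rough illustration, a Hive SyncSet is just another Kubernetes object. Here's a minimal sketch — the ClusterDeployment, MachinePool, and SyncSet kinds live in Hive's `hive.openshift.io/v1` API group, but the names, namespaces, and payload below are made up for the example; check the Hive project docs for the full schema:

```yaml
# Illustrative SyncSet: push a ConfigMap to one managed cluster.
apiVersion: hive.openshift.io/v1
kind: SyncSet
metadata:
  name: example-managed-config      # hypothetical name
  namespace: my-cluster-namespace   # namespace holding the ClusterDeployment
spec:
  clusterDeploymentRefs:
    - name: my-cluster              # the ClusterDeployment this applies to
  resources:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: managed-settings
        namespace: default
      data:
        log-level: info
```

Hive's controllers reconcile this centrally, so the same master set of objects can be kept in sync across every cluster in the fleet.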
Think about having a master set of Kubernetes objects — maybe it's operator configurations, maybe it's config maps — that are deployed on each of the clusters; we manage that centrally from Hive. So it handles the installation, and it handles the all-important day two. What happens after you're installed? Everybody can install Kubernetes — it's easy, it's point-and-click on most platforms — but what really matters is day two, the ongoing care and feeding of that platform, and that's where we use a combination of operators and the infrastructure provided by Hive and OCM.

As I mentioned earlier, we have an offering on AWS, but it's really been a traditional software sale. Like with any other Red Hat product, you'd call your local account team, get a demo, they give you a quote, you do a purchase order, there's an invoice, you get the software, you deploy — not exactly rapid. If you want to deploy a new cluster, or onboard new customers in a region, and you haven't got subscriptions already, there's gonna be a lag. It's 2020 — software shouldn't be bought that way anymore. What you want is on-demand software: you want a cluster, you build a cluster. What we're used to today is not the legacy world of invoices and purchase orders; you want on-demand, you want consumption-based billing. And what we built with the Red Hat OpenShift Service on AWS, or ROSA, is just that. It's a native service, procured through AWS: you go into the console — we'll see it in a second — procure the service, and it's billed through AWS with consumption-based pricing. So if you want to deploy a cluster right now and run it for three or four hours to do a test — say you're testing something on 4.5.13, how does it look, maybe some destructive testing — you create the cluster, you deploy it, and you burn it down as soon as you want. You'll be billed only for the worker nodes that you're deploying,
from a subscription point of view, for as long as you run them. If it's just for an hour, it's just for an hour — flexible billing. And it shows up on your AWS bill, just like any other service, integrated into AWS. For support, call us, call AWS — we'll work together. In terms of the user experience: today this is in a preview, so you won't find it in the console, but we have a preview program we'll talk more about later. You'll see the Red Hat OpenShift Service, you'll click on that link, and you get a landing page — this is one of the early designs from before GA. You'll download our command-line tool, which we'll demo in a second, and you'll be able to deploy and manage your clusters through the command line or through OCM. Here we've got a screenshot — we'll see this live in a couple more minutes. And then coming next year, there'll be a cluster-creation workflow within the AWS console. So today, you're gonna be in OCM or our CLI, or working with our partners at AWS on creation through the AWS console. We're also adding better integration with IAM: you'll already see pod-based identity integrated in 4.6, and we're working on the ability to sign on to this managed cluster using your IAM credentials.

So let me go into a demo. We're gonna start off in my terminal window. I'm running Fedora, and I have a command-line tool called moactl. moactl is our tool for managing the clusters. Now, that's gonna be renamed in the future, because we've recently changed the name of the service to Red Hat OpenShift Service on AWS — so by the time you get your hands on this, you'll be typing rosa, not moactl. moactl is our tool for provisioning; in the background you'll also see OCM, and we'll look at that again in a second. I've already configured my AWS credentials on my workstation, so my machine already knows about my Amazon ID as well as my Red Hat ID. Let's have a look.
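For reference, that setup looked roughly like this — a sketch using the preview-era CLI, so the exact flags may have changed with the rename to rosa, and the token placeholder is just that:

```shell
# Configure AWS credentials the usual way (standard AWS CLI)
aws configure

# Log in to OCM with a Red Hat offline access token
# (generated from your account on cloud.redhat.com)
moactl login --token="<your-ocm-token>"
```

Once both identities are in place, every moactl command talks to OCM on your behalf.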
So I've logged in, I've got my default region, which is us-east-1, and there are login and logout commands I can use to log in and out of OCM. From here, everything I do with this CLI is communicating back to OCM to do the work. So let's look at the commands I've got. I'm gonna create a cluster, so let's decide where I'm gonna deploy it and what version I'm gonna deploy. I'll look at what regions are available, and this shows the combination of the regions I have access to with my AWS account — so if I've got a region where I don't have permissions or credit to deploy, it won't appear, nor will one where we're not running the service. In terms of the versions we deploy, we have a curated list. Our goal is to have new versions available as soon as possible, but there will be a slight lag after an OCP release while we do extra validation before it goes live — we aim for that to be about a week. We support N and N-minus-one, so at this point 4.5 and 4.4, until 4.6 comes out.

So let's do a deployment. I'm gonna say I want to create a cluster; -i is gonna put me into interactive mode. I'm gonna give it a name. Do I want multiple AZs, yes or no? I'm gonna stick with no, just so I don't burn any more resources. Which region? I'll stick to my home region, us-east-1. Which version? Then I pick the instance size for my compute nodes, and how many worker nodes I want — we have a base of two. If I want to configure the CIDRs for the machine, service, and pod networks, I can do that; I'm just gonna leave those as defaults. And is it a private cluster? That would mean the API endpoint is not accessible on the internet. I'm gonna say no, just so I don't have to get into an instance connected to the VPC to reach the OpenShift API.
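Put together, that interactive session looked something like the following sketch — the cluster name and answers are just demo values, the prompts are abbreviated, and the subcommand names are from the preview CLI:

```shell
# Kick off an interactive cluster create
moactl create cluster -i
# ? Cluster name: demo
# ? Multiple AZ: No
# ? AWS region: us-east-1
# ? OpenShift version: 4.5.13
# ? Compute node instance type: m5.xlarge
# ? Compute nodes: 2
# ? Machine/Service/Pod CIDRs: (defaults)
# ? Private cluster: No

# Then watch it come up
moactl list clusters
moactl describe cluster demo
```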
So the cluster will now start being created. It's telling me that the cluster identifier with that name is being created, and I can do moactl list clusters to look at the clusters being created. You'll see here there are a few details that are blank — the external ID, the API URL; we're not gonna see those until the cluster is created. Let's have a quick look: if I do a describe cluster and give it the name or the ID, I'll see those details once it's been provisioned, or see the status. Right now it's preparing the account — creating the appropriate users and roles that we're gonna use for the provisioning — and that's gonna run in the background. Now, an OpenShift install takes 20 or 30 minutes, and I'm doing this live, so here's one I created earlier. I'll say moactl list clusters, and it shows me which clusters are already deployed in my account. Here we've got one in the ready state, deployed this morning, and one that's pending — still installing. I'm gonna leave that one running in the background.

Actually, let me pop over and see this in OCM — I should see them there as well. And I do: one cluster installed, and one in the installing state. If I clicked into that one, once the install kicks off I'd be able to see the installer logs; right now it's still creating the DNS entries, the accounts, et cetera. So let me go back into the command line, we'll look around there, then we'll come back into OCM. If I describe that cluster, you'll see it has filled in some blanks: we've now got the external ID, the API URL, and the console URL. So I should be able to hit that link and log in. But how do I log in? I'm gonna have to create and attach an IDP, or just create a user. If I look at my command-line options, I have a create idp command.
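The IDP and admin-user commands I'm about to walk through look roughly like this — again a sketch with preview-era subcommand names, which may differ in today's rosa CLI:

```shell
# Attach an identity provider interactively (GitHub, Google, OpenID, ...)
moactl create idp --cluster=demo -i

# List the IDPs already configured on the cluster
moactl list idps --cluster=demo

# Or create a break-glass cluster-admin user with a generated password,
# useful if you have no IDP to configure
moactl create admin --cluster=demo
```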
I'll pass it the name of the cluster for the ID and put it into interactive mode, and that will allow me to attach an IDP — whether it's GitHub, Google, GitLab, OpenID, et cetera, I can go through those flows to create one. Now, I already have one, so let's have a look. Let me list the IDPs, again with my cluster ID, and I'll see that I've already created an IDP connected to this cluster, using GitHub for auth. I also have the ability to create an admin user, outside of this IDP, which lets you log into the console if you don't have an IDP you can configure. The way that would look: I'd say create admin and pass the name of the cluster, and it would create a cluster-admin user with an auto-generated secure password.

So let's have a look back in OCM, at the cluster. There's my demo cluster. If you've seen Dedicated before, this is a very similar view: I get to see the resource utilization, monitoring, any alerts that are firing, and the configuration for the IDP — again, I can do that on the command line or through OCM. From a networking point of view, if I wanted to take this public service and make it private — move to a private Kubernetes API — likewise with my application routers, it's a single click, and I can add more application routers. On support, it allows me to set the contacts. There's no opt-in for management: the moment you create a cluster here, it's managed by us. Our SREs will have seen this cluster come up, and they're now monitoring it. Any communication we need, we can go through support, and we have notification emails that we can send you as well. Maybe it's not just you who wants to get the email — you've got different users, groups, admin teams who want access.
You can then add notification contacts here so they get the appropriate alerts — maybe questions we have about the cluster, maybe notifications about issues, upgrades, et cetera. We've got one more tab that you'll see in early December, with the upgrade schedule: a user interface that lets you schedule your cluster upgrades. So you could pick: I want to upgrade every week, or I want to upgrade to 4.5.13 a week from Saturday at two o'clock in the morning. Rather than going through tickets as we do today, we're adding a UI with full automation so you can point, click, and schedule your upgrades, and see the status of an upgrade that's been scheduled.

So what does this look like when it's running? This is probably the most disappointing part of the demo, because once it's installed, it's just OpenShift. There's nothing different here; there are no special features on this managed service, other than the fact that at two o'clock in the morning we're the ones who get the call. We're the ones doing the upgrades, the patching, the monitoring — we're the ones handling the SRE work. It's still the OpenShift that you know. There may be some extra guardrails we put in place — for example, we have some admission controllers that block certain sensitive operations: we don't want you destroying the control-plane nodes, since we have an SLA and we're managing those, so we want to make sure they stay up. Other than that, it's gonna be the OpenShift you're used to. In terms of how we do this — and we've got some links in the slides — the tooling I showed, right at its core, is open source. There's a link on the slides, which we'll see in a couple of minutes, for moactl, so you can look at it and download it. To get access to run the service, your Red Hat account has to be authorized, because we're still in a private beta.
So if you want access, there are a couple of links for you at the end of the presentation. Also, my email is there — spam me; we'd love to get more people on board. All we'd ask you to bring is an Amazon account that you can use — because we're not going to pay for your infrastructure — and to give feedback on the service. We'd love to have more people getting hands-on and trying this. Again, it's very quick and easy to spin up a cluster — spinning up a cluster for an hour costs about as much as a latte, right? If you run it for longer, it's obviously more, but it's very quick, cheap, and easy to do this hourly-billed OpenShift.

So let's pop back into the command line and see where we are. Okay, the second cluster is installing, so let's have a quick look. I can ask for the install logs for the cluster and see that log — it's the same log you can also see in OCM. So if you really want to watch the install going through, you can. Other commands we have: you're going to need the oc command — otherwise, how do you access a cluster, short of the GUI? So we have an option to download the OpenShift clients; whether I'm running this on Fedora, another Linux, Windows, or a Mac, we've got the same features in the CLI, and it will download the clients for you. We can also edit cluster resources. One of the interesting commands we have — which maybe isn't interesting to you, but from an SRE point of view certainly is to us — is verify. If you're going to deploy OpenShift, there are lots of permissions required, obviously. So one of the things we can do is verify, whether it's the permissions or the quota. For the OpenShift client: do we have the client installed? For permissions: before we kick off the install, we actually validate that you have enough rights to do everything you need to deploy, and for us to manage the cluster for you.
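Those checks, plus the client download, can be driven straight from the CLI — a sketch, once more with preview-era subcommand names that may have changed since:

```shell
# Tail the install log for a cluster (same log OCM shows)
moactl logs install --cluster=demo

# Download the matching OpenShift client (oc) for your platform
moactl download oc

# Validate the AWS account before installing anything
moactl verify permissions   # do my credentials have the rights the installer needs?
moactl verify quota         # is there enough AWS quota (instances, VPCs, ...)?
```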
And for quota, in a similar fashion, we make sure that you have enough quota to deploy. What we don't want is, halfway through an install, to find out that you've run out of — what, the vCPUs that you can use for instances. Our goal is to make this as stable and reliable as it can be: we don't start anything unless we know we're going to be able to finish it. Other commands: obviously, we can delete clusters — I'm going to leave these be, because we're doing a demo. Describe, we saw. Edit, to edit cluster resources — maybe I want to scale the number of nodes. Logging in, logging out, et cetera.

I want to pop over to this — we have a link on the last slide, if you want to learn more. We announced this, along with our deepened relationship with AWS, earlier this year, and there's a "sign up to learn more" link. If you click here, we'll contact you and you can request to join the private beta program — or again, you can email me. The other link we've got in the last part of the slides is to the moactl GitHub repo. Everything's in there — again, we develop in the open at Red Hat. And you'll see that ten hours ago we renamed it to Red Hat OpenShift Service on AWS — ROSA — the brand-new name, getting ready for launch. So let me stop the screen share and go back to the slides. Again, we have two important links there: the GitHub link and the link to the FAQ where you can sign up. And on the third slide, I've bravely put my email address — handle it with care, it's out on the internet — email me if you have questions. We'd love to get you on board. So thank you for your time, thank you for your work in the community, and we look forward to hearing from you in the future.