Hi, my name is Roland Barcia. I work at AWS. I am the worldwide specialist director for containers, serverless, and integration, so I have a bunch of solution architects worldwide that report to me, and ROSA is one of our container services in AWS. It's a native service. I'm going to talk a little bit about our partnership with Red Hat and some of the things we're doing with managed ROSA on AWS. So let's do that. First and foremost, Red Hat and AWS have had a partnership for quite a while. If you look at the statistics (I don't like these podiums because I'm short, and they're always taller than me, like all the time), over 60,000 AWS customers use Red Hat software in some way. It started over 10 years ago with RHEL; a lot of software runs on RHEL, and OpenShift has been running on AWS for a while, with OpenShift Dedicated or different permutations. A couple of years ago, one of the pain points for customers doing OpenShift on AWS was billing: you had to either work in an account that Red Hat owned, or pay separate subscription costs and a separate AWS bill. So with ROSA, we've launched a fully managed service on AWS. It's a native service, with no AMIs to maintain and a 99.95% SLA, and it runs OpenShift: full, compliant OpenShift at the latest version. We keep up with the latest versions all the time. And there are a lot of advantages to that, right? One is the console. You can very easily spin up a cluster with the CLI, using the AWS console or the Red Hat console, or you can use scripts like Terraform or Ansible, any type of script. You just launch a cluster, and you get one bill: your Red Hat costs for the control plane as well as your consumption, all in one bill. You get unified support: you just call, and you get support from an AWS or Red Hat person, a single support organization. And we've built some integrations.
The original version of this talk, before they told me it was a lightning talk, was 20 minutes, and I have demos. So if you come to the AWS booth at KubeCon and you want to see some of the demos I have, you'll see them, and I'll talk about which ones you can see in the booth. We have a lot of ways at AWS to write cloud native applications; there's an internal joke that asks, how many ways can you write a cloud native app? And Kubernetes is hard if you look at the CNCF landscape. You're going to hear all day about the value of OpenShift, so I'm not going to go into it, but it gives you a validated stack of CNCF choices that work well together: everything from a service mesh based on Istio to OpenShift GitOps based on Argo. Everything's based on an upstream project, right? So you get a full stack. You get the operator catalog and OLM, which lets you and partners spin up software packages. We use OLM to package some AWS capabilities; I'll show a couple of examples of that in a few minutes. But you get this full stack, so I'm not going to get into it. I'm going to talk a little bit more about what it means to run in AWS. So one, what does the workflow look like? Basically, you spin up a ROSA cluster; it's one click, spin it up. You add worker nodes; we'll talk about something called machine pools, which is the way in ROSA that you map to machine types at AWS. Then you launch add-ons. There are add-ons provided by Red Hat or partners that you can launch as well, things like the OpenShift Data Science capabilities, for example. And then you launch your workloads. A very simple workflow. So, a couple of things from an integration perspective. One, we talked about provisioning worker nodes, so there's this notion of machine pools. If you've worked with AWS, we have hundreds and hundreds of machine types, and a machine pool gives you a configuration of an EC2 instance type: everything from general purpose to compute optimized to memory optimized.
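As a minimal sketch, that workflow maps onto a couple of `rosa` CLI commands. The cluster and pool names here are hypothetical, and exact flags can vary by CLI version, so treat this as illustration rather than a copy-paste recipe:

```shell
# Sketch of the ROSA workflow described above.
# Requires the rosa CLI and AWS credentials; names are hypothetical.

# 1. Spin up a cluster (STS mode integrates IAM via the AWS token service).
rosa create cluster --cluster-name demo-cluster --sts --mode auto

# 2. Add worker capacity with a machine pool mapped to an EC2 instance type.
rosa create machinepool --cluster demo-cluster \
  --name general-pool --instance-type m5.xlarge --replicas 3

# 3. Check provisioning status, then launch add-ons and workloads.
rosa describe cluster --cluster demo-cluster
```

From here, add-ons come from the Red Hat or partner catalog and workloads deploy the same way they would on any OpenShift cluster.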
It's a little bit small print there, but you kind of spin it up. And you can actually use things in AWS like Spot. Spot is a way to use AWS spare capacity at up to a 70% discount. There are fewer SLA guarantees with that, and you can switch to on-demand or reserved instances if the workload needs it. But Spot is something our customers use for great savings, and ROSA can take advantage of Spot instances. So that's one of the integrations we've built. Red Hat has storage options, but we've also built integrations with Red Hat for our EBS and EFS. So you can use the various storage options, and you can actually provision them in a Kubernetes-native way with something called controllers, which I'll talk about in a moment. Architecture options: I have diagrams I can show at the booth if you want to see them, but we have single-AZ public clusters and multi-AZ. With multi-AZ, the masters are distributed across three availability zones in a region, you have infrastructure nodes, and then you can have two or more worker nodes running within there. We also have private clusters, so the Red Hat control plane communicates privately with the clusters to do that management, and you can use egress options as well to talk to services outside the cluster. These are capabilities we've built. And then we've also just launched ROSA in Local Zones. If you're familiar with Local Zones, edge capabilities in various locations, we can run ROSA in some Local Zones as well. So these are different architecture options with ROSA in AWS.
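As a sketch of the Spot and storage integrations just mentioned: a Spot-backed machine pool via the `rosa` CLI, and an EBS-backed volume claim via the CSI driver. Cluster, pool, and claim names are hypothetical; `gp3-csi` is the usual OpenShift-on-AWS default storage class, but the name can differ per cluster:

```shell
# Spot-backed machine pool: spare EC2 capacity at a discount (hypothetical names;
# flags may vary by rosa CLI version).
rosa create machinepool --cluster demo-cluster \
  --name spot-pool --instance-type m5.xlarge --replicas 2 \
  --use-spot-instances --spot-max-price 0.04

# EBS-backed PersistentVolumeClaim provisioned Kubernetes-natively
# through the EBS CSI driver integration.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3-csi   # assumed default; check `oc get storageclass`
  resources:
    requests:
      storage: 10Gi
EOF
```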
IRSA, our IAM roles for service accounts architecture, uses a webhook: if you create a cluster using STS, which is the AWS Security Token Service, to integrate IAM into ROSA clusters, you can then use that same capability to map your service account to an AWS identity and call other services like S3, RDS, and DynamoDB, or call services like Lambda, and you can do that as well. Now, OpenShift has an observability and logging stack, but because OpenShift is extensible, we've built integrations to our own native CloudWatch. It's very much out of the box, very simple: infrastructure, audit, and application logs go right to CloudWatch. You can go to third parties like Splunk, and we've built that integration to make it simple. A lot of AWS customers use CloudWatch for one reason or another, combined with other observability tools. So that's one. Another integration we've built: after some time, customers running an observability stack with Prometheus and Grafana don't want to manage it, and they don't want to pay for those workloads. So we have direct integration between ROSA and Amazon Managed Grafana and Amazon Managed Service for Prometheus. The Prometheus gateway will push out to managed Prometheus and Grafana if you want to use a managed version of that service. Some customers that run various different technologies together with ROSA might want a unified managed stack, so we've built an integration for that as well. So once you get your security, your resiliency, and your observability built out, there are various ways to move workloads, right? One of the things we want to enable for customers moving to cloud who use OpenShift, maybe on legacy versions of OpenShift: the migration tools available with OpenShift work right with ROSA. As an example, there's a blog right on the Red Hat site that shows you how to use their migration toolkits directly with ROSA.
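The CloudWatch log forwarding described here is typically configured through the cluster logging operator's ClusterLogForwarder resource. A minimal sketch, assuming the operator is installed; the region, secret name, and exact schema fields depend on your logging operator version:

```shell
# Forward infrastructure, audit, and application logs to CloudWatch.
# Region and secret name are placeholders for illustration.
cat <<'EOF' | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cloudwatch-out
      type: cloudwatch
      cloudwatch:
        groupBy: logType        # one CloudWatch log group per log type
        region: us-east-1       # placeholder region
      secret:
        name: cloudwatch-credentials   # placeholder secret holding AWS credentials
  pipelines:
    - name: all-to-cloudwatch
      inputRefs: [infrastructure, audit, application]
      outputRefs: [cloudwatch-out]
EOF
```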
So you can do a one-step migration of legacy or existing on-prem OpenShift over to cloud if that's what you desire. And there are various migration tools, some open source, some from Red Hat, and some from third-party partners like IBM, right? There are tools available in the operator catalog right in ROSA where you can use the migration toolkit we just talked about for containers, and for runtimes, languages like Java, et cetera. And you can use the open source Konveyor project as well. Konveyor has a bunch of tools to move, say, legacy Cloud Foundry applications or Docker applications to Kubernetes, and you can run that in ROSA as well. And there are various other flavors of migration tools you can use. So another integration we built is this notion that once you're in ROSA, you might still want to use native services such as API Gateway, EventBridge, or DynamoDB. You might want to use infrastructure capabilities like FSx, EFS, PrivateLink, or Route 53, and integrate with VPCs. So we've built this notion of AWS Controllers for Kubernetes, ACK. This is a major open source project supported by AWS. Basically, it gives you a Kubernetes way of defining AWS native resources, so you can use tools like OpenShift GitOps, based on Argo, to provision YAML and create AWS native services. So if you go to Operator Hub, you'll actually see individual controllers. For example, you can spin up CloudTrail, and you can spin up other Kubernetes environments like EKS, or EC2 instances, with a controller. Some of our customers are building, with Kubernetes or with OpenShift, a control plane cluster that has GitOps tools like OpenShift GitOps, and they want to provision cloud native resources in different clusters and different clouds. This gives you the capabilities to do that as well. And this is supported by AWS. You can use third-party things like Crossplane as well.
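As a sketch of what that Kubernetes-native definition of an AWS resource looks like with ACK: an S3 bucket declared as a custom resource. The bucket names are hypothetical, and the ACK S3 controller has to be installed from Operator Hub first:

```shell
# Declare an S3 bucket as a Kubernetes resource via the ACK S3 controller.
# A GitOps tool like Argo CD / OpenShift GitOps can manage this same YAML.
cat <<'EOF' | oc apply -f -
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: demo-bucket
spec:
  name: demo-bucket-example-1234   # hypothetical globally unique bucket name
EOF
```

The controller reconciles this resource into a real S3 bucket in your account, so deleting or editing the YAML drives the corresponding change in AWS.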
As a matter of fact, some pieces of Crossplane use these controllers under the covers to generate that as well. But these are options that are supported as well. Some of the more recent launches are HIPAA compliance; new regions like Osaka and Jakarta; Red Hat GUI provisioning; some new enhancements with the console; Local Zones; FIPS mode; and bring-your-own-key encryption with AWS KMS. So, a very brief talk. I know it's a lightning talk, but you can take some pictures here and get some deep-dive videos and hands-on examples. We have a hands-on ROSA workshop as well. If you click there, you can actually go and provision a ROSA cluster, and we talk about these capabilities: setting up networking and a cluster, setting up your identity providers, the nodes and storage options, the different workloads we talked about, and setting up your CI/CD and observability. So we've built some labs for you to get hands-on. If you want to pass by the AWS booth (I know the Red Hat booth will probably show some ROSA as well), come by and ask some questions. And with that, I'll talk a little bit about pricing. Right, you have the ROSA service fee, you have the infrastructure fee, and you pay as you go or use reserved instances, the standard AWS models as well. So with that, I'll open it up for any questions with the time I have left. You can also come by the booth and check out some demos. [Answering an audience question] Yeah, so the operators right now, we validated them on ROSA. You know, you could probably make them work outside, but I think the key thing is the integration with IAM, so being able to do the security is kind of part of that as well. So ROSA is kind of the default model. They run on different Kubernetes providers in AWS as well.