My name is Sai Vennam, and I'm a developer advocate with AWS. A lot of you might know me from my days at IBM; I'm the guy that does the lightboard videos. Well, I was a big fan of OpenShift back then, and it's still following me. I just can't shake this thing. But all jokes aside, I am a big fan of OpenShift, and today we've got an exciting demo and presentation ready for you. We're going to be talking about app modernization on AWS, with a focus on ROSA, that's Red Hat OpenShift Service on AWS, and I have a couple of really cool demos lined up. But first, I'm going to pass it to my co-speaker Javier to introduce himself, and he's going to do the first part of the presentation today.

OK, thank you, Sai. My name is Javier Naranjo. I'm doing business development for AWS in Iberia. Today we are going to talk about how we see modernization, based on the experience of our customers, and also explain a little bit about ROSA: how we position it, what the added value is, and so on. First of all, let's take a look at what our customers are asking for. When we face projects and partners, there are four main things our customers ask for. First of all, they want to build applications without taking so much care of the infrastructure, because what they are really looking for is someone who manages that infrastructure for them. Of course, they want to scale quickly, because when we talk about modernization, it's not only about creating applications and features fast; you also need to scale them fast. And everything they want to do, and everything we do at AWS, revolves around security, so everything we build has to take security into account as well. When we build those applications, we have to look at the key features of a modern application, what we also call a cloud-native application.
And that is: it scales to millions of users and has global availability, because every single feature we deliver needs to be available for everybody inside a company, or, if we are talking about a customer-facing service, for our end users, and it needs to be able to respond in milliseconds. Every time we talk about modern applications, we are talking about two things. First is velocity, how quick you are; the other is how you are saving costs. That is key when we talk about modern applications. And of course, a cloud-native application still works with a lot of data, so we need to take that into account. The white boxes with the blue letters that you see there are the benefits of modern applications. So luckily, we are answering our customers' questions, because with each of the features you see there, we are delivering on the outcomes around them: return on investment, cost, TCO, and increased business agility. The customer experience, the user experience, is also key for us, as is increasing developer efficiency, because we provide common tools they can use to create their applications. And they are able not only to build quickly, but, since we are working along DevOps lines, also to operate the application on day two in a modern scenario. What kind of workloads are our customers running on containers? Applications, of course, and you see the kinds there: web interfaces, backends, IoT data processing.
But beyond that, we also see things around operational expertise, where we focus on platform as a service and the like, in which we provide CI/CD, infrastructure as a service, and solutions around management, security, and governance of what we are creating. Of course, sometimes when we modernize we are doing a lift and shift, not only modernizing, so we have to take care of moving legacy applications, even .NET or Linux workloads that our customers run on premises and want to use in a modern way in the cloud. And then everything around machine learning, because I think every modern company that is trying to create a modern solution is focusing on that kind of thing. What we see our customers doing when they modernize are the steps you see there. Sometimes they discover they are running something that is not useful anymore, because maybe just a couple of users are using it while they spend a lot of time operating it and building features nobody uses, so they retire it. Sometimes they find a software-as-a-service solution that is better for them, that is not expensive, and they use it to replace something they had been investing a lot of time in. Or they say: OK, I'm going to migrate before being modern, so they do the lift and shift. But the key point is that when you focus on modernizing, you can decide whether to replatform, in which case you will do it with containers, or to refactor, in which case you will do it with an event-driven architecture using our Lambda functions. You can see there that we have different solutions for containers, even hybrid solutions in which we give our customers the ability to run containers on premises as well as in the cloud.
And of course there is this last piece you see there, which is platform as a service. In this case, what we are releasing is the ROSA service, Red Hat OpenShift Service on AWS. This is because when we talk to our customers, most of the time the CIOs and CEOs say: I'm investing a lot of time building the platform I will use to create my applications, and I'm not focusing on my applications, on the added value I want to deliver to my customers and my service. That is something we need to fix, because if you build your own Kubernetes platform yourself, you are in trouble. When we are talking about the enterprise, we are not assembling IKEA furniture, taking different pieces and putting them together. Maybe we do that at home, but when you go to an office and look at the furniture there, it's professional furniture. The difference is key. It's just not something you need to build yourself if there are tools that already do it for you; you need to be professional about your business and use a professional tool, in this case a platform as a service in which all the pieces integrate. You all know OpenShift, so I'm not going to talk in depth about it here. But the key difference is that we are not doing a building-block solution, just trying to integrate anything and everything. So when we think about this, and about the alliance we have with Red Hat, it's key to understand that having all these pieces together and working in an integrated way really matters. What we offer is a natural roadmap: from the self-managed OpenShift solution that you can run even on premises, to running that same self-managed solution in the cloud, on AWS.
Red Hat was already selling its own managed service with OpenShift Dedicated, which was transparent to the customer about running on top of AWS. But the relevant thing now is that we have a new solution where everything is integrated: a jointly managed service. It's like OpenShift Dedicated with a new flavor, because AWS provides support for the infrastructure and the integration with that infrastructure, while Red Hat provides support for the platform as a service, for the solution running on top. And with that said, I'm going to pass it over to my colleague Sai, and we are going to see a little demo.

We'll be coming back to the slides, but it's time for the demo, and I just want to recap a little bit. We're going to dive into more detail on the way ROSA is operated and managed, but I think it's a model that works really well. And that's because, even looking at the roadmap presentation earlier, it's very clear there are a lot of complexities associated with managing OpenShift. Yes, with all of the tools and services that make OpenShift easier to run, a lot of it is easier than bare vanilla Kubernetes, but there's still a level of complexity. So right now I want to kick off a demo and show you what the ROSA experience looks like. It's an experience that starts in the AWS console. And for better or worse, I'll say that this is about the extent to which it's integrated into the console today; I think OpenShift users like OpenShift and OpenShift-style consoles and user experiences. So when you start in the console, you really just have to click one button to enable the service, and then immediately you'll notice it says download CLI. The first thing it'll do is take you to a Red Hat documentation page. Let's quickly check that the ethernet is connected here. I think we are good. Well, luckily I have it open in another tab.
Regardless, you can see here that it's pretty straightforward: it's a one, two, three, four step process. You download the CLI, you make sure that you're logged in, and I'm so confident that this demo is gonna work today. That's partly because ROSA has this amazing command called rosa init. What it does is essentially make sure that the system is ready to create a cluster and that you have the right quotas in place. I ran the command twice, but I just wanted to zoom in a little bit so you can see it right here. What it's doing is logging in with my username, validating the credentials, and making sure I have the quota and the policies in place. Now folks, if you're like me working with AWS, one of the things that probably trips you up is making sure all your IAM policies are in place, and I think this experience just ensures that all the permissions are set correctly. I'm feeling good, I'm feeling ready about this demo right now. So what I'm gonna do is create a cluster using the rosa CLI, and it's very straightforward. I will quickly point out the --sts flag. This uses the Security Token Service from AWS, which makes sure the service is created securely. Obviously ROSA needs a lot of permissions, and the cluster itself is gonna need to call AWS APIs on my behalf, so this makes sure all of that runs correctly. So let's name it rosa-demo-commons. It's too long. There we go, picked something a little shorter. We picked the OpenShift version. Now I'm gonna walk you through some of the different options it gives me. It's using the ARNs that I've already created. External ID: this is for if you have a special account where you need a custom account ID. Operator role prefixes are some more permissions, things for ROSA to have access to AWS. Here's a cool one: you can actually deploy ROSA across multiple availability zones.
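The setup flow above can be sketched as a few shell commands. This is only a sketch: it assumes the rosa CLI is installed and AWS credentials are configured, it needs a Red Hat account token, and it can't run offline, so treat it as a reference for the shape of the commands rather than a copy-paste script.

```shell
# Sketch of the ROSA setup flow described in the demo
# (requires the rosa CLI, AWS credentials, and a Red Hat token).

# 1. Log in with an offline access token from the Red Hat console
rosa login --token="<your-offline-access-token>"

# 2. Verify quotas, credentials, and AWS permissions are in place
rosa init

# 3. Create a cluster interactively, using STS-based credentials
rosa create cluster --sts --interactive
```

The --interactive flag is what produces the question-and-answer prompts walked through in the demo.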
When we talk about DR strategy, this is gonna be the best possible approach for a majority of customers: with a deployment across multiple availability zones, you get the high availability that a lot of industries and customers need. Keep going here; let's pick a region we wanna deploy in. Don't worry folks, we're almost done with this set of prompts. A PrivateLink cluster: say you wanted a cluster that's not exposed to the public internet, with workloads that run within the context of the cluster without needing to reach out or have services reach in. And if you do need to access certain services, there's a number of AWS services that have PrivateLink endpoints, which essentially make sure traffic doesn't go over public pipes. Going forward here: if you wanna install into an existing VPC; let's just say no, we'll create a new one. Customer managed key: if you wanna use a custom key for encrypting data, you can choose that right there. Choose the compute nodes; a lot of options here, let's choose m5.xlarge. Autoscaling: you can actually configure this here, so it'll automatically scale up the nodes in response to load, and there are some default parameters set there. Two compute nodes by default. Some CIDR block information. Then there's a question about whether you wanna encrypt etcd data. Etcd data is already encrypted, and by the way, this terminal is so cool: if I want some more information I can just hit the question mark and it'll tell me a little bit more about what exactly it's doing. So by default it is already encrypted; this option is additional encryption on top, and we'll go with the default there. Workload monitoring: this is for if you don't want Red Hat SREs to see your application logs. And there we go, it's gonna start creating that cluster. Now, we went through a lot of different options there.
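The interactive flow prompts for each of those options; the CLI also accepts them as flags. Here is a hypothetical example of what the equivalent one-line command might look like, with the cluster name, version, region, and node counts as illustrative placeholders (all of these flags exist on rosa create cluster, but check your CLI version's help for current defaults):

```shell
# Hypothetical non-interactive equivalent of the prompts above;
# name, version, region, and replica counts are placeholders.
rosa create cluster --cluster-name rosa-demo \
  --sts \
  --multi-az \
  --region us-east-1 \
  --compute-machine-type m5.xlarge \
  --enable-autoscaling \
  --min-replicas 3 \
  --max-replicas 6
```

This is the same kind of command the CLI echoes back after the interactive run, which is what makes the rerun-and-tweak workflow described next so convenient.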
What if I wanted to do that again in the future and change just one option? It gives me the entire command, so in the future I could just copy it, change it a little, and redeploy. I think that's one of the cool things about this kind of interface. All right, let's jump back to the console. Something I'm sure a lot of you are familiar with here is the Red Hat Hybrid Cloud Console, which lets you see clusters across a number of different environments. Let's click on the cluster itself, which launches into the OpenShift dashboard specifically. I created a cluster last week that I'm gonna use for the demo today; by the way, the one we just created, we can already see it in the console and it's installing, but we're not gonna wait for that today. We're gonna switch over to the cluster I've already created, and here we can see a few things. First off, this is what I was saying: although this is OpenShift running on AWS, it's the same familiar OpenShift console and experience that you know. Here we can see the cluster, and we can do things like updating the version. I won't go into too much detail, because again, OpenShift is OpenShift. But there is one thing I wanna quickly show. If we go to the add-ons tab, there are some really interesting things here. Today there are only three, but I think there's a lot of potential in what Red Hat and AWS can do together with these add-ons. For example, there's the cluster logging operator, and if we go into the configuration, we can see that I installed it saying I wanted to stream logs to Amazon CloudWatch and collect application logs and infrastructure logs. In fact, if I go to my CloudWatch console, of course I'll need to log in again because my session was invalidated, so we'll give that a second.
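On ROSA, the cluster logging add-on manages the forwarding configuration for you through its own settings page. For reference, in the underlying OpenShift logging stack the equivalent configuration is expressed as a ClusterLogForwarder resource with a CloudWatch output; the sketch below is roughly what that looks like in the self-managed logging operator (the region and output name are placeholders, and the exact schema varies by logging operator version, so check the docs for yours):

```yaml
# Sketch: forward application and infrastructure logs to CloudWatch.
# On ROSA the logging add-on generates the equivalent config for you.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cw                 # placeholder output name
      type: cloudwatch
      cloudwatch:
        groupBy: logType       # one log group per log type
        region: us-east-1      # placeholder region
  pipelines:
    - name: to-cloudwatch
      inputRefs: [application, infrastructure]
      outputRefs: [cw]
```

The application/infrastructure split in inputRefs is what corresponds to the "collect application logs, infrastructure logs" checkboxes mentioned above.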
So if we go to CloudWatch, we'll see that the logs have actually started streaming into CloudWatch from my cluster. Then let's go to Logs Insights; let's make sure I'm in the right region. Love it, the internet's working great. We'll pick all my log groups and run a quick query, and boom, we can see the logs coming directly from my cluster. For example, I can open up one of these and see that this particular container is running in rosa-demo and it's the controller manager. But there you go, that's the first part of the demo. I'm gonna switch back to the slides here, quickly talk about a few things, and then we'll kick into the next one. When we look at the number of options available on AWS for running compute as a whole, it really comes down to a matter of speed versus flexibility, versus whether or not the platform is opinionated. What we can kinda see here is that the most opinionated is gonna be a serverless approach, where really the only thing you're worrying about is application code. On the other end of the spectrum we have things like EC2, which has ultimate flexibility; in fact, you can run OpenShift yourself on EC2 if you want, and a number of customers do that. But when we talk about containers specifically, when we look at something like ECS or EKS, there's a number of solutions on AWS that support things like automating delivery with EKS Blueprints, things like App Mesh for service-level networking; there's a container registry, there's a number of solutions. But with ROSA, what you get is a turnkey application platform; it's kind of baked into the platform itself. So I wanna quickly talk about what that means for users. Again, with EKS we saw that landscape slide: tons of solutions in the CNCF. You can use GitHub, Jenkins, you can use Istio, you can plug and play with all of these solutions and get exactly what you're looking for. And this is very powerful.
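For reference, the quick query run in the demo could look something like the following in Logs Insights. The field names are an assumption based on the Kubernetes metadata the logging forwarder attaches to each record; adjust them to whatever your log groups actually contain:

```
fields @timestamp, kubernetes.namespace_name, kubernetes.container_name, message
| sort @timestamp desc
| limit 20
```

Filtering on kubernetes.namespace_name is how you would narrow results to a single project such as the rosa-demo namespace shown on screen.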
But for many customers, and I think this is a very powerful image here, you flip the switch and realize Red Hat provides baked-in, supported, blessed solutions for basically all of the things you see up there, and they're based on open source technologies. I think this is a very powerful message for customers that need that opinionated approach when building a complete Kubernetes application platform. One thing I really wanna talk about here, we've mentioned it a few times already, is the support model: who's responsible for what? Because when you're managing OpenShift by yourself, even on AWS running on something like EC2, you're actually responsible for quite a bit: the control plane, the data plane, as well as the compute itself. Support and billing will come through Red Hat. The difference with ROSA is that you have Red Hat SREs who actually manage that compute, data plane, and control plane; they're in the back end making sure things run smoothly. When you open a support ticket, you can open it against AWS or Red Hat, and billing is gonna come through AWS. I think it's critical to understand that ROSA is this fully managed approach to running OpenShift. Now, of course, today we're talking about modernization of applications. What we see here is that there's a number of services on AWS; really, this is the bread and butter of AWS, it's what we do well. We have so many services that let you do everything from storage to CI/CD to storing containers in a centrally managed service. A lot of these things customers may choose to run within the cluster, but when you're looking at a bigger scope, when you're running multiple clusters in production, you may wanna have a managed service that handles some of these things centrally. And that's what I wanna talk about in my next demo.
I wanna show exactly how easy it is to work directly with AWS from within your clusters. So going back to my cluster here, let's see, I was actually on the right page. I'm gonna open the console, launching into my OpenShift cluster, my ROSA cluster on AWS. I'm gonna go to Operators, go to OperatorHub, and just search Amazon. What we see here is a number of operators for AWS Controllers for Kubernetes (ACK). This allows us to operate AWS from within OpenShift, which is pretty powerful. Let me scroll down and find one of these, let's say RDS, and hit install. We'll see that it gives me the option to go through here and install this operator into my cluster. And notably, and I think OpenShift does a great job with this, it shows me the APIs that become exposed: what can I actually do once this operator is installed? So let's go to my installed operators. I've already set up the S3 operator, and I wanna create a bucket, so we'll say create bucket. We'll name this bucket commons-demo-one, and then the name of the bucket itself actually needs to be completely unique; that seems random enough. We'll hit create, keeping all the other defaults. All right, let's take a look. We'll click the bucket and make sure there are no errors, make sure I didn't mess anything up: resource synced successfully. Jump to my S3 console, I'm in the right region already, and there we go. Just like that, I was able to get a bucket created using a Kubernetes-based artifact; that is, I'm able to work with AWS using a CRD within Kubernetes. I think this is very powerful. And of course, if you wanted to take the information from that deployed resource, of course I lost it, here we go, and say you wanted to pass something like the location into a config map or secret so that your containers could access it, you can do that fairly easily. It's just another CRD, just another resource you deploy into Kubernetes.
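The console clicks above boil down to two ACK custom resources: a Bucket for the S3 bucket itself, and a FieldExport to copy a value from the bucket's status into a ConfigMap for workloads to consume. The sketch below shows roughly what they look like; the names and the status path are illustrative, the bucket name must be globally unique, and the exact schema depends on your ACK controller version:

```yaml
# Sketch of the ACK resources behind the console demo above.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: commons-demo-one
spec:
  name: commons-demo-one-8f2a91    # placeholder; must be globally unique
---
# Copy a field from the bucket's status into a ConfigMap for pods to read
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: bucket-location
spec:
  from:
    path: .status.location          # assumed status field for this sketch
    resource:
      group: s3.services.k8s.aws
      kind: Bucket
      name: commons-demo-one
  to:
    kind: configmap
    name: bucket-config
```

As the demo says, these are just more resources you apply to the cluster, so they fit naturally into GitOps pipelines alongside your application manifests.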
I really wanted to show this today because it demonstrates that with OpenShift running on AWS, you're able to actually integrate with and operate services running within AWS as well. All right, that's my demo for today. I'm gonna pass it back to the slides; Javier's got a couple more that he wants to share with you.

Okay, so that was really good content, a really good demo. Finally, we want to talk about the consumption-based pricing. Of course, when we are talking about self-managed OpenShift, you will see that the price can be much cheaper when you go to a managed service solution. You have to dimension it in the proper way, and you need to understand what you are going to pay for and how each part of the price maps to what you see there. First, there are the ROSA service fees, and these fees are coupled to the worker nodes: for every four vCPUs, you pay a fee on that worker node infrastructure, so it depends on the kind of instance you are using. If your workloads need more capacity on those nodes, you pay more; it depends on how you dimension it. You also have to pay a small fee for the control plane, and you will see in the public pricing that it's a tiny price. And then of course you pay for the infrastructure you use, for the control plane and for the workers. The good news is that, the same way you can pay for one-year or up to three-year subscriptions with Red Hat, you have a similar model here: if you commit to ROSA for one to three years, the price changes, and the longer you commit, the cheaper it gets. The pricing is really simple.
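That per-4-vCPU worker fee lends itself to a quick back-of-the-envelope calculation. Here is a minimal sketch: the $0.10 hourly rate and the 730-hour month are illustrative placeholders only, not real AWS pricing, so plug in the numbers from the current ROSA pricing page.

```shell
# Back-of-the-envelope ROSA worker fee: nodes x (vCPUs/4) x rate x hours.
# The rate here is a made-up placeholder, NOT actual AWS pricing.
rosa_worker_fee() {
  nodes=$1; vcpus_per_node=$2; rate_per_4vcpu_hour=$3; hours=$4
  awk -v n="$nodes" -v v="$vcpus_per_node" -v r="$rate_per_4vcpu_hour" -v h="$hours" \
    'BEGIN { printf "%.2f\n", n * (v / 4) * r * h }'
}

# Two m5.xlarge workers (4 vCPUs each) for a 730-hour month at a
# hypothetical $0.10 per 4 vCPUs per hour:
rosa_worker_fee 2 4 0.10 730   # prints 146.00
```

The same shape of calculation applies to the control plane fee and the EC2 infrastructure cost underneath, which is why the worked samples mentioned next are easy to reproduce.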
If you go to the internet and search for ROSA pricing, you will see different samples with different control planes, different worker nodes, and so on, and it's really useful. My last slide, can you advance it for me, because this one isn't working, please? There we go. It's just to talk about the alliance. This is not something we are just creating now for ROSA; Red Hat and AWS have been working together for many years now. There are a lot of workloads and different solutions on top of AWS; Red Hat is running a lot of things on top of us. In fact, you have to understand that the first managed service they created, OpenShift Dedicated, ran directly on top of AWS, and what my colleagues and peers from Red Hat say is: you know, the first place we test new releases, the first cloud in which we test everything, is AWS. So I think it's a natural step in the growth of a managed service to come to AWS with ROSA, with that integration of OpenShift in a jointly managed service with AWS, so that we can provide a better service and a better solution to our customers. And that's all we have for today. We hope it was really interesting for you, and hopefully we'll see you around during the next days here. Thank you. Thank you.