Hello everyone. My name is Mohamed Nufal. I'm a cloud native apps architect, that's what they call us now, and I'm part of the global Black Belt team at Microsoft. Pretty cool name. I'm going to talk to you about Azure Red Hat OpenShift, where it is today and what's possible with it. My goal is just to explain in 10 minutes what you can do, what you can achieve with it. The other goal is to bridge Walid over to Azure from AWS. We need that passion, man. That's pretty cool.

Right. So if you have installed OpenShift in any public cloud, you're going to end up with an architecture more or less similar to the one on the screen. You're going to have your infrastructure nodes. You're going to have your master nodes fronted by a load balancer, so you can load balance the calls to your API server. You're going to have your worker nodes, or application nodes, with another load balancer so you can control the ingress flows to your applications. On top of that, you're going to have Azure DNS or another DNS provider so you can register the FQDN for your API server and the FQDNs for your applications as well. Then on top, you're going to have an identity provider, Azure Active Directory in this instance, or any other identity provider. And you're going to have a key vault of sorts so you can register your secrets and certificates and bring them into your applications.

So if you install this in Azure yourself, the whole infrastructure management piece belongs to you, so you're going to operate the whole thing. On top of that, you're going to do the onboarding for your developer workflows so developers can deploy applications to OpenShift. The only managed piece in that scenario is that you can open a support case with either Microsoft or Red Hat and say, hey, I have a support ticket, please solve it for me.
But that comes with the operational burden: you still have to operate the underlying infrastructure for the whole thing. So four years ago, after a lot of work with Red Hat, we decided to build Azure Red Hat OpenShift. Azure Red Hat OpenShift is a co-engineered, co-developed, co-supported, co-operated service between Microsoft and Red Hat. Even on the sales side, it's co-sold, right? Both teams go together and sell the service to customers. It's not Red Hat or Microsoft going alone.

With Azure Red Hat OpenShift, we take the whole infrastructure part and abstract it for you. You still have access to it, but it's all abstracted; it's fully managed. All the nodes that you saw, the load balancers, the DNS part, the scale set parts and so on, all of these are managed. You end up with just the parts you need: onboarding your developer workflows and the identity provider integration. That's all you need to do. The rest is operated and managed in the back end by Microsoft and Red Hat.

So let me give you some ideas of what you can achieve today with Azure Red Hat OpenShift. First, on the networking end, you have a choice between private clusters, so a private control plane and private ingress for your applications, or public, and you can mix and match between public and private as well, right? Then you have full cluster admin on the cluster itself. When we first did Azure Red Hat OpenShift, it was on 3.11, and we didn't give you admin access, because we couldn't give you so much access that you could break the SLA, right? We needed to meet the SLA, but that led to a lot of restrictions on our end. So we moved to an approach where we're not going to restrict a lot, we're going to open up, but we're going to tell you the things that you shouldn't do, right?
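As a quick sketch of how that private/public choice shows up in practice: API server and ingress visibility are create-time options on the `az aro create` command. All the resource names below are hypothetical, and I'm assuming the resource group, VNet, and subnets already exist.

```shell
# Sketch: create an ARO cluster with a private API server and private
# ingress (all names here are made up for illustration).
az aro create \
  --resource-group my-aro-rg \
  --name my-private-cluster \
  --vnet my-aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private
```

The two visibility flags are independent, which is where the mix-and-match comes from, e.g. a private API server with a public ingress.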
So you can have the admin access you require on your cluster to do the things admins need to do, while at the same time getting a fully managed service that is fully supported by both parties.

You can bring your own virtual network, which means you own your ingress and egress flows. You can ingress your traffic through a web application firewall, you can deploy your own routers, and so on; do whatever you like with your ingress. And you can control your egress flows as well. A lot of enterprises we work with route their egress through a firewall, for instance, so they can audit all their traffic and block traffic that shouldn't be allowed out to the public. That's something you can achieve because you have full access to the virtual network.

We have multiple availability zone support. If you need it, you can spread your nodes across three different availability zones, and then you move from a three-nines SLA to a four-nines SLA. That's what you can achieve with availability zones.

You can also bring your own identity provider. We recommend Azure Active Directory, but if you have your own identity provider, like Okta or some other one, you can bring it and integrate with it. We don't force anything there.

You can choose your desired billing model as well. By default, you go for pay-as-you-go, which is the on-demand model. You also have the reserved model, where you say, hey, I'm going to pay upfront, and you get a discount for whatever you committed to. And then there is the pretty cool model, the spot model. With spot, you bid on the unused capacity in Azure. In each data center we have some unused capacity, so you bid on it, and that bid can give you 80% to 85% savings on the on-demand compute price.
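On the cluster side, spot capacity in OpenShift on Azure is requested per machine set, through the `spotVMOptions` field in the Azure provider spec. A hedged sketch, with a hypothetical machine set name:

```shell
# List the worker machine sets first; the name below is made up.
oc get machinesets -n openshift-machine-api

# An empty spotVMOptions block switches new machines in this set to
# Azure spot instances at the default max price (the on-demand price).
oc patch machineset my-cluster-worker-spot \
  -n openshift-machine-api \
  --type merge \
  -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"spotVMOptions":{}}}}}}}'
```

Existing machines aren't converted in place; you'd scale the set so that new, spot-backed machines come up.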
The caveat is that once we need that capacity back, we're going to evict your node, right? Which means that spot instances are a good candidate for anything that is an ephemeral workload, like developer clusters. A lot of the customers we work with run their developer clusters on spot instances, because that's how they save. They save the state on a shared file system, a disk, whatever it is, so if a node gets evicted, they just get the state back. You can also think of any batch workload, any HPC-type workload, any transient workloads, jobs, and so on. Those are very good candidates for spot instances, and it's a way for you to cost-optimize these types of workloads.

We maintain good compliance. We have PCI DSS, SOC 1, 2, and 3, ISO 27001, and a lot of other certifications. It's all in the public Azure docs that you can follow, and we keep these compliance certifications updated. We do FIPS-compliant encryption as well.

And Azure Red Hat OpenShift, or OpenShift in general on Azure, is a first-party service. We don't treat it as an ISV service. As such, we're integrating Azure Red Hat OpenShift into the whole Azure ecosystem, right? That integration is done using a service called Azure Arc, and Azure Arc brings the Azure control plane to any data center. Our control plane in Azure is called Azure Resource Manager; that's the API, the control plane you control everything in Azure with. Azure Arc is what brings it outside Azure to anything else, to your own data centers on premises or, God forbid, to some other cloud providers, right? So with Azure Arc, we brought Azure Monitor, we brought Azure Log Analytics, and we brought Azure Policy, so you can deploy Azure Policy on OpenShift running on Azure and on premises.
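A hedged sketch of that Arc flow, with hypothetical names throughout: you onboard a cluster with the `connectedk8s` Azure CLI extension, and once the Azure Policy extension is running on it, policy assignments at a scope that covers the cluster flow down to it.

```shell
# Sketch: connect a cluster (OpenShift on premises, in this case) to
# Azure Arc. Assumes the current kubeconfig context points at the
# cluster and the connectedk8s CLI extension is installed.
az connectedk8s connect \
  --name my-onprem-openshift \
  --resource-group my-arc-rg

# With the Azure Policy extension on the cluster, assigning a built-in
# "allowed container images" policy at resource-group scope applies it
# to every connected cluster in that group. The definition ID is a
# placeholder here.
az policy assignment create \
  --name allowed-registries \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-arc-rg" \
  --policy "<built-in-allowed-images-definition-id>"
```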
And you can deploy one policy to both clusters that says my containers can only pull images from this registry, right? So you get a single pane of glass, and Azure Policy says deploy that policy to those clusters. These are the types of things you can achieve with the Azure Arc and Azure Policy integration. And we're adding more of these capabilities, GitOps, Azure API Management, and so on, on Azure Arc, which you can deploy on OpenShift either in the cloud or on premises.

You also have a choice of tooling: you can work with the OpenShift tooling that you know and love, like OpenShift Pipelines or the internal registry, or you can use Azure services like Azure Container Registry, GitHub Actions, or Azure DevOps, whatever you choose to work with on Azure Red Hat OpenShift. There is no restriction on the tooling; that's, I think, the message.

It's unified support, right? Whether you have a problem with your cluster and open a ticket on the Azure portal or on the Red Hat portal, there is a back-to-back support system where the support engineers exchange tickets. Say you open an OpenShift ticket with Red Hat and it turns out to be an underlying Azure VM issue: they're going to exchange the ticket in the back end, talk together, and solve the issue for you. That's unified support. There are also SREs sitting there monitoring the cluster 24/7, ensuring that if there is any node failure or underlying failure, they fix it. And there are a lot of automated fixes in place, so if there is some transient error, it gets fixed automatically.

Also, as of late, we brought bring-your-own-DNS. That's bring your own recursive or resolving DNS.
So you can say, I'm using my specific DNS resolvers on premises or in the cloud, and I'm going to use them instead of the Azure ones; you can bring your own with Azure Red Hat OpenShift. And you can do the same for your certificate authority: you can bring your own certificate authority and then sign all your applications using it.

So, a couple of small examples of the things you can do. We spoke about this one; this is networking, of course, everybody's favorite topic. You can have a route table on the subnets where the master and infrastructure nodes or the application nodes are running that points to a firewall, Azure Firewall in this instance or any firewall of your choice, and send all the egress traffic there. From there, you can decide what's whitelisted and what's blacklisted in terms of traffic.

You can take this a bit further and say, okay, my egress traffic, so traffic originating from my nodes, will go through Azure Firewall, but my ingress traffic shouldn't go through the firewall, because a firewall isn't supposed to handle web traffic, right? So what you can do is have an application gateway, that's Azure Web Application Firewall here, but you can bring your own as well, and ingress all the traffic through the web application firewall. That's stateful traffic, right? So it maintains the same path back, but any traffic originating from the nodes goes through the firewall. That's something you can do as well. We have all sorts of complex scenarios that customers are doing with Azure Red Hat OpenShift.

On regional availability, we have customers doing this, for instance, where you deploy your clusters across a couple of regions. If you really want to go for an active-active type scenario, you cannot do this with relational databases, right? So you need something NoSQL.
That's Cosmos DB, our NoSQL offering. So if your state can be stored there, you can have an active-active, cross-region setup for Azure Red Hat OpenShift.

And yeah, we do have public references. Alpega is in the logistics industry, focused on transportation logistics. They had these sudden increases of traffic every now and then, and they couldn't accommodate them on premises, because they would have needed to build out the infrastructure for it. So we onboarded them to Azure Red Hat OpenShift, and now they just scale on demand whenever it's needed. That's how they saved cost and achieved the scale they require. Andreani is similar, in South America; they're in logistics too, on the shipping side. During COVID, as you know, everybody went online, so the number of shipments per day increased dramatically. To accommodate that increase, we accelerated their onboarding to Azure Red Hat OpenShift as well, and now they achieve the on-demand scale they require at the cost they desire. We have some other examples which I'm not going to go through, but all of these are public references documented on the Microsoft site.

And lastly, if you want to learn more, you can visit the ARO, Azure Red Hat OpenShift, docs either on the Microsoft site or on the Red Hat site.